------------------------------------------------------------------------

Abstract

This month Peter turns a jaundiced eye toward the sorry state of software development. Specifically, what are the flaws that cause the industry to turn out program after program with security holes? What are companies like Sun doing to correct the problem? What should they be doing? The answer: Peter's own Software Development Security Design Methodology. Also this month: In The Buglist, another summary from CERT on recently reported incidents and some more patches from Sun. Plus a few words on SANS 98 and The Hawk's compilation of security-related links. (2,100 words)

------------------------------------------------------------------------

All problems fall into one of two categories: those that can be easily solved and those that cannot. For instance, some of the denial of service attacks that have surfaced recently are a result of the IP protocol's design. Short of implementing a new protocol (any volunteers?), not much can be done beyond stopgap measures that make particular attacks less effective.

Other difficult problems include network sniffing and spoofing, which result from security-related information being sent in the clear over networks. Then there is the general authentication problem. The difficulty with authentication is that the lowest common denominator is user names and passwords, and that method is generally not sufficient. Unfortunately, solving these problems requires new hardware, new software, and user training, all of which may not be available to everyone. Over the longer term, protocols like IPv6 and IPsec will resolve many of these issues. Of course, they may create new ones.

Let's leave the difficult problems aside, because they're, well, difficult. The solvable problems are the result of poor planning, programming, and implementation.
These can be solved by software vendors who spend the time and engineering effort needed to improve their coding methodologies. If we fixed these solvable problems, we would all have less work to do.

Utopia

Picture this world of the future:

* System programmers can concentrate on writing new software rather than patching security problems.

* Crackers especially benefit: They are free to follow other pursuits, using the time they save by not having to write all those security hole exploit scripts. Well, maybe that's not a good idea.

* Technical support engineers have far fewer calls to handle. And the questions they answer hark back to the glory days of tech support -- i.e., "How do I print?" and "Why won't my mouse work?"

* CERT and all the myriad response teams get to rest on their laurels, now that incident rates and new bug reports have decreased.

* Security consultants take time off to be with the family and discover the joys of reading something other than security literature. You know who you are.

* System administrators spend far more time fine-tuning their systems, automating tasks, monitoring working machines, and watching their capacity get eaten up, rather than reading yet another bug report, finding the appropriate patch, installing it on all their systems, verifying that nothing is broken, and then, of course, finding the next revision of the patch.

------------------------------------------------------------------------

Is code getting better?

You could assume that the security holes in operating systems are the result of poor coding way back when, and that new code and coding methods do not have the same problem. You would be wrong. Consider Windows NT and its sorry security state. Or look in our own back yard at Solaris.
Bugs in admintool, NIS+, the volume manager, procfs, PPP, PAM (the pluggable authentication module), and the PCI bus drivers (no, I have nothing against bus drivers per se, just the ones that drive the PCI buses) prove the point. Recently released code has security holes.

What about code that is currently in development? Can we count on improved quality and a new level of code integrity? In a word, no. A friend within Sun Microsystems Computer Company's engineering group seems as frustrated by the current state of affairs as the rest of us. I quote:

    The best guideline we could have would be: No idiots are allowed to write anything like setuid. More likely but still not in place now would be a policy where setuid and setgid couldn't be put back to the source base without being additionally reviewed by some panel of experts.

Unfortunately, neither of these policies is currently in place within Sun. There are groups within the software development teams who do try to apply some methods to their code creation and modification. However, there is no official Sun methodology in place to reduce the number of security holes released upon the world.

This should not be taken as a damnation of Sun. I'm sure that many of the other operating system vendors suffer from the same lack of internal controls. The question is, what can be done to improve the situation? Money talks, so buyers need to start speaking up and demanding higher quality code. If Sun and other OS vendors had buyers demanding that they use an improved methodology, they might start using one. That's the motivation behind SDSDM (Software Development Security Design Methodology).

SDSDM

SDSDM is not rocket science. If this simple methodology were applied to all appropriate programs, the world would be a better place. What are the appropriate programs? All setuid and setgid programs, and all daemons that accept network connections. Why that set?
In the first case, these programs allow users to change their access rights or increase their privileges. Security holes therein allow users to increase their access to the system. setuid root programs allow users to gain root privileges, which is the worst case. In the second case, users are allowed to access the system without first being authenticated. A network daemon may answer a network request and process it under the daemon's privileges, not a user's. Therefore, this is another way for users to increase access, or even gain initial access, to the target system.

Of course, the kernel is a different case. If the kernel has security holes, no amount of checking of system programs is going to make the system secure from attack. However, relatively few kernel bugs are being found and exploited these days. Operating systems like Solaris have solid kernels with slow-changing core facilities. Also, only the best engineers work on kernel code, reducing the chance of novice errors. Finally, in terms of security, kernels are relatively bug-free because of the limited interfaces available to attack. For instance, Solaris 2.5.1 has only 212 system calls (check /usr/include/sys/syscall.h). Compare that to the thousands of points a hacker has available to attack: sockets, files, devices, and programs. Clearly, it's easier to secure the kernel than the remainder of the system.

------------------------------------------------------------------------

The methodology

Here is SDSDM. If you have additional suggestions for SDSDM, please send them along.

Design the software with security in mind:

* Ask "What privileges does the software need?" not "What privileges does the software want?"

* Determine the minimum necessary to do the job

* If previously tested code can be reused, do so

Implement the software following good programming practice and secure software guidelines.
Appropriate information on which programming techniques, system calls, and library calls to use and avoid is not readily available. The best I have located is in the book Practical Unix and Internet Security by Simson Garfinkel and Gene Spafford. Some of the information is abstracted here, but any programmer doing security-related work would be well-advised to read this book, or at least Chapter 23. Here are the dos and don'ts.

Dos

* Check all command line arguments
* Check all system call parameters and system call return codes
* Check arguments passed in environment parameters, and don't depend on Unix environment variables
* Be sure all buffers are bounded
* Do bounds checking on every variable before the contents are copied to a local buffer
* If creating a new file, use the O_EXCL and O_CREAT flags to assure that the file does not already exist
* Use lstat() to make sure a file is not a link, if appropriate
* Use these library calls with great care: sprintf(), fscanf(), scanf(), vsprintf(), realpath(), getopt(), getpass(), streadd(), strecpy(), and strtrns()
* Explicitly change directories (chdir()) to an appropriate directory at program start
* Set limit values to disable creation of a core file if the program fails
* If using temporary files, consider using the tmpfile() or mktemp() library calls to create them (although most mktemp() implementations have problematic race conditions)
* Have internal consistency-checking code
* Include lots of logging, including date, time, uid and effective uid, gid and effective gid, terminal information, pid, command-line arguments, errors, and originating host
* Make the program's critical portion as short and simple as possible
* Always use full pathnames for any file arguments
* Check user input to be sure it contains only "good" characters
* Make good use of tools such as lint
* Be aware of race conditions, including deadlock conditions and sequencing conditions
* Place timeouts and load level limits on incoming network-oriented read requests
* Place timeouts on outgoing network-oriented write requests
* Use session encryption to avoid session hijacking and to hide authentication information

Don'ts

* Avoid routines that fail to check buffer boundaries when manipulating strings, particularly gets(), strcpy(), and strcat()
* Never use the system() and popen() library calls
* Do not create files in world-writable directories
* Generally, don't create setuid or setgid shell scripts
* Don't make assumptions about port numbers; use getservbyname() instead
* Don't assume connections from low-numbered ports are legitimate or trustworthy
* Don't assume the source IP address is legitimate
* Don't require clear-text authentication information

Test the software using the same methods that crackers do:

* Try to overflow every buffer in the package
* Try to abuse command line options
* Try to create every race condition conceivable
* Have someone besides the designer and implementor review and test the code

There are many more programming-level tips for networked and setuid applications in the Garfinkel and Spafford book. Implementation of these steps would not only improve the software, it would decrease the number of security holes found, the number of patches released, and the amount of work for system administrators. It would also enhance the vendor's reputation.

The Buglist

CERT has released another summary to remind us that systems continue to be the victims of break-ins due to the failure of administrators to install patches. CERT Summary CS-98.03 reveals that weak systems are being cracked and then used to gain access to more secure machines. Crackers are installing Trojan-horse programs in place of standard system tools. These replacements are used to capture user names, passwords, and hostnames of machines that users access. The CERT summary also points to information on the holes being exploited, detecting these break-ins, and recovering from them.
Sun has released a patch for the vulnerability in the volrmmount program on Solaris 2.6. The volrmmount program is a setuid root program that allows users to simulate an insertion or ejection of media. The vulnerability allows a cracker to view any file on the system. For more information, see Sun Bulletin #0162.

Also from Sun are patches for a vulnerability in the vacation program for all recent Solaris and SunOS versions. The bug allows attackers to access the account of the user running vacation. More information is available in Sun Bulletin #0163.

Fitting neatly into this month's column theme, Sun announced patches for a vulnerability in the dtaction program for all recent Solaris releases. The vulnerability involves incorrect bounds checking on input arguments and can be exploited locally to gain root access. Details are in Sun Bulletin #0164.

Finally, if you do nothing else for security on your 2.5.1 systems, consider installing the new version of the 2.5.1 jumbo kernel patch (103640-18). It addresses several security issues (and over 180 bugs).

Break-ins

Wired magazine reports that the Israeli hacker "Analyzer" tutored two California teenagers who allegedly broke into several unclassified U.S. military computers. Analyzer claims to have access to as many as 400 similar computers. Analyzer, by the way, was arrested in Israel last month.

Conferences

SANS 98 (System Administration, Networking and Security) is shaping up to be the best Unix security event of the year, or at least until its sister conference NS 98 rolls into Orlando in October. This year there is also an NT SANS conference, running concurrently, that should provide high-quality tutorials and talks about system administration and security in the Windows NT space. SANS 98 starts May 9 in Monterey, California. Hope to see you there.

In June, Usenix continues its string of strong conferences with its 1998 annual conference in New Orleans, LA. Details can be found at the Usenix Web page.
The Bookstore

If you are still looking to add to your online security reference pointers, you should check out "The Hawk's security links." The Hawk has put together quite an extensive set of links to security-related information.

------------------------------------------------------------------------

Resources

Books and articles

* Simson Garfinkel and Gene Spafford's Practical Unix and Internet Security
  http://www.amazon.com/exec/obidos/ISBN=1565921488/sunworldonlinea
* The Wired story on Analyzer
  http://www.wired.com/news/news/technology/story/10730.html
* Follow-up story on Analyzer's arrest, also in Wired
  http://www.wired.com/news/news/technology/story/11016.html

Events

* SANS 98
  http://www.sans.org/sans98/invitation.htm
* 1998 Usenix conference
  http://www.usenix.org/events/no98/

Security links

* The Hawk's security links
  http://www.dbnet.ece.ntua.gr/~george/security/
* CERT Summary CS-98.03
  ftp://ftp.cert.org/pub/cert_summaries/
* Sun Bulletin #0162 (volrmmount)
  http://sunsolve.sun.com/sunsolve/secbulletins/security-alert-162.txt
* Sun Bulletin #0163 (vacation)
  http://sunsolve.sun.com/sunsolve/secbulletins/security-alert-163.txt
* Sun Bulletin #0164 (dtaction)
  http://sunsolve.sun.com/sunsolve/secbulletins/security-alert-164.txt

SunWorld security resources

* Full listing of Security columns in SunWorld
  http://www.sun.com/sunworldonline/common/swol-backissues-columns.html#security
* Related network security stories in SunWorld's Site Index
  http://www.sun.com/sunworldonline/common/swol-siteindex.html#netsec
* Peter Galvin's Solaris Security FAQ
  http://www.sun.com/sunworldonline/common/security-faq.html

------------------------------------------------------------------------

About the author

Peter Galvin is chief technologist for Corporate Technologies Inc., a systems integrator and VAR. He is also adjunct system planner for the Computer Science Department at Brown University, and has been program chair for the past four SUG/SunWorld conferences.
As a consultant and trainer, he has given talks and tutorials worldwide on the topics of system administration and security. He has written articles for Byte and Advanced Systems (SunWorld) magazines, and the Superuser newsletter. Peter is co-author of the best-selling Operating System Concepts textbook. Reach Peter at peter.galvin@sunworld.com.

URL: http://www.sunworld.com/swol-04-1998/swol-04-security.html