Friday, February 28, 2014

Cross-browser vs. multi-browser

Cross-browser

Cross-browser refers to the ability of a website, web application, HTML construct or client-side script to function in environments that provide its required features and to bow out or degrade gracefully when features are absent or lacking.

Cross-browser vs. multi-browser

With regard to scripts, which is the most common usage, the term cross-browser is often confused with multi-browser (see jQuery). Multi-browser scripts can only be expected to work in environments where they have been demonstrated to work (due to assumptions based on observing a subset of browsers). Most publicly available libraries and frameworks are multi-browser scripts and list the environments (typically popular browsers in use at the time and in their default configurations) where they can be expected to work.
Multi-browser scripts virtually always approach obsolescence as new browsers are introduced, features are deprecated and removed, and the authors' assumptions are invalidated; therefore, multi-browser scripts have always required periodic maintenance. As the number of browsers and configurations in use has grown, so has the frequency of such maintenance. Older (or otherwise lesser) browsers and browser versions are periodically dropped as supported environments, regardless of whether or not they are still in use and without concern for what the new scripts will do when exposed to these environments. A typical scenario has them fail (e.g. by throwing an exception during initialization) in ways the authors never anticipated, possibly rendering the document's content inaccessible.
Scripts are categorized as cross-browser or multi-browser based on their logic. A script that uses cross-browser techniques (e.g. appropriate feature detection and testing) is cross-browser forever. Multi-browser scripts (which often rely on browser sniffing) remain multi-browser scripts until they fade away. No amount of testing can distinguish between cross-browser and multi-browser scripts; it is all in the code.
Scripted cross-browser documents and applications must have content that is accessible when scripting is disabled or unavailable, else there would be no usable fallback for the scripts. For some applications (e.g., word processors, games), the fallback content is often little more than a description of what the user would see if scripting were available, as opposed to an empty document or lone error message.

Examples of cross-browser JavaScript
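
As a minimal sketch of the feature-detection approach described above; the helper name addListener is illustrative only and not taken from any particular library:

  // A cross-browser helper: detect which event API the environment provides
  // and use it, bowing out gracefully when neither is available.
  function addListener(el, type, fn) {
    if (el.addEventListener) {          // W3C DOM event model
      el.addEventListener(type, fn, false);
      return true;
    }
    if (el.attachEvent) {               // legacy IE event model
      return el.attachEvent('on' + type, fn);
    }
    return false;                       // feature absent: leave the fallback content alone
  }

  // Contrast with a multi-browser approach, which sniffs the browser and
  // breaks as soon as the assumption behind the sniff is invalidated:
  // if (navigator.userAgent.indexOf('MSIE') !== -1) { el.attachEvent('on' + type, fn); }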

History

Background

The history of cross-browser scripting is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages implemented in web browsers. Netscape Navigator was the most widely used web browser at that time and Microsoft had licensed Mosaic to create Internet Explorer 1.0. New versions of Netscape Navigator and Internet Explorer were released at a rapid pace over the following few years. Due to the intense competition in the web browser market, the development of these browsers was fast-paced and new features were added without any coordination between vendors. The introduction of new features often took priority over bug fixes, resulting in unstable browsers, fickle web standards compliance, frequent crashes and many security holes.

Creation of W3C and Web standardization

The World Wide Web Consortium (W3C), founded in 1994 to promote open standards for the World Wide Web, pulled Netscape and Microsoft together with other companies to develop a standard for browser scripting languages called "ECMAScript". The first version of the standard was published in 1997. Subsequent releases of JavaScript and JScript would implement the ECMAScript standard for greater cross-browser compatibility. After the standardization of ECMAScript, W3C began work on standardizing the Document Object Model (DOM), a way of representing and interacting with objects in HTML, XHTML and XML documents. DOM Level 0 and DOM Level 1 were introduced in 1996 and 1997. Browsers implemented only limited support for these; as a result, non-conformant browsers such as Internet Explorer 4.x and Netscape 4.x were still widely used as late as 2000. DOM standardization gained momentum with the introduction of DOM Level 2, published in 2000, which introduced the "getElementById" function as well as an event model and support for XML namespaces and CSS. DOM Level 3, the current release of the DOM specification, published in April 2004, added support for XPath and keyboard event handling, as well as an interface for serializing documents as XML. By 2005, large parts of W3C DOM were well supported by common ECMAScript-enabled browsers, including Microsoft Internet Explorer, Opera, Safari and Gecko-based browsers (like Firefox, SeaMonkey and Camino).
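As a brief illustration (the element id used here is made up, not part of the specification), the following is the kind of code that DOM Level 2 made portable across conforming browsers:

  // DOM Level 2 Core/HTML: look an element up by its id.
  var heading = document.getElementById('page-title');
  if (heading && heading.addEventListener) {
    // DOM Level 2 Events: register a listener without overwriting others.
    heading.addEventListener('click', function () {
      heading.style.color = 'red';   // style manipulation via the CSS object model
    }, false);
  }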

This century

In the early part of the century, practices such as browser sniffing were deemed unworkable for cross-browser scripting.[1] The term "multi-browser" was coined to describe applications that relied on browser sniffing or made otherwise invalid assumptions about run-time environments, which at the time were almost invariably web browsers. The term "cross-browser" took on its currently accepted meaning at this time, as applications that once worked in Internet Explorer 4 and Netscape Navigator 4 and had since become unusable in modern browsers could not reasonably be described as "cross-browser". Colloquially, such multi-browser applications, as well as frameworks and libraries, are still referred to as cross-browser.

Web engineering

The web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these web applications exhibit complex behavior and place some unique demands on their usability, performance, security and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad-hoc way, contributing to problems of usability, maintainability, quality and reliability.[1][2] While web development can benefit from established practices from other related disciplines, it has certain distinguishing characteristics that demand special considerations. In recent years, there have been developments towards addressing these considerations.
As an emerging discipline, web engineering actively promotes systematic, disciplined and quantifiable approaches towards successful development of high-quality, ubiquitously usable web-based systems and applications.[3] In particular, web engineering focuses on the methodologies, techniques and tools that are the foundation of web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development.
Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, information engineering, information indexing and retrieval, testing, modelling and simulation, project management, and graphic design and presentation. Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While web engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of web-based applications.

As a discipline

Proponents of web engineering supported the establishment of web engineering as a discipline at an early stage of the web. The first Workshop on Web Engineering was held in conjunction with the World Wide Web Conference in Brisbane, Australia, in 1998. San Murugesan, Yogesh Deshpande, Steve Hansen and Athula Ginige, from the University of Western Sydney, Australia, formally promoted web engineering as a new discipline at the first ICSE workshop on Web Engineering in 1999.[3] Since then they have published a series of papers in a number of journals, conferences and magazines to promote their view, and it has gained wide support. The major arguments for web engineering as a new discipline are:
  • The development process for Web-based Information Systems (WIS) is different and unique.[4]
  • Web engineering is multi-disciplinary; no single discipline (such as software engineering) can provide a complete theoretical basis, body of knowledge and set of practices to guide WIS development.[5]
  • Web-based systems raise distinct issues of evolution and lifecycle management when compared to more 'traditional' applications.
  • Web-based information systems and applications are pervasive and non-trivial. The web as a platform will continue to grow, and it deserves to be treated as a subject in its own right.
However, it has been controversial, especially among people in other established disciplines such as software engineering, to recognize web engineering as a new field. The issue is how different and how independent web engineering is compared with other disciplines.
Main topics of Web engineering include, but are not limited to, the following areas:

Modeling disciplines

  • Business Processes for Applications on the Web
  • Process Modelling of Web Applications
  • Requirements Engineering for Web Applications
  • B2B Applications

Design disciplines, tools and methods

  • UML and the Web
  • Conceptual Modeling of Web Applications (a.k.a. web modeling)
  • Prototyping Methods and Tools
  • Web design methods
  • CASE Tools for Web Applications
  • Web Interface Design
  • Data Models for Web Information Systems

Implementation disciplines

  • Integrated Web Application Development Environments
  • Code Generation for Web Applications
  • Software Factories for/on the Web
  • Web 2.0, AJAX, E4X, ASP.NET, PHP and Other New Developments
  • Web Services Development and Deployment
  • Empirical Web Engineering

Testing disciplines

  • Testing and Evaluation of Web systems and Applications
  • Testing Automation, Methods and Tools

Applications categories disciplines

  • Semantic Web applications
  • Ubiquitous and Mobile Web Applications
  • Mobile Web Application Development
  • Device Independent Web Delivery
  • Localization and Internationalization of Web Applications

Attributes

Web quality

Content-related

Education

See also


Sources

  • Robert L. Glass, "Who's Right in the Web Development Debate?" Cutter IT Journal, July 2001, Vol. 14, No.7, pp 6–10.
  • S. Ceri, P. Fraternali, A. Bongio, M. Brambilla, S. Comai, M. Matera. "Designing Data-Intensive Web Applications". Morgan Kaufmann Publisher, Dec 2002, ISBN 1-55860-843-5

Web engineering resources

Organizations

Web development

Web development is a broad term for the work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing the simplest static single page of plain text to the most complex web-based internet applications, electronic businesses, and social network services. A more comprehensive list of tasks to which web development commonly refers may include web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development. Among web professionals, "web development" usually refers to the main non-design aspects of building web sites: writing markup and coding.
For larger organizations and businesses, web development teams can consist of hundreds of people (web developers). Smaller organizations may only require a single permanent or contracting webmaster, or secondary assignment to related job positions such as a graphic designer and/or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department.

Web development as an industry

Since the commercialization of the web, web development has been a growing industry. The growth of this industry is being pushed especially by businesses wishing to sell products and services to online customers.[1]
For tools and platforms, the public can use many open source systems to aid in web development. A popular example, the LAMP (Linux, Apache, MySQL, PHP) stack, is available for download online free of charge. This has kept the cost of learning web development to a minimum. Another contributing factor to the growth of the industry has been the rise of easy-to-use WYSIWYG web-development software, most prominently Adobe Dreamweaver, WebDev, and Microsoft Expression Studio. Using such software, virtually anyone can relatively quickly learn to develop a very basic web page. Knowledge of HyperText Markup Language (HTML) or of programming languages is still required to use such software, but the basics can be learned and implemented quickly with the aid of help files, technical books, internet tutorials, or face-to-face training.
An ever-growing set of tools and technologies has helped developers build more dynamic and interactive websites. Web developers now help to deliver applications as web services which were traditionally only available as applications on a desktop computer.
Instead of running executable code on a local computer, users can interact with online applications to create new content. This has created new methods in communication[citation needed] and allowed for many opportunities to decentralize information and media distribution. Users can interact with applications from many locations, instead of being tied to a specific workstation for their application environment.
Examples of dramatic transformation in communication and commerce led by web development include e-commerce. Online auction sites such as eBay have changed the way consumers find and purchase goods and services. Online retailers such as Amazon.com and Buy.com (among many others) have transformed the shopping and bargain-hunting experience for many consumers. Another good example of transformative communication led by web development is the blog. Web applications such as WordPress and Movable Type have created easily implemented blog environments for individual web sites. The popularity of open-source content management systems such as Joomla!, Drupal, XOOPS, and TYPO3 and enterprise content management systems such as Alfresco and eXo Platform have extended web development's impact on online interaction and communication.
Web development has also impacted personal networking and marketing. Websites are no longer simply tools for work or for commerce, but serve more broadly for communication and social networking. Websites such as Facebook and Twitter provide users with a platform to communicate and organizations with a more personal and interactive way to engage the public.

Typical areas

Web development can be split into many areas, and a typical, basic web development hierarchy might consist of:

Client side coding

  • Ajax (Asynchronous JavaScript and XML) provides new methods of using JavaScript and other languages to improve the user experience (a brief sketch follows this list).
  • Flash Adobe Flash Player is a ubiquitous browser plugin ready for rich internet applications (RIAs). Flex 2 is also deployed to the Flash Player (version 9+).
  • JavaScript JavaScript is a ubiquitous client-side platform for creating and delivering rich web applications that can also run across a wide variety of devices. It is a dialect of the scripting language ECMAScript.
  • jQuery Cross-browser JavaScript library designed to simplify and speed up the client-side scripting of HTML.
  • Microsoft Silverlight Microsoft's browser plugin that enables animation, vector graphics and high-definition video playback, programmed using XAML and .NET programming languages.
  • HTML5 and CSS3 The latest proposed HTML standard, combined with the latest proposed standard for CSS, natively supports much of the client-side functionality provided by other frameworks such as Flash and Silverlight.
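As a rough sketch of the Ajax technique listed above (the URL and element id are placeholders, not part of any real application):

  // Fetch a fragment of data asynchronously and update the page without a reload.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/messages', true);          // placeholder URL
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Insert the server's response into a placeholder element.
      document.getElementById('messages').textContent = xhr.responseText;
    }
  };
  xhr.send(null);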
Looking at these items as a whole, client-side code such as XHTML is delivered to and executed on the local client (in a web browser), whereas server-side code is not available to the client; it is executed on the web server, which generates the appropriate XHTML and sends it to the client. Because the nature of client-side code allows anyone to alter the HTML on a local client and resubmit it, web developers must bear in mind the security implications for their server-side scripts: if a server-side script accepts content from a locally modified client-side script without validation, the page is poorly sanitized from a security standpoint.
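A minimal server-side sketch of that principle, written here with Node.js and the Express framework purely for illustration; the route, field name and length limit are assumptions:

  // Treat everything arriving from the client as untrusted input.
  const express = require('express');
  const app = express();
  app.use(express.json());

  app.post('/comments', (req, res) => {
    const text = req.body.text;
    // Validate on the server even if a client-side script already did:
    // client-side code can be freely modified by the user.
    if (typeof text !== 'string' || text.length === 0 || text.length > 500) {
      return res.status(400).send('Invalid comment');
    }
    // Escape before echoing into HTML so injected markup is rendered inert.
    const safe = text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
    res.send('<p>' + safe + '</p>');
  });

  app.listen(3000);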

Server side coding



Client side + server side

  • Google Web Toolkit provides tools to create and maintain complex JavaScript front-end applications in Java.
  • Dart provides tools to create and maintain complex JavaScript front-end applications as well as supporting server-side code in the Dart programming language.
  • Opa is a high-level language in which both the client and the server parts are implemented. The compiler then decides which parts run on the client (and are translated automatically to JavaScript) and which parts run on the server. The developer can tune those decisions with simple directives. (open source)
  • Pyjamas is a tool and framework for developing Ajax applications and Rich Internet Applications in Python.
  • Tersus is a platform for the development of rich web applications by visually defining user interface, client side behavior and server side processing. (open source)
However, languages like Ruby and Python are often paired with database servers other than MySQL (the M in LAMP); for instance, some developers prefer a LAPR (Linux/Apache/PostgreSQL/Ruby on Rails) setup for development. Below are examples of other databases currently in wide use on the web.

Database technology

* open source / public domain

Practical web development

Basic

In practice, many web developers will have basic interdisciplinary skills / roles, including:
The above list is a simple website development hierarchy and can be extended to include all client-side and server-side aspects. It is still important to remember that web development is generally split into client-side coding, covering aspects such as layout and design, and server-side coding, which covers the website's functionality and back-end systems.

Advanced

Some more advanced web developers will also have these interdisciplinary skills / roles:
  • GUI (graphical user interface) design
  • Audio, Video and Animation processing and encoding (for web usage)
  • Flash Capabilities (animation, audio, video, scripting)
  • Web content management system Deployment and/or Content management infrastructure design, development and integration
  • Web applications development, integration and deployment
  • Web server stress testing (how much traffic can a web server running a specific application endure before collapsing)
  • Web site security analysis & testing
  • Web site code optimization (which is an important aspect of search engine optimization)
  • Project management, QA and other aspects common to IT development

Security considerations

Web development takes into account many security considerations, such as data entry error checking through forms, filtering output, and encryption.[2] Malicious practices such as SQL injection can be executed by users with ill intent but only primitive knowledge of web development as a whole. Scripts can be exploited to grant unauthorized access to malicious users trying to collect information such as email addresses and passwords, as well as protected content such as credit card numbers.
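For example, a parameterized query keeps user input out of the SQL text entirely. The sketch below uses the Node.js mysql client only as an illustration; the connection settings, table and column names are assumptions:

  // Unsafe: concatenating user input into SQL allows injection such as "1 OR 1=1".
  //   connection.query('SELECT * FROM users WHERE id = ' + userId, handle);

  // Safer: let the driver bind the value as a parameter instead.
  const mysql = require('mysql');
  const connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'shop' });

  function findUser(userId, handle) {
    connection.query('SELECT * FROM users WHERE id = ?', [userId], handle);
  }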
Some of this is dependent on the server environment (most commonly Apache or Microsoft IIS) on which the scripting language, such as PHP, Ruby, Python, Perl or ASP, is running, and therefore is not necessarily down to the web developer themselves to maintain. However, stringent testing of web applications before public release is encouraged to prevent such exploits from occurring. If a contact form is provided on a website, it should include a CAPTCHA field, which prevents computer programs from automatically filling in the form and sending spam.
Keeping a web server safe from intrusion is often called server hardening. Many technologies come into play to keep information on the internet safe when it is transmitted from one location to another. For instance, Secure Sockets Layer (SSL) certificates are issued by certificate authorities to help prevent internet fraud. Many developers employ various forms of encryption when transmitting and storing sensitive information. A basic understanding of information technology security concerns is often part of a web developer's knowledge.
Because new security holes are found in web applications even after testing and launch, security patch updates are frequent for widely used applications. It is often the job of web developers to keep applications up to date as security patches are released and new security concerns are discovered.

Timeline

[Image: Web development timeline]

See also

Software developer

A software developer is a person concerned with facets of the software development process. Their work includes researching, designing, implementing, and testing software.[1] A software developer may take part in design, computer programming, or software project management. They may contribute to the overview of the project on the application level rather than component-level or individual programming tasks. Software developers are often still guided by lead programmers but the description also encompasses freelance software developers.

Description

In the US, a software developer is classified under one of three titles (all under the 15-0000 Computer and Mathematical Occupations Major Group):[2]
  1. 15-1131 Computer Programmers[3]
  2. 15-1132 Software Developers, Applications[4]
  3. 15-1133 Software Developers, Systems Software[5]
A person who develops stand-alone software (that is, more than just a simple program) and is involved with all phases of the development (design and code) is a software developer.[citation needed] Notable software developers include Peter Norton (developer of Norton Utilities), Richard Garriott (Ultima-series creator), and Philippe Kahn (key founder of Borland), all of whom started as entrepreneurial individual or small-team software developers.
Other names which are often used in the same context are programmer, software analyst, and software engineer. According to developer Eric Sink, the differences between system design, software development and programming are becoming more apparent. In the current marketplace a segregation between programmers and developers can already be found: the one who implements is not necessarily the same as the one who designs the class structure or hierarchy. Developers may even become systems architects, those who design the multi-level architecture or component interactions of a large software system.[6] (See also Debate over who is a software engineer.)
Aspects of a developer's job may include:
In a large company, there may be employees whose sole responsibility may consist of only one of the phases above. In smaller development environments, a few, or even a single individual might handle the complete process.

Web developer

A web developer is a programmer who specializes in, or is specifically engaged in, the development of World Wide Web applications, or distributed network applications that are run over HTTP from a web server to a web browser.

Nature of employment

Web developers can be found working in all types of organizations, including large corporations and governments, small and medium-sized companies, or alone as freelancers. Some web developers work for one organization as a permanent full-time employee, while others may work as independent consultants, or as contractors for an employment agency.

Type of work performed

Modern web applications often contain three or more tiers,[1] and depending on the size of the team a developer works on, he or she may specialize in one or more of these tiers - or may take a more interdisciplinary role.[2] For example, in a two-person team, one developer may focus on the technologies sent to the client, such as HTML, JavaScript and CSS, and on the server-side frameworks (such as Perl, Python, Ruby, PHP, Java, ASP, .NET, .NET MVC) used to deliver content and scripts to the client. Meanwhile, the other developer might focus on the interaction between server-side frameworks, the web server, and a database system. Further, depending on the size of their organization, the aforementioned developers might work closely with a content creator/copywriter, marketing advisor, web designer, web producer, project manager, software architect, or database administrator - or they may be responsible for such tasks as web design and project management themselves.

Educational and licensure requirements

There are no formal educational or licensure requirements to become a web developer. However, many colleges and trade schools offer coursework in web development. There are also many freely available tutorials and articles that teach web development - for example: http://en.wikiversity.org/wiki/Basic_JavaScript
Even though there are no formal educational requirements, handling web development projects requires those who wish to be referred to as web developers to have advanced knowledge/skills in:


See also

Penetration test (Security)

A penetration test, or pentest for short, is an attack on a computer system with the intention of finding security weaknesses, potentially gaining access to the system, its functionality and data.[1]
The process involves identifying the target systems and the goal, then reviewing the information available and undertaking available means to attain the goal. A penetration test target may be a white box (where all background and system information is provided) or a black box (where only basic or no information is provided except the company name). A penetration test will advise whether a system is vulnerable to attack, whether the defenses were sufficient, and which defenses (if any) the test defeated.[2]
A penetration test can be likened to surveying a rabbit-proof fence, which must be whole to keep the rabbits out. In surveying the fence, the penetration tester may identify a single hole large enough for a rabbit (or themselves) to move through; once that defense is passed, any further review of it may not occur as the tester moves on to the next security control. This means there may be several holes or vulnerabilities in the first line of defense, and the penetration tester only identified the first one found because it was a successful exploit. This is where the difference lies between a vulnerability assessment and a penetration test: the vulnerability assessment covers everything you may be susceptible to, while the penetration test is based on whether your defense can be defeated.[citation needed]
Security issues uncovered through the penetration test are presented to the system's owner.[citation needed] Effective penetration tests will couple this information with an accurate assessment of the potential impacts to the organization and outline a range of technical and procedural countermeasures to reduce risks.[citation needed]
Penetration tests are valuable for several reasons:[citation needed]
  1. Determining the feasibility of a particular set of attack vectors
  2. Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
  3. Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
  4. Assessing the magnitude of potential business and operational impacts of successful attacks
  5. Testing the ability of network defenders to successfully detect and respond to the attacks
  6. Providing evidence to support increased investments in security personnel and technology
Penetration tests are a component of a full security audit.[3][4] For example, the Payment Card Industry Data Security Standard (PCI DSS), a security and auditing standard, requires both annual and ongoing penetration testing (after system changes).[citation needed]

History

By the mid-1960s, the growing popularity of online time-sharing computer systems, which had made their resources accessible to users over communications lines, had created new concerns about system security. As the scholars Deborah Russell and G. T. Gangemi, Sr. explain, "the 1960s marked the true beginning of the age of computer security."[5] In June 1965, for example, several of the country's leading computer security experts held one of the first major conferences on system security, one that was hosted by the government contractor, the System Development Corporation (SDC). During the conference, it was noted that one SDC employee had been able to easily undermine the various system safeguards that had been added to SDC's AN/FSQ-32 time-sharing computer system. In the hopes that the further study of system security could be useful, the attendees requested "studies to be conducted in such areas as breaking security protection in the time-shared system." In other words, the conference participants initiated one of the first formal requests to use computer penetration as a tool for studying system security.[6]
At the Spring 1967 Joint Computer Conference, many of the country's leading computer specialists met again to discuss their concerns about system security. During this conference, the computer security experts Willis Ware, Harold Petersen, and Rein Turn, all of the RAND Corporation, and Bernard Peters of the National Security Agency (NSA), all used the phrase "penetration" to describe an attack against a computer system. In a paper, Ware referred to the military's remotely accessible time-sharing systems, warning that "deliberate attempts to penetrate such computer systems must be anticipated." His colleagues Petersen and Turn shared the same concerns, observing that on-line communication systems "are vulnerable to threats to privacy," including "deliberate penetration". Bernard Peters of the NSA made the same point, insisting that computer input and output "could provide large amounts of information to a penetrating program." During the conference, computer penetration would become formally identified as a major threat to online computer systems.[7]
The threat posed by computer penetration was next outlined in a major report organized by the United States Department of Defense (DoD) in late 1967. Essentially, DoD officials turned to Willis Ware to lead a task force of experts from NSA, CIA, DoD, academia, and industry to formally assess the security of time-sharing computer systems. By relying on many of the papers that had been presented during the Spring 1967 Joint Computer Conference, the task force largely confirmed the threat to system security posed by computer penetration. Although Ware's report was initially classified, many of the country's leading computer experts quickly identified the study as the definitive document on computer security.[8] Jeffrey R. Yost of the Charles Babbage Institute has more recently described the Ware report as "by far the most important and thorough study on technical and operational issues regarding secure computing systems of its time period."[9] In effect, the Ware report reaffirmed the major threat posed by computer penetration to the new online time-sharing computer systems.
To get a better understanding of system weaknesses, the federal government and its contractors soon began organizing teams of penetrators, known as tiger teams, to use computer penetration as a means for testing system security. Deborah Russell and G. T. Gangemi, Sr. stated that during the 1970s "'tiger teams' first emerged on the computer scene. Tiger teams were government and industry sponsored teams of crackers who attempted to break down the defenses of computer systems in an effort to uncover, and eventually patch, security holes."[10] One of the leading scholars on the history of computer security, Donald MacKenzie, similarly points out that "RAND had done some penetration studies (experiments in circumventing computer security controls) of early time-sharing systems on behalf of the government."[11] Jeffrey R. Yost of the Charles Babbage Institute, in his own work on the history of computer security, also acknowledges that both the RAND Corporation and the SDC had "engaged in some of the first so-called 'penetration studies' to try to infiltrate time-sharing systems in order to test their vulnerability."[12] In virtually all of these early studies, the tiger teams would succeed in breaking into their targeted computer systems, as the country's time-sharing systems had very poor defenses.
Of the earliest tiger team actions, the efforts at the RAND Corporation demonstrated the usefulness of penetration as a tool for assessing system security. At the time, one RAND analyst noted that the tests had "demonstrated the practicality of system-penetration as a tool for evaluating the effectiveness and adequacy of implemented data security safe-guards." In addition, a number of the RAND analysts insisted that the penetration test exercises all offered several benefits that justified its continued use. As they noted in one paper, "a penetrator seems to develop a diabolical frame of mind in his search for operating system weaknesses and incompleteness, which is difficult to emulate." For these reasons and others, many analysts at RAND recommended the continued study of penetration techniques for their usefulness in assessing system security.[13]
Perhaps the leading computer penetration expert during these formative years was James P. Anderson, who had worked with the NSA, RAND, and other government agencies to study system security. In early 1971, the U.S. Air Force contracted with Anderson's private company to study the security of its time-sharing system at the Pentagon. In his study, Anderson outlined a number of the major factors that were involved in computer penetration. The general attack sequence, as Anderson described it, involved a number of steps, including: "1. Find an exploitable vulnerability. 2. Design an attack around it. 3. Test the attack. 4. Seize a line in use... 5. Enter the attack. 6. Exploit the entry for information recovery." Over time, Anderson's description of the general steps involved in computer penetration would help guide many other security experts, as they continued to rely on this technique to assess the security of time-sharing computer systems.[14]
In the following years, the use of computer penetration as a tool for security assessment would only become more refined and sophisticated. In the early 1980s, the journalist William Broad briefly summarized the ongoing efforts of tiger teams to assess system security. As Broad reported, the DoD-sponsored report by Willis Ware had "showed how spies could actively penetrate computers, steal or copy electronic files and subvert the devices that normally guard top-secret information. The study touched off more than a decade of quiet activity by elite groups of computer scientists working for the Government who tried to break into sensitive computers. They succeeded in every attempt."[15] While these various studies may have suggested that computer security in the U.S. remained a major problem, the scholar Edward Hunt has more recently made a broader point about the extensive study of computer penetration as a security tool. As Hunt suggests in a recent paper on the history of penetration testing, the defense establishment ultimately "created many of the tools used in modern day cyberwarfare," as it carefully defined and researched the many ways in which computer penetrators could hack into targeted systems.[16]

Standards and certification

The Information Assurance Certification Review Board (IACRB) manages a penetration testing certification known as the Certified Penetration Tester (CPT). The CPT requires that the exam candidate pass a traditional multiple choice exam, as well as pass a practical exam that requires the candidate to perform a penetration test against servers in a virtual machine environment.[17]


Tools

Specialized OS distributions

There are several operating system distributions which are geared towards performing penetration testing.[18] A distribution typically contains a pre-packaged and pre-configured set of tools. This is useful because otherwise the penetration tester would have to hunt down a tool when it is required, which may lead to complications such as compile errors, dependency issues and configuration errors, or acquiring additional tools may simply not be practical in the tester's context.
Popular examples are Kali Linux (replacing BackTrack as of December 2012) based on Debian Linux, Pentoo based on Gentoo Linux and WHAX based on Slackware Linux. There are many other specialized operating systems for penetration testing, each more or less dedicated to a specific field of penetration testing.

Software frameworks

Automated testing tools

The process of penetration testing may be simplified as two parts:
  • Discovering a combination of legal operations that will let the tester execute an illegal operation: unescaped SQL commands, unchanged salts in source-visible projects, human relationships, and the use of old hash/crypto functions.
  • Specifying the illegal operation, known as the payload in Metasploit terminology: a remote mouse controller, webcam peeker, ad popupper, botnet drone or password hash stealer. Refer to the Metasploit payload list for more examples.
A single flaw may not be enough to enable a critically serious exploit. Leveraging multiple known flaws and shaping the payload in a way that will be regarded as a valid operation is almost always required. Metasploit provides a Ruby library for common tasks and maintains a database of known exploits.
Under budget and time constraints, fuzzing is a common technique for discovering vulnerabilities. The aim is to get an unhandled error through random input. Random input exercises less-often-used code paths; well-trodden code paths have usually been rid of errors. Errors are useful because they either expose more information, such as an HTTP server crash with a full traceback, or are directly usable, such as buffer overflows. One way to see the practicality of the technique is to imagine a website with 100 text input boxes, a few of which are vulnerable to SQL injection on certain strings. Submitting random strings to those boxes for a while will hopefully hit the bugged code path, and the error shows itself as a half-rendered, broken HTML page caused by an SQL error. In this case, only text boxes are treated as input streams, but software systems have many possible input streams, such as cookie and session data, the uploaded file stream, RPC channels, or memory, and errors can happen in any of them. The goal is first to get an unhandled error, and second to come up with a theory on the nature of the flaw based on the failed test case, then write an automated tool to test the theory until it is correct. After that, with luck it should become obvious how to package the payload so that its execution will be triggered. If this is not viable, one can hope that another error produced by the fuzzer will yield more fruit. The use of a fuzzer means time is not wasted on checking completely adequate code paths where exploits are unlikely to occur (a minimal sketch of this idea follows below).
Some companies maintain large databases of known exploits and provide products that automatically test target systems for vulnerabilities.
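A toy sketch of that fuzzing idea follows; the target function is hypothetical, and a real fuzzer would drive an actual input stream such as an HTTP form field rather than a local function:

  // Feed random strings to a routine and record any unhandled errors.
  function randomString(maxLen) {
    var chars = 'abcABC012<>\'"();%=- ';
    var out = '';
    for (var i = 0, n = Math.floor(Math.random() * maxLen); i < n; i++) {
      out += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return out;
  }

  function fuzz(target, iterations) {
    var failures = [];
    for (var i = 0; i < iterations; i++) {
      var input = randomString(40);
      try {
        target(input);                                       // e.g. a form handler or query builder
      } catch (err) {
        failures.push({ input: input, error: String(err) }); // failed test case to analyse later
      }
    }
    return failures;
  }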

See also