Information Systems Development: Business Systems and Services: Modeling and Development is the collected proceedings of the 19th International Conference on Information Systems Development, held in Prague, Czech Republic, August 25-27, 2010. It follows the tradition of previous conferences in the series in exploring the connections between industry, research, and education. These proceedings represent ongoing reflections within the academic community on established information systems topics and emerging concepts, approaches, and ideas. It is hoped that the papers herein contribute towards disseminating research and improving practice. The theme for the ISD 2010 Conference was Business Systems and Services: Modeling and Development. The Conference program was scheduled into the following sessions: Business Process Management (BPM); Business Systems Analysis & Design I and II; Model-Driven Engineering in ISD; Agile and High-Speed Systems Development; IS/IT Project Management I and II; Information System Development Methodology I-III; Web Services and SOA in the BPM Context I and II; Cognitive Aspects of Business Systems and Services; Public Information Systems Development I and II; Data and Information Systems Modeling I-III; Service Oriented Modeling; Managing of IS Development; and Miscellaneous Aspects of ISD.
Strategic Planning: Developing Business Drivers for Performance Improvement
Posted on November 19, 2012. Organizational improvement efforts should be driven by business needs, not by the content of improvement models. While improvement models, such as CMMI or the Baldrige Criteria for Performance Excellence, provide excellent guidance and best practice standards, the way in which those models are implemented must be guided by the same drivers that influence any other business decision. Business drivers are the collection of people, information, and conditions that initiate and support activities that help an organization accomplish its mission. These drivers should be the guiding force behind performance improvement because they represent key factors or influences that matter to an organization's success. But how do we identify these drivers?
This blog posting, the latest in a continuing series on the SEI's work on performance improvement, describes how we are using integrated strategic planning and the associated information framework to derive the most vital business drivers for performance improvement.
An Integrated Strategic Planning Method
The strategic planning method we've been using at the SEI integrates two complementary techniques that provide a framework for identifying business drivers for performance improvement. The first is critical success factors (CSFs), which are indicators that measure how well an organization is accomplishing its goals.
For example, a CSF for agile software projects is achieving a high level of client-developer interaction. The second is scenario planning, which allows organizations to explore multiple potential futures and generate robust strategies, along with the early warning signs that indicate how the future may unfold. For example, weather experts will create scenarios based on the critical uncertainties associated with a major weather system and plan for the range of possibilities, while monitoring the variables and narrowing in on the most likely scenario over time. Our integrated strategic planning approach (described in an earlier post and a November 2010 SEI report) sets the stage for initiatives that an organization can apply to improve its performance at multiple scales, ranging from individuals and teams up to the entire enterprise. Tying performance improvement to organizational strategy by identifying key business drivers creates an environment that is business-driven and model-based, but not model-driven.
In other words, improvement decisions are informed by best practice models, such as CMMI and the Baldrige Criteria for Performance Excellence, but are driven by business concerns rather than by an attempt to apply a particular model for its own sake.
Linking Strategic Planning to Performance Improvement
Through the SEI's strategic planning and performance improvement engagements, we've observed that key business drivers can and should be elicited from integrated strategic plans. In particular, aligning improvement activities with organizational strategic goals and CSFs helps ensure improvement activities achieve business goals. We've also learned that there is no one-size-fits-all improvement solution; that is, no single model improves performance across the board. Instead, organizations often see better results when they apply the most applicable parts of multiple models based on strategic, business-driven information. When improvement initiatives and activities are directly derived from organizational goals, objectives, and CSFs, they can support and complement strategic initiatives and actions.
We particularly like how broad frameworks, such as the Baldrige Criteria, can be used to identify general initiatives. The Baldrige Criteria, considered with regard to the organization's customer goals and coupled with input from a strategic plan, might lead the organization to improve the resilience of its customer-facing web services; that general initiative can then be augmented with specific actions guided by more specialized models.
This multi-model combination enables an organization to select improvement models and practices according to what will best support its business objectives (such as preserving the confidentiality of customer data), rather than according to model-based criteria.
Business-Driven Performance Improvement
To showcase the way that integrated strategic planning can help an organization understand its business drivers for improvement, consider an information technology (IT) group with the mission of acquiring IT systems that support the services provided to the broader company's customers.
3 Technological Drivers
Privacy is an information concept, and fundamental properties of information define what privacy can—and cannot—be. For example, information has the property that it is inherently reproducible: if I share some information with you, we both have all of that information. This stands in sharp contrast to apples: if I share an apple with you, we each get half an apple, not a whole apple. If information were not reproducible in this manner, many privacy concerns would simply disappear.
3.1 THE IMPACT OF TECHNOLOGY ON PRIVACY
Advances in technology have often led to concerns about their impact on privacy. The classic characterization of privacy as the right to be let alone was penned by Samuel Warren and Louis Brandeis in their 1890 article discussing the effects on privacy of the then-new technology of photography. The development of new information technologies, whether they have to do with photography, telephony, or computers, has almost always raised questions about how privacy can be maintained in the face of the new technology.
Today’s advances in computing technology can be seen either as no more than a recurrence of this trend or as something genuinely different: because the new technology is fundamentally concerned with the gathering and manipulation of information, it increases the potential for threats to privacy. Several trends in the technology have led to such concerns.
One such trend has to do with hardware that increases the amount of information that can be gathered and stored and the speed with which that information can be analyzed, thus changing the economics of what it is possible to do with information technology. A second trend concerns the increasing connectedness of this hardware over networks, which magnifies the increases in the capabilities of the individual pieces of hardware that the network connects. A third trend has to do with advances in software that allow sophisticated mechanisms for the extraction of information from the data that are stored, either locally or on the network. A fourth trend, enabled by the other three, is the establishment of organizations and companies that offer as a resource information that they have gathered themselves or that has been aggregated from other sources and then organized and analyzed by the company.
Improvements in the individual technologies have been dramatic, but the systems built by combining those technologies have often yielded overall improvements greater than the sum of the parts. These improvements have in some cases changed what it is possible to do with the technologies or what it is economically feasible to do; in other cases they have made what was once difficult so easy that anyone can perform the action at any time. The end result is that there are now capabilities for gathering, aggregating, analyzing, and sharing information about and related to individuals (and groups of individuals) that were undreamed of 10 years ago. For example, global positioning system (GPS) locators attached to trucks can provide near-real-time information on their whereabouts and even their speed, giving truck shipping companies the opportunity to monitor the behavior of their drivers. Cell phones equipped to provide E-911 service can be used to map to a high degree of accuracy the location of the individuals carrying them, and a number of wireless service providers are marketing such phones to parents who wish to keep track of where their children are. These trends are manifest in the increasing number of ways people use information technology, both for the conduct of everyday life and in special situations.
The personal computer, for example, has evolved from a replacement for a typewriter to an entry point to a network of global scope. As a network device, the personal computer has become a major agent for personal interaction (via e-mail, instant messaging, and the like), for financial transactions (bill paying, stock trading, and so on), for gathering information (e.g., Internet searches), and for entertainment (e.g., music and games). Along with these intended uses, however, the personal computer can also become a data-gathering device sensing all of these activities.
The use of the PC on the network can potentially generate data that can be analyzed to find out more about users of PCs than they anticipated or intended, including their buying habits, their reading and listening preferences, who they communicate with, and their interests and hobbies. Concerns about privacy will grow as the use of computers and networks expands into new areas. If we can’t keep data private with the current use of technology, how will we maintain our current understanding of privacy when the common computing and networking infrastructure includes our voting, medical, financial, travel, and entertainment records, our daily activities, and the bulk of our communications? As more aspects of our lives are recorded in systems for health care, finance, or electronic commerce, how are we to ensure that the information gathered is not used inappropriately to detect or deduce what we consider to be private information? How do we ensure the privacy of our thoughts and the freedom of our speech as the electronic world becomes a part of our government, central to our economy, and the mechanism by which we cast our ballots? As we become subject to surveillance in public and commercial spaces, how do we ensure that others do not track our every move?
As citizens of a democracy and participants in our communities, how can we guarantee that the privacy of putatively secret ballots is assured when electronic voting systems are used? The remainder of this chapter explores some relevant technology trends, describing current and projected technological capacity and relating it to privacy concerns. It also discusses computer, network, and system architectures and their potential impacts on privacy.
3.2 HARDWARE ADVANCES
Perhaps the most commonly known technology trend is the exponential growth in computing power: loosely speaking, the central processing unit in a computer will double in speed (or halve in price) every 18 months. Over the last 10 years this trend has meant about seven generations of hardware, which in turn means that the power of the central processing unit has increased by a factor of more than 100. The impact of this change on what is possible or reasonable to compute is hard to overestimate. Tasks that took an hour 10 years ago now take less than a minute.
Tasks that now take an hour would have taken days to complete a decade ago. The end result of this increase in computing speed is that many tasks that were once too complex to be automated can now be easily tackled by commonly available machines. While the increase in computing power implied by this exponential growth is well known and often cited, less appreciated are its economic implications: a decrease in the cost of computation by a factor of more than 100 over the past 10 years. One outcome is that the desktop computer used in the home today is far more powerful than the most expensive supercomputer of 10 years ago.
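The "factor of more than 100" figure follows directly from the doubling rule; as a rough check, assuming the 18-month doubling period cited above:

```latex
\[
  \text{doublings in a decade} = \frac{10\ \text{years}}{1.5\ \text{years}} \approx 6.7,
  \qquad
  2^{6.7} \approx 104
\]
```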
At the same time, the cell phones commonly used today are at least as powerful as the personal computers of a decade ago. This change in the economics of computing means that there are many more computers in sheer numbers than there were a decade ago, which in turn means that the amount of total computation available at a reasonable price is no longer a limiting factor in any but the most complex of computing problems. Nor is it merely central processing units (CPUs) that have shown dramatic improvements in performance and dramatic reductions in cost over the past 10 years. Dynamic random access memory (DRAM), which provides the working space for computers, has also followed a course similar to that for CPU chips. Over the past decade memory size has in some cases increased by a factor of 100 or more, which allows not only for faster computation but also for the ability to work on vastly larger data sets than was possible before. Less well known in the popular mind, but in some ways more dramatic than the trend in faster processors and larger memory chips, has been the expansion of capabilities for storing electronic information. The price of long-term storage has been decreasing rapidly over the last decade, and the ability to access large amounts of such storage has been increasing.
Storage capacity has been increasing at a rate that has outpaced the rate of increase in computing power, with some studies showing that it has doubled on average every 12 months. The result of this trend is that data can be stored for long periods of time in an economical fashion.
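To see how much faster a 12-month doubling period compounds than the processors' 18-month period, compare the two over the same decade (rough figures implied by the growth rates quoted above):

```latex
\[
  \text{storage: } 2^{10/1} \approx 1000\times
  \qquad\qquad
  \text{processing: } 2^{10/1.5} \approx 100\times
\]
```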
In fact, the economics of data storage has become inverted. Traditionally, data was discarded as soon as possible to minimize the cost of storing it, or at least moved from primary storage (disks) to secondary storage (tape), where it was more difficult to access. With the advances in the capacities of primary storage devices, it is now often more expensive to decide how to cull data or transfer it to secondary storage (and to spend the resources doing the culling or transferring) than it is to simply store it all on primary storage, adding new capacity when needed. The change in the economics of data storage has altered more than just the need to occasionally cull data. It has also changed the kind of data that organizations are willing to store. When persistent storage was a scarce resource, considerable effort was expended in ensuring that the data that were gathered were compressed, filtered, or otherwise reduced before being committed to persistent storage.
Often the purpose for which the data had been gathered was used to enhance this compression and filtering, resulting in the storing not of the raw data that had been gathered but of computed results based on those data. Since the computed results were task-specific, it was difficult or impossible to reuse the stored information for other purposes, in part because the compression and filtering caused a loss of general information that could not be recovered. With the increase in the capacity of long-term storage, reduction of data as they are gathered is no longer needed. And although compression is still used in many kinds of data storage, that compression is often reversible, allowing the re-creation of the original data set. The ability to re-create the original data set is of great value, as it allows more sophisticated analysis of the data in the future. But it also allows the data to be analyzed for purposes other than those for which they were originally gathered, and allows the data to be aggregated with data gathered in other ways for additional analysis.
Additionally, forms of data that were previously considered too large to be stored for long periods of time can now easily be placed on next-generation storage devices. For example, high-quality video streams, which can take up megabytes of storage for each second of video, were once far too large to be stored for long periods; the most that was done was to store samples of the video streams on tape. Now it is possible to store large segments of real-time video footage on various forms of long-term storage, keeping recent video footage online on hard disks and then archiving older footage on DVD storage. Discarding or erasing stored information does not eliminate the possibility of compromising the privacy of the individuals whose information had been stored. A recent study has shown that a large number of disk drives available for sale on the secondary market contain easily obtainable information that was placed on the drive by the former owner. Included in the information found by the study was banking account information, information about prescription drug use, and college application information. Even when the previous owners of the disk drive had gone to some effort to erase the contents of the drive, it was in most cases fairly easy to repair the drive in such a way that the data the drive had held were easily available. In fact, one of the conclusions of the study is that it is quite hard to really remove information from a modern disk drive; even when considerable effort has been put into removing the information, sophisticated “digital forensic” techniques can be used to re-create the data.
From the privacy point of view, this means that once data have been gathered and committed to persistent storage, it is very difficult to ever be sure that the data have been removed or forgotten—a point very relevant to the archiving of materials in a digital age. With more data, including more kinds of data, being kept in its raw form, the concern arises that every electronic transaction a person ever enters into can be kept in readily available storage, and that audio and video footage of all of the public activities for that person could also be available. This information, originally gathered for purposes of commerce, public safety, health care, or for some other reason, could then be available for uses other than those originally intended. The fear is that the temptation to use all of this information, either by a governmental agency or by private corporations or even individuals, is so great that it will be nearly impossible to guarantee the privacy of anyone from some sort of prying eye, if not now then in the future. The final hardware trend relevant to issues of personal privacy involves data-gathering devices.
The evolution of these devices has moved them from generating analog data to generating data in digital form; from devices on specialized networks to devices connected to larger networks; and from expensive, specialized devices deployed only in rare circumstances to cheap, ubiquitous devices either too small or too common to be generally noticed. Biometric devices, which sense physiological characteristics of individuals, also count as data-gathering devices. These sensors, from simple temperature and humidity sensors in buildings to the positioning systems in automobiles to video cameras used in public places to aid in security, continue to proliferate, pointing toward a world in which all of our physical environment is watched and sensed by sets of eyes and other sensors.
The ubiquitous connection of these sensors to the network is really a result of the transitive nature of connectivity. It is not in most cases the sensors themselves that are connected to the larger world. The standard sensor deployment has a group of sensors connected by a local (often specialized) network to a single computer. However, that computer is in turn connected to the larger network, either an intranet or the Internet itself. Because of this latter connection, the data generated by the sensors can be moved around the network like any other data once the computer to which the sensors are directly connected has received it.
The final trend of note in sensing devices is their nearly ubiquitous deployment. Whenever a computer mediates a use or transaction, information about that use or transaction can be (and often is) gathered and stored. This means that data can be gathered about far more people in far more circumstances than was possible 10 years ago. It also means that such information can be gathered about activities that intuitively appear to occur within the confines of the home, a place that has traditionally been a center of privacy-protected activities.
As more and more interactions are mediated by computers, more and more data can be gathered about more and more activities. The trend toward ubiquitous sensing devices has only begun, and it shows every sign of accelerating at an exponential rate similar to that seen in other parts of computing. New kinds of sensors are spreading rapidly, from radio-frequency identification (RFID) tags, already being mandated by entities such as Walmart and the Department of Defense, to medical sensors allowing constant monitoring of human health.
Single-sensor surveillance may be replaced in the future with multiple-sensor surveillance. The economic and health benefits of some ubiquitous sensor deployments are significant.
But the impact that those and other deployments will have in practice on individual privacy is hard to determine.
3.3 SOFTWARE ADVANCES
In addition to the dramatic and well-known advances in the hardware of computing have come significant advances in the software that runs on that hardware, especially in the area of data mining and information fusion/data integration techniques and algorithms. Owing partly to the new capabilities enabled by advances in the computing platform and partly to better understanding of the algorithms and techniques needed for analysis, the ability of software to analyze the information gathered and stored on computing machinery has made great strides in the past decade. In addition, new techniques in parallel and distributed computing have made it possible to couple large numbers of computers together to jointly solve problems that are beyond the scope of any single machine.
Although data mining is generally construed to encompass data searching, analysis, aggregation, and, for lack of a better term, archaeology, “data mining” in the strict sense of the term is the extraction of information implicit in data, usually in the form of previously unknown relationships among data elements. When the data sets involved are voluminous, automated processing is essential, and today computer-assisted data mining often uses machine learning, statistics, and visualization techniques to discover and present knowledge in a form that is easily comprehensible. Information fusion is the process of merging or combining multiple sources of information in such a way that the resulting information is more accurate, reliable, or robust as a basis for decision making than any single source of information would be. Information fusion often involves the use of statistical methods, such as Bayesian techniques and random effects modeling. Some information fusion approaches are implemented as artificial neural networks.
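To make the fusion idea concrete, the sketch below combines two independent noisy estimates of the same quantity by inverse-variance weighting, a standard Gaussian/Bayesian fusion rule; the sensor readings are invented for illustration.

```python
def fuse(estimates):
    """Fuse independent Gaussian estimates given as (mean, variance)
    pairs using inverse-variance weighting. The fused variance is
    smaller than that of any single source, i.e., the combined
    estimate is more reliable than any one input."""
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / precision
    return mean, 1.0 / precision

# Two hypothetical sensors measuring the same temperature:
mean, var = fuse([(21.0, 4.0), (23.0, 1.0)])
print(mean, var)  # 22.6 0.8 -- pulled toward the more reliable sensor
```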
Both data mining and information fusion have important everyday applications. For example, by using data mining to analyze the patterns of an individual’s previous credit card transactions, a bank can determine whether a credit card transaction today is likely to be fraudulent. By combining results from different medical tests using information fusion techniques, physicians can infer the presence or absence of underlying disease with higher confidence than if the result of only one test were available. These techniques are also relevant to the work of government agencies.
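A toy version of the credit card example: score a new transaction against the customer's own history and flag statistical outliers. Real fraud systems use far richer features and models; the data and threshold here are invented.

```python
import statistics

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount is more than `threshold`
    standard deviations above this customer's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (amount - mean) / stdev > threshold

past = [12.50, 8.00, 23.10, 15.75, 9.99, 18.40]  # invented purchase history
print(is_suspicious(past, 14.00))   # False -- typical purchase
print(is_suspicious(past, 450.00))  # True  -- flagged for review
```

The same deviation-from-pattern logic, applied to much larger and more varied data sets, underlies the government applications discussed next.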
For example, the protection of public health is greatly facilitated by early warning of outbreaks of disease. Such warning may be available through data mining of the highly distributed records of first-line health care providers and pharmacies selling over-the-counter drugs. Unusually high buying patterns of such drugs (e.g., cold remedies) in a given locale might signal the previously undetected presence and even the approximate geographic location of an emerging epidemic threat (e.g., a flu outbreak). Responding to a public health crisis might be better facilitated with automated access to and screening analyses of patient information at clinics, hospitals, and pharmacies. Research on these systems is today in its infancy, and it remains to be seen whether such systems can provide reliable warning on the time scales needed by public health officials to respond effectively.
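A hypothetical, highly simplified version of such a warning system: aggregate sales records reported by individual pharmacies by locale and flag any locale whose total is far above its own historical baseline. All names and numbers are invented.

```python
from collections import defaultdict

def flag_locales(sales_records, baseline, sigmas=3.0):
    """sales_records: (locale, units_sold_today) pairs from many
    pharmacies; baseline: locale -> (mean, stdev) of historical daily
    totals. Returns locales whose aggregate sales are anomalously high."""
    totals = defaultdict(int)
    for locale, units in sales_records:
        totals[locale] += units  # aggregate the distributed records
    return [loc for loc, total in totals.items()
            if total > baseline[loc][0] + sigmas * baseline[loc][1]]

records = [("Springfield", 90), ("Springfield", 140), ("Shelbyville", 60)]
baseline = {"Springfield": (120.0, 15.0), "Shelbyville": (70.0, 10.0)}
print(flag_locales(records, baseline))  # ['Springfield'] -- 230 >> 165
```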
Data-mining and information fusion technologies are also relevant to counterterrorism, crisis management, and law enforcement. Counterterrorism involves, among other things, the identification of terrorist operations before execution through analysis of signatures and database traces made during an operation’s planning stages.
Intelligence agencies also need to pull together large amounts of information to identify the perpetrators of a terrorist attack. Responding to a natural disaster or terrorist attack requires the quick aggregation of large amounts of information in order to mobilize and organize first-responders and assess damage.
Law enforcement must often identify perpetrators of crimes on the basis of highly fragmentary information—e.g., a suspect’s first name, a partial license number, and vehicle color. In general, the ability to analyze large data sets can be used to discern statistical trends or to allow broad-based research in the social, economic, and biological sciences, which is a great boon to all of these fields. But the ability can also be used to facilitate target marketing, enable broad-based e-mail advertising campaigns, or (perhaps most troubling from a privacy perspective) discern the habits of targeted individuals. The threats to privacy are more than just the enhanced ability to track an individual through a set of interactions and activities, although that by itself can be a cause for alarm. It is now possible to group people into smaller and smaller groups based on their preferences, habits, and activities. Nothing categorically rules out the possibility that in some cases the size of the group can be made as small as one, thus identifying an individual based on some set of characteristics having to do with the activities of that individual.
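The shrinking-group point is easy to demonstrate: each additional attribute partitions a population further, and the intersection of a few individually common traits is often unique. A minimal sketch with invented records:

```python
from collections import Counter

people = [  # invented records; each trait alone is unremarkable
    {"zip": "02139", "birth_year": 1975, "car": "sedan"},
    {"zip": "02139", "birth_year": 1975, "car": "truck"},
    {"zip": "02139", "birth_year": 1980, "car": "sedan"},
]

def group_sizes(records, attrs):
    """Count how many people share each combination of attributes."""
    return Counter(tuple(r[a] for a in attrs) for r in records)

print(group_sizes(people, ["zip"]))                       # one group of 3
print(group_sizes(people, ["zip", "birth_year", "car"]))  # three groups of 1
```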
Furthermore, data used for this purpose may have been gathered for other, completely different reasons. For example, cell phone companies must track the locations of cell phones on their network in order to determine the tower responsible for servicing any individual cell phone. But these data can be used to trace the location of cell-phone owners over time. Temperature and humidity sensors used to monitor the environment of a building can generate data that indicate the presence of people in particular rooms. The information accumulated in a single database for one reason can easily be used for other purposes, and the information accumulated in a variety of databases can be aggregated to allow the discovery of information about an individual that would be impossible to find out given only the information in any single one of those databases.
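The cell phone example takes only a few lines to make concrete: tower logs kept for call routing can be reordered into a per-phone movement trace. The log format and data below are hypothetical.

```python
from collections import defaultdict

tower_log = [  # (time, phone_id, tower_id): records kept for call routing
    ("08:02", "phone-A", "tower-12"),
    ("08:41", "phone-B", "tower-07"),
    ("09:15", "phone-A", "tower-19"),
    ("11:30", "phone-A", "tower-33"),
]

traces = defaultdict(list)
for time, phone, tower in sorted(tower_log):
    traces[phone].append((time, tower))  # routing data becomes a trajectory

print(traces["phone-A"])  # one phone's approximate path through the day
```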
The end result of the improvements in both the speed of computational hardware and the efficiency of the software that is run on that hardware is that tasks that were unthinkable only a short time ago are now possible on low-cost, commodity hardware running commercially available software. Some of these new tasks involve the extraction of information about the individual from data gathered from a variety of sources. A concern from the privacy point of view is that—given the extent of the ability to aggregate, correlate, and extract new information from seemingly innocuous information—it is now difficult to know what activities will in fact compromise the privacy of an individual.
3.4 INCREASED CONNECTIVITY
A third technology trend, the trend toward increased connectivity in the digital world, has a multiplicative effect.
The growth of network connectivity—obvious over the past decade in the World Wide Web’s expansion from a mechanism by which physicists could share information to a global phenomenon, used by millions to do everything from researching term papers to ordering books—can be traced back to the early days of local area networks and the origin of the Internet: growth in the number of nodes on the Internet has been exponential over a period that began roughly in 1980 and continues to this day. Once stand-alone devices connected with each other through the use of floppy disks or dedicated telephone lines, computers are now networked devices that are (nearly) constantly connected to each other. A computer that is connected to a network is not limited by its own processor, software, and storage capacity; it can potentially make use of the computational power of the other machines connected to that network and of the data stored on those other computers. The additional power is characterized by Metcalfe’s law, which states that the power of a network of computers increases in proportion to the number of pair-wise connections that the network enables. (The validity of Metcalfe’s law rests on the assumption that every connection in a network is equally valuable; in practice, certain nodes in many networks are much more valuable than others, suggesting that the value may increase less rapidly than the number of possible pair-wise connections.)
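In its idealized form (setting aside the caveat just noted), Metcalfe's law says the value of an n-node network grows like the number of possible pairs of nodes:

```latex
\[
  V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^2}{2}
  \quad\text{for large } n .
\]
```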
A result of connectivity is the ability to access information stored or gathered at a particular place without having physical access to that place. It is no longer necessary to be able to actually touch a machine to use that machine to gather information or to gain access to any information stored on the machine. Controlling access to a physical resource is a familiar concept for which we have well-developed intuitions, institutions, and mechanisms that allow us to judge the propriety of access and to control that access.
These intuitions, institutions, and mechanisms are much less well developed in the case of networked access. The increased connectivity of computing devices has also resulted in a radical decrease in the transaction costs for accessing information. This has had a significant impact on the question of what should be considered a public record, and how those public records should be made available.
Much of the information gathered by governments at various levels is considered public record. Traditionally, the costs (both in monetary terms and in terms of time and human aggravation) of accessing such public records have been high. To look at the real estate transactions for a local area, for example, required physically going to the local authority that stored those records, filling out the forms needed for access, and then viewing the records at the courthouse, tax office, or other government office.
When these records are made available through the World Wide Web, the transaction costs of accessing those records are effectively zero, making it far easier for the casual observer to view such records. Connectivity is also relevant to privacy on a scale smaller than that of the entire Internet. Corporate and government intranets allow the connection and sharing of information between the computers of a particular organization. The purpose of such intranets is often for the sharing of information between various computers (as opposed to the sharing of information between the users of computers). Such sharing is a first step toward the aggregation of various data repositories, combining information collected for a variety of reasons to enable that larger (and richer) data set to be analyzed in an attempt to extract new forms of information.
Along with the increasing connectivity provided by networking, the networks themselves are becoming more capable as a mechanism for sharing data. Bandwidth, the measure of how much data can be transferred over the network in a given time, has been increasing dramatically. New network technologies allowing some filtering and analyzing of data as it flows through the network are being introduced. Projects such as SETI@home and technologies like grid computing are trying to find ways of utilizing the connectivity of computers to allow even greater computational levels. From the privacy point of view, interconnectivity seems to promise a world in which any information can be accessed from anywhere at any time, along with the computational capabilities to analyze the data in any way imaginable. This interconnectivity seems to mean that it is no longer necessary to actually have data on an individual on a local computer; the data can be found somewhere on another computer that is connected to the local computer, and with the seemingly unlimited computing ability of the network of interconnected machines, finding and making use of that information is no longer a problem.
Ubiquitous connectivity has also given impetus to the development of digital rights management technologies (DRMTs), which are a response to the fact that when reduced to digital form, text, images, sounds, and other forms of content can be copied freely and perfectly. DRMTs harness the power of the computer and the network to enforce predefined limits on the possible distribution and use of a protected work. These predefined limits can be very fine-grained.
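To suggest how fine-grained such limits can be, here is a toy license check. The policy fields are invented for illustration and do not correspond to any real DRM scheme's format.

```python
from datetime import date

license_terms = {  # hypothetical per-work usage policy
    "view": True,
    "print": False,
    "copy_excerpt_max_words": 200,
    "expires": date(2026, 1, 1),
}

def permitted(action, terms, today, words=0):
    """Check a requested action against the work's predefined limits."""
    if today >= terms["expires"]:
        return False  # all rights lapse on expiry
    if action == "copy_excerpt":
        return words <= terms["copy_excerpt_max_words"]
    return bool(terms.get(action, False))

today = date(2025, 6, 1)
print(permitted("view", license_terms, today))               # True
print(permitted("print", license_terms, today))              # False
print(permitted("copy_excerpt", license_terms, today, 500))  # False
```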
3.5 TECHNOLOGIES COMBINED INTO A DATA-GATHERING SYSTEM
Each of the technology trends discussed above can be seen individually as having the potential to threaten the privacy of the individual. Combined into an overall system, however, such technologies seem to pose a far greater threat to privacy. The existence of ubiquitous sensors, generating digital data and networked to computers, raises the prospect of data generated for much of what individuals do in the physical world. Increased use of networked computers, which are themselves a form of activity sensor, allows the possibility of a similar tracking of activities in the electronic world. And increased and inexpensive data storage capabilities support retention of data by default.
Once stored, data are potentially available for analysis by any computer connected via a network to that storage. Networked computers can share any information that they have, and can aggregate information held by them separately. Thus it is possible not only to see all of the information gathered about an individual, but also to aggregate the information gathered in various places on the network into a larger view of the activities of that individual. Such correlations create yet more data on an individual that can be stored in the overall system, shared with others on the network, and correlated with the sensor data that are being received.
Finally, the seemingly unlimited computing power promised by networked computers would appear to allow any kind of analysis of the data concerning the individual to be done thoroughly and quickly. Patterns of behavior, correlations between actions taken in the electronic and the physical world, and correlations between data gathered about one individual and that gathered about another are capable, in principle, of being found, reported, and used to create further data about the individual being examined. Even if such analysis is impractical today, the data will continue to be stored, and advances in hardware and software technology may appear that allow the analysis to be done in the future. At the very least, these technology trends—in computation, sensors, storage technology, and networking—change the rules that have governed surveillance. It is the integration of both hard and soft technologies of surveillance and analysis into networks and systems that underlies the evolution of what might be called traditional surveillance to the “new” surveillance. Compared to traditional surveillance, the new surveillance is less visible and more continuous in time and space.
3.6 DATA SEARCH COMPANIES
All of the advances in information technology for data aggregation and analysis have led to the emergence of companies that take the raw technology discussed above and combine it into systems that allow them to offer directly to their customers a capability for access to vast amounts of information. Search engine services, such as those provided by Google, Yahoo!, and MSN, harness the capabilities of thousands of computers, joined together in a network, that when combined give huge amounts of storage and vast computational facilities. Such companies have linked these machines with a software infrastructure that allows the finding and indexing of material on the World Wide Web. The end result is a service that is used by millions every day. Rather than requiring that a person know the location of information on the World Wide Web (via, for example, a uniform resource locator, or URL), a search engine enables the user to find that information by describing it, typically by typing a few keywords that might be associated with that information.
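At the heart of that indexing infrastructure is the inverted index, a mapping from each keyword to the set of documents containing it. A miniature sketch (the pages and URLs are made up):

```python
from collections import defaultdict

pages = {  # invented URL -> text pairs standing in for crawled pages
    "example.org/a": "privacy and information technology",
    "example.org/b": "information systems development",
    "example.org/c": "privacy in systems design",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)  # inverted index: word -> URLs containing it

def search(*keywords):
    """Return URLs containing every keyword (ranking omitted)."""
    hits = [index.get(w, set()) for w in keywords]
    return set.intersection(*hits) if hits else set()

print(search("privacy", "systems"))  # {'example.org/c'}
```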
Using sophisticated algorithms that are the intellectual property of the company, the service returns links to locations where that information can be found. This functionality, undreamed of a decade ago, has revolutionized the way that the World Wide Web is used. Further, these searches can often be conducted for free, as many search companies make money by selling advertising that is displayed along with the search results.
While it is hard to imagine using the Web without search services, their availability has raised privacy concerns. Using a search engine to assemble information about an individual has become common practice (so common that the term “to Google” has entered the language). When the Web news site CNET published personal information about the president of Google that had been obtained by using the Google service, Google charged CNET with publishing private information and announced that it would not publicly speak to CNET for a year in retribution. This is an interesting case, because the information that was obtained was accessible through the Web to anyone, but would have been difficult to find without the services offered by Google. Whereas privacy could perhaps once have been maintained because of the difficulty of simply finding the available information, the Google service made the information easy to find, and made it available for free. A second privacy concern arises regarding the information that search engine companies collect and store about specific searches performed by users.
To service a user’s search request, the specific search terms need not be kept for longer than it takes to return the results of that search. But search engine companies keep that information anyway for a variety of purposes, including marketing and enhancement of search services provided to users. The potential for privacy-invasive uses of such information was brought into full public view in a request in early 2006 by the Department of Justice (DOJ) for search data from four search engines, including search terms queried and Web site addresses, or URLs, stored in each search engine’s index but excluding any user identifying information that could link a search string back to an individual. The intended DOJ use of the data was not to investigate a particular crime but to study the prevalence of pornographic material on the Web and to evaluate the effectiveness of software filters to block those materials in a case testing the constitutionality of the Child Online Protection Act (COPA). The four search engines were those associated with America Online, Microsoft, Yahoo!, and Google. The first three of these companies each agreed to provide at least some of the requested search data. Google resisted the original subpoena demanding this information; subsequently, the information sought was narrowed significantly in volume and character, and Google was ultimately ordered by a U.S.
District Court to provide a much more restricted set of data. Although the data requested did not include personally identifiable information of users, this case has raised a number of privacy concerns about possible disclosures in the future of the increasing volumes of user-generated search information. Google objected to the original request for a variety of reasons, asserting a general interest in protecting its users’ privacy. Additionally, Google believed the original request was overly broad, as it included all search queries entered into the search engine during a 2-month period and all URLs in Google’s index. In negotiations with the DOJ, the data request was reduced to a sampling of 50,000 URLs and 5,000 search terms.
In considering only the DOJ’s modified request, the court decided to further limit the type of data released to include only URLs and not search terms. Among the privacy implications considered in this ruling was the recognition that personally identifying information, although not requested, might be available in the text of searches performed (e.g., searches to see whether personal information such as Social Security numbers or credit card information is on the Internet, or so-called vanity searches to check what information is associated with one’s own name). The court also acknowledged the possibility of the information being shared with other government authorities if text strings raised national security issues (e.g., “bomb placement white house”). Although this case was seen as a partial victory for Google and for the privacy of its users, the court as well as others acknowledged that the case could have broader implications. Though outside its ruling, the court could foresee the possibility of future requests to Google, particularly if the narrow collection of data used in the DOJ’s study was challenged in the COPA case. Others have suggested that this case underscores the larger problem of how to protect Internet user privacy, particularly as more user-generated information is being collected and stored for unspecified periods of time, which makes it increasingly vulnerable to subpoenas. Many of the concerns about compromising user privacy were illustrated graphically when, in August 2006, AOL published on the Web a list of 20 million Web search inquiries made by 658,000 users over a 3-month period.
Notes:
14. Declan McCullagh, “Google to Feds: Back Off,” CNET News.com, February 17, 2006.
15. Order Granting in Part and Denying in Part Motion to Compel Compliance with Subpoena Duces Tecum, United States District Court for the Northern District of California, San Jose Division, No. CV 06-8006MISC JW, p. 4.
16, 17. United States District Court for the Northern District of California, San Jose Division, court ruling.
18. Thomas Claburn, “Google’s Privacy Win Could Be Pyrrhic Victory,” InformationWeek, March 22, 2006.
AOL sought to anonymize users by substituting a code number for their login names, but the list of inquiries sorted by code number shows the topics in which a person was interested over many different searches. A few days later, AOL took down the 439-megabyte file after many complaints were received that the file violated user privacy. AOL acknowledged that the publication of the data was a violation of its own internal policies and issued a strongly worded apology. Some users were subsequently identified by name.
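The AOL release illustrates why substituting a code number for a login is pseudonymization, not anonymization: the code still links every query by the same person, and the queries themselves can identify that person. A sketch with invented data:

```python
import secrets

raw_log = [  # invented search log: (login, query)
    ("jsmith", "plumbers in lawrence kansas"),
    ("jsmith", "john smith lawrence ks"),      # a "vanity search"
    ("jsmith", "treatment for insomnia"),
]

codes = {}
def pseudonymize(user):
    """Replace a login with a stable random code, as AOL did."""
    if user not in codes:
        codes[user] = secrets.token_hex(4)
    return codes[user]

released = [(pseudonymize(u), q) for u, q in raw_log]
for code, query in released:
    print(code, query)
# All three queries share one code, so a reader can combine them:
# the vanity search suggests the name; the rest reveal private interests.
```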
A related kind of IT-enabled company, the data aggregation company, is discussed further later in this report.
3.7 BIOLOGICAL AND OTHER SENSING TECHNOLOGIES
The technology trends outlined thus far in this chapter are all well established, and technologies that follow these trends are deployed in actual systems.
There is an additional trend, only now in its beginning stages, that promises to extend the sensing capabilities beyond those that are possible with the kinds of sensors available today. These are biological sensing technologies, including such things as biometric identification schemes and DNA analysis. Biometric technologies use particular biological markers to identify individuals. Fingerprinting for identification is well known and well established, but interest in other forms of biometric identification is high. Technologies using identifying features as varied as retinal patterns, walking gait, and facial characteristics are all under development and show various levels of promise.
Many of these biometric technologies differ from the more standard and currently used biometric identification schemes in two ways: first, these technologies promise to allow the near-real-time identification of an individual from a distance, in a way that is non-invasive and, perhaps, incapable of being detected by the subject being identified; second, some of these mechanisms facilitate automated identification that can be done solely by the computer, without the aid of a human being. Such identification could be done more cheaply and far more rapidly than human-mediated forms of identification. Joined into a computing system like those discussed above, such identification mechanisms offer a potential for tracing all of the activities of an individual. Whereas video camera surveillance now requires human watchers, automated face-identification systems could allow such logging of identifications to be done automatically and on a mass scale.
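Automated identification of this kind typically reduces to comparing a measured feature vector against a gallery of enrolled templates; a minimal nearest-neighbor sketch (the vectors, names, and threshold are all invented):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

gallery = {  # hypothetical enrolled biometric templates
    "alice": (0.9, 0.1, 0.4),
    "bob":   (0.2, 0.8, 0.5),
}

def identify(probe, threshold=0.95):
    """Return the closest enrolled identity, or None if no good match."""
    name, score = max(((n, cosine(probe, t)) for n, t in gallery.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

print(identify((0.88, 0.12, 0.41)))  # 'alice' -- logged with no human in the loop
```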
Chapter 1: The Context of SA&D Methods
Objectives: Define information system and name seven types of information system applications. Identify different types of stakeholders who use or develop information systems, and give examples of each. Define the unique role of systems analysts in the development of information systems. Identify the skills needed to successfully function as an information systems analyst. Describe current business drivers that influence information systems development. Describe current technology drivers that influence information systems development. Briefly describe a simple process for developing information systems.
A Framework for Systems Analysis and Design
A system is a group of interrelated components that function together to achieve a desired result. An information system (IS) is an arrangement of people, data, processes, and information technology that interact to collect, process, store, and provide as output the information needed to support an organization. Information technology is a contemporary term that describes the combination of computer technology (hardware and software) with telecommunications technology (data, image, and voice networks).
Conversion Notes: The definition of system is new to this edition and was added to reinforce systems thinking. This is a more concise definition of “information system” than in previous editions; it better reflects what information systems are and do rather than how they are used.
Some books use the term “computer technology.” We prefer the more contemporary term “information technology” as a superset of computer technology.
Types of Information Systems: transaction processing systems (TPS), management information systems (MIS), decision support systems (DSS), expert systems (ES), communications systems, collaboration systems, and office automation systems.
Teaching Notes: These definitions can be useful to help students understand what an information system is in all its varieties and flavors. Depending on the prerequisites of your course, you may want to cover these in more or less detail.
Stakeholders: Players in the Systems Game
A stakeholder is any person who has an interest in an existing or proposed information system. Stakeholders can be technical or nontechnical workers.
They may also include both internal and external workers. Information workers are those workers whose jobs involve the creation, collection, processing, distribution, and use of information.
Knowledge workers are a subset of information workers whose responsibilities are based on a specialized body of knowledge. Teaching Notes Give examples of information workers and knowledge workers to reinforce the difference. Footnote – Information workers (sometimes called “white-collar workers”) have outnumbered blue-collar workers since 1957. Typically a knowledge worker has a degree or credential in some subject area (hence, they are often called “subject area experts”). Examples include engineers, scientists, accountants, lawyers, etc. Briefly describe a typical information system that students would be familiar with, such as an enrollment system for the college.
Invite the class to brainstorm who the stakeholders would be and which of them would be information workers or knowledge workers.
Stakeholders: system owners, system users, system designers, system builders, systems analysts, project managers, and external service providers (ESPs).
Teaching Notes: Using the information system described earlier (enrollment system or other) for the college, invite the class to identify individuals who might play the system owner role.
Systems Analysts
A systems analyst is a specialist who studies the problems and needs of an organization to determine how people, data, processes, and information technology can best accomplish improvements for the business. A programmer/analyst (or analyst/programmer) includes the responsibilities of both the computer programmer and the systems analyst. A business analyst focuses on only the non-technical aspects of systems analysis and design.
Teaching Notes: The business analyst title is becoming more popular because of the number of end users and other knowledge workers being assigned to systems analyst roles in organizations.
The Systems Analyst as a Problem-Solver
By “problems” that need solving, we mean: problems, either real or anticipated, that require corrective action; opportunities to improve a situation despite the absence of complaints; and directives to change a situation regardless of whether anyone has complained about the current situation.
Teaching Notes: It can be useful to present examples of each scenario from the instructor’s personal experiences.
The classification scheme is not mutually exclusive; that is, a project can be driven by multiple instances and combinations of problems, opportunities, and directives. A problem might be classified as both a true problem and an opportunity, or as an opportunity plus a directive.
Where Do Systems Analysts Work?