Friday, April 29, 2011

Latest Technology Connects Brain with Robotics

A new device developed in Japan by Honda Motor Company can analyze thought patterns and relay them as wireless commands.

Straight out of a science fiction tale, opening a car trunk or even controlling a home air conditioner now becomes merely a matter of thought.

Honda’s robot is known as Asimo. Shaped like a human, it receives wireless commands derived from electric currents on a person’s scalp. The system can also decipher changes in cerebral blood flow when a person thinks about one of four movements: moving the right hand, moving the left hand, running, and eating.
According to Honda, a leader in the field of robotics, the technology is not quite ready for a live demonstration and remains in the research stage for a number of reasons. Distractions in a person’s thinking could throw Asimo off, and brain patterns differ greatly among individuals, which means advance study of at least two years is required for the technology to function. The reading device also needs to be smaller so it can be portable.

A recent video shows a seated man wearing a helmet and thinking about moving his right hand. His thought is transmitted to the robot through cords attached to his head inside the helmet. It takes a few seconds, but then Asimo, dutifully programmed to respond to brain signals, lifts its right arm.

Yasuhisa Arai, an executive at Honda Research Institute, had this to say regarding the project:
“I’m talking about dreams today. Practical uses are still way into the future. Our products are for people to use. It is important for us to understand human behavior. We think this is the ultimate in making machines move.”

Deciphering brain patterns represents an important breakthrough in medical research. All over the world, scientists are attempting to unlock the mysteries of the human brain, but Honda believes its research in the field is among the most advanced and least invasive anywhere.

Because the sensors rest on the scalp rather than being embedded under the skin, the procedure is painless and easily tolerated. The Japanese government is encouraging this robotic research, as it sees the industry as a path to growth.


Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software. The word "robot" was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. The term "robotics" was coined by Isaac Asimov in his 1941 science fiction short story "Liar!"


Multimedia is media and content that uses a combination of different content forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective describing a medium as having multiple content forms. The term is used in contrast to media which only use traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, and interactivity content forms.
Multimedia is usually recorded and played, displayed or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia (as an adjective) also describes electronic media devices used to store and experience multimedia content. Multimedia is distinguished from mixed media in fine art; by including audio, for example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.

Categorization of multimedia

Multimedia may be broadly divided into linear and non-linear categories. Linear content progresses without any navigational control for the viewer, such as a cinema presentation. Non-linear content offers the user interactivity to control progress, as in a computer game or self-paced computer-based training. Hypermedia is an example of non-linear content.
Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation system. A live multimedia presentation may allow interactivity via an interaction with the presenter or performer.

Major characteristics of multimedia

Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-demand.
Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.
The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example to make it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience.
Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization on multiple forms of content over time. Examples of these range from multiple forms of content on Web sites, such as photo galleries with both images (pictures) and titles (text) updated by users, to simulations whose coefficients, events, illustrations, animations or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming. In addition to seeing and hearing, haptic technology enables virtual objects to be felt. Emerging technology involving illusions of taste and smell may also enhance the multimedia experience.

Image Processing

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
Image processing usually refers to digital image processing, but optical and analog image processing also are possible. This article is about general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.
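The idea of treating an image as a two-dimensional signal can be sketched in a few lines of Python. The example below applies a 3x3 mean (box) filter, a basic smoothing operation, to a tiny grayscale "image" stored as a nested list of brightness values; real pipelines would use a library such as NumPy or OpenCV, so this is only an illustration.

```python
# A minimal sketch of image processing: a grayscale image as a 2-D signal,
# smoothed with a 3x3 mean (box) filter. Values are brightness levels 0-255.

def box_filter(image):
    """Return a new image where each interior pixel is the average
    of its 3x3 neighbourhood; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so borders are preserved
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(image[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9
    return out

# A 3x3 image with a single bright pixel: the filter spreads it out.
img = [[0, 0, 0],
       [0, 90, 0],
       [0, 0, 0]]
print(box_filter(img)[1][1])  # the centre becomes the neighbourhood mean, 10
```

The same structure (loop over pixels, combine a neighbourhood) underlies many standard signal-processing operations such as sharpening and edge detection; only the weights change.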

Object Oriented Programming

Object-oriented programming (OOP) is a programming paradigm using "objects" – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. Many modern programming languages now support OOP.

Many people first learn to program using a language that is not object-oriented. Simple, non-OOP programs may be one long list of commands. More complex programs will group lists of commands into functions or subroutines each of which might perform a particular task. With designs of this sort, it is common for the program's data to be accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects.
By contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead the data is accessed by calling specially written 'functions', commonly called methods, which are either bundled in with the data or inherited from "class objects" and act as the intermediaries for retrieving or modifying those data. The programming construct that combines data with a set of methods for accessing and managing those data is called an object.
An object-oriented program will usually contain different types of objects, each type corresponding to a particular kind of complex data to be managed or perhaps to a real-world object or concept such as a bank account, a hockey player, or a bulldozer. A program might well contain multiple copies of each type of object, one for each of the real-world objects the program is dealing with. For instance, there could be one bank account object for each real-world account at a particular bank. Each copy of the bank account object would be alike in the methods it offers for manipulating or reading its data, but the data inside each object would differ reflecting the different history of each account.
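The bank-account example above can be sketched directly in Python. One class defines the methods shared by every account; each instance then holds its own data, reflecting the different history of each account. The class and method names are illustrative, not from any real banking library.

```python
# One class, many instances: each object shares the same methods
# but carries its own data (owner and balance).

class BankAccount:
    def __init__(self, owner, balance=0):
        self.owner = owner      # data fields live inside the object
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Two copies of the same type of object, with different data inside.
alice = BankAccount("Alice")
bob = BankAccount("Bob", 100)
alice.deposit(50)
bob.withdraw(30)
print(alice.balance, bob.balance)  # 50 70
```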
Objects can be thought of as wrapping their data within a set of functions designed to ensure that the data are used appropriately, and to assist in that use. The object's methods will typically include checks and safeguards that are specific to the types of data the object contains. An object can also offer simple-to-use, standardized methods for performing particular operations on its data, while concealing the specifics of how those tasks are accomplished. In this way alterations can be made to the internal structure or methods of an object without requiring that the rest of the program be modified. This approach can also be used to offer standardized methods across different types of objects. As an example, several different types of objects might offer print methods. Each type of object might implement that print method in a different way, reflecting the different kinds of data each contains, but all the different print methods might be called in the same standardized manner from elsewhere in the program. These features become especially useful when more than one programmer is contributing code to a project or when the goal is to reuse code between projects.
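The "print methods" point above is the essence of polymorphism: different object types implement the same method name in their own way, yet callers invoke it identically. A small sketch, with invented example classes:

```python
# Two unrelated types that both offer a describe() method.
# The caller does not need to know which kind of object it holds.

class Invoice:
    def __init__(self, total):
        self.total = total
    def describe(self):
        return f"Invoice for ${self.total}"

class Customer:
    def __init__(self, name):
        self.name = name
    def describe(self):
        return f"Customer named {self.name}"

for obj in (Invoice(120), Customer("Ada")):
    print(obj.describe())
```

Adding a third type with its own describe() requires no change to the loop, which is what makes this style useful when several programmers contribute code to one project.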
Object-oriented programming has roots that can be traced to the 1960s. As hardware and software became increasingly complex, manageability often became a concern. Researchers studied ways to maintain software quality and developed object-oriented programming in part to address common problems by strongly emphasizing discrete, reusable units of programming logic. The technology focuses on data rather than processes, with programs composed of self-sufficient modules ("classes"), each instance of which ("objects") contains all the information needed to manipulate its own data structure ("members"). This is in contrast to the existing modular programming that had been dominant for many years that focused on the function of a module, rather than specifically the data, but equally provided for code reuse, and self-sufficient reusable units of programming logic, enabling collaboration through the use of linked modules (subroutines). This more conventional approach, which still persists, tends to consider data and behavior separately.
An object-oriented program may thus be viewed as a collection of interacting objects, as opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to perform. In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent "machine" with a distinct role or responsibility. The actions (or "methods") on these objects are closely associated with the object. For example, OOP data structures tend to 'carry their own operators around with them' (or at least "inherit" them from a similar object or class).

Database Management Systems

A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as a database. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or the relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, users can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization of a database and to access and use the information within it. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.
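The point about asking questions in a query language instead of writing extraction programs can be seen with Python's built-in sqlite3 module, which embeds a small relational DBMS. The table and data below are invented for illustration.

```python
# Instead of custom file-handling code, the application states a question
# in SQL and the DBMS works out how to answer it. Uses an in-memory database.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Ann", "IT", 52000), ("Raj", "IT", 61000),
                  ("Mei", "HR", 48000)])

# A simple question in the query language, not a hand-written program:
rows = conn.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('HR', 48000.0), ('IT', 56500.0)]
conn.close()
```

The same query would keep working if the DBMS changed how it stored the rows on disk, which is exactly the separation of logical organization from physical storage described above.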

Information Systems

Information Systems (IS) is an academic and professional discipline bridging the business field and the well-defined computer science field, and it is evolving toward a new scientific area of study. The information systems discipline is therefore supported by the theoretical foundations of information and computation, giving scholars unique opportunities to explore the academics of various business models as well as related algorithmic processes within a computer science discipline. Typically, information systems (or the more common legacy information systems) include people, procedures, data, software, and hardware (by degree) that are used to gather and analyze digital information. Specifically, computer-based information systems are complementary networks of hardware and software that people and organizations use to collect, filter, process, create, and distribute data. Computer Information Systems (CIS) is often a track within the computer science field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society. Overall, the IS discipline emphasizes functionality over design.
The history of information systems coincides with the history of computer science, which began long before the modern discipline of computer science emerged in the twentieth century. Regarding the circulation of information and ideas, numerous legacy information systems still exist today and are continuously updated to promote ethnographic approaches, to ensure data integrity, and to improve the social effectiveness and efficiency of the whole process. In general, information systems are focused on processing information within organizations, especially within business enterprises, and sharing the benefits with modern society.

Introduction of Compilers

A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program.
The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g., assembly language or machine code). If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source-to-source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language.
A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis (Syntax-directed translation), code generation, and code optimization.
Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementors invest a lot of time ensuring the correctness of their software.
The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser.
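The first phase listed above, lexical analysis, can be sketched in a few lines: the lexer turns a flat string of source text into a stream of classified tokens that the parser then consumes. The token set below is invented for illustration and does not correspond to any real language.

```python
# A minimal lexer: classify runs of characters into (kind, text) tokens.

import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    tokens = []
    for match in MASTER.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":            # whitespace carries no meaning here
            tokens.append((kind, match.group()))
    return tokens

print(tokenize("x = 40 + 2"))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

A real compiler would feed this token stream into the later phases (parsing, semantic analysis, code generation); parser generators automate writing exactly this kind of code from a grammar.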

Wireless Networks

A wireless network is any type of computer network that is not connected by cables of any kind. It is a method by which telecommunications networks and enterprise (business) installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using a transmission system based on radio waves. This implementation takes place at the physical level (layer) of the network structure.

Wireless networks continue to develop, and their usage grew further in 2010. Cellular phones are part of everyday wireless networks, allowing easy personal communications. Intercontinental network systems use radio satellites to communicate across the world. Emergency services such as the police utilize wireless networks to communicate effectively. Individuals and businesses use wireless networks to send and share data rapidly, whether in a small office building or across the world.
Another use for wireless networks is a cost effective means to connect to the Internet, in regions where the telecommunications infrastructure is both poor and lacking in resources, typically in rural areas and developing countries.
Compatibility issues also arise when dealing with wireless networks. Different devices may have compatibility problems, or might require modifications to solve them. Wireless networks are often slower than modern cable-connected Ethernet installations.
A wireless network is more vulnerable, because anyone can intercept, and sometimes divert, a broadcast network signal when point-to-point connections are used. Many wireless networks use WEP (Wired Equivalent Privacy) security systems, which have been found to be vulnerable to intrusion. Though WEP does block some intruders, the security problems have caused some businesses to continue using wired networks until a more suitable security system can be introduced. The use of suitable firewalls overcomes some of the security problems in wireless networks that are vulnerable to attempted unauthorized access.

World Wide Web

The World Wide Web, abbreviated as WWW or W3 and commonly known as the Web, is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them via hyperlinks. Using concepts from earlier hypertext systems, English engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web. At CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use "HyperText ... to link and access information of various kinds as a web of nodes in which the user can browse at will", and publicly introduced the project in December.
"The World-Wide Web was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project."
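The "web of nodes" idea rests on hyperlinks embedded in page markup, and a program can walk that markup to find the links. A minimal sketch using Python's standard-library html.parser on a hard-coded snippet (the page content and URLs are invented):

```python
# Collect the href targets of <a> tags: the hyperlinks that tie
# one hypertext document to the next.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # <a href="..."> is a hyperlink
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<p>See <a href="/docs">the docs</a> and <a href="/faq">the FAQ</a>.</p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/docs', '/faq']
```

Following each collected link and repeating the process is, in essence, how web crawlers traverse the Web.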


The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.
Most traditional communications media including telephone, music, film, and television are reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper, book and other print publishing are adapting to Web site technology, or are reshaped into blogging and web feeds. The Internet has enabled or accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

Computer Architecture

In computer science and computer engineering, computer architecture or digital computer organization is the conceptual design and fundamental operational structure of a computer system. It's a blueprint and functional description of requirements and design implementations for the various parts of a computer, focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.
It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.
Computer architecture comprises at least three main subcategories:
  • Instruction set architecture, or ISA, is the abstract image of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, word size, memory address modes, processor registers, and address and data formats.
  • Microarchitecture, also known as computer organization, is a lower-level, more concrete and detailed description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA. The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
  • System Design, which includes all of the other hardware components within a computing system, such as:
  1. System interconnects such as computer buses and switches
  2. Memory controllers and hierarchies
  3. CPU off-load mechanisms such as direct memory access (DMA)
  4. Issues like multiprocessing.
Once both ISA and microarchitecture have been specified, the actual device needs to be designed into hardware. This design process is called the implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.
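The ISA/implementation split described above can be illustrated in software: the instruction set below (an invented three-instruction machine, not any real ISA) is the programmer-visible contract, while the Python interpreter is merely one possible implementation of it; hardware would be another.

```python
# A toy register machine: LOAD a constant, ADD two registers, HALT.
# Programs written against this tiny "ISA" do not care how it is implemented.

def run(program):
    """Execute a list of (opcode, *operands) tuples; return final registers."""
    regs = {}
    pc = 0                              # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":                # LOAD dest, constant
            regs[args[0]] = args[1]
        elif op == "ADD":               # ADD dest, src1, src2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "HALT":
            break
        pc += 1
    return regs

program = [
    ("LOAD", "r1", 2),
    ("LOAD", "r2", 3),
    ("ADD",  "r0", "r1", "r2"),
    ("HALT",),
]
print(run(program)["r0"])  # 5
```

Swapping this interpreter for a faster one (or for real gates and registers) would not change the program above, which is exactly why the ISA and the implementation are specified separately.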
Implementation can be further broken down into three (not fully distinct) pieces:
  • Logic Implementation — design of blocks defined in the microarchitecture at (primarily) the register-transfer and gate levels.
  • Circuit Implementation — transistor-level design of basic elements (gates, multiplexers, latches etc.) as well as of some larger blocks (ALUs, caches etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.
  • Physical Implementation — physical circuits are drawn out, the different circuit components are placed in a chip floorplan or on a board and the wires connecting them are routed.
For CPUs, the entire implementation process is often called CPU design.
The term is also applied to wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.

Computer Networking

Computer networking or Data communications (Datacom) is the engineering discipline concerned with communication between computer systems or devices. A computer network is any set of computers or devices connected to each other with the ability to exchange data. Computer networking is sometimes considered a sub-discipline of telecommunications, computer science, information technology and/or computer engineering, since it relies heavily upon the theoretical and practical application of these scientific and engineering disciplines. The three types of networks are: the Internet, the intranet, and the extranet. Examples of different network methods are:
  • Local area network (LAN), which is usually a small network constrained to a small geographic area. An example of a LAN would be a computer network within a building.
  • Metropolitan area network (MAN), which is used for a medium-sized area; examples are a city or a state.
  • Wide area network (WAN) that is usually a larger network that covers a large geographic area.
  • Wireless LANs and WANs (WLAN & WWAN) are the wireless equivalent of the LAN and WAN.
All networks are interconnected to allow communication with a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet). Networking, routers, routing protocols, and networking over the public Internet have their specifications defined in documents called RFCs.
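The basic act of two devices exchanging data over a network can be sketched with Python's standard-library socket module. The example below runs both endpoints on one machine over the loopback interface, so it is only a sketch of the idea, not a real deployment.

```python
# A tiny TCP exchange: a server thread echoes the client's message back
# in upper case. Port 0 asks the OS to pick any free port.

import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())      # "process" the message and reply

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello network")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'HELLO NETWORK'
```

Exactly the same client code works whether the server is a few meters away over Bluetooth-bridged Ethernet or across the world over the Internet; the addressing changes, the programming model does not.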

Introduction to Operating Systems

An operating system (OS) is software, consisting of programs and data, that runs on computers, manages computer hardware resources, and provides common services for execution of various application software.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware, although the application code is usually executed directly by the hardware and will frequently call the OS or be interrupted by it. Operating systems are found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
Examples of popular modern operating systems for personal computers are: Microsoft Windows, Mac OS X, Linux, and Unix.
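The intermediary role described above can be seen from application code: a program asks the operating system for services (a process identity, file storage) rather than driving the hardware itself. A small sketch using Python's os and tempfile modules:

```python
# The OS as intermediary: the program requests services; the kernel handles
# scheduling, disk blocks, and buffering behind the scenes.

import os
import tempfile

pid = os.getpid()                       # process identity assigned by the OS

# File I/O: the OS allocates storage and buffers on our behalf.
fd, path = tempfile.mkstemp()
os.write(fd, b"managed by the OS")
os.close(fd)
with open(path, "rb") as f:
    contents = f.read()
os.remove(path)

print(pid > 0, contents)  # True b'managed by the OS'
```

Each of these calls crosses from the application into the operating system and back, which is the "intermediary" relationship the paragraph describes.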

Software Engineering

What is Software Engineering?
In Software Engineering we study how to build and maintain software systems in a controlled, predictable way. In particular, good Software Engineering should give control over three broad aspects of software production:

Functionality: The system should provide functions that meet the needs of all the stakeholders of the system. This will involve determining what these needs are, identifying and resolving conflicting needs, and designing a system that meets the identified needs.

Quality: The quality of a system determines whether it is usable in its intended context. The required quality of software varies enormously. Software on a desktop PC is usually of fairly low quality, while the control software for railway signalling applications needs to be of the highest standard. The required quality strongly determines the cost of software production.

Resources: The purchasers of a system usually enter into a contractual relationship with the supplier companies. The suppliers need to predict resource requirements (the quality of the development team, human effort, timescales, and supporting tools and equipment) in order to determine the cost of developing the system. Failure to predict costs can have serious consequences for the suppliers and purchasers of systems.

Software Engineering is a relatively young discipline by comparison with most other engineering disciplines. As a result, many projects still suffer from poor Software Engineering. Most of you will be familiar with many software project disasters.

Why is Software Engineering Important?

It is clear that software is important:

Critical applications: Software has found its way into many applications where failure has serious consequences. For example: car braking and steering, process control, safety systems in hazardous processes, civilian avionics, and communication networks and telephony.

Competitiveness: Software is seen as the key to competitiveness in many areas. In retail, finance and entertainment, e-commerce is seen as a critical development; in many other areas of economic activity good software is seen as a key element in the competitiveness of firms.

Economically: The estimated value of systems containing embedded software will exceed  dollars in the next few years. This is only one market for software; there are many others.

This does not mean that Software Engineering always succeeds at the moment. Critics point to well-publicised failures to supply well-engineered software systems by suppliers who do attempt to use best practice. However, these well-publicised failures mask many projects that are successful and are delivered on time and on budget. We can argue that the aims of Software Engineering are important and that in some contexts those aims can be realized.

Software Products

Stakeholders in a software product are usually concerned with two broad categories of characteristics of software:

Functionality: The important characteristic of a function supplied by software is that it is either present or absent. Stakeholders are usually concerned that the software has all the required functions present (or at least that they can be supplied eventually).

Attributes: An attribute of a software product is something that can be measured (directly or indirectly) and that measurement lies in some range. Stakeholders are also interested in seeing that the attributes of a software product meet some minimum level. Typical attributes are:

Maintainability: This is a measure of how easy a system is to maintain during its deployed life. This cannot be measured directly before the system goes into operation because it depends on many features of the software and on what changes the system will be expected to undergo. At this stage the Maintainability of a system is assessed qualitatively on the basis of inspections and measures of the quality of the structure of the code. In operation, Maintainability can be measured (usually as mean time to repair). This measure is only significant if the product is used for a long time and/or has a large number of installations.
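The in-operation measure mentioned above, mean time to repair, is simple arithmetic: total repair effort divided by the number of repairs. A minimal sketch, with invented sample data:

```python
# Mean time to repair (MTTR) from a log of repair durations.

def mean_time_to_repair(repair_hours):
    """Average hours spent per repair over the deployed life so far."""
    return sum(repair_hours) / len(repair_hours)

# Hours spent fixing each defect reported after deployment (sample data).
repairs = [4.0, 1.5, 8.0, 2.5]
print(mean_time_to_repair(repairs))  # 4.0
```

As the paragraph notes, the figure only becomes meaningful once enough repairs have accumulated, i.e. with long use or many installations.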

Dependability: This is a measure of how “trustworthy” the software is. Usually this is a combined measure of the safety, reliability, availability and security of a system. The issues of measurement of Dependability are similar to those of Maintainability.

Efficiency: For some systems it is important to keep the use of system resources (time, memory, bandwidth) to a minimum. This is often at the cost of added complexity in the software. Improving the efficiency of a system often involves a detailed analysis of the interactions between the different modules making up the system; as a consequence, the cost of improving efficiency often grows non-linearly with the size of the system and the required efficiency.

Usability: Usability is a measure of how easy the system is to use. Again this is hard to measure since it arises from many factors. Often it is approximated by very rough measures, such as the time taken to learn to carry out some operation.

Attributes can make conflicting demands on the software product. For example, improving efficiency may lead to more complex interactions between software modules and to more interactions between formerly independent modules.
Changes like these can have a serious effect on the Maintainability of a system because more interaction between modules can make tracking down and fixing errors much more difficult.

Software Production Activities
The production of software involves a number of different activities during which we try to ensure we deliver the required functions and the required level of attributes. We can group production activities into four broad categories:

Specification: This is a description of what the software has to do and sets acceptable levels for software attributes. For most software systems going from the user needs to a statement of requirements and then to a precise specification is a difficult and error prone task. The study of Requirements Engineering is an increasingly important part of Software Engineering.

Design: This covers the high-level structural description of the architecture of the system, through the detailed design of the individual components in the system and finally to the implementation of the system on a particular computing platform (usually hardware plus operating software).

Validation and Verification: Validation is the activity of checking the correct system is being constructed (building the right system). Verification is the activity of checking the system is being constructed correctly (building the system right). These activities are essential to ensure a software project is going in the right direction.

Maintenance: This activity ensures that the system keeps pace with changes in its operating environment throughout its operational life. This involves correcting errors that are discovered in operation and modifying the system to take account of changes in the system requirements. Repairing Millennium Bug errors in software is an example of Maintenance activity (and a good example of the costs of such activity). In one sense we can see the Millennium Bug problem as a change of requirement, because when these systems were written their intended lifetime was much shorter than has turned out to be the case.

Software Production Processes

Organisations involved in creating software choose a particular way of organising the software development activities depending on the kind of system that is being constructed, and the needs of the organisation. The pattern of organisation is usually called a software development process. Such processes are used to give overall structure to the development process. Different software development processes have different characteristics. Process characteristics provide a basis for deciding which process to choose for a particular project. In particular, in many cases the choice of process should attempt to reduce project risk. Typical software process characteristics are (this is not an exhaustive list):

Visibility: How easy is it for an external assessor to determine what progress has been made?

Reliability: How good is the process at detecting errors before they appear in a product?

Robustness: How well does the process cope with unexpected change?
Maintainability: Is the process easy to change to take account of changed circumstances?

Rapidity: How fast can a system be produced?

Software Production Process Models

Here we briefly introduce three popular software development processes. We will return in later notes to consider them in more detail. Their inclusion here is to illustrate possible organisations of the activities.

Waterfall Model

This is a linear model where each activity provides the input to the next stage in the process. This process usually has high visibility because at the close of each stage full documentation is generated for that stage. Because of the linear nature of the process it is not particularly robust, because any change tends to force us to loop back to some earlier stage and then follow through each of the subsequent stages again. In the early days of software production this was the standard model used by most developers.

Evolutionary Development
This approach was introduced by Gilb in 1985. It is often associated with the object-oriented approach to system development. The aim of this process is to split the problem up into many sub-tasks, each of which can deliver a part of the product that offers a tangible improvement in the system to the users. Each sub-task is tackled and delivered as a separate deliverable in the life of the project. Delivery of a new component changes the perception of the project: the priorities on completing the remaining tasks are re-evaluated in the light of the new component, and the most important (from a user's view) is chosen as the next to deliver. Evolutionary development is highly Robust because changes can easily be factored into the process, but it is not particularly Visible because it may be difficult to keep track of many sub-tasks in an efficient way.

Spiral Model

This model was introduced by Barry Boehm in 1986. It is an iterative approach to system design that is centred on the creation of prototype systems that more closely approximate the system on each iteration, until an acceptable system is constructed. At the start of each cycle the risk of proceeding is assessed and a decision taken on whether to proceed on the basis of the project risk. The spiral model is Robust because its iterative nature allows us to plan flexibly; it is also Visible because each prototype is fully documented.

Once a process is established it is possible for an organisation to measure concrete elements of its software development process. Such measurement is carried out to discover the effort used on projects and measures of the attributes of the systems produced. Many organisations also attempt to use such measurements to create a predictive model of development costs in terms of features of the customer requirement.
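One well-known predictive model of this kind is Boehm's Basic COCOMO, which estimates development effort from program size alone. A minimal sketch (the default coefficients are the published ones for small, in-house "organic" projects; real organisations would calibrate them against their own measurements):

```python
def basic_cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimate effort in person-months from size in thousands of lines
    of code, using Boehm's Basic COCOMO formula E = a * KLOC**b.
    Defaults are the published coefficients for 'organic' projects."""
    return a * kloc ** b

# Rough effort estimate for a 10 KLOC system.
print(round(basic_cocomo_effort(10.0), 1))
```

The point of such a model is not the exact number it produces but that, once calibrated from an organisation's own historical data, it gives an early, repeatable basis for cost prediction.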



An informal description:
An algorithm is any well-defined computational procedure that takes some values as input and produces some values as output. An algorithm is thus a sequence of computational steps that transforms the input into the output.

A more formal definition:

An algorithm is an ordered set of unambiguous, executable steps that defines a terminating process.
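Euclid's algorithm for the greatest common divisor is a classic illustration of this definition: each step is unambiguous and executable, and the second value strictly decreases, so the process is guaranteed to terminate.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    Since b strictly decreases towards zero, the process terminates."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

Every run of the loop is one "computational step" in the informal sense above: the input (48, 18) is transformed, step by step, into the output 6.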

Data Structure
In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized for specific tasks. For example, B-trees are particularly well suited for implementing databases, while compiler implementations usually use hash tables to look up identifiers.
Data structures are used in almost every program or software system. Data structures provide a means to manage huge amounts of data efficiently, such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.
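The compiler example above is easy to sketch: Python's dict is itself a hash table, so a toy symbol table is just a mapping from identifier to information about it. (The `declare`/`lookup` helpers and the stored tuples are illustrative choices, not any particular compiler's design.)

```python
# A toy compiler symbol table built on a hash table (Python dict):
# identifier -> (type, scope level).
symbol_table = {}

def declare(name, typ, scope):
    symbol_table[name] = (typ, scope)

def lookup(name):
    # Hash-table lookup is average-case O(1), regardless of table size.
    return symbol_table.get(name)

declare("count", "int", 0)
declare("ratio", "float", 1)
print(lookup("count"))    # -> ('int', 0)
print(lookup("missing"))  # -> None
```

This is the sense in which an efficient data structure underpins an efficient algorithm: the compiler can resolve each identifier in roughly constant time, however many declarations it has seen.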

Wednesday, April 27, 2011

Computer Graphics

Computer graphics are graphics created using computers and, more generally, the representation and manipulation of image data by a computer.
The development of computer graphics has made computers easier to interact with, and better for understanding and interpreting many types of data. Developments in computer graphics have had a profound impact on many types of media and have revolutionized animation, movies and the video game industry.

The term computer graphics includes almost everything on computers that is not text or sound. Today almost every computer can do some graphics, and people have even come to expect to control their computer through icons and pictures rather than just by typing.

Here in our lab at the Program of Computer Graphics, we think of computer graphics as drawing pictures on computers, also called rendering. The pictures can be photographs, drawings, movies, or simulations -- pictures of things which do not yet exist and maybe could never exist. Or they may be pictures from places we cannot see directly, such as medical images from inside your body.

 We spend much of our time improving the way computer pictures can simulate real world scenes. We want images on computers to not just look more realistic, but also to BE more realistic in their colors, the way objects and rooms are lighted, and the way different materials appear. We call this work "realistic image synthesis", and the following series of pictures will show some of our techniques in stages from very simple pictures through very realistic ones.

Object Rendering

Computer graphics uses several simple object rendering techniques to make models appear three-dimensional.


Shading techniques extend the realistic appearance of objects and introduce features such as transparency and textures.


Computers don't create color exactly the way we see it.

Ray Tracing

Reflection and Transparency

The best way to appreciate how far these simple techniques have been developed is through much more complex (and more recent) Cornell computer graphics images.

Why Radiosity?
Most surfaces are diffuse, not shiny, and ray tracing does not correctly depict how light reflects from diffuse surfaces. Our laboratory has played a major role in developing radiosity techniques for more realistic and more physically accurate rendering.
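The contrast can be made concrete: a diffuse (Lambertian) surface's brightness depends only on the angle between the surface normal and the light, while a mirror sends light off in a single reflected direction, which is what classic ray tracing follows. A minimal sketch of the two models (vectors are hand-rolled 3-tuples, an illustrative simplification):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lambert_intensity(normal, to_light, albedo=1.0):
    """Diffuse (Lambertian) shading: brightness depends only on the
    cosine of the angle between the surface normal and the light
    direction -- the viewing direction does not matter at all."""
    return albedo * max(0.0, dot(normal, to_light))

def mirror_direction(incoming, normal):
    """Perfect specular reflection: r = d - 2(d.n)n, the single
    direction that classic ray tracing follows from a shiny surface."""
    d_n = dot(incoming, normal)
    return tuple(d - 2 * d_n * n for d, n in zip(incoming, normal))

n = (0.0, 1.0, 0.0)                # unit surface normal
light = (0.0, 1.0, 0.0)            # light directly overhead
print(lambert_intensity(n, light))            # -> 1.0 (full brightness)
print(mirror_direction((1.0, -1.0, 0.0), n))  # -> (1.0, 1.0, 0.0)
```

Ray tracing handles the second function well but not the first: a diffuse surface scatters light towards every other surface in the scene, and accounting for all of that mutual exchange is exactly the problem radiosity methods address.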

Quality of Light
Our research image sampler shows more current work in radiosity and other techniques.