In an effort to provide more detailed information about epileptic lesions in neural tissue for the purpose of efficient removal, an Optical Coherence Tomography system was built to provide non-destructive imaging. The device is based upon the principle of a low-coherence interferometer. Such an instrument can detect whether a particular point within a sample of tissue reflects light; an integration and interpretation of many scans of many points provides a high-resolution image. The device built for this investigation has been successful in scanning semi-reflective objects such as glass and glossy plastic. Objects with low reflectivity or high degrees of scattering did not produce strong enough signals to be detected successfully. One-dimensional scans were accomplished with reasonable speed and under computer control. Limited two-dimensional scans were also taken, with much lower speed and with constant human intervention.

Optical Coherence Tomography (OCT) is an imaging technique that is similar in principle to ultrasound, but with superior resolution. It relies on exposing a sample to a burst of light and then measuring the reflective response from different depths, and is therefore capable of scanning non-invasively beneath the surface of the sample. In ultrasound imaging it is relatively easy to measure the time delay of each reflected packet; for light pulses, however, interferometry must be used to measure the displacement with meaningful accuracy. The amount of light reflected from each point within the scanning window in the sample is plotted graphically as an OCT image.

The goal of this investigation is to use Optical Coherence Tomography to image epileptic lesions on cortical tissue from rats. Such images would be immensely useful for surgical purposes: they would detail how deep the lesion is, allowing for precise removal that neither removes an insufficient amount of damaged tissue nor extracts too much healthy tissue. Though commercial OCT systems already exist, they typically do not scan very deeply beneath sample surfaces. For the purpose of this study, a system must be constructed that scans up to 2 millimeters into tissue. Unfortunately, an increase in axial depth necessitates a decrease in transverse (along the surface of the sample) resolution due to focal restrictions of the objective lenses. However, this loss is acceptable for this investigation, as the main goal is to determine lesion depth and not to achieve perfect image clarity.
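As a rough illustration of how a low-coherence interferometer yields reflectivity versus depth (an A-scan), the sketch below simulates the fringe signal from two hypothetical reflectors and recovers their depths by envelope detection; the wavelength, coherence length and reflectivities are assumed values for illustration, not measurements from the system described here.

```python
# Illustrative time-domain OCT A-scan: interference fringes appear only where the
# reference-mirror position matches a reflector depth (within the coherence length),
# so envelope detection recovers reflectivity versus depth. All values are assumed.
import numpy as np
from scipy.signal import hilbert, find_peaks

wavelength = 1.3e-6        # source centre wavelength (m), assumed
coherence_len = 15e-6      # coherence length (m); sets the axial resolution
z = np.linspace(0, 2e-3, 40000)               # reference-mirror scan, 0 to 2 mm

reflectors = [(0.4e-3, 1.0), (1.1e-3, 0.3)]   # hypothetical (depth, reflectivity)

signal = np.zeros_like(z)
k = 2 * np.pi / wavelength
for depth, refl in reflectors:
    envelope = refl * np.exp(-((z - depth) / coherence_len) ** 2)
    signal += envelope * np.cos(2 * k * (z - depth))      # interference fringes

a_scan = np.abs(hilbert(signal))                          # envelope detection
peaks, _ = find_peaks(a_scan, height=0.1, distance=2000)  # one peak per reflector
print("recovered depths (mm):", np.round(z[peaks] * 1e3, 2))
```

Stacking many such A-scans taken at adjacent transverse positions is what produces the two-dimensional OCT image mentioned above.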

On Earth we live upon an island of ordinary matter. The states of matter generally found on Earth are solid, liquid, and gas. Sir William Crookes, an English physicist, identified a fourth state of matter, now called plasma, in 1879. Plasma is by far the most common form of matter.


Plasma in the stars and in the tenuous space between them makes up over 99% of the visible universe and perhaps most of that which is not visible. Important to ASI's technology, plasmas are conductive assemblies of charged and neutral particles and fields that exhibit collective effects. Plasmas carry electrical currents and generate magnetic fields. When the Plasma Antenna Research Laboratory at ANU investigated the feasibility of plasma antennas as low radar cross-section radiating elements, Redcentre established a network between DSTO and ANU researchers, CEA Technologies, Cantec Australasia and Neolite Neon for further development and future commercialization of this technology. The plasma antenna R&D project has proceeded over the last year at the Australian National University in response to a DSTO (Defence Science and Technology Organisation) contract to develop a new antenna solution that minimizes antenna detectability by radar. Since then, an investigation of the wider technical issues of existing antenna systems has revealed areas where plasma antennas might be useful. The project has attracted the interest of industrial groups involved in such diverse areas as fluorescent lighting, telecommunications and radar.

Plasma antennas are radio frequency antennas that employ plasma as the guiding medium for electromagnetic radiation. The concept is to use plasma discharge tubes as the antenna elements: when the tubes are energized they become conductors and can transmit and receive radio signals; when they are de-energized they revert to non-conducting elements and do not reflect probing radio signals. Plasma antennas have a number of potential advantages for antenna design. When a plasma element is not energized, it is difficult to detect by radar. Even when it is energized, it is transparent to transmissions above the plasma frequency, which falls in the microwave region. Plasma elements can be energized and de-energized in seconds, which prevents signal degradation. When a particular plasma element is not energized, its radiation does not affect nearby elements. Plasma antennas can be steered electronically, and a plasma element can be turned off rapidly, reducing ringing on pulse transmission. In HF communications, CDMA plasma antennas will have a low probability of intercept (LPI) and a low probability of detection (LPD).
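To give a sense of scale for the statement that the plasma frequency falls in the microwave region, the snippet below computes the electron plasma frequency for an assumed discharge-tube electron density; the density value is illustrative, not a measured figure for any particular antenna.

```python
# Plasma frequency below which an energized plasma element reflects radio waves
# and above which it is largely transparent. Electron density is an assumption.
from math import pi, sqrt
from scipy.constants import e, epsilon_0, m_e

n_e = 1e17  # electron density in the discharge tube, m^-3 (assumed)

f_p = sqrt(n_e * e**2 / (epsilon_0 * m_e)) / (2 * pi)
print(f"plasma frequency = {f_p / 1e9:.2f} GHz")   # ~2.8 GHz, i.e. microwave region
```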
Embryonics is embryonic electronics. The workings of multicellular organization in living beings suggest that concepts from biology can be applied to the development of new embryonic integrated circuits. The final objective is the development of VLSI circuits that can partially reconstruct themselves in case of a minor fault (self-repair) or completely reconstruct the original device in case of a major fault (self-replication). These features are advantageous for applications that depend on high reliability, such as avionics and medical electronics. The basic primitive of the system is the molecule, the element of a new FPGA: essentially a multiplexer associated with a programmable connection network. A finite set of molecules comprises a cell, i.e., a very simple processor associated with some memory resources. A finite set of cells comprises an organism, i.e., an application-specific multiprocessor system. The organism itself can self-replicate, giving rise to a population of identical organisms. Self-repair and self-replication are achieved by providing spare cells. This seminar report tries to bring out the basic concepts of the embryonics approach to realizing VLSI circuits.
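As a toy illustration of self-repair by spare cells (not the actual Embryonics reconfiguration mechanism), the sketch below remaps an organism's functions onto a row of cells, skipping any cell marked faulty for as long as spares remain.

```python
# Toy model of spare-cell self-repair: the organism's functions are mapped onto a
# row of cells with spares at the end; when a cell is marked faulty, the functions
# from that point on shift toward the spares so the organism keeps computing.
def remap(functions, num_cells, faulty):
    """Return a cell -> function mapping that skips faulty cells."""
    mapping = {}
    cell = 0
    for fn in functions:
        while cell in faulty:            # skip cells known to be dead
            cell += 1
        if cell >= num_cells:
            raise RuntimeError("not enough spare cells left for self-repair")
        mapping[cell] = fn
        cell += 1
    return mapping

functions = ["f0", "f1", "f2", "f3"]                 # the organism needs 4 cells
print(remap(functions, num_cells=6, faulty=set()))   # spares unused
print(remap(functions, num_cells=6, faulty={1, 3}))  # repaired via the spares
```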
High performance computing (HPC) has come of age. No longer is it the preserve of computer scientists in research labs, plugging together printed circuit boards and writing new flavours of parallel operating systems. HPC is a stable, mature technology, an enabling technology for an ever-increasing number of scientists and researchers wishing to build and run computational models in their own disciplines. HPC has finally delivered on its promises. Here we take a look at the current state of high performance computing from the perspective of the European user community, and assess the needs and aspirations of this community in terms of where HPC might be going, and where, perhaps, it should be going. We aim to capture a snapshot of HPC activities, from the technology itself through related services to the direct views of its European user base, and attempt to draw the whole together into some form of roadmap for large-scale computing in the twenty-first century.

Quadrics Supercomputer World (QSW) offers a PCI-compatible, high-performance "fat tree" interconnect based on the original Meiko Computing Surface network. Called QsNet and built from QSW's Elan III network chips and Elite III switch chips, it offers some of the highest performance currently available in cluster networking systems. The QsNet network is currently used inside the UltraSPARC-II-based QM-1, and QSW plans to produce systems in partnership with Compaq in the third quarter of 1999; the first of these will be the Compaq "Sierra". It is as yet unclear whether QSW intends to make the QsNet technology available as an off-the-shelf networking product. QsNet consists of two hardware building blocks: a programmable network interface called Elan and a high-bandwidth, low-latency communication switch called Elite. Elite switches can be interconnected in a fat-tree topology. With respect to software, QsNet provides several layers of communication libraries that trade off performance against ease of use. QsNet combines these hardware and software components to implement efficient and protected access to a global virtual memory via remote direct memory access (DMA) operations. It also enhances network fault tolerance via link-level and end-to-end protocols that detect faults and automatically retransmit packets.
Networked computer systems are rapidly growing in importance as the medium of choice for the storage and exchange of information. However, current systems afford little privacy to their users, and typically store any given data item in only one or a few fixed places, creating a central point of failure. Because individuals continue to want to protect the privacy of their authorship or readership of various types of sensitive information, and because central points of failure can be attacked by opponents wishing to remove data from the system or simply overloaded by too much interest, systems offering greater security and reliability are needed.

Freenet is being developed as a distributed information storage and retrieval system designed to address these concerns of privacy and availability. The system operates as a location-independent distributed file system across many individual computers that allows files to be inserted, stored, and requested anonymously. There are five main design goals: anonymity for both producers and consumers of information; deniability for storers of information; resistance to attempts by third parties to deny access to information; efficient dynamic storage and routing of information; and decentralization of all network functions. The system is designed to respond adaptively to usage patterns, transparently moving, replicating, and deleting files as necessary to provide efficient service without resorting to broadcast searches or centralized location indexes. It is not intended to guarantee permanent file storage, although it is hoped that a sufficient number of nodes will join with enough storage capacity that most files will be able to remain indefinitely. In addition, the system operates at the application layer and assumes the existence of a secure transport layer, although it is transport-independent. It does not seek to provide anonymity for general network usage, only for Freenet file transactions.

Freenet is currently being developed as a free software project on http://sourceforge.net, and a preliminary implementation can be downloaded from http://www.freenetproject.org. It grew out of work originally done by the first author at the University of Edinburgh. There are many reasons why people get involved in the Freenet Project. Some share the views outlined in this document; some share variations of these views, which are also served by what we are trying to achieve; and some just enjoy the technical challenge. These are the ideas which motivated me to join the Freenet Project in the first place, but not necessarily the views that everyone involved in the project holds.
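The adaptive, decentralized routing and transparent replication described above can be pictured with a toy model: each node forwards a request toward the neighbour associated with the closest previously seen key and caches whatever comes back. This is only an illustrative sketch, not Freenet's actual protocol; the node names, distance metric and hop limit are invented for the example.

```python
# Toy model of key-closeness request routing with caching in a decentralized store.
import hashlib

def key_of(name: str) -> int:
    """Location-independent key for a piece of data (illustrative)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}        # key -> data held locally
        self.routes = {}       # key -> neighbour that returned it before
        self.neighbours = []   # list of Node

    def request(self, key, hops=10):
        if key in self.store:
            return self.store[key]
        if hops == 0 or not self.neighbours:
            return None
        # forward toward the neighbour previously associated with the closest key
        candidates = self.routes or {key_of(n.name): n for n in self.neighbours}
        nxt = candidates[min(candidates, key=lambda k: abs(k - key))]
        data = nxt.request(key, hops - 1)
        if data is not None:
            self.store[key] = data        # transparent replication along the path
            self.routes[key] = nxt
        return data

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours, b.neighbours = [b], [c]
c.store[key_of("report.txt")] = b"hello"
print(a.request(key_of("report.txt")))    # found via b, then cached at a and b
```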
In the present age of open economy, communication is rated as the infrastructure of infrastructures. This infrastructural era has passed through three different phases since the inventions of Morse's telegraphy and Graham Bell's telephony, made nearly a century ago: Communication by connecting geographical points: Wired telephony, telegraphy, and fax are examples of this phase.

Communication by connecting machines rather than geographical locations or points: Cellular mobile telephony, paging, and cordless telephony are examples of this phase. The wireless mode of communication and support for the mobility of machines are characteristic of this phase.

Communication by connecting people rather than geographic points or machines: This is the phase of personal communication, which is believed to be near-natural communication with value-added services. Personal communication is characterised by the PTN/UTN technique, global coverage technology and personal mobility (via wireless) technology. It enables a person to communicate with another person anywhere in the world, at any time, with a single device and a single personal telecommunication number (PTN) or universal telecommunication number (UTN).

It is understood that satellite communication in general, and LEO (low earth orbit) satellite communication in particular, will be the major technology behind the development of personal communication. Satellite communication is driven by two important philosophies: first, that satellite is synonymous with wide-area coverage, including global coverage; and second, that satellite also means wireless communication, which supports man-machine mobility. It is interesting to note that both of the technologies required for personal communication are met by satellite communication.
The growth in demand for digital storage capacity exceeds 60% per annum. Facilities such as storage area networks, data warehouses, supercomputers and e-commerce-related data mining require ever-greater capacity in order to handle the volume of data to be processed. In addition, with the advent of high-bandwidth Internet and data-intensive applications such as High Definition TV (HDTV) and Video & Music on Demand, even smaller devices such as personal VCRs, PDAs, mobile phones, etc., will in the next couple of years demand multi-gigabyte and terabyte capacities. No less important is the growing demand for faster data access and reading. For instance, High Definition TV and Video & Music on Demand applications require over terabit-per-second reading speeds. Such speeds, and higher, in conjunction with huge capacity, can be achieved only by means of parallel access to any part of the information on the carrier.

In the year 2000, one (1) ExaByte (10^18 Bytes) of information will be stored, growing to more than two (2) ExaBytes by the year 2002. Approximately 10% of the information will be stored on magnetic disk drives (HDD), with the remainder on tapes, optical discs and paper. This increasing capacity demand has thus far been met through steady increases in the areal density of the magnetic and optical recording media.

While the limits of magnetic recording are still being debated (recently 35 Gbit per square inch has been demonstrated), the limits of conventional optical storage are well understood. Current optical storage technology is working close to the diffraction limits (5 Gbit per square inch).

Future increases in density are possible by taking advantage of shorter-wavelength lasers, higher lens numerical aperture (NA), or by employing near-field techniques. Optical data storage capacities have also been increased by creating double-sided media. Another approach to increasing the effective storage capacity is unique to optical memory technologies: three-dimensional storage.

True three-dimensional optical storage opens up another dimension in which to increase the capacity of a given volume of media, with the objective of achieving a cubic storage element having the dimensions of the writing/reading laser wavelength. Even with current wavelengths of 650 nm, this should suffice to store up to a terabit per cubic centimeter.
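A back-of-the-envelope check of that figure, assuming one stored bit per wavelength-sized cube, is sketched below; the numbers are order-of-magnitude only.

```python
# How many bits fit in one cubic centimetre if each bit occupies a cube whose
# side equals the 650 nm writing/reading wavelength? (Illustrative estimate.)
wavelength_cm = 650e-9 * 100          # 650 nm expressed in centimetres
voxels_per_cm = 1.0 / wavelength_cm   # storage elements along one 1 cm edge
bits_per_cm3 = voxels_per_cm ** 3

print(f"{bits_per_cm3:.2e} bits per cubic centimetre")   # ~3.6e12, a few terabits
```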

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume. Holographic storage has more advantages than conventional storage systems and is based on the principle of holography.

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold. 
Mobile wireless communication services have penetrated into many people's daily work and lives. The content of communication has also shifted from voice-dominated applications to multimedia services.


A common feature of people traveling onboard high-speed public transport vehicles is that their journeys last for hours or even days. They would like to make use of these hours to make telephone calls, read and send e-mails, browse websites or catch up on some work. There are also passengers who wish to have true on-demand entertainment services including music, movies, and games. Although, compared to 10 years ago, onboard communication and entertainment services have improved a great deal in terms of convenience, style and content, a truly personalized environment is still in its infancy.

Only satellite telephones are accessible. Most entertainment programs are carried onboard; only very limited news or sports programs are broadcast in real time directly from satellite. This is because the technology is not quite there to make it happen. Before taking a look at the latest technologies available for facilitating communication onboard high-speed vehicles, let's take a look at the history of mobile communication.


Nanotechnology is nothing but the manipulation of matter at the atomic level. Working at this level means better precision, complete usage and everlasting qualities. In this way it can be applied in any field: production, research, industry, consumer applications, and more. Nanotechnology is the outcome of the conglomeration of human thinking, technology and molecular behavior.


By arranging atoms in sequences or structures of our choice, we could build a wonderful world, devoid of defects and illness. The consequences of this technology are limited only by our imagination: whatever we intend, we can achieve using this miraculous technology.

This paper takes up a practical approach to the implementation of the Nano-Communication System.


The motivations for security in cellular telecommunications systems are to secure conversations and signaling data from interception as well as to prevent cellular telephone fraud. With the older analog-based cellular telephone systems such as the Advanced Mobile Phone System (AMPS) and the Total Access Communication System (TACS), it is a relatively simple matter for the radio hobbyist to intercept cellular telephone conversations with a police scanner.

A well-publicized case involved a potentially embarrassing cellular telephone conversation with a member of the British royal family being recorded and released to the media. Another security consideration with cellular telecommunications systems involves identification credentials such as the Electronic Serial Number (ESN), which are transmitted in the clear in analog systems. With more complicated equipment, it is possible to receive the ESN and use it to commit cellular telephone fraud by cloning another cellular phone and placing calls with it. Estimates for cellular fraud in the U.S. in 1993 are as high as $500 million. The procedure wherein the Mobile Station (MS) registers its location with the system is also vulnerable to interception and permits the subscriber's location to be monitored even when a call is not in progress, as evidenced by the recent highly publicized police pursuit of a famous U.S. athlete.

The security and authentication mechanisms incorporated in GSM make it the most secure mobile communication standard currently available, particularly in comparison to the analog systems described above. Part of the enhanced security of GSM is due to the fact that it is a digital system utilizing a speech coding algorithm, Gaussian Minimum Shift Keying (GMSK) digital modulation, slow frequency hopping, and Time Division Multiple Access (TDMA) time slot architecture. To intercept and reconstruct this signal would require more highly specialized and expensive equipment than a police scanner to perform the reception, synchronization, and decoding of the signal. In addition, the authentication and encryption capabilities discussed in this paper ensure the security of GSM cellular telephone conversations and subscriber identification credentials against even the determined eavesdropper. 
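The challenge-response scheme that protects subscriber credentials can be sketched as follows; the real A3/A8 algorithms are operator-specific (e.g. COMP128) and are deliberately replaced here by a keyed hash purely to illustrate the message flow.

```python
# Conceptual sketch of the GSM challenge-response authentication flow. The real
# A3/A8 algorithms are NOT implemented; HMAC-SHA256 stands in for illustration.
import hmac, hashlib, os

Ki = os.urandom(16)          # secret subscriber key, stored in the SIM and the AuC

def a3_a8_stand_in(ki: bytes, rand: bytes):
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    sres, kc = digest[:4], digest[4:12]      # 32-bit response, 64-bit cipher key
    return sres, kc

# Network side: issue a 128-bit random challenge and compute the expected response.
rand = os.urandom(16)
expected_sres, kc_network = a3_a8_stand_in(Ki, rand)

# Mobile side: the SIM computes its response to the same challenge.
sres, kc_mobile = a3_a8_stand_in(Ki, rand)

print("authenticated:", hmac.compare_digest(sres, expected_sres))
print("cipher keys match:", kc_network == kc_mobile)   # Kc then keys the encryption
```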
In a world where electronics is ubiquitous, can you imagine any other field displacing it? It may seem peculiar, even absurd, but with the advent of spintronics it is turning into reality.

In conventional electronic devices we use semiconducting materials for logic operations and magnetic materials for storage, but spintronics uses magnetic materials for both purposes. Spintronic devices are more versatile and faster than present ones. One such device is the spin valve transistor.

The spin valve transistor is different from a conventional transistor: conduction relies on the spin polarization of electrons, and only electrons with the correct spin polarization can travel successfully through the device. These transistors are used in data storage, signal processing, automation and robotics, with lower power consumption and less heat. They also find application in quantum computing, where qubits are used instead of bits.

Spintronics came into the limelight with the discovery of Giant Magnetoresistance (GMR) in 1988. GMR is 200 times stronger than ordinary magnetoresistance. It results from subtle electron-spin effects in ultra-thin multilayers of magnetic materials that cause a huge change in electrical resistance.
BORDER SECURITY USING WIRELESS INTEGRATED NETWORK SENSORS (WINS)

Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for monitoring the borders of the country. Using this concept we can easily identify a stranger or terrorists crossing the border.


The border area is divided into a number of nodes. Each node is in contact with the others and with the main node. The noise produced by the footsteps of a stranger is picked up by the sensor. The sensed signal is then converted into a power spectral density and compared with a reference value chosen for our convenience. The compared value is then processed by a microprocessor, which sends appropriate signals to the main node. Thus the stranger is identified at the main node.
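A minimal sketch of the detection step at a single node might look like the following; the sampling rate, footstep band and reference threshold are assumptions for illustration, not values from an actual WINS deployment.

```python
# Illustrative node-level detection: estimate the power spectral density (PSD) of
# the sensor signal and flag an intruder when in-band power exceeds a reference.
import numpy as np
from scipy.signal import welch

FS = 1000                     # sensor sampling rate, Hz (assumed)
FOOTSTEP_BAND = (20, 80)      # band where footstep energy shows up, Hz (assumed)
REFERENCE_POWER = 1e-3        # calibrated background level for this node (assumed)

def intruder_detected(samples: np.ndarray) -> bool:
    freqs, psd = welch(samples, fs=FS, nperseg=256)
    band = (freqs >= FOOTSTEP_BAND[0]) & (freqs <= FOOTSTEP_BAND[1])
    band_power = np.trapz(psd[band], freqs[band])   # integrate PSD over the band
    return band_power > REFERENCE_POWER             # report to the main node if True

# Quiet background noise vs. background plus a low-frequency "footstep" thump.
t = np.arange(0, 2, 1 / FS)
quiet = 0.01 * np.random.randn(len(t))
steps = quiet + 0.5 * np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 2 * t) > 0.9)
print(intruder_detected(quiet), intruder_detected(steps))   # expect False, True
```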

A series of interface, signal processing, and communication systems have been implemented in micropower CMOS circuits. A micropower spectrum analyzer has been developed to enable low-power operation of the entire WINS system.


Distributed firewalls are host-resident security software applications that protect the enterprise network's servers and end-user machines against unwanted intrusion. They offer the advantage of filtering traffic from both the Internet and the internal network. This enables them to prevent hacking attacks that originate from either source, which is important because the most costly and destructive attacks still originate from within the organization.


They are like personal firewalls, except that they offer several important advantages such as central management, logging, and, in some cases, access-control granularity. These features are necessary to implement corporate security policies in larger enterprises. Policies can be defined and pushed out on an enterprise-wide basis.

A key feature of distributed firewalls is centralized management. The ability to populate servers and end-user machines, and to configure and push out consistent security policies, helps to maximize limited resources. The ability to gather reports and maintain updates centrally makes distributed security practical. Distributed firewalls help in two ways. First, remote end-user machines can be secured. Secondly, they secure critical servers on the network, preventing intrusion by malicious code and jailing other such code by not letting the protected server be used as a launch pad for expanded attacks.
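A minimal sketch of such a centrally pushed policy being enforced on each host is shown below; the rule format and addresses are illustrative assumptions, not a real product's policy language.

```python
# Sketch of a centrally defined policy enforced locally on every host: the
# management server pushes a rule list; each host evaluates incoming connections
# against it. Rule fields and values are illustrative only.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    action: str        # "allow" or "deny"
    src_net: str       # e.g. "10.0.0.0/8" for the internal network
    dst_port: int      # destination port on the protected host

# Policy pushed out from the central manager (assumed example policy).
POLICY = [
    Rule("allow", "10.0.0.0/8", 22),      # internal admins may use SSH
    Rule("deny",  "0.0.0.0/0", 22),       # everyone else may not
    Rule("allow", "0.0.0.0/0", 443),      # HTTPS open to Internet and intranet
]

def filter_connection(src_ip: str, dst_port: int) -> str:
    """First matching rule wins; default-deny if nothing matches."""
    for rule in POLICY:
        if ip_address(src_ip) in ip_network(rule.src_net) and dst_port == rule.dst_port:
            return rule.action
    return "deny"

print(filter_connection("10.1.2.3", 22))      # allow (internal SSH)
print(filter_connection("203.0.113.9", 22))   # deny  (Internet SSH attempt)
print(filter_connection("203.0.113.9", 443))  # allow (HTTPS)
```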


"A picture says more than a thousand words and is more fun to look at!" Everyone in this world believes in this quote, and it is also one of the quotes that inspired the mobile developers who gave us this hot technology: MMS.


MMS, the Multimedia Messaging Service, is a standardized messaging service. It traces its roots to SMS (Short Messaging Service) and EMS (Enhanced Messaging Service). MMS will allow users to send and receive messages exploiting the whole array of media types available today, e.g. text, images, audio, video, graphics, data and animations, while also making it possible to support new content types as they become popular.

With MMS, for example, users could send each other personal pictures together with a voice message, such as a greeting card with a picture, a handwritten message, and a personal song or sound clip recorded by the user. Video conferencing, which is expected to make a great impact in the future, is also possible with this technology. Using the Wireless Application Protocol (WAP) as bearer technology and powered by the high-speed transmission technologies EDGE, GPRS and UMTS (WCDMA), Multimedia Messaging allows users to send and receive messages that look like PowerPoint-style presentations.

MMS supports standard image formats such as GIF and JPEG, video formats such as MPEG-4, and audio formats such as MP3, MIDI and WAV, as well as the new AMR format.

The greatest advantage of MMS is its ability to interoperate mobile-to-mobile as well as between mobiles and PDAs, laptops, Internet hosts and other data devices.

MMS can also act as a virtual email client. Greatly anticipated by young users in particular, MMS is projected to fuel the growth of related market segments by as much as forty percent.


Some novel designs for simple molecular electronic digital logic circuits have been demonstrated: a complete set of three fundamental logic gates (AND, OR, and XOR), plus an adder function built up from these gates via well-known combinational logic. This means that in the coming future this technology could be a replacement for VLSI. Currently, however, the technology is only available under laboratory conditions; how to mass-produce moletronic chips is still a big problem.
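To make the combinational-logic claim concrete, the sketch below builds a half adder and a full adder from nothing but the AND, OR and XOR gates mentioned above; the logic is identical whether the gates are realized in silicon or as molecular devices.

```python
# A half adder and full adder built only from AND, OR and XOR gates via standard
# combinational logic; the gate realization (CMOS or moletronic) does not matter.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    return XOR(a, b), AND(a, b)              # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, OR(c1, c2)                    # (sum, carry-out)

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```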


Currently, integrated circuits are made by etching silicon wafers using a beam of light. It is this VLSI lithography-based technology that makes mass production of the Pentium III processor possible. But as the size of the logic blocks goes to the nano-scale, this technology is no longer available: as wavelengths get too short they tend toward X-rays, which can damage the microstructure of the molecules.

On the other hand, the lithography mask of the Pentium III is very complex, and the shape and dimensions of its logic blocks vary greatly. Looking at currently available integrated circuits, the transistor density of memory chips is much higher than that of processor chips; the reason is that a memory cell is much simpler than processor circuitry.

Apart from the decoding logic, most memory bit cells are identical. Could we find a way to fabricate a logic circuit as complex as a Pentium processor using millions of identical logic units? The PLD (Programmable Logic Device) is the answer. The paper is organized as follows: Section II presents the basics of moletronic gate circuits; Section III uses PLD technology to build more complex blocks; Section IV shows how nanotubes can be used as interconnection wires.

As we tend towards a more and more computer-centric world, the concept of data security has attained paramount importance. Though present-day security systems offer a good level of protection, they are incapable of providing a trustworthy environment and are vulnerable to unexpected attacks.



Palladium is a content protection concept that has spawned from the belief that the PC, as it currently stands, is not architecturally equipped to protect a user from the pitfalls and challenges that an all-pervasive network such as the Internet poses.

As a drastic change in PC hardware is not feasible, largely for economic reasons, Palladium hopes to introduce only a minimal change on this front. A paradigm shift is expected with the adoption of Palladium, making content protection a shared concern of both software and hardware. In the course of this paper the revolutionary aspects of Palladium are discussed in detail.

A case study restructuring the present data security system of the JNTU examination system using Palladium is put forward.
The TigerSHARC processor is the newest and most powerful member of this family; it incorporates mechanisms such as SIMD, VLIW and short-vector memory access in a single processor. This is the first time that all these techniques have been combined in a real-time processor.


The TigerSHARC DSP is an ultra-high-performance static superscalar architecture that is optimized for telecommunications infrastructure and other computationally demanding applications.

This unique architecture combines elements of RISC, VLIW, and standard DSP processors to provide native support for 8-, 16-, and 32-bit fixed-point as well as floating-point data types on a single chip.

Large on-chip memory, extremely high internal and external bandwidths and dual compute blocks provide the necessary capabilities to handle a vast array of computationally demanding, large signal processing tasks.


The method of iontophoresis was described by Pivati in 1747. Galvani and Volta, two well-known scientists working in the 18th century, combined the knowledge that electricity can move different metal ions with the observation that the movement of ions produces electricity. The method of administering pharmacological drugs by iontophoresis became popular at the beginning of the 20th century due to the work of Leduc (1900), who introduced the word "iontotherapy" and formulated the laws for this process.



Iontophoresis is defined as the introduction, by means of a direct electrical current, of ions of soluble salts into the tissues of the body for therapeutic purposes. It is a technique used to enhance the absorption of drugs across biological tissues, such as the skin. Another method for drug delivery through the skin, called phonophoresis, uses ultrasound instead of an electric current. Both techniques are complicated by other processes that occur simultaneously with the delivery of the drug. With present knowledge of these processes, it is easier to select and prepare appropriate drugs and vehicles for iontophoresis than for phonophoresis.

In clinical practice, iontophoresis devices are used primarily for the treatment of inflammatory conditions in skin, muscles, tendons and joints, such as temporomandibular joint dysfunction. More recently, iontophoresis has been used in combination with laser Doppler technology as a diagnostic tool in diseases involving the vascular bed.
Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases, so will their importance. Each different wavelength in an optical network will be separated by a multiple of 0.8 nanometres (sometimes referred to as 100 GHz spacing). Current commercial products can cover maybe four of these wavelengths at a time.



While not the ideal solution, this still cuts down the required number of spare lasers. More advanced solutions hope to cover larger numbers of wavelengths, and should cut the cost of spares even further.

The devices themselves are still semiconductor-based lasers that operate on principles similar to basic non-tunable versions. Most designs incorporate some form of grating, like those in a distributed feedback laser. These gratings can be altered in order to change the wavelengths they reflect in the laser cavity, usually by running electric current through them, thereby altering their refractive index. The tuning range of such devices can be as high as 40 nm, which would cover any of 50 different wavelengths in a 0.8 nm-spaced system. Technologies based on vertical cavity surface emitting lasers (VCSELs) incorporate movable cavity ends that change the length of the cavity and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges.
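A quick check of the spacing and tuning-range figures quoted above, assuming a channel near the usual 1550 nm C-band wavelength (an assumption, as the text does not give the centre wavelength), is shown below.

```python
# Check: a 0.8 nm channel spacing near 1550 nm is close to 100 GHz, and a 40 nm
# tuning range spans about 50 such channels. The 1550 nm figure is assumed.
c = 299_792_458.0            # speed of light, m/s
centre = 1550e-9             # assumed channel wavelength, m
spacing = 0.8e-9             # channel spacing, m
tuning_range = 40e-9         # laser tuning range, m

freq_spacing = c * spacing / centre**2
print(f"frequency spacing = {freq_spacing / 1e9:.0f} GHz")   # ~100 GHz
print(f"channels covered  = {tuning_range / spacing:.0f}")   # ~50
```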
We have grown accustomed to the idea that computation takes place using electronic components on a silicon substrate. Perhaps our view of computation is too limited. What if computers were ubiquitous and could be found in many forms? Could a liquid computer exist in which interacting molecules perform computation? The answer is yes, and this is the story of the DNA computer.


The vast parallelism, exceptional energy efficiency, and extraordinary information density inherent in molecular computers might some day prove capable of attacking problems that have resisted conventional methods (Adleman 1994; Lipton 1995; Baum 1995; Rothemund 1996; Smith 1996; Ouyang et al. 1997).
ATM is simply a Data Link Layer protocol. It is asynchronous in the sense that the recurrence of the cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network), and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.


Synchronous Transfer Mode (STM) was the first technique to be considered, due to its compatibility with most existing systems and the desire to preserve the investment in existing equipment while evolving to a more flexible network. ATM has been proposed to overcome the limitations of STM and the large delay incurred by conventional packet switching. ATM is one of a general class of digital packet technologies that relay and route traffic by means of an address contained within the packet. What makes packet technologies attractive for data traffic is that they exploit communication channels much more efficiently than the STM technologies commonly used to transmit digitized voice.
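To make the "address contained within the packet" concrete: ATM relays fixed 53-byte cells, a 5-byte header (carrying the VPI/VCI routing identifiers, among other fields) followed by a 48-byte payload. The sketch below models such a cell; the header packing is simplified and omits the GFC, PT, CLP and HEC fields.

```python
# Simplified model of the fixed-size ATM cell: 53 bytes = 5-byte header + 48-byte
# payload. The header layout here is illustrative, not the exact bit-level format.
from dataclasses import dataclass

CELL_SIZE, HEADER_SIZE, PAYLOAD_SIZE = 53, 5, 48

@dataclass
class AtmCell:
    vpi: int            # virtual path identifier
    vci: int            # virtual circuit identifier
    payload: bytes      # exactly 48 bytes of user data

    def to_bytes(self) -> bytes:
        if len(self.payload) != PAYLOAD_SIZE:
            raise ValueError("ATM payload must be exactly 48 bytes")
        header = self.vpi.to_bytes(2, "big") + self.vci.to_bytes(2, "big") + b"\x00"
        return header + self.payload

cell = AtmCell(vpi=1, vci=42, payload=bytes(48))
assert len(cell.to_bytes()) == CELL_SIZE
print("cell length:", len(cell.to_bytes()), "bytes")
```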
Sun engineers have been working quietly on a new Java technology called Jini since 1995. Part of the original vision for Java, it was put on the back burner while Sun waited for Java to gain widespread acceptance. As the Jini project revved up and more than 30 technology partners signed on, it became impossible to keep it under wraps. So Sun cofounder Bill Joy, who helped dream up Jini, leaked the news to the media earlier this month. It was promptly smothered in accolades and hyperbolic prose.

HOW DOES IT WORK?

When you plug a new Jini-enabled device into a network, it broadcasts a message to any lookup service on the network saying, in effect, "Here I am. Is anyone else out there?" The lookup service registers the new machine, keeps a record of its attributes and sends a message back to the Jini device, letting it know where to reach the lookup service if it needs help. So when it comes time to print, for example, the device calls the lookup service, finds what it needs and sends the job to the appropriate machine. Jini actually consists of a very small piece of Java code that runs on your computer or device.
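The join-and-lookup flow just described can be pictured with the toy model below. Real Jini is Java-based and uses multicast discovery, leases and downloadable service proxies; the class and method names here are invented purely to illustrate the flow.

```python
# Toy model of a Jini-style join/lookup flow. Not the real Jini API; illustrative only.
class LookupService:
    def __init__(self):
        self.registry = {}                        # attribute -> service object

    def register(self, attributes, service):      # "Here I am" from a new device
        for attr in attributes:
            self.registry[attr] = service

    def lookup(self, attribute):                  # "who can print for me?"
        return self.registry.get(attribute)

class Printer:
    def submit(self, job):
        print(f"printing: {job}")

class Camera:
    def __init__(self, lookup):
        lookup.register(["camera"], self)          # announce itself on joining
        self.lookup = lookup                       # remember where the lookup lives

    def print_photo(self, photo):
        printer = self.lookup.lookup("printer")    # find what it needs at print time
        if printer is not None:
            printer.submit(photo)

lookup = LookupService()
lookup.register(["printer"], Printer())            # a printer joined earlier
Camera(lookup).print_photo("holiday.jpg")           # device uses the service it found
```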

WHY WILL JINI BE THE FUTURE OF DISTRIBUTED COMPUTING?

Jini lets you dynamically move code, and not just data, from one machine to another. That means you can send a Java program to any other Jini machine and run it there, harnessing the power of any machine on your network to complete a task or run a program.

WHY WON'T JINI BE THE FUTURE OF DISTRIBUTED COMPUTING?

So far, Jini seems to offer little more than basic network services. Don't expect it to turn your household devices into supercomputers; it will take some ingenious engineering before your stereo will start dating your laptop. Jini can run on small handheld devices with little or no processing power, but these devices need to be network-enabled and need to be controlled by another Jini-enabled hardware or software piece by proxy.
As researchers work towards the creation of plastic-based alternatives in order to make technology more pervasive, silicon wafers might soon be biting the dust.



"No one would need to interact with computers any more, as technology would be ingrained into everyday objects like shirts, driverless cars or therapeutic dolls," predicted Nicholas Negroponte, cofounder and director of the MIT Media Laboratory, in 1998. In his columns in Wired magazine, he further claimed that not only was the Digital Age upon us, but that we were already in the final stages of the digital revolution.

A big step in this all-pervasive computing direction is plastic re-engineering. Research in this field aims to create chips made of plastic wafers instead of silicon. Not only will such chips enable the products Negroponte talked about, they will also allow a hobbyist or power user to print his own PC.
Modular, self-reconfigurable robots show the promise of great versatility, robustness and low cost. Polybot is a modular, self-reconfigurable system that is being used to explore the hardware reality of a robot with a large number of interchangeable modules. Three generations of Polybot have been built over the last three years, each with ever-increasing levels of functionality and integration. Polybot has shown versatility by demonstrating locomotion over a variety of terrain and manipulation of a variety of objects.

Polybot is the first robot to demonstrate sequentially two topologically distinct locomotion modes by self-reconfiguration. Polybot has raised issues regarding software scalability and hardware dependency, and as the design evolves, the issues of low cost and robustness are being addressed while the potential of modular, self-reconfigurable robots is explored.
The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices in practical applications, and in the late 1980s the trend took a turn when MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS found more widespread use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS accounted for more than 90% of the total MOS market.

In 1983 a bipolar-compatible process based on CMOS technology was developed, and BiCMOS technology, with both MOS and bipolar devices fabricated on the same chip, was developed and studied. The objective of BiCMOS is to combine bipolar and CMOS so as to exploit the advantages of both at the circuit and system levels. Since 1985, state-of-the-art bipolar and CMOS structures have been converging. Today BiCMOS has become one of the dominant technologies used for high-speed, low-power and highly functional VLSI circuits, especially as the BiCMOS process has been enhanced and integrated into the CMOS process without any additional steps. Because the process steps required for CMOS and bipolar are similar, these steps can be shared for both of them.
Imagine a time when your mobile will be your virtual assistant and will need far more than the 8K and 16K of memory that it has today, or a world where laptops require gigabytes of memory because of the impact of convergence on the very nature of computing. How much space would your laptop need to carry all that memory capacity? Not much, if Intel's project with Thin Film Electronics ASA (TFE) of Sweden works according to plan. TFE's idea is to use polymer memory modules rather than silicon-based memory modules, and what's more, it is going to use an architecture that is quite different from silicon-based modules.

While microchip makers continue to wring more and more from silicon, the most dramatic improvements in the electronics industry could come from an entirely different material: plastic. Labs around the world are working on integrated circuits, displays for handheld devices and even solar cells that rely on electrically conducting polymers, not silicon, for cheap and flexible electronic components. Now two of the world's leading chip makers are racing to develop new stock for this plastic microelectronic arsenal: polymer memory. Advanced Micro Devices of Sunnyvale, CA, is working with Coatue, a startup in Woburn, MA, to develop chips that store data in polymers rather than silicon. The technology, according to Coatue CEO Andrew Perlman, could lead to a cheaper and denser alternative to flash memory chips, the type of memory used in digital cameras and MP3 players. Meanwhile, Intel is collaborating with Thin Film Technologies in Linköping, Sweden, on a similar high-capacity polymer memory.
The demand for digital storage capacity is growing by more than 60% per annum. Facilities like storage area networks, data warehouses, supercomputers and e-commerce-related data mining require ever greater capacity to process the volume of data.

Further, with the advent of high-bandwidth Internet and data-intensive applications such as high-definition TV (HDTV) and video & music on-demand, even smaller devices such as personal VCRs, PDAs, mobile phones, etc., will require multi-gigabyte and terabyte capacities in the next couple of years.

This ever-increasing capacity demand can only be managed by a steady increase in the areal density of the magnetic and optical recording media. In the future, this density increase is possible by taking advantage of shorter-wavelength lasers, higher lens numerical aperture (NA) or by employing near-field techniques. Today, optical data storage capacities have been increased by creating double-sided media. Another approach to increasing the effective storage capacity is unique to optical memory technologies: the Fluorescent Multilayer Disc (FMD), a three-dimensional store for large amounts of data. Three-dimensional optical storage opens up another dimension in which to increase the capacity of a given volume of media, with the objective of achieving a cubic storage element having the dimensions of the writing/reading laser wavelength. The current wavelength of 650 nm should be sufficient to store up to a terabyte of data.
A Surface-conduction Electron-emitter Display (SED) is a flat-panel display technology that uses surface-conduction electron emitters for each individual display pixel. The surface-conduction electron emitter emits electrons that excite a phosphor coating on the display panel, similar to the basic concept found in traditional cathode ray tube (CRT) televisions. This means that SEDs can combine the slim form factor of LCDs with the high contrast ratios and refresh rates that give CRTs their picture quality. Research so far claims that SEDs consume less power than LCD displays. The surface-conduction electron emitter apparatus consists of a thin slit across which electrons tunnel when excited by moderate voltages (tens of volts).

When the electrons cross the electric poles across the thin slit, some are scattered at the receiving pole and are accelerated towards the display surface by a large voltage gradient (tens of kV) between the display panel and the surface-conduction electron emitter apparatus. SED displays offer brightness, color performance and viewing angles on par with CRTs, yet they do not require a deflection system for the electron beam. As a result, engineers can create a display that is just a few inches thick and still light enough for wall-hanging designs. The manufacturer can enlarge the panel merely by increasing the number of electron emitters relative to the necessary number of pixels. SED technology has been in development since 1987; Canon and Toshiba are the two major companies working on SEDs.
Powered by digital electronics, this optical solution provides an all-digital connection between a graphic or video source and the screen in movie projectors, televisions, home theatre systems and business video projectors. The Digital Micromirror Device, or DLP® chip, a rectangular array of up to 2 million hinge-mounted microscopic mirrors (each measuring less than one-fifth the width of a human hair), is the optical semiconductor that controls the light in a DLP projection system.

A digital image is projected onto the screen or surface using these mirrors, simply by synchronising the digital video or graphic signal, a light source, and a projection lens. The advantages of DLP projectors over conventional projection systems are: 1) digital grayscale and color reproduction, thanks to the digital nature of the system, which makes DLP the final link in the digital video infrastructure; 2) greater efficiency than competing transmissive LCD technologies, as DLP has its roots in the reflective DMD; and 3) the capacity to create seamless, film-like images. So with DLP, EXPERIENCE THE DIGITAL REVOLUTION.
Posing a great challenge to traditional Charge-Coupled Devices (CCDs) in various applications, CMOS image sensors have improved with time, finding solutions to the problems of noise and sensitivity. The use of Active Pixel Sensors, built on sub-micron technologies, has helped attain low power, low voltage and monolithic integration. The manufacture of miniaturised single-chip digital cameras is an example of this technology.

The incorporation of advanced techniques at the chip or pixel level has opened new dimensions for the technology. Now, after a decade, the initial tussle over the emergence of complementary metal-oxide-semiconductor (CMOS) technology over the charge-coupled device (CCD) has slowly subsided, revealing the strengths and weaknesses of both technologies.
This project of a multi-core CPU, or supercomputer on a chip, which has opened a new dimension in the era of devices, is a combined undertaking by the corporate giants Sony, Toshiba and IBM (STI). This innovative technology will find applications in Sony's PlayStation 3 video game console, in replacing existing processors, in broadband network technology, and in boosting the performance of existing electronic devices. As per the details available, this single chip can perform 1 trillion floating-point operations per second (1 TFLOPS), which is several hundred times faster than a high-end personal computer.

The core concept of this project is to give more room to additional hardware resources that perform parallel computations, rather than to single-threaded performance. This means that only minimal resources are allocated to single-threaded operations, while more parallelizable multimedia-type computations are handled by multiple DSP-like processing elements.
The invention of new fluorescence-based chemical sensors has enabled myriad potential applications such as monitoring oxygen, inorganic gases, volatile organic compounds and biochemical compounds, as the technology is versatile, compact and inexpensive. Depending upon the vital criteria of accuracy, precision, cost and the ability to meet the environmental range of the intended application, a proper sensor can be chosen for a military control subsystem. Sensor web and video sensor technology are two widely applied sensor techniques: the Sensor Web is a type of sensor network or geographic information system (GIS) well suited for environmental monitoring and control, whereas video sensor technology is used for digital image analysis. In sensor web technology, we have a wirelessly connected, amorphous network of unevenly distributed sensor platforms or pods, which differs greatly from a TCP/IP-like network in its synchronous, router-free nature.

Because of this unique architecture, every pod in the network knows what is happening with every other pod throughout the Sensor Web at each measurement cycle. The main requirements for video sensor technology are application software and a computer that acts as the carrier platform, usually equipped with a Linux or Microsoft operating system upon which the application software runs. By programming digital algorithms, the interpretation of digital images and frame rates can be carried out. The video sensor is very helpful in evaluating scenes and sequences within the image section of a (CCD) camera.
WiDEN is a software upgrade developed by Motorola for its enhanced specialized mobile radio (ESMR) wireless telephony protocol. In a WiDEN, or Wideband Integrated Dispatch Enhanced Network, the subscriber unit communicates over four 25 kHz channels, giving a bandwidth of about 100 kbit/s. The iM240 PCMCIA card, which can transmit data at 60 kbit/s, was the first WiDEN-compatible device released by Motorola. Unveiled in mid-summer 2005, the first WiDEN-compatible telephones, the Motorola i850 and i760, received a software upgrade that put them on the WiDEN network. The Motorola i870, released on 31 October 2005, made possible the commercial launch of WiDEN. Today this new technology is widely used in Nextel's National Network.

Update: since the Sprint-Nextel merger, the company has determined that, because Sprint's CDMA network was already 3G and moving to EV-DO (broadband speeds) and then EV-DO Rev A (T-1 speeds), it would be redundant to keep upgrading the iDEN data network.
Human cognition depends primarily on the ability to perceive, interpret, and integrate audio-visual and sensory information. Adding extraordinary perceptual abilities to computers would enable computers to work together with human beings as intimate partners.

Researchers are attempting to add more capabilities to computers that will allow them to interact like humans: recognize human presence, talk, listen, or even guess a person's feelings. The BLUE EYES technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. It uses non-obtrusive sensing methods, employing the most modern video cameras and microphones to identify the user's actions through these imparted sensory abilities. The machine can understand what a user wants, where he is looking, and even his physical or emotional state.
Genomic Signal Processing (GSP) is the engineering discipline that studies the processing of genomic signals. The theory of signal processing is utilized in both structural and functional understanding. The aim of GSP is to integrate the theory and methods of signal processing with the global understanding of functional genomics, with special emphasis on genomic regulation.

Gene prediction, or gene finding, typically refers to the area of computational biology that is concerned with algorithmically identifying stretches of genomic DNA sequence that are biologically functional. This especially includes protein-coding genes, but may also include other functional elements such as RNA genes and regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced.
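As a concrete taste of gene finding, the sketch below scans a DNA string for simple open reading frames (an in-frame ATG start codon followed by a stop codon); real gene prediction combines many more signals, and the sequence here is made up for illustration.

```python
# Minimal open-reading-frame scan on the forward strand: one classic signal used
# in gene finding. Illustrative only; real gene predictors use many more features.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_codons: int = 3):
    """Yield (start, end, orf_sequence) for simple ORFs on the forward strand."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                         # remember the start codon
            elif codon in STOP_CODONS and start is not None:
                if (i - start) // 3 >= min_codons:
                    yield start, i + 3, seq[start:i + 3]
                start = None                      # look for the next ORF

dna = "CCATGGCTGATTAACGTATGAAACCCGGGTAGTT"      # made-up example sequence
for start, end, orf in find_orfs(dna):
    print(start, end, orf)
```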

In summer 1977, ESA placed the first technological study contract in the domain of intersatellite optical links. Now, twenty years later, a major milestone has been reached with the SILEX laser terminals having been flight tested for integration with their host spacecraft. At the same time, ESA is preparing itself for a new challenge: the potential massive use of optical cross links in satellite constellations for mobile communications and global multimedia services. This is an opportune moment to look back at the past twenty years of ESA effort in laser communications, to take stock of the results achieved and to reflect on ways to face the challenges of the future.

Introduction

Twenty years ago, in summer 1977, ESA placed a technological research contract for the assessment of modulators for high-data-rate laser links in space. This marked the beginning of a long and sustained ESA involvement in space optical communications. A large number of study contracts and preparatory hardware developments followed, conducted under various ESA R&D and support technology programmes. In the mid-1980s, ESA took an ambitious step by embarking on the SILEX (Semiconductor laser Intersatellite Link Experiment) programme, to demonstrate a pre-operational optical link in space.

SILEX, which will be in operation in the year 2000, has put ESA in a world-leading position in civilian optical intersatellite links. While SILEX formed the backbone of ESA's optical communications activities in the recent past, additional R&D activities were undertaken to develop attractive second-generation systems, particularly for the commercial satellite market. Indeed, at the turn of the century, literally thousands of intersatellite links, both radio-frequency (RF) and optical, are expected to be in operation in commercial multi-satellite constellations providing mobile communications, video conferencing and multimedia services. The race is on for the European laser communication industry to enter this lucrative market. Optical technology offers too many advantages in terms of mass, power, system flexibility and cost to leave the field entirely to RF. With the heritage of twenty years of technological preparation, European industry is well positioned to face this burgeoning demand for commercial laser terminals.

The early days

When ESA started to consider optics for intersatellite communications, virtually no component technology was available to support space system development. The available laser sources were rather bulky and primarily laboratory devices. ESA selected the CO2 gas laser for its initial work. This laser was the most efficient and reliable laser available at the time and Europe had a considerable background in CO2 laser technology for industrial applications. ESA undertook a detailed design study of a CO2 laser communication terminal and proceeded with the breadboarding of all critical subsystems which were integrated and tested in a complete laboratory breadboard transceiver model.

This laboratory system breadboarding enabled ESA to get acquainted with the intricacies of coherent, free-space optical communication. However, it soon became evident that the 10 micron CO2 laser was not the winning technology for use in space because of weight, lifetime and operational problems. Towards the end of the 1970s, semiconductor diode lasers operating at room temperature became available, providing a very promising transmitter source for optical intersatellite links. In 1980, therefore, ESA placed the first studies to explore the potential of using this new device for intersatellite links. At the same time, the French national space agency, CNES, started to look into a laser-diode-based optical data-relay system called Pastel. This line of development was consequently followed and resulted in the decision, in 1985, to embark on the SILEX pre-operational, in-orbit optical link experiment.

SILEX

SILEX is a free-space optical communication system which consists of two optical communication payloads to be embarked on the ESA Artemis (Advanced Relay and TEchnology MIssion) spacecraft and on the French Earth-observation spacecraft SPOT-4. It will allow data transmission at 50 Mbps from low Earth orbit (LEO) to geostationary orbit (GEO) using GaAlAs laser diodes and direct detection.
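To give a feel for the numbers behind such a link, the sketch below runs a textbook free-space optical link-budget calculation. The wavelength, transmit power, telescope diameters and range are all assumed for illustration only and are not taken from the SILEX design.

```python
# Illustrative free-space optical link budget (textbook Friis-style formula,
# diffraction-limited telescope gains, assumed numbers throughout).
import math

wavelength = 830e-9   # GaAlAs laser-diode wavelength (~830 nm), assumed
tx_power_w = 0.06     # 60 mW transmit power, assumed
tx_diam = 0.25        # 25 cm transmit telescope, assumed
rx_diam = 0.25        # 25 cm receive telescope, assumed
range_m = 45_000e3    # ~45,000 km LEO-to-GEO slant range, assumed

gain = lambda d: (math.pi * d / wavelength) ** 2          # ideal aperture gain
free_space_loss = (wavelength / (4 * math.pi * range_m)) ** 2

rx_power_w = tx_power_w * gain(tx_diam) * gain(rx_diam) * free_space_loss
print(f"Received power ~ {rx_power_w:.2e} W "
      f"({10 * math.log10(rx_power_w / 1e-3):.1f} dBm), before optical losses")
```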

The SILEX Phase A and B studies were conducted around 1985, followed by technology breadboarding and predevelopment of the main critical elements which were tested on the so-called System Test Bed to verify the feasibility of SILEX. A detailed design phase was carried out in parallel with the System Test Bed activities up to July 1989. At that time, the development of SPOT-4 Phase C/D was agreed with an optical terminal as passenger. This was an important decision since it made a suitable partner satellite available for the ESA data-relay satellite project; the stage was therefore set to start the main SILEX development effort in October 1989.

In March 1997, a major milestone was reached in the SILEX programme: both terminals underwent a stringent environmental test programme and are now ready for integration with their host spacecraft. However, due to the agreed SPOT-4 and Artemis launch dates, it is likely that the in-orbit demonstration of the overall system will not start before mid-2000. Consequently, the GEO terminal will need to be stored after the completion of the spacecraft testing. The first host spacecraft (SPOT-4) is planned for launch in February 1998. The launch of Artemis on a Japanese H2A is delayed for non-technical reasons until February 2000. Apart from launching Artemis, Japan is participating in the SILEX programme with its own laser terminal, LUCE (Laser Utilizing Communications Equipment), to be carried onboard the Japanese OICETS satellite (Optical Inter-orbit Communications Engineering Test Satellite), set for launch in summer 2000.

Optical ground station on Tenerife

As part of the SILEX in-orbit check-out programme, ESA started to construct an optical ground station on the Canary Islands in 1993 (Fig. 2). This station, which will be completed by the end of 1997, simulates a LEO optical terminal using a 1 m telescope, allowing the performances of the GEO optical terminal on Artemis to be verified. The optical ground station will receive and evaluate the data transmitted from Artemis and will simultaneously transmit data at optical wavelengths towards Artemis. In addition to its primary objective as the SILEX in-orbit check-out facility, the optical ground station will also be used for space-debris tracking, lidar monitoring of the atmosphere and astronomical observations.
Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for guarding the borders of a country. Using this concept we can easily identify a stranger or terrorists crossing the border. The border area is divided into a number of nodes. Each node is in contact with the others and with the main node.

The noise produced by the footsteps of the stranger is collected using a sensor. This sensed signal is then converted into a power spectral density and compared with a reference value of our choosing. The result of the comparison is processed by a microprocessor, which sends appropriate signals to the main node. Thus the stranger is identified at the main node. A micropower spectrum analyzer has been developed to enable low-power operation of the entire WINS system.
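A minimal sketch of the detection step described above might look as follows: estimate the power spectral density of the sensed signal, integrate the power in an assumed footstep band, and compare it with a reference threshold. The sample rate, frequency band and threshold below are illustrative assumptions.

```python
# PSD-threshold intruder detection sketch (assumed parameters throughout).
import numpy as np
from scipy.signal import welch

FS = 1000            # samples per second (assumed)
BAND = (20, 200)     # footstep energy band in Hz (assumed)
THRESHOLD = 1e-4     # reference in-band power level (assumed)

def intruder_detected(samples):
    freqs, psd = welch(samples, fs=FS, nperseg=256)   # estimate the PSD
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    band_power = np.trapz(psd[mask], freqs[mask])     # integrate power in band
    return band_power > THRESHOLD

# Example with synthetic data: quiet noise vs. noise plus a low-frequency thump.
t = np.arange(0, 1, 1 / FS)
quiet = 1e-3 * np.random.randn(len(t))
footstep = quiet + 0.05 * np.sin(2 * np.pi * 60 * t)
print(intruder_detected(quiet), intruder_detected(footstep))   # likely False, True
```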

A WINS node thus requires only microwatts of power, and it is much cheaper than other security systems in use, such as radar. It can also be used for short-distance communication of less than 1 km and introduces little delay, so it is reasonably fast. On a global scale, WINS will permit monitoring of land, water, and air resources for environmental purposes. On a national scale, transportation systems and borders will be monitored for efficiency, safety, and security.
Pixie dust is the informal name that IBM is using for its antiferromagnetically-coupled (AFC) media technology, which can increase the data capacity of hard drives to up to four times the density possible with current drives. AFC overcomes limits of current hard drives caused by a phenomenon called the superparamagnetic effect.

AFC allows more data to be packed onto a disk. The pixie dust used is a three-atom-thick layer of the element ruthenium sandwiched between two magnetic layers. The technology is expected to yield 400 GB hard drives for desktop computers and 200 GB hard drives for laptops.

IBM's use of AFC for hard drives overcomes what was considered an insuperable problem for storage: the physical limit for data stored on hard drives. IBM discovered a means of adding AFC to its standard production methods so that the increased capacity costs little or nothing.
Optical switching enables routing of optical data signals without the need for conversion to electrical signals and, therefore, is independent of data rate and data protocol. Optical Burst Switching (OBS) is an attempt at a new synthesis of optical and electronic technologies that seeks to exploit the tremendous bandwidth of optical technology, while using electronics for management and control.

In an OBS network the incoming IP traffic is first assembled into bigger entities called bursts. Bursts, being substantially bigger than IP packets, are easier to switch with relatively small overhead. When a burst is ready, a reservation request is sent into the core network. Transmission and switching resources for each burst are reserved according to a one-pass reservation scheme, i.e. the data is sent shortly after the reservation request without waiting for an acknowledgement of successful reservation.
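As an illustration of burst assembly, the sketch below buffers packets bound for the same egress node until an assumed size or time threshold is reached and then releases them as one burst; the thresholds and the class interface are hypothetical, not taken from any particular OBS specification.

```python
# Simple size/time-triggered burst assembler (illustrative thresholds).
BURST_SIZE_BYTES = 50_000   # assumed size trigger
BURST_TIMEOUT_S = 0.005     # assumed time trigger (5 ms)

class BurstAssembler:
    def __init__(self):
        self.buffer, self.bytes, self.opened_at = [], 0, None

    def add_packet(self, packet: bytes, now: float):
        """Buffer a packet; return the completed burst when a trigger fires."""
        if self.opened_at is None:
            self.opened_at = now
        self.buffer.append(packet)
        self.bytes += len(packet)
        if self.bytes >= BURST_SIZE_BYTES or now - self.opened_at >= BURST_TIMEOUT_S:
            burst, self.buffer, self.bytes, self.opened_at = self.buffer, [], 0, None
            return burst   # burst ready: send the reservation request, then the burst
        return None
```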

The reservation request (control packet) is sent on a dedicated wavelength some offset time prior to the transmission of the data burst. This basic offset has to be large enough to electronically process the control packet and set up the switching matrix for the data burst in all nodes. When a data burst arrives at a node, the switching matrix has already been set up, i.e. the burst is kept in the optical domain. The reservation request is analysed in each core node, the routing decision is made, and the request is forwarded to the next node. When the burst reaches its destination node it is disassembled, and the resulting IP packets are sent on to their respective destinations.
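The basic offset can be estimated, under common assumptions, as the control-packet processing time accumulated over the nodes on the path plus the switch set-up time; the timing values in this sketch are purely illustrative.

```python
# Back-of-envelope basic-offset estimate for a one-pass OBS reservation.
PROCESSING_S = 10e-6   # control-packet processing per node (assumed)
SETUP_S = 5e-6         # switching-matrix set-up time (assumed)
HOPS = 6               # nodes traversed by the burst (assumed)

basic_offset = HOPS * PROCESSING_S + SETUP_S
print(f"Basic offset: {basic_offset * 1e6:.0f} microseconds")
```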

The benefit of OBS over circuit switching is that there is no need to dedicate a wavelength for each end-to-end connection. OBS is more viable than optical packet switching because the burst data does not need to be buffered or processed at the cross-connect.

Advantages

* Greater transport channel capacity 

* No O-E-O conversion 

* Cost effective 

Disadvantages 

* Bursts dropped in case of contention 

* Lack of effective technology 

HYDERABAD: Writing the annual examinations to save the academic year was favoured by the student representatives from all the universities in the Telangana region who met at Osmania University on Friday.
The OU JAC leaders Pulla Rao Yadav and Marri Anil said that a majority of students spoke in favour of writing the examinations, as it would also help them stay on the campus and continue their agitation, apart from saving the academic year. They felt that if the examinations were boycotted they would lose their admissions and have to vacate the hostels as per the rules, which would severely hamper the movement taken up by the students. They said a few wanted a boycott of exams to build up pressure. However, the discussions would continue on Saturday and a final decision would be taken.
The meeting also discussed the issue of keeping the movement peaceful to sustain it on a long term basis.
Mr. Pulla Rao Yadav said the need to educate people in villages and create awareness across all sections was also discussed. As a part of this, a padayatra or a bus yatra would be planned soon.
Meanwhile, some organisations expressed their displeasure at not being given enough importance on the Vidyarthi Garjana platform. The meeting felt that there should not be divisions over such small issues and decided to take everyone along in the agitation. Marri Anil, a JAC member, said the issue was small and their opinions were taken positively.
The students also decided to post one crore letters to the Prime Minister, Manmohan Singh, seeking a separate State. A box has been put up at the University for students and others to post letters. A student leader claimed that the response was good on the first day.
Kakinada: The KIET (Kakinada Institute of Engineering and Technology) fest was inaugurated on the college premises on Friday by its chairman, P.V. Viswam. Students of the CSE, EEE and ECE departments presented papers on solid state drives, power generation, wireless power transmission, etc. A projects expo was later organised, followed by cultural events.
Thursday, January 7, 2010

JNTU Hyderabad revised B.Tech time table (final)

JNTU Hyderabad revised MBA (CCC) examination time table:
http://www.ziddu.com/download/8055275/-100107-185028-RevisedMBACCCExaminationTimeTable.zip.html

JNTU Hyderabad revised MBA (PTPG) time table

JNTU Hyderabad revised MCA IV and V semester time table