					                          Chairman’s Message

     It is heartening to see the August Issue of IEEE Student Branch, VIT
Newsletter, “SWASTI” being released on 1st August ’05.

       Swasti gives an opportunity to students and faculty of Vellore
Institute of Technology (Deemed University) to pen down their technical
knowledge as Technical Articles, Technical Abstracts and Technical
Quizzes, thus giving them a chance to share their knowledge with other
students and faculty all over the world.

     Swasti, being an online newsletter, is easily accessible to anyone,
anywhere in the world.

      The best technical article chosen from each issue of Swasti will be
forwarded for publication in the Madras Section Newsletter, which
covers the IEEE Student Branches of colleges all over Tamil Nadu.

       I look forward to more students and faculty joining IEEE to become a
part of the Esteemed IEEE FRATERNITY…

Please send your feedback/suggestions to zutshiboy@yahoo.com .



                                                             Aditya Zutshi
                                                                  IV ECE
                                                        Chairman & Editor
                           From Editor’s Desk


     Welcome to the August Issue of IEEE Student Branch, VIT
Newsletter, “Swasti”.

       It is heartening to see Students and Faculty Members coming forward
to submit their Technical Articles for the Newsletter.

      We hope to have more students and faculty members contributing
Technical Articles, Technical Paper Abstracts, Technical Project
Abstracts and Technical Quizzes to the newsletter in the future.

      Please mail your contributions for the forthcoming issues to
zutshiboy@yahoo.com / kashyap.reddy@gmail.com .


                   Let The Saga Of IEEE Continue…


                                                                    Editors:

                                                             Aditya Zutshi
                                                                  IV ECE
                                                            Kashyap Reddy
                                                                   II CSE
                       The Big Bang – In The Brain


The environment is changing, sometimes slowly and imperceptibly, yet time
has witnessed the transformation of the primordial swamp into the austerity
of the technological age. The living inhabitants of the earth are also changing
in their bodily endowments, and these changes either fit them better for, or
handicap them in, the environment in which they find themselves. Both
structural and functional changes are taking place, slowly but steadily.

     A new pearl has recently been added to the necklace of human
evolution: a theory that answers the very fundamental question, 'How did we
get such a large brain?'
The answer lies in a tiny mutation in a myosin gene (a gene responsible for
muscle protein) known as MYH16. The mutation took place some 2.4 million
years ago, as a major step towards the larger brain that we humans now have.

              The myosin gene codes for a protein that builds strong muscular
jaws. Myosin is a protein that, along with other proteins, helps muscles
contract (most importantly the jaw muscles). The mutation causes weaker
muscle formation. Experiments involving macaque monkey and human genes
were conducted to determine how this mutation worked. It was established
that this myosin gene only worked in the muscles of the head used for
chewing and biting. The difference between the two subjects was that the
macaque carried the normal form while humans carry the mutated version.
Studies of humans from every continent found that the gene is mutated in all
of them. The mutation is not found in other primates, not even in
chimpanzees (the closest living relatives of humans). So at some point the
human evolutionary track shifted, incorporating this mutated gene.

       The question that next needs to be asked is why this is important: what
does this gene mutation actually do? What is being suggested is that when
the gene mutated, there was less muscle pulling on and attached to the cranial
bones (the skull) of our ancestors. Because muscle can arrest bone growth
and shapes the bones to which it is attached, it can play a crucial role in bone
growth, in this case skull growth. When the head structures of the macaque
(as well as other primates) were compared to humans, it was seen that the
crests on the heads differed as well. The large crests on the macaque
illustrate how the jaw muscles attach to the head and leave little room for the
brain, and this crest is close to non-existent in human heads (matching our
smaller, less powerful jaws). With smaller muscles there are smaller anchors
attached to the head, and the skull is free to grow into a rounder shape. It is
thus suspected that powerful jaws are incompatible with large, powerful
brains.

The more interesting point about the mutated gene, which further supports the
idea that big brains and powerful jaws do not go together, is when the
mutation took place. The findings point out that the mutation occurred at a
very important point. Using the coding sequence of the myosin domains as a
molecular clock, it was estimated that this mutation appeared approximately
2.4 million years ago, pre-dating the appearance of modern human body size
and the emigration of Homo (early humans) from Africa. This represents the
first proteomic (protein-based) distinction found between humans and
chimpanzees. The mutation occurred just before major evolutionary changes
took place (when hominids began to have larger brain sizes), changes that
were very important in making us into what we are now. They did not happen
overnight; it took thousands of years before substantial changes were
noticeable. The mutation appeared close to 2.4 million years ago, after the
lineage leading to humans had diverged from that of chimpanzees (and other
primates). At about 2 million years ago, Homo habilis, an early member of
the genus Homo, began to emerge with a larger brain and smaller jaws.

        The controversy over this matter centres on whether this mutation
actually put us on the path to who we are today, and whether this one change
could determine the rest of our ancestors' lineage. Were the early humans
beginning to make tools and thus furthering their brain development? It is
possible that these early humans were advancing the technology of their
tools, and the ones who had this mutation were able to have larger brains,
thus giving them a better chance to succeed. This advantage could have been
the leading factor in the survival of the new genus. With the movement
towards scavenging for food and eating meat, it is possible that our ancestors
no longer needed powerful jaws to chew tough plants (thus our feeding
habits changed, leading us to become omnivores).

     Yet evolution does not happen so easily. Even if the jaws had less
power and were moving towards a smaller size, changes in the teeth would
have had to occur as well. The idea that one mutation in jaw muscle
formation could lead to what we are now does seem a little too easy an
explanation. There are still other genetic differences that need to be looked at.
       It would seem to me, even though I am no expert on the subject, that
such a change could be a determining factor. While it may not be the only
mutation needed to develop us into what we are today, it is possible to
acknowledge that without such a mutation we may not have become who we
are. It is clear that other primates do not have this mutated gene, and they
have not turned out as we did. Such is the way of evolution: chance and luck
push us in a particular direction. I think it would be safe to say that without
this mutation, we would not be what we are. Our large brains may never have
come to be, and we could still be living in the wilderness. It is hard to say
that one small event determined our entire lineage, but it is not out of the
question.



                                                          Dushyant Mishra
                                                        IV Bio-Technology
                                                                     VIT
                                       E-Mail: dushyant_mishra@yahoo.com
                Computer Made from DNA and Enzymes


Israeli scientists have devised a computer that can perform 330 trillion
operations per second, more than 100,000 times the speed of the fastest PC.
The secret: It runs on DNA. A year ago, researchers from the Weizmann
Institute of Science in Rehovot, Israel, unveiled a programmable molecular
computing machine composed of enzymes and DNA molecules instead of
silicon microchips. Now the team has gone one step further. In the new
device, the single DNA molecule that provides the computer with the input
data also provides all the necessary fuel. The design is considered a giant
step in DNA computing. The Guinness World Records last week recognized
the computer as "the smallest biological computing device" ever
constructed. DNA computing is in its infancy, and its implications are only
beginning to be explored. But it could transform the future of computers,
especially in pharmaceutical and biomedical applications.

Following Mother Nature's Lead

Biochemical "nanocomputers" already exist in nature; they are manifest in all
living things. But they're largely uncontrollable by humans.
We cannot, for example, program a tree to calculate the digits of pi. The idea
of using DNA to store and process information took off in 1994 when a
California scientist first used DNA in a test tube to solve a simple
mathematical problem. Since then, several research groups have proposed
designs for DNA computers, but those attempts have relied on an energetic
molecule called ATP for fuel. "This re-designed device uses its DNA input
as its source of fuel," said Ehud Shapiro, who led the Israeli research team.
Think of DNA as software, and enzymes as hardware. Put them together in a
test tube. The way in which these molecules undergo chemical reactions
with each other allows simple operations to be performed as a byproduct of
the reactions. The scientists tell the devices what to do by controlling the
composition of the DNA software molecules. It's a completely different
approach from pushing electrons around a dry circuit in a conventional
computer. To the naked eye, the DNA computer looks like a clear water
solution in a test tube. There is no mechanical device. A trillion bio-
molecular devices could fit into a single drop of water. Instead of showing
up on a computer screen, results are analyzed using a technique that allows
scientists to see the length of the DNA output molecule. "Once the input,
software, and hardware molecules are mixed in a solution it operates to
completion without intervention," said David Hawksett, the science judge at
Guinness World Records. "If you want to present the output to the naked
eye, human manipulation is needed."

Don't Run to the PC Store Just Yet

As of now, the DNA computer can only perform rudimentary functions, and it
has no practical applications. "Our computer is programmable, but it's not
universal," said Shapiro. "There are computing tasks it inherently can't do."
The device can check whether a list of zeros and ones has an even number of
ones. The computer cannot count how many ones are in a list, since it has a
finite memory and the number of ones might exceed its memory size. Also,
it can only answer yes or no to a question. It can't, for example, correct a
misspelled word. In terms of speed and size, however, DNA computers
surpass conventional computers. While scientists say silicon chips cannot be
scaled down much further, the DNA molecule found in the nucleus of all
cells can hold more information in a cubic centimeter than a trillion music
CDs. A spoonful of Shapiro's "computer soup" contains 15,000 trillion
computers. And its energy-efficiency is more than a million times that of a
PC. While a desktop PC is designed to perform one calculation very fast,
DNA strands produce billions of potential answers simultaneously. This
makes the DNA computer suitable for solving "fuzzy logic" problems that
have many possible solutions rather than the either/or logic of binary
computers. In the future, some speculate, there may be hybrid machines that
use traditional silicon for normal processing tasks but have DNA co-
processors that can take over specific tasks they would be more suitable for.

Doctors in a Cell

Perhaps most importantly, DNA computing devices could
revolutionize the pharmaceutical and biomedical fields. Some scientists
predict a future where our bodies are patrolled by tiny DNA computers that
monitor our well-being and release the right drugs to repair damaged or
unhealthy tissue. "Autonomous bio-molecular computers may be able to
work as 'doctors in a cell,' operating inside living cells and sensing
anomalies in the host," said Shapiro. "Consulting their programmed medical
knowledge, the computers could respond to anomalies by synthesizing and
releasing drugs." DNA computing research is going so fast that its potential
is still emerging. "This is an area of research that leaves the science fiction
writers struggling to keep up," said Hawksett from the Guinness World
Records.
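
To make the parity-checking task mentioned above concrete, here is a small
illustrative sketch in standard C++ (purely a conventional-software analogue,
not the molecular implementation): a two-state automaton that reports whether
a bit string contains an even number of ones. The function name and sample
inputs are ours, for illustration only.

#include <iostream>
#include <string>

// Illustrative two-state automaton: returns true if the bit string
// contains an even number of '1' characters (the task described above).
bool hasEvenNumberOfOnes(const std::string& bits)
{
    bool even = true;               // start in the "even" state
    for (char c : bits)
        if (c == '1')
            even = !even;           // every 1 toggles the state
    return even;
}

int main()
{
    std::cout << hasEvenNumberOfOnes("0110") << '\n';   // prints 1 (even)
    std::cout << hasEvenNumberOfOnes("0111") << '\n';   // prints 0 (odd)
    return 0;
}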


                                                               Anupam Singh
                                                                    III CSE
                                                                   Avik Dey
                                                                    III ECE
                                                                        VIT
            Glaucoma Classification From Optic Nerve Images


Abstract:

Glaucoma is one of the major causes of preventable blindness in the world.
It induces nerve damage to the optic nerve head via increased pressure in the
ocular fluid. It is presently detected either by regular inspection of the retina,
measurement of the Intra Ocular Pressure (IOP) or by a loss of vision. It has
been observed that nerve damage precedes the latter two events and that
direct observation of the nerve head could therefore be a better method of
detecting glaucoma, if the observations could be made reliably. This paper
describes our work in enhancing and segmenting the optic nerve head in
images of the retina and in classifying it according to the severity of the
disease. Once the glaucomatous disc is segmented, its shape is compared
with that of a normal optic disc, which reflects the severity of the nerve
damage. Finally, once the nerve head has been located, its shape is
quantified and stored as a feature vector for further analysis.

                                                                     R.Balaji
                                                                    JRF, VIT
                                         E-Mail: balaji_ranga@rediffmail.com
                    Data encryption and compression


Abstract:

Data… the very word brings into our mind the digital world. All data in the
digital world ultimately gets converted to binary digits, 0 and 1. In the
present paper, we are going to see how simple and easy manipulations can
be done on data to achieve the feats of….
       1: Encryption: Making the data unreadable to outsiders
       2: Compression: Reducing the size of data, thus saving disk space

Different methods with real world working C++ codes are presented.
Advantages and disadvantages are discussed.

Data encryption:

INTRODUCTION:

      The information age has arrived. In the early 80's the digital world
was restricted to certain scientific laboratories. But, as the age
progressed, almost a fourth of the world has entered the digital age.
Many prying eyes try to access data illegally. Your credit card number or
your email ID could be the next target!

      The process of encryption has two major operations. Let us say that
you are the user and your email provider is the server. As you enter your
email id and password and press the “sign in” button, the data entered is
encrypted so that no one can capture it in the login process. Then, when your
information reaches the server, the data is decrypted and the response is
generated.

WORKING PRINCIPLES:

      We shall see two working principles
      1. Negate encryption
      2. XOR encryption
1. Negate encryption: Negate is a common bitwise operator present in
almost all programming languages. The function of negate is to change a bit
into its opposite. For instance, a 0 is converted into a 1 and a 1 is converted
into a 0.

      For example, let us assume that the user entered 3. 3 in binary format
is

                          3 = 0011

     During encryption, negate operator is applied on 3. As a result, 3
becomes,

Encryption phase:

                         3 = 0011
                    Negate (3) = 1100 =12

      Thus, the original input 3 was converted to 12 by the negate operator,
and the data is encrypted. Now, to decrypt the data, apply the negate
operator to the encrypted value again…

Decryption phase:

                  Negate (3) = 1100 = 12
             Negate (Negate (3)) = Negate (12) = 0011 = 3

      Now, we have recovered the original data, 3.

Advantages: Encryption is done at a rapid speed.
Disadvantage: This type of encryption can be easily hacked. Anyone can
apply a negate operator and find out the original data.

2. XOR Encryption: XOR is another bitwise operator that is applied on two
binary data. The truth table of XOR is given below for reference…

             P                 Q              P (XOR) Q
             0                 0                  0
             1                 0                  1
             0                 1                  1
             1                 1                  0
      Let us take the input 3 as P. Here, the encrypter (user) has the right to
choose the value of Q. Q is the encrypting key. Let us assume Q to be 8.
We have,
                   P=0011, Q=1000
                    P (XOR) Q = 1011           (refer to the truth table)

      Thus, the input data, 3 was converted into 11 (i.e., 1011). Data is
encrypted. To decrypt the data, apply the XOR operation with 8 again.

                      11 (XOR) 8
                   1011 (XOR) 1000
                   result = 0011            (refer to the truth table)

      Hence, the data 3 is obtained back on decryption.

Advantages: Much harder to break than negate encryption, since the hacker
has to guess the encryption key (8 in the above case) to decrypt.

Disadvantages: Slower than the negate encryption.
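
Before moving to the file-based listings later in this paper, the reversibility
of XOR encryption (applying the same key twice restores the original byte) can
be demonstrated in a few lines. This is only a small in-memory sketch, written
in standard C++ for brevity, with an assumed key of 8; the variable names are
illustrative.

#include <iostream>
#include <string>

int main()
{
    const char key = 8;                 // assumed key, as in the example above
    std::string data = "secret";

    for (char& c : data) c = c ^ key;   // encrypt: every byte XORed with the key
    std::cout << "Encrypted: " << data << '\n';

    for (char& c : data) c = c ^ key;   // decrypt: XOR with the same key again
    std::cout << "Decrypted: " << data << '\n';   // prints "secret" again
    return 0;
}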

DATA COMPRESSION:

INTRODUCTION: The most commonly used portable storage medium is the
floppy disk. It has a capacity of only 1.44 MB. Now, there is no need to get
frustrated if your file size is 1.55 MB: data compression techniques have
arrived. With compression utilities like WinZip, file size can be reduced
greatly. The two compression techniques described in this paper are…

   1. Compression by removing extra characters
   2. Compression by using a dictionary

   1. Compression by removing extra characters: In this type of
      compression, the character which is repeated again and again is
      removed. This reduces storage space. Suppose a file has the character
      “a” repeated 10 times. We have,
                   Total size =Size of 10 “a”s = 10 bytes

      During compression, replace 10 a’s and instead put the number of a’s
and the character a.
        After compression:
                    Total size=no of a’s(2 bytes) + size of one ‘a’(1 byte) = 3
bytes
     Hence, the size of 10 bytes was reduced to three bytes using data
compression.

Advantages: 1. Greater speed
  2. Ability to change the repeated character according to the user's choice

Disadvantages: 1. Can be applied only to text and source code files.

2. Compression using a dictionary: If you have ever wondered why pointers
are used in C/C++, here is one answer. This technique, followed by most of
the compression utilities today, uses pointers. As per this technique, the
software stores several commonly used expressions in a dictionary.
Whenever the file being compressed contains such a word, the word is
replaced by a pointer into the dictionary.

For example:

Before compressing:
Word in file:    “ Arachnophobia”                       Size: 13 bytes

After compressing:
Word in file:     Pointer to Arachnophobia              Size: 1 byte

Whoa! Reduction of 12 bytes.

Advantages: 1. All files can be compressed. (Even binary files have
common sequences of terms.)
     2. A high degree of compression can be achieved.

Disadvantages: 1. The system must have software containing the dictionary
to decompress. Such software is usually more than 10 MB in size.
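
Since no full listing for the dictionary method is given in the code section
below, here is a minimal sketch of the idea, written in standard C++ (not a
production compressor): words found in a small, assumed dictionary are replaced
by a marker byte and a one-byte index, and everything else is copied through
unchanged. The dictionary contents, the marker value and the function names are
illustrative assumptions.

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Assumed tiny dictionary; a real utility would ship a much larger one.
const std::vector<std::string> dictionary = { "Arachnophobia", "the", "and", "compression" };

// Replace dictionary words by a marker byte (0x01) followed by a one-byte index.
std::string compress(const std::string& text)
{
    std::istringstream in(text);
    std::string word, out;
    while (in >> word) {
        bool replaced = false;
        for (size_t i = 0; i < dictionary.size(); ++i) {
            if (word == dictionary[i]) {
                out += '\x01';                      // marker: "dictionary reference follows"
                out += static_cast<char>(i);        // index into the dictionary
                replaced = true;
                break;
            }
        }
        if (!replaced) out += word;                 // unknown word: copy as-is
        out += ' ';
    }
    return out;
}

int main()
{
    std::string original = "Arachnophobia is a common fear";
    std::string packed = compress(original);
    std::cout << original.size() << " bytes -> " << packed.size() << " bytes\n";
    return 0;
}
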
THE C++ CODES:

      Discussion of the principles has given us sufficient knowledge on how
compression and encryption can be achieved. Let us now see the working
codes for these principles.

Encryption: Negate encryption
CODE START:

#include<iostream.h>
#include<conio.h>
#include<fstream.h>
int main()
{
      char ch;
      ifstream in("Z:\\conv.txt");       //source file (note the escaped backslash)
      ofstream out("Z:\\done.txt");      //destination file
      if(!in)
      {
              cout<<"Error in source file!";
              return 1;
      }
      if(!out)
      {
              cout<<"Error in target file!";
              return 1;
      }
      while(in.get(ch))                  //read the file one character at a time
      {
              ch=~ch;                    //negate (bitwise NOT) every byte
              out.put(ch);               //write the encrypted byte
      }
      cout<<"Success!";                  //encrypted

      getch();
      return 0;
}
//result: After this, to decrypt the file, replace conv.txt with done.txt and
//done.txt with some other name

CODE END
Encryption: XOR Encryption

CODE START:

//XOR Encryption...try these only with source files and text files
#include<iostream.h>
#include<conio.h>
#include<fstream.h>
int main()
{
      char ch;
      ifstream in("Z:\\conv.txt");       //source file (note the escaped backslash)
      ofstream out("Z:\\done.txt");      //destination file
      if(!in)
      {
              cout<<"Error in source file!";
              return 1;
      }
      if(!out)
      {
              cout<<"Error in target file!";
              return 1;
      }
      while(in.get(ch))                  //read the file one character at a time
      {
              ch=ch^8;                   //XOR every byte with the key (8)
              out.put(ch);               //write the encrypted byte
      }
      cout<<"Success!";

      getch();
      return 0;
}
//result: After this, to decrypt the file, replace conv.txt with done.txt and
//done.txt with some other name

COMPRESSION:

//Compression by removing extra characters
//The extra character removed in this program is the space character
#include<iostream.h>
#include<conio.h>
#include<fstream.h>
void main()
{
      char ch;
      int count;
      ifstream in("c:\\winnt\\temp\\shyam\\conv.txt", ios::in | ios::binary);
      //compression stage
      ofstream out("c:\\winnt\\temp\\shyam\\comp.txt", ios::out | ios::binary);
      while(in.get(ch))
      {
             if(ch==' ')
             {
                    //count the run of consecutive spaces
                    count=1;
                    in.get(ch);
                    while(ch==' ')
                    {
                           count++;
                           in.get(ch);
                    }
                    //store the run length as a single byte above 127,
                    //followed by the first non-space character
                    out.put(count+127);
                    out.put(ch);
             }
             else
             {
                    out.put(ch);
             }
      }
      cout<<"success";
getch();
}

Decompression:
#include<stdio.h>
#include<conio.h>
void main()
{
      int ch;
      char space=' ';
      int count;
      FILE *fp,*qp;
      fp=fopen("c:\\winnt\\temp\\shyam\\comp.txt","r");
      //decompression stage
      qp=fopen("c:\\winnt\\temp\\shyam\\decomp.txt","w");

      ch=getc(fp);
      while(ch!=EOF)
      {
            if(ch>127)
            {
                   //a byte above 127 encodes a run of (ch-127) spaces
                   ch=ch-127;
                   for(count=1;count<=ch;count++)
                   {
                         putc(space,qp);
                   }
            }
            else
            {
                   putc(ch,qp);
            }
            ch=getc(fp);
      }
      fcloseall();
getch();
}


CONCLUSION:

DATA ENCRYPTION:
   1. In data encryption, if security is the prime factor and not the speed, it is
advisable to use an XOR encryption with a big key (like 250).
   2. If speed is one of the criteria, use XOR with a smaller key (like 8)
   3. If only minimal security is needed, we can always use the negate
   encryption

DATA COMPRESSION:
  1. If the file size is too large, it is always better to go for dictionary
     compression. Make sure to use a popular utility like WinZip.
   2. If the file size is small but a bit big for a floppy, use the “Extra
      character elimination” compression. The software occupies only about
      10kB.


Bibliography:
   1. C Projects by Yashavant Kanetkar
   2. C++: The Complete Reference by Herbert Schildt

NOTE: All the programs in this paper have been built and tested with the Turbo
C++ compiler.


                                                      SD Shyam Bharath
                                                               II CSE
                                           Email: shyam.cool@gmail.com
       Closed loop Adaptive Optics system for Free space Optical
                           Communication


Introduction

Optical communication has become the order of the day due to its very high
bandwidth, low noise and reduced number of repeaters.                     Optical
communication systems use an optical fiber to carry the light from one end
to the other. In other words, the optical fibers only replace the electrical
wires. The present global research aims at fiberless optical communication,
something similar to wireless electronic communication. Since light needs a
transparent medium for propagation, a fiberless optical link is not always
possible for terrestrial communications. However, a fiberless optical link is a
must for ground-to-satellite (and vice versa) and satellite-to-satellite links,
the so-called free space optical communication [1]. Also, Urban Optical
Wireless Communication (UOWC) uses light beams propagating through the
atmosphere to carry information, by placing a transmitter and a receiver on
high-rise buildings at a separation of several hundred meters [2]. To
establish optical communication between two satellites, the line of sight of
their optics must be aligned during the entire time of communication. A laser
is used for such purposes, due to its high directionality. The direction of the
laser beam keeps changing as it travels through the atmosphere. This is
because the various layers of the atmosphere have
different refractive indices depending on the temperature and pressure at that
height. Further, these refractive index values fluctuate with time due to
fluctuations in temperature and pressure [3]. This results in the change of
direction of the propagating laser beam with time. For instance, even a
small temperature fluctuation of a tenth of a degree Kelvin would generate
beam tilts of the order of a few micro radians, which, after propagation over
a few hundred meters, result in a large shift of the incoming beam at the
detector plane. This causes the focused spot of the laser beam from the
distant source to move about in a plane (X-Y motion) and Z being the
direction of propagation. For tilts above a particular value, the beam will no
longer fall on the detector, meaning a break of the optical link. Hence a
mechanism to steer the laser beam at the receiving end becomes a must.

The main complexity of satellite optical communication is the pointing
system [4].    The complexity of the pointing system derives from the
necessity to point from one satellite to another over a distance of tens of
thousands of kilometers with a beam divergence of a few micro radians.
Pointing systems use complementary information sources to point the
information beam in the right direction: rough pointing based on ephemeris
data and fine pointing based on an electro-optic tracking system. The basic
and popular method of tracking between satellites includes use of a beacon
signal on one satellite and a quadrant detector and tracking system in the
other satellite. The fine elevation and azimuth angle of the pointing system
responds to the output signal of the quadrant detector.

Arnon et al extensively studied the theoretical limitations of free-space
optical communication using satellite orbit earth equation for coarse
adjustment and fine control of beam focusing on to the signal detector by
laser beam steering applications [5-7]. Xiaoming Zhu described the free
space communication through the atmospheric turbulence channel under
practical considerations [1].
The SHWFS has been used for position and displacement sensing as well
as tip/tilt estimation [8-9]. Max and Esposito et al. performed laser beam
steering using an APD at the laboratory level [10,11]. This work describes a
beam steering control system for a propagating laser beam, which is
essential in free-space optical link. This article describes an indigenously
developed Shack-Hartmann Wavefront Sensor, which measures the
wavefront tilt in real-time and a closed loop tip-tilt compensation using a
piezo-driven mirror. The system uses a commercial frame grabber and a
CCD camera, which are less expensive. Spatial sampling of the beam with a
10x10 microlens array, Zernike Polynomial decomposition up to 4th order
and real-time display of tilt-map are carried out. These values of tilt are
needed to drive a steering mirror to compensate the tilt.

Shack-Hartmann Wavefront Sensor

   The Shack-Hartmann Wavefront sensor (SHWFS) consists of a lenslet
   array and a set of position detectors placed behind it. The lenslet array
   spatially samples the beam and focuses it onto the position detectors as
   shown in Fig. 1.1. The centroids of the sub-images formed by each lens of
   the lenslet array are determined, and the sensor output is a set of {x, y} spot
   positions [12]. If the incoming beam is perfectly plane, then all the sub-
   aperture spots fall exactly at the centers of the CCD sub-arrays. When the
   incoming beam is aberrated (tilted), the spots are deviated from their
   original positions on the CCD detector array. Wavefront measurement by
  SHWFS is based on the measurements of local slopes of a distorted
  wavefront ∂Φ/ ∂n relative to a reference plane wavefront. The local slope
  is proportional to the displacement of the spot center ∆S.


      Fig.1.1 Shack-Hartmann Wavefront Sensor: (a) lenslet array (focal
      length f) focusing the incoming wavefront onto the CCD detector;
      (b) focal plane image (ideal and distorted spot image).

The centroid position formulae used in this Shack-Hartmann sensor are
expressed as [13-15],
  Xc(K) = [ Σ(i=1..I) Σ(j=1..J) x(i,j) s(i,j) ] / [ Σ(i=1..I) Σ(j=1..J) s(i,j) ]      (1.1)

  Yc(K) = [ Σ(i=1..I) Σ(j=1..J) y(i,j) s(i,j) ] / [ Σ(i=1..I) Σ(j=1..J) s(i,j) ]      (1.2)
where x(i,j) and y(i,j) are the coordinate positions of the (i,j)th pixel in the Kth
sub-aperture and s(i,j) is the input wavefront signal, i.e. the intensity at pixel
(i,j); the square sub-aperture has I x J pixels. With these formulae, the centroid
position of the input wavefront at the Kth sub-aperture can be calculated as
(Xc(K), Yc(K)).
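
As a concrete illustration of equations (1.1)-(1.4), the following sketch in
standard C++ computes the intensity-weighted centroid of one sub-aperture and
the corresponding local slopes from its shift against a reference spot position.
It is only a simplified illustration with assumed values (sub-aperture size,
reference position, pixel pitch and focal length); it is not the wavefront
sensing software described in this article.

#include <iostream>
#include <vector>

// Intensity-weighted centroid of one sub-aperture, eqs. (1.1) and (1.2).
// s[i][j] is the pixel intensity; x and y are simply the pixel indices here.
void centroid(const std::vector<std::vector<double> >& s, double& xc, double& yc)
{
    double sum = 0.0, sx = 0.0, sy = 0.0;
    for (size_t i = 0; i < s.size(); ++i)
        for (size_t j = 0; j < s[i].size(); ++j) {
            sum += s[i][j];
            sx  += i * s[i][j];
            sy  += j * s[i][j];
        }
    xc = sx / sum;
    yc = sy / sum;
}

int main()
{
    // Assumed 4x4 sub-aperture with a bright spot displaced from the centre.
    std::vector<std::vector<double> > spot(4, std::vector<double>(4, 0.0));
    spot[2][1] = 1.0;

    double xc, yc;
    centroid(spot, xc, yc);

    // Local slopes, eqs. (1.3) and (1.4): shift of the centroid from the
    // reference position divided by the lenslet focal length f (assumed values).
    const double xr = 1.5, yr = 1.5;       // reference spot position (pixels), assumed
    const double f  = 5.0e-3;              // lenslet focal length (metres), assumed
    const double pixel = 10.0e-6;          // pixel pitch (metres), assumed
    double slope_x = (xc - xr) * pixel / f;
    double slope_y = (yc - yr) * pixel / f;
    std::cout << "slope_x = " << slope_x << "  slope_y = " << slope_y << '\n';
    return 0;
}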

When there is a wavefront distortion, the positions of the spot-centres are
changed. The displacement of the spot centre (xc, yc) within each sub-aperture
with respect to a reference position (xr, yr) is measured and the local gradient
of the wavefront Φ(x,y) is obtained according to [13]

              ∂Φ/∂x = Sx / f                                          (1.3)
              ∂Φ/∂y = Sy / f                                          (1.4)

where Sx = xc - xr, Sy = yc - yr and f is the focal length of the lenslet. In matrix
form this can be written as:
             S = A a                                               (1.5)
where A is the so-called gradient rectangular matrix with M columns and 2NxNy
rows (Nx being the number of focal spots along the x direction and Ny along the
y direction).
As a result we obtain Zernike coefficients [13-30]

            a = B•S                                                   (1.6)
where B = (A^T A)^-1 A^T.

Now, when Zernike coefficients are obtained (as detailed in the following
section), a map of the surface is created in a number of equally spaced
points, and displayed on the screen. In our WFS, we have taken Zernike
terms up to 4th order Cartesian form for the measurement of wavefront
aberration from the slope values.

In the Wavefront Sensor (WFS) developed by us, the optical design was
carried out for 10x10 sub-aperture sampling of the wavefront under test. The
schematic of the developed SHWFS layout is shown in Fig. 1.2. The system
consists of a He-Ne laser source with collimating optics for providing
reference plane wavefront and it is spatially sampled by the microlens array,
which focuses it on to a CCD camera detector as a set of 10x10 spots. The
CCD camera (Hitachi KP-M2E) was connected to a frame grabber
(PCVISION card from Coreco Imaging Inc.), interfaced to a computer. The
shutters S1 and S2 are used to select one of the beams (reference or test).




   Fig.1.2 Schematic of the Optical Setup: S1,S2 micro controlled
 stepper motor shutters, M1 –Mirror, L1,L2,L3, - lenses, SPF-Spatial
 Filter, NDF-Neutral density filter, BS-Beam splitter, MLA-Microlens
                                 Array.


To begin with, the shutter S1 is open and S2 is kept closed as the initial
condition of the wavefront sensing program. The well-collimated plane
wavefront from the He-Ne laser now gets focused as a set of spots on the
CCD array. The centroids of these spots are taken as reference co-ordinates
and stored as the values for a zero tilt beam. Once this is done, S1 is closed
and S2 is opened which allows the external test beam for wavefront
measurement. The shutters S1 and S2 are fitted on stepper motors
automatically controlled by the Wavefront Sensing software. The new
positions of the centroids are then estimated. By measuring the positional
shift of the centroids with respect to the reference data, the local tilt or slope
of the sub-aperture wavefronts are calculated. From these slope or gradient
measurements, the wavefront profile is obtained through wavefront
reconstruction algorithms and plotted. The Zernike coefficients, aberrations,
etc., are also computed and displayed (the measurement algorithm is summarised
in the flow chart of Fig. 1.3).




      Fig.1.3 Wavefront Measurement Algorithm flow chart


Principle of Laser Beam Steering

Figure1.4 shows a typical arrangement used for beam steering in line of
sight communications. The laser beam from an exo-atmospheric source or a
distant atmospheric source is reflected off a tip-tilt mirror (steering mirror)
on to the signal detector. A part of the beam is directed to the wavefront
sensor using a beam-splitter. The wavefront sensor measures the tilt
undergone by the incoming beam in both X and Y directions and a
correction is applied to tip-tilt mirror to annul the tilt in both directions in
real-time, as a closed loop control system.
         Fig.1.4 Schematic of Laser beam steering arrangement


Tilt compensation by beam steering


The global tilt correction for laser beam steering was carried out at the
laboratory level in a closed loop fashion by driving a tip-tilt mirror using the
wavefront sensor data. The optical setup of Fig. 1.5 was used for this
purpose. A test beam from a second laser was reflected off two tilt mirrors
and made to enter the Wavefront Sensor. The first tilt mirror was used to
introduce a tilt. This tilt was measured by the WFS computer, which drives
the second tilt mirror for compensating the same. The tilt was introduced
by a two-axis tilt mirror (Piezo-Jena, Germany) having the tilt range of 2
milliradians corresponding to 0-10 V driving voltage. The tilt compensation
signals for driving the tilt mirror control unit were generated using a DT 332
D/A converter card from Data Translation Inc. The tilt correction of the
laser beam was carried out through a two-axis tilt mirror (Mad city Inc.),
which has a resolution of 0.02 microradians. The tilt compensator receives
the compensation signal from the WFS computer and nullifies the tilt using
adaptive control algorithms.
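
A rough sketch of the kind of closed-loop correction described above is given
below in standard C++. It assumes the 0-10 V / 2 milliradian mirror range quoted
earlier and uses a simple integrating controller for one axis; the actual system
uses the DT 332 D/A card and adaptive control algorithms, which are not shown,
and all numerical values here are illustrative.

#include <algorithm>
#include <iostream>

// One axis of a simple integrating tip-tilt controller.
// measured_tilt_rad: residual tilt reported by the wavefront sensor.
// voltage:           command sent to the steering-mirror driver (0-10 V).
// Assumed mirror range: 2 milliradians over 0-10 V, i.e. 5000 V per radian.
double updateVoltage(double voltage, double measured_tilt_rad)
{
    const double volts_per_rad = 10.0 / 2.0e-3;   // assumed mirror calibration
    const double gain = 0.5;                       // loop gain < 1 for stability
    voltage -= gain * measured_tilt_rad * volts_per_rad;
    return std::min(10.0, std::max(0.0, voltage)); // clamp to the driver range
}

int main()
{
    double voltage = 5.0;              // start at mid-range
    double true_tilt = 240.0e-6;       // 240 microradian disturbance, as in Fig. 1.6

    // Closed loop: each iteration measures the residual tilt and updates the mirror.
    for (int step = 0; step < 10; ++step) {
        double mirror_tilt = (voltage - 5.0) / 5.0 * 1.0e-3;   // mirror deflection about mid-range
        double residual = true_tilt + mirror_tilt;             // what the sensor would measure
        voltage = updateVoltage(voltage, residual);
        std::cout << "step " << step << "  residual tilt = " << residual << " rad\n";
    }
    return 0;
}
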
       Fig.1.5 Closed loop tilt measurement / compensation setup
                  (WFS – Wavefront Sensor as in Fig. 1.2 ,
                  CL- Collimating Lens, SPF-Spatial Filter,
                  NDF-Neutral density filter)


Fig.1.6 (a) shows the wavefront profile when a known tilt of 240 µrad was
introduced in both X and Y directions. The compensation of the tilt along
both axes could be visualized in Fig. 1.6(b) when the tilt compensator was
activated. The values of the X-tilt, Y-tilt, peak to valley etc. are displayed
on the right side of the graphic window. For legibility, some of these values
are shown in Table 1.0 Figures 1.7 (a) and 1.7 (b) show yet another set of
results obtained by introducing a tilt of 480 µrad along both axes and the
compensated wavefront, respectively.
            Fig.1.6 (a) Wavefront profile for 240 µrad tilt

   Fig.1.6 (b) Wavefront tilt compensation profile for 240 µrad tilt
                        (2-axis tilt correction)


Zernike term         WFS output of            WFS output of
                     aberrated wavefront      corrected wavefront
X tilt               228 µrad                 1.84 µrad
Y tilt               -235 µrad                -0.12 µrad
Amag                 -483.68                  -0.424
P-V value            160 nm                   9.01 nm
Table 1.0 - Measured values of aberrated & compensated wavefronts
   Fig. 1.7 (a) Wavefront profile for 480 µrad tilt
                (480 microradian 2-axis tilt)

   Fig. 1.7 (b) Wavefront tilt compensation profile for 480 µrad tilt
                (2-axis tilt correction)
Conclusion

A closed loop tip-tilt correction for laser beam steering has been achieved.
The tilts measured by the developed Wavefront Sensor are in close
agreement with the applied tilts, and the tilt compensator nullifies the tilt
to a very high degree. The performance of the system
shows its ability to compensate the tilt effects due to atmospheric turbulence
on the light wavefront. It can be employed for the real-time adaptive optics
techniques for tracking a laser beam as in optical communication.

                                                    Dr. P. Arulmozhivarman
                                                          Department of EIE
                                                                        VIT
                                          Email: parulmozhivarman@vit.ac.in



Note: Contents of the paper are private property of the author and should not
be distributed or published without the permission of the author.
              What We Can Do With Artificial Intelligence


We have been studying this issue of AI application for quite some time now
and know all the terms and facts. But what we all really need to know is
what can we do to get our hands on some AI today. How can we as
individuals use our own technology? We hope to discuss this in depth (but as
briefly as possible) so that you the consumer can use AI as it is intended.
First, we should be prepared for a change. Our conservative ways stand in
the way of progress. AI is a new step that is very helpful to society.
Machines can do jobs that require detailed instructions to be followed and
mental alertness. AI, with its learning capabilities, can accomplish those
tasks, but only if the world's conservatives are ready to change and allow this
to be a possibility. It makes us think about how early man finally accepted the
wheel as a good invention, not something taking away from his heritage or
tradition. Secondly, we must be prepared to learn about the capabilities of
AI. The more use we get out of the machines, the less work is required of us,
and in turn there are fewer injuries and less stress for human beings. Human
beings are a species that learns by trying, and we must be prepared to give AI
a chance, seeing AI as a blessing, not an inhibition. Finally, we need to be
prepared for the worst
of AI. Something as revolutionary as AI is sure to have many kinks to work
out. There is always that fear that if AI is learning based, will machines learn
that being rich and successful is a good thing, then wage war against
economic powers and famous people? There are so many things that can go
wrong with a new system so we must be as prepared as we can be for this
new technology. However, even though the fear of the machines is there,
their capabilities are infinite. Whatever we teach AI, it will suggest in the
future if a positive outcome arises from it. AI systems are like children that
need to be taught to be kind, well mannered, and intelligent. If they are to make
important decisions, they should be wise. We as citizens need to make sure
AI programmers are keeping things on the level. We should be sure they are
doing the job correctly, so that no future accidents occur.


AIAI Teaching Computers Computers

Does this sound a little Redundant? Or maybe a little redundant? Well just
sit back and let me explain. The Artificial Intelligence Applications Institute
has many projects that they are working on to make their computers learn
how to operate themselves with less human input. To have more functionality
with less input is a goal of AI technology. I will discuss just two of
these projects: AUSDA and EGRESS.

AUSDA is a program which will examine software to see if it is capable of
handling the tasks you need performed. If it isn't able or isn't reliable,
AUSDA will instruct you on finding alternative software which would better
suit your needs. According to AIAI, the software will try to provide
solutions to problems like "identifying the root causes of incidents in which
the use of computer software is involved, studying different software
development approaches, and identifying aspects of these which are relevant
to those root causes producing guidelines for using and improving the
development approaches studied, and providing support in the integration of
these approaches, so that they can be better used for the development and
maintenance of safety critical software." Sure, for the computer buffs this
program is definitely good news. But what about the average person who
thinks the mouse is just the computer's foot pedal? Where do they fit into
computer technology? Well, don't worry guys, because us nerds are looking
out for you too! Just ask AIAI what they have for you, and it turns out that
EGRESS is right up your alley. This is a program which is studying
human reactions to accidents. It is trying to make a model of how people's
reactions in panic moments save lives. Although it seems like in tough
situations humans would fall apart and have no idea what to do, it is in fact
the opposite. Quick decisions are usually made and are effective, but not
flawless. These computer models will help rescuers make smart decisions in
times of need. AI can't be certain all the time, but it can suggest actions which
we can act out and therefore lead to safe rescues.

So AIAI is teaching computers to be better computers and better people. AI
technology will never replace man but can be an extension of our body
which allows us to make more rational decisions faster. And with institutes
like AIAI, we continue each day to step forward into progress.

No worms in these Apples

Apple Computers may not have ever been considered the state of the art in
Artificial Intelligence, but a second look should be given. Not only are
today's PC's becoming more powerful, but AI influence is showing up in
them. From Macros to Voice Recognition technology, PC's are becoming
our talking buddies. Who else would go surfing with you on short notice,
even if it is only the net? Who else would care to tell you that you have a business
appointment scheduled at 8:35 and 28 seconds and would notify you about it
every minute till you told it to shut up. Even with all the abuse we give
today's PC's they still plug away to make us happy. We use PC's more not
because they do more or are faster but because they are getting so much
easier to use. And their ease of use comes from their use of AI. All Power
Macintoshes come with Speech Recognition. That's right- you tell the
computer to do what you want without it having to learn your voice. This
application of AI in personal computers is still very crude, but it does work
given the correct conditions to work in and a clear voice. Not to mention the
requirement of at least 16 MB of RAM for quick use. Also, Apple's Newton
and other hand-held note pads have script recognition. Cursive or print can
be recognized by these notepad-sized devices. With the pen that
accompanies your silicon note pad you can write a little note to yourself
which magically changes into computer text if desired. No more
complaining about sloppily written reports if your computer can read your
handwriting. If it can't read it though- perhaps in the future, you can correct
it by dictating your letters instead. Macros provide a huge stress relief as
your computer does faster what you could do more tediously. Macros are old
but they are, to an extent, intelligent. You have taught the computer to do
something by doing it only once. In businesses, applications are often
upgraded, but the files must be converted: all of the business's records must
be changed into the new software's format. Macros save a human the work of
converting hundreds of files, by teaching the computer to mimic the actions
of the programmer. Thus the computer is taught a task that it can repeat
whenever ordered to do so. AI is all around us, but get ready
for a change. But don't think the change will be harder on us because AI has
been developed to make our lives easier.


The Scope of Expert Systems

As stated in the 'approaches' section, an expert system is able to do the work
of a professional. Moreover, a computer system can be trained quickly, has
virtually no operating cost, never forgets what it learns, never calls in sick,
retires, or goes on vacation. Beyond those, intelligent computers can
consider a large amount of information that may not be considered by
humans. But to what extent should these systems replace human experts? Or,
should they at all? For example, some people once considered an intelligent
computer as a possible substitute for human control over nuclear weapons,
citing that a computer could respond more quickly to a threat. And many AI
developers were afraid of the possibility of programs like Eliza and the bond
that humans were making with the computer. We cannot, however, overlook
the benefits of having a computer expert. Forecasting the weather, for
example, relies on many variables, and a computer expert can more
accurately pool all of its knowledge. Still a computer cannot rely on the
hunches of a human expert, which are sometimes necessary in predicting an
outcome. In conclusion, in some fields such as forecasting weather or
finding bugs in computer software, expert systems are sometimes more
accurate than humans. But for other fields, such as medicine, computers
aiding doctors will be beneficial, but the human doctor should not be
replaced. Expert systems have the power and range to aid and benefit, and in
some cases replace, humans; and computer experts, if used with discretion,
will benefit humankind.


                                                             Anupam Singh
                                                                  III CSE
                                                                      VIT
 Investigating The Electrolytic Properties Of Materials Using E-Mosfet


INTRODUCTION:
Historically, the experimental determination of electrical conductivity stimulated
the important theory of Arrhenius. Today, it is essential to study and measure the
conductivities of various materials that find wide applications in different fields.
The electrolytic properties of materials can be measured in different ways, the most
common being the conductivity cell. The set-up consists of a conductivity cell in one
arm of a Wheatstone bridge, which allows the measurement of the electrical resistance
provided by the cell. However, this method does not always give a precise result,
because the measured conductivity also depends on the conductivity of other ions or
species present in the environment where the measurement is done.
Moreover, the conductivity of the material may be changed by electrochemical or
chemical oxidation or reduction in aqueous media, or by gas adsorption in the dry
state. Further, this method is applicable only to substances that are soluble in an
aqueous or other medium, or to those that can be adsorbed.
To overcome the above disadvantages, a different approach is introduced here:
characterising redox materials by investigating their work function with field effect
transistors.

OPERATION AND WORKING OF E-MOSFET:
The basic MOS transistor is illustrated below for the case of an enhancement-mode
n-channel device formed on a p-type Si substrate.
The n+ source and drain regions are diffused or implanted into a relatively lightly doped
p-type substrate, and a thin oxide layer separates the conducting gate from the Si surface.
No current flows from drain to source without a conducting n channel between them.
This is because, in the equilibrium condition of the E-MOSFET, the Fermi level is flat.
The conduction band is close to the Fermi level in the n+ source/drain, while the valence
band is closer to the Fermi level in the p-type material. Hence, there is a potential barrier
for an electron to go from the source to the drain, corresponding to the built-in potential
of the back-to-back p-n junctions between the source and the drain.




When a positive voltage is applied to the gate relative to the substrate (which is
connected to the source in this case), positive charges are in effect deposited on the gate
metal. In response, negative charges are induced in the underlying Si, by the formation of
a depletion region and a thin surface region containing mobile electrons. These induced
electrons form the channel of the FET and allow current to flow from drain to source.
Since electrons are electrostatically induced in the p-type channel region, the channel
becomes less p-type, and therefore the valence band moves down, farther away from the
Fermi level. This obviously reduces the barrier for electrons between the source, the
channel and the drain. If the barrier is reduced sufficiently, by applying a gate voltage in
excess of what is known as the threshold voltage, there is a significant current flow from
the source to the drain. Thus the MOSFET is a gate-controlled potential barrier.


Now, the work function of a metal can be defined as the energy required to move an
electron from the Fermi level to outside the metal. In MOS work it is more convenient
to use a modified work function q·φme for the metal-oxide interface. Similarly, q·φsi is
the modified work function of the semiconductor-oxide interface. The threshold voltage
Vt reflects the difference φme-si in the work function of electrons in the gate electrode
(φme) and in silicon (φsi):

Vt = φme-si / q + const
This type of device can also be used for the characterization of the work function of
redox materials being applied as gate electrode.




For this purpose, the MOSFET has been modified into the E-MOSFET, where the
studied material is deposited on top of the gate oxide and contacted by a surrounding
metal electrode made of platinum. The conductivity of the channel is modulated by the
work function of the studied material, which is brought into direct contact with a
solution. Then, the threshold voltage of this device can be calculated as

Vt = φme-m / q + φm-si / q + const

where φm-si is the difference in the work function of electrons in the material and in
silicon.
φme-m can change due to possible electron exchange between the two conducting
materials, the studied material and Pt.
The second term, which is characteristic of the material/SiO2/Si sandwich, will remain
constant because no direct exchange is possible. This term depends only on the design
and fabrication parameters of the device.
By using this device, a change in the work function of the material due to a redox
process can be characterised by measuring the change in the threshold voltage:

∆Vt = ∆φme-m / q                                                                 (1)

WORK FUNCTION AND ITS RELATION WITH THE REDOX
PROPERTIES OF MATERIALS
When the metal electrode with electro active material takes part in a redox process, the
redox reaction between the material and the other species occurs with the electron
transfer as follows:
Red→ Ox + e-

Or

Ox + e-→Red

Where Red and Ox are the characteristic redox couples for the electro active material.

The redox process of the electro active material can also be realised physically, by
applying a current or potential as in electrochemical methods. Whether the redox
reaction is induced chemically or physically, the change in the work function of the
material depends on the change in the oxidation ratio of the material.
If the redox process occurs in a solution, the work function of the material can be written
as:

φm=µe + ziFχ

Where χ, zi and F are the surface potential, the valence of the ions involved in the redox
process, and the Faraday constant, respectively, and µe is the chemical potential of the
electrons in the electro active material, which depends on the oxidation ratio of the
material.

µe=µ°e+ RT/nF ln [Ox]/[Red]


Where µ°e and [Ox]/[Red] are the standard chemical potential and the oxidation ratio of
the redox material.
Hence we have

φm = const + (RT/nF) ln([Ox]/[Red]) + zi F χ                                          (2)

From (1) and (2) we have

∆Vt = ∆φme-m / q = (RT/nF) ln([Ox]/[Red])                                             (3)
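
As a rough worked example of equation (3), assuming room temperature (T = 298 K)
and a two-electron process (n = 2):

RT/nF = (8.314 x 298) / (2 x 96485) V ≈ 12.8 mV

so a ten-fold change in the oxidation ratio [Ox]/[Red] shifts the threshold voltage by
about 12.8 mV x ln(10) ≈ 29.6 mV.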

REDOX PROPERTIES OF ZINC COMPOUNDS
This paper specifically deals with the study of the conductivity of Zinc Oxide (ZnO).
This can be achieved using an E-MOSFET with a zinc oxide gate.
Here ZnO undergoes a reaction with hydrogen peroxide.

Zn2+ + 2e- → Zn            [Zn2+ from ZnO; concentration A]                          (4)

Zn → Zn2+ + 2e-            [Zn2+ as Zn(OH)2; concentration B]                        (5)

In solution, hydrogen peroxide is reduced to water:

H2O2 + 2H+ + 2e- ⇌ 2 H2O                                                             (6)

From (4), (5) and (6), the threshold voltage of the E-MOSFET having a zinc oxide gate
reflects the work function of zinc oxide, which depends on its redox reaction:

Vt = - (RT/2F) ln([ZnO]/[Zn(OH)2]) = (RT/2F) ln [H2O2]

COMMERCIAL BENEFITS OF ZINC OXIDE

Recent research shows that Zinc oxide can find applications in the electronics industry as
an alternative semiconductor material. The objective of this paper is to assist that research
by quantitatively determining the conductivity of the substance.



                                                                         Shruti Badhwar
                                                                                 III EEE
                                                                                     VIT
                Exploring The World Of Grid Computing


Cluster computing has been a major field of interest in networking. In cluster
computing, multiple interconnected, independent nodes (a cluster) co-operatively
work together as a single unified resource. Cluster resources
are owned by a single organization and they are managed by a centralized
resource management and scheduling system. That means all users of
clusters have to go through a centralized system that manages allocation of
resources to application jobs. If we take concepts such as server
virtualization and clustering, and add a degree of automation to them so
physical servers can be allocated to and de-allocated from different
workloads with no or minimal manual intervention, then we get the
processing power dimension of ‘grid’. If we add similar automation to
storage and databases, we have the data dimension. With automatic or semi-
automatic provisioning and de-provisioning of hardware and software assets
based on changing demand, we can reduce the need for manpower to rebuild
servers manually with all of the risks and delays that go with that.

A more formal definition and explanation for Grid is a type of parallel and
distributed system that enables the sharing, selection, and aggregation of
services of heterogeneous resources (such as supercomputers, storage
systems, databases and scientific instruments etc.) distributed across
"multiple" administrative domains based on their availability, capability,
performance, cost, and users' quality-of-service requirements.

Grid computing offers a model for solving massive computational problems
by making use of the unused resources (CPU cycles and/or disk storage) of
large numbers of disparate, often desktop, computers treated as a virtual
cluster embedded in a distributed telecommunications infrastructure. Grid
computing's focus on the ability to support computation across
administrative domains sets it apart from traditional computer clusters or
traditional distributed computing.

Grids offer a way to solve Grand Challenge problems like protein folding,
financial modelling, earthquake simulation, climate/weather modelling etc.
Grids offer a way of using the information technology resources optimally in
an organisation. They also provide a means for offering information
technology as a utility bureau for commercial and non-commercial clients,
with those clients paying only for what they use, as with electricity or water.
The potential of computer grids is enormous and when the concept becomes
mainstream it holds the promise of transforming the computer power
available to the individual. At present, a computer user is restricted by the
power of his/her own computer. When the grid comes on line there will be
no restrictions: the cheapest, oldest model will have access to the computing
resources of millions of other computers worldwide.
The development and deployment of applications which can realize a Grid’s
performance potential face two substantial obstacles. First, Grids are
typically composed from collections of heterogeneous resources (based on
different platforms, hardware/software architectures, and computer
languages), capable of different levels of performance. Second, the
performance that can be delivered varies dynamically as users with
competing goals share resources, resources fail, are upgraded, etc.
Consequently, Grid applications must be able to exploit the heterogeneous
capabilities of the resources they have at their disposal while mitigating any
negative effects brought about by performance fluctuation in the resources
they use.
 Wide fluctuations in the availability of idle processor cycles and
communication latencies over multiple resource administrative domains
present a challenge to provide quality of service in executing grid
applications. The dynamic resource characteristics in terms of availability,
capacity and cost, make essential the ability to adapt job execution to
dynamically varying conditions.
In other words, a user of the grid is really only interested in submitting their application to the appropriate resources and getting correct results back in a timely fashion. In such a framework, where the time taken to complete a job matters greatly, scheduling is central to the performance of the system. The type of scheduling necessary here is adaptive scheduling.
Adaptive scheduling is the allocation of pending jobs to grid resources by
considering the available resources and their current characteristics, and the
submitted jobs at each moment.
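
As a rough illustration of the idea, the Python sketch below assigns each pending job to whichever resource currently offers the earliest estimated completion time, re-reading the resource state at every decision. The resource names, speeds and job sizes are invented for the example and the timing model is deliberately simplistic; this is a sketch, not an actual grid scheduler.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    speed: float               # work units processed per second
    queued_work: float = 0.0   # work already assigned to this resource

    def estimated_completion(self, job_work: float) -> float:
        return (self.queued_work + job_work) / self.speed

def schedule(jobs, resources):
    # Assign each job to the resource with the earliest estimated
    # completion time, given the load observed at that moment.
    placement = {}
    for job_name, job_work in jobs:
        best = min(resources, key=lambda r: r.estimated_completion(job_work))
        best.queued_work += job_work
        placement[job_name] = best.name
    return placement

resources = [Resource("clusterA", speed=4.0), Resource("desktopPool", speed=1.5)]
jobs = [("render", 8.0), ("simulate", 3.0), ("analyse", 3.0)]
print(schedule(jobs, resources))
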
The compensation-based scheduling is a new algorithm and framework
(which is an extension of adaptive scheduling) that has been proposed to
compensate resource loss during application execution by dynamically
allocating additional resources. Providing predictable execution times is a
challenge in grids due to wide fluctuations in resource capacities. Network
bandwidth and latencies may change depending on traffic patterns of the
Internet. The availability of idle CPU cycles also varies depending on local
resource usage. With such fluctuations there is no certainty when a grid task
or job will complete its execution. Application execution times, thus,
become unpredictable, which is very unfavorable to the business enterprises
paying huge amounts to use the grid. There are basically three ways to address
this:

    • Advanced reservation
    • Predictive techniques
    • Feedback control
Such systems use complex methods for application performance monitoring
and schedule corrections. These frameworks include special compilers and
application toolkits that might be difficult for new application developers to
learn and use.
Future work thus aims at realizing the full potential of grid computing; it includes multi-resource compensation, resource partitioning and allocation, improvement of the execution time estimator, and the use of heuristic or dynamic methods to determine the value of the sensitivity factor in the application execution rate formula.


                                                                  Pankhuri
                                                                Megha Bassi
                                                                    III CSE
                     RFID – An Emerging Technology


Radio frequency identification, or RFID, is a generic term for technologies
that use radio waves to automatically identify people or objects. There are
several methods of identification, but the most common is to store a serial
number that identifies a person or object, and perhaps other information, on
a microchip that is attached to an antenna (the chip and the antenna together
are called an RFID transponder or an RFID tag). The antenna enables the
chip to transmit the identification information to a reader. The reader
converts the radio waves reflected back from the RFID tag into digital
information that can then be passed on to computers that can make use of it.
An RFID system consists of a tag, which is made up of a microchip with an
antenna, and an interrogator or reader with an antenna. The reader sends out
electromagnetic waves. The tag antenna is tuned to receive these waves. A
passive RFID tag draws power from the field created by the reader and uses it to
power the microchip’s circuits. The chip then modulates the waves that the
tag sends back to the reader and the reader converts the new waves into
digital data. Typically a tag would carry no more than 2KB of data—enough
to store some basic information about the item it is on. Companies are now
looking at using a simple "license plate" tag that contains only a 96-bit serial
number. The simple tags are cheaper to manufacture and are more useful for
applications where the tag will be disposed of with the product packaging.
RFID is a proven technology that's been around since at least the 1970s. Up
to now, it's been too expensive and too limited to be practical for many
commercial applications. But if tags can be made cheaply enough, they can
solve many of the problems associated with bar codes. Radio waves travel
through most non-metallic materials, so they can be embedded in packaging
or encased in protective plastic for weather-proofing and greater durability.
And tags have microchips that can store a unique serial number for every
product manufactured around the world. Many companies have invested in
RFID systems to get the advantages they offer. These investments are
usually made in closed-loop systems—that is, when a company is tracking
goods that never leave its own control. That’s because all existing RFID
systems use proprietary technology, which means that if company A puts an
RFID tag on a product, it can’t be read by Company B unless they both use
the same RFID system from the same vendor. But most companies don’t
have closed-loop systems, and many of the benefits of tracking items come
from tracking them as they move from one company to another and even one
country to another.
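
To make the 96-bit "license plate" idea mentioned above concrete, the small Python sketch below packs a manager code, an item class and a serial number into a single 96-bit integer and unpacks it again. The field names and widths (28 + 24 + 44 bits) are chosen only for illustration; they are not the exact layout of any commercial RFID numbering standard.

MANAGER_BITS, CLASS_BITS, SERIAL_BITS = 28, 24, 44   # 96 bits in total

def pack_tag(manager: int, item_class: int, serial: int) -> int:
    # Pack the three fields into one 96-bit identifier.
    assert manager < (1 << MANAGER_BITS)
    assert item_class < (1 << CLASS_BITS)
    assert serial < (1 << SERIAL_BITS)
    return (manager << (CLASS_BITS + SERIAL_BITS)) | (item_class << SERIAL_BITS) | serial

def unpack_tag(tag: int):
    # Recover the three fields from the packed identifier.
    serial = tag & ((1 << SERIAL_BITS) - 1)
    item_class = (tag >> SERIAL_BITS) & ((1 << CLASS_BITS) - 1)
    manager = tag >> (CLASS_BITS + SERIAL_BITS)
    return manager, item_class, serial

tag = pack_tag(manager=123456, item_class=789, serial=42)
print(hex(tag), unpack_tag(tag))   # round-trips to (123456, 789, 42)
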
The Electronic Product Code, or EPC, was developed by the Auto-ID
Center as a successor to the bar code. It is a numbering scheme that will be
used to identify products as they move through the global supply chain. But
will RFID replace bar codes? Probably not. Bar codes are inexpensive and
effective for certain tasks. It is likely that RFID and bar codes will coexist
for many years. Thousands of companies around the world use RFID today
to improve internal efficiencies. Club Car, a maker of golf carts uses RFID
to improve efficiency on its production line. Paramount Farms—one of the
world’s largest suppliers of pistachios—uses RFID to manage its harvest
more efficiently. NYK Logistics uses RFID to improve the throughput of
containers at its busy Long Beach, Calif., distribution center. Some
companies are combining RFID tags with sensors that detect and record
temperature, movement, even radiation. One day, the same tags used to track
items moving through the supply chain may also alert staff if they are not
stored at the right temperature, if meat has gone bad, or even if someone has
injected a biological agent into food. And many other companies are using
RFID for a wide variety of applications.

                                                                Rajat Rastogi
                                                                       II EIE
                                                                          VIT
                    Microsats And Their Applications


ABSTRACT :

Micro satellites are becoming increasingly popular because they provide a
variety of applications in Science and Technology, Disaster Monitoring,
Communication, etc. They have the added advantage of small size and low
cost, which makes it possible for developing countries and universities to improve their scientific, technological and practical talent. The present paper gives a brief description of microsats and their evolution, the expected outcome of ANUSAT (a small satellite programme in India), and some of their applications; finally, their future prospects are outlined.

1 INTRODUCTION:

"The micro-satellite boasts a number of technology firsts, and has the ability
to observe the same spot on earth from a number of different directions."
Microsatellites originated as amateur radio satellites ("hamsats") in the 1990s: the Microsats, UoSATs, Radiosputniks, Mini-Sputnik, Fuji, Badr and KITSAT. The number of small satellites in orbit and in development is increasing [1], and their missions are often related to testing new technology which could be used in future small satellite systems.
significant role to play in the future development of space in the context of
capacity building in space technology for developing countries. Smaller
satellites are typically launched as secondary passengers, hitching a ride on a
larger launcher with a larger payload. Small satellites must be dedicated to
their specific task, more miniaturized and weight effective. Also, they can be
more autonomous to reduce ground support. A small satellite could survey earth resources, support communications and databases, and monitor changes in the utilization of resources and the environment for state-level planning in India.

2. EVOLUTION OF MICROSATs:

1. Increase in channel capacity by 11.3 times (reduction in satellite power).
2. With more powerful microprocessors and DSPs becoming available, near toll-quality low-bit-rate codecs have become achievable despite the added design complexity.
3. Reduction in earth station size and antenna size. (Antenna costs are roughly proportional to the antenna diameter raised to the power 2.5, so a reduction in antenna size reduces costs substantially and facilitates quicker installation; a short numeric illustration follows this list.)
4. Very Small Aperture Terminals (VSATs) or micro earth stations can be used for voice and data communications.
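
Taking the antenna-cost rule of thumb in point 3 at face value, a two-line Python calculation shows how strongly a smaller dish pays off; the diameters used below are purely illustrative.

# Relative antenna cost under the "cost proportional to diameter**2.5" rule of thumb.
big, small = 3.0, 1.2            # antenna diameters in metres (illustrative)
print((big / small) ** 2.5)      # about 9.9: the 1.2 m dish is roughly 10x cheaper
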

These pint-sized satellites are the off-spring of two converging trends:
The promise of microelectronics and microfabrication for shrinking
spacecraft parts, including sensors, power supplies, and even thrusters.
Microsystems could become the means to implement decentralisation,
whereby a given number of dispersed components could be used in place of
a larger centralised unit, thus achieving greater efficiency, redundancy and
economies of scale. Micro-electro-mechanical systems (MEMS) in engineering will replace bulky mechanical systems with light, silicon-based equivalents.
For microsats, changes would be necessary along the whole chain from data acquisition on board the spacecraft to delivery of information to the end user, including spacecraft networking, launch, de-orbiting and operations. Microsystems may then help microsats open ways to reduce costs and broaden applications. In addition, the innovative concepts required to create microsystems may permeate into the space sector and stimulate new mission ideas. To develop microsats, micro-devices (in the fields of sensors, optics, lasers, mechanisms, electronics, etc.) have to be deployed. The motivation behind the application of microsystems to space is manifold: significant cost reductions, the possibility of enabling new functions, and improving the performance of existing ones, i.e. developing microsystems that could replace current modules or subsystems.
Microsystems having a potential use in space are, for instance,
spectroradiometers, mass spectro-meters, microcameras, multiparameter
logistic sensors, distributed unattended sensors, embedded sensors and
actuators, inertial navigation units, GPS receivers, propellant leak detectors
and wear monitors for ball bearings. Electro-mechanical scanning imagers could be replaced with CCDs, microaccelerometers could be used instead of Earth sensors, microresonators could be used in place of much bigger SAW filters, and micro-electro-mechanical RF switches could provide better isolation than PIN diodes.
3 CLASSIFICATION OF SATELLITES:

Class              Cost            Mass
Large satellite    > $100 M        > 1000 kg
Small satellite    $50 - 100 M     500 - 1000 kg
Mini-satellite     $5 - 20 M       100 - 500 kg
Micro-satellite    $ - 3 M         10 - 100 kg
Nano-satellite     < $1 M          < 10 kg

4 CHARACTERISTICS OF MICROSATS:

Small satellites are becoming increasingly important. Their attraction lies in the promise of low cost and short development times, made possible by the use of proven standard equipment and off-the-shelf components and techniques. A microsat would, as its name suggests, be very small. Its dimensions would be about 250 cm by 60 cm by 80 cm, and its mass, with a full propellant load, would only be around 220 kg; the propellant load itself would comprise about 140 kg of that. A microsat is considered to consist of a bus and instruments.

Microsatellite Bus: Support systems and instrument boxes are joined to form the bus structure. The outer skin is used to mount solar cells. The bus provides support systems including power, telemetry, attitude control and determination, data acquisition and storage, command and control, and thermal control.
Microsats orbit in LEO because of the following factors:

- Low launch cost
- High launch reliability
- Low radiation
- Short slant range
- Global coverage
- Doppler ranging

ISRO initiated a Small Satellite Program a few years ago for the demonstration of new technologies that can be adapted to operational missions, as well as a Small Satellite Earth Observation program to complement the existing earth observation missions.

5   LOW COST SMALL SAT MISSION PROGRAMME (ANUSAT)

Anna University will build ANUSAT (a small, high-detail remote sensing satellite) and ISRO will launch the satellite as a piggyback payload on its Polar Satellite Launch Vehicle, PSLV. The satellite will be developed in about three years. Being the first of its kind for an Indian university in spacecraft development, the micro-satellite will be a comparatively simple one weighing around 60 kg. It will have body-mounted solar panels generating about 40 watts of electrical power and will be spin-stabilised. It will have a data store-and-forward payload for conducting experiments on message transfer across the country.

The major areas of development include structure, thermal management, control and guidance, power systems, command and data handling, communication, and satellite integration and test. The expected outcomes of the project would be:

• To establish a center of excellence in the development and usage of micro satellites.
• To complement the development efforts of satellite application requirements by providing a micro-satellite platform for technology development.
• To train scientists and engineers for the future of space technology.
• To initiate research activities towards the development of micro-satellites.
• To support cartography, land resource management, ecological monitoring and emergency cases.
This could also provide new developments, such as the miniaturized communication systems and turbo coding required for future missions.

6 COMPARISON WITH BIG SATELLITES

The microsats could fit where larger satellites couldn't, making more launch
opportunities available. Large satellites normally have a skeletal structure to
which support systems and instruments are attached.
Microsatellites, in order to keep the weight down, use an exoskeleton
approach. Compared with the massive proportions and tonnage weights of
civilian and military communications satellites, the pee-wee microsats were
nine-inch cubes under 25 lbs. each. Microsats shared a standard framework
design, but each was outfitted with electronics suited to its particular
mission. Microsats are more reliable than big satellites since they have fewer
parts.
7 ADVANTAGES OF MICROSATS

Low cost will be preferable to anyone wanting to launch a satellite, as long
as it is not at the expense of quality. Cheaper satellites will be of particular
interest to developing countries, universities and even schools. Several small
satellites designed by students have been launched already, such as the
Munin nanosatellite. This satellite was designed to monitor auroral activity.
The satellite was successfully launched and produced data and images.

7.1 Assessment of natural resources and environmental management

The services of a small satellite for state-level planning in India are required for thematic mapping, agriculture, forestry, environmental monitoring, earth and mineral resources prospecting, ocean development, land management and exploration, and planning and construction.


7.2 Launch
On the most basic level, the cost of launching any satellite is often quoted in price per kilogram: the lighter the satellite, the cheaper it will be to launch. Flying as a secondary payload has the advantage of containing costs by not allowing schedule slips, since the primary payload will go anyway. The disadvantage is that if the microsat is not ready, the ride is lost. The greater maneuverability of small satellites helps them not to restrict the orbit of the primary satellite. If the orbit is not ideal, several payloads could even be launched together in one mini- or microsat-sized framework. The number of piggyback opportunities could increase with the interest in space tourism.

7.3.Repairs
For a swarm of small satellites, replacement would be easy: an identical satellite could take the position of the damaged one in formation, while the damaged satellite could either be deorbited or collected later for repair. With a simple design, small satellites could be produced and launched in large quantities, so a backup would always be available.
7.4 Microsatellites: An Example of the Proliferation of Long-Duration Orbital Interceptor Technology

Advances in miniaturization enable many countries to enter space with small, lightweight, inexpensive and highly capable systems that can perform a variety of missions.
1. Included in this list of missions are counterspace operations, such as long-duration orbital inspection and intercept.
2. Microsatellites enable increasingly complex missions to be performed via smaller and smaller platforms.
3. Microsatellites can perform satellite inspection, imaging and other
functions and could be adapted as weapons. Placed on an interception course
and programmed to home in on a satellite, a microsatellite could fly
alongside a target until commanded to disrupt, and then disable or destroy
the target. Detection of and defense against such an attack would be
difficult.

7.5 Small satellite programme In university higher education

The development of 'smaller, faster, cheaper, better' spacecraft, incorporating leading-edge technology in manageable portions, now enables any country, or even a university, to build, launch and operate its own small satellite in orbit.
to combine the educational and research capabilities into a focused program.
It is in this context that the proposal for development of micro-satellite by
Anna University assumes significance. Quick turn around time is beneficial
for teaching purposes. A graduate student can be a part of the whole process
that includes planning, building, testing, launch, measuring and analysis
within the time frame of his or her dissertation work. Small satellites are highly capable and exhibit most of the characteristics of a large satellite. This
makes them particularly suitable as educational projects to provide hands-on
experience of all stages and aspects (both technical and scientific) of
satellites, from design and construction all the way to orbital operation. Micro-satellite development at universities provides a space technology research environment and develops space scientists and engineers for the future. It helps to bridge the gap between space technology and higher education in universities. This will make possible Earth remote sensing small satellites with significantly higher operational and informational capacity in comparison with their analogues.
Further, it presents an ideal opportunity for training students, engineers and scientists in different disciplines, including engineering, software development for on-board and ground computers, and the management of sophisticated technical programs, as part of the university's higher education program and in an economically viable way. NASA will launch its Magnetospheric Constellation Project, a science mission that aims to put 100 microsatellites in orbit around Earth to observe the Sun and the effects solar events have on Earth's magnetic field.



7.6    Simplicity

They also have the advantage of simplicity, so that scientists can concentrate on building measurement equipment. Microsatellites would make measurements to build up a detailed picture of the Earth's magnetosphere. The satellites would require minimal equipment other than the measurement equipment: a power system, a GPS receiver to pinpoint the position of the satellite, a basic attitude control system and a means of transmitting the data to Earth. For such a mission, low cost would be important, and with around 30 almost identical satellites to produce, a basic small satellite design would be the best solution.

8 USES OF SMALL SATELLITES

8.1 Science and Engineering

These micro-satellites could be used to test advanced technologies for future operational missions. The fast turn-around time makes micro and nano missions highly suitable for testing new ideas, payloads and designs.

8.2 Earth Observation and Monitoring

The miniaturization of space technology opens new possibilities for smaller
nations in the surveillance of economic ocean zones and adjacent waters of
interest. The aperture size of any instrument used to make observations is a
limiting factor on the resolution. This would seem to be a disadvantage to
the use of small satellites for Earth observation but microsat images would
be useful for such purposes as cartography, oceanography, town planning,
forestry and agriculture planning and environmental and disaster monitoring.
Images used for these purposes would need to be constantly updated to
remain useful, and the same would be true of weather monitoring. Small
satellites, being cheap and easily replaced, would fit this purpose well.
8.3 Communications

The task of providing connectivity can easily be carried out through microsats: a message is sent to the satellite from a small ground station, and when the satellite later passes over the ground station of the recipient it transmits the message to them. With a computer and a ground station, people such as doctors and teachers in places with little communications infrastructure, or those living in isolated areas where it would be difficult to provide phone lines, could contact others anywhere in the world. Microsats can be operated over the Internet and are capable of pointing at and tracking targets in space or on the ground.

8.4 Further Afield

Small satellites need not always be restricted to Earth orbit! A possible
future mission to another planet could get several satellites for the price of
one by releasing nano satellites from a mother ship. These could perform a
wide variety of observations, such as measuring the composition of the
atmosphere and imaging the surface in detail. These measurements would be
of use to planetary scientists and, in the case of Mars and the moons of the
outer planets, could pave the way for future landers and even manned
missions.

8.5 Mars Network plan

The microsats would act as relay satellites for spacecraft on or near the surface of the planet, allowing more data to come back from Mars missions. The Marsat will collect data from each of the smaller satellites and beam it to Earth. It will also keep Earth and distant spacecraft connected continuously and allow for high-bandwidth data and video of the planet, according to Mars Network officials.

8.6 Disaster Monitoring

A likely near-term use for a constellation of microsatellites is in a global
disaster monitoring network. Enough satellites could be launched that the
entire populated surface area of the Earth would be under observation at all
times. The resolution of these satellites need not be very high to pick out
events such as flooding and forest fires. Software could be developed to
recognize the signs of a possible disaster (for example, by looking for large
differences between images taken at different times or detecting suspicious
signs such as smoke from a fire) and the images could then be flagged for
checking by humans. This might give people living near an affected site
warning – enough to evacuate or to try and minimize any destruction caused.
For example, satellite images of a forest
fire coupled with information on wind and weather from an environmental
monitoring satellite could be used to choose the best places to dig fire-breaks
to prevent populated areas being damaged. Microsatellites would be able to
strike or probe the potentially hazardous objects that threaten Earth. In
addition, they might be handy rescue vehicles used to inspect disabled
satellites and relay observations about them to ground stations; they might
also dock with them. Sensors, guidance and navigation controls, avionics, and
power and propulsion systems--all must perform precisely and in concert so
the vehicles can find, track, lock onto, and rendezvous with their targets,
even though those targets are also on the move.
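
The change-detection idea described above (flagging large differences between images taken at different times) can be illustrated in a few lines of Python. The threshold and the tiny synthetic images below are invented for the example; a real system would work on georeferenced satellite imagery with far more robust statistics.

def needs_review(before, after, threshold=20.0):
    # Flag an image pair for a human operator when the mean absolute
    # pixel difference exceeds the threshold.
    diffs = [abs(a - b) for row_b, row_a in zip(before, after)
                        for b, a in zip(row_b, row_a)]
    return sum(diffs) / len(diffs) > threshold

yesterday = [[10, 12, 11], [13, 12, 10]]
today     = [[10, 90, 85], [13, 88, 80]]   # bright patch: possible fire or flood
print(needs_review(yesterday, today))      # True -> flag for an operator
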


9 REQUIREMENTS

Microsats call for more intelligent sensors that can autonomously adapt and filter the specific data of interest. Novelty and change detection will be required in order to conserve data processing capability as well as bandwidth and power consumption. Problems connected with mission analysis, subsystem and payload design, satellite manufacture, and in-orbit commissioning and operation should be solved. Investigation of low-cost small satellite techniques for improved cost efficiency, of the areas and applications that can benefit from nano-satellites, and of intelligent, adaptive techniques for space control devices as well as sensors should also be carried out.

10     DISADVANTAGES

10.1 Possible Concerns

Worse, if launching a satellite becomes possible for schools and small
businesses, satellite observation might also come within the reach of a
terrorist organisation. With many new technologies, there are ethical
questions to be answered and trade-offs between privacy and convenience to
be considered. However, small satellites with relatively low resolution would not pose any significant threat to personal privacy. Microsatellites cannot be expected to fulfil all space mission requirements. Only the simplest missions should be chosen to be flown on microsats, i.e., those with minimum pointing, optical aperture, run-time, and lifetime requirements. They should be built completely and then tested, rather than building and testing one subsystem at a time, and they should be built with no margins. Microsatellite lifetimes can be expected to be up to three years; however, the required lifetimes are stated to be very short, i.e., about six months.


11 Possibilities for the Future

The biggest obstacle in the way of large-scale use of small satellites is
getting them to orbit. The satellites themselves can be built by students or
even amateur radio enthusiasts, but the cost of launching them is
prohibitive. Once this challenge is overcome, small satellites will become
increasingly popular as a flexible and inexpensive way to provide access to
space. Just as a school would not be able to afford a supercomputer but
could find the funds for a laptop, in the near future a small Satellite - with
the discoveries it could make and the learning it could inspire - will be
within reach.

                                                             B . Maneendra
                                                                 S . Karthik
                                                                    IV ECE
                                                                        VIT
                           Wireless Middleware


Advances in wireless networking technologies have brought up a new paradigm of
computing and a new way of using mobile devices. Before this paradigm can function
effectively and efficiently, many challenges have to be faced. Wireless Middleware
combined with XML and SOAP can provide a better solution to problems inherent in
wireless environment. Wireless middleware has helped to overcome the problems of corporate LAN applications, but it is itself facing problems and challenges in the face of changing requirements and environments; thus there is a clear need for more flexible middleware. Middleware serves various applications in varied fields, but the main application is mobile middleware.

AIM AND OBJECTIVE:

The main aim of this study is to get acquainted with various aspects of the new technology “Wireless Middleware”, which implements a reliable channel through which wireless clients can communicate with a fixed system in a nomadic environment. The basic architecture of wireless middleware and its varied applications in the wireless environment are also studied, along with the future of middleware in distributed networks and the need to devise new middleware solutions to fulfill the requirements of these emerging technologies. The study also deals with the usage of XML and SOAP in implementing middleware technologies and extracting maximum benefit.

What is Wireless Middleware?

  Evolution of Wireless Middleware:
Earlier, wireless data was treated simply as an extension of the corporate LAN, and applications were expected to run over an IP-based wireless network without any modification. The problem was that the Internet protocols used to transport user data lead to congestion, collisions and long delays over wireless links. Out of this evolved wireless middleware: software lying between the corporate application and the IP transport. It is a set of generic services above the operating system. It takes a corporate LAN application and squeezes its data to fit the available bandwidth, reducing overhead without sacrificing reliability. Wireless middleware can help alleviate many of the problems inherent in delivering content and applications to wireless devices. Middleware might be the best way to link e-business applications to handheld devices and mobile phones on wireless networks.
Features:

Broadbeam Corporation’s proven middleware, for example, allows companies to rapidly develop, deploy and manage mobile applications and enables businesses around the globe to empower and leverage their mobile workforces. It is a smart networking stack that interfaces existing applications to new kinds of networks.
Wireless middleware is a software system that sits between user client devices and the application software or databases located on the server. It provides application designers with a high level of abstraction to achieve distribution transparency.
Mature middleware technologies such as CORBA, Java 2 Enterprise Edition, and SOAP/Web services have been successfully designed and used with fixed networks. Middleware lets you link Internet routines and applications to the wireless web without rewriting applications or database interfaces, and creates a common platform for the integration of various sources across diverse systems and displays.
Common features of tasks supported by wireless middleware are:
    • Transformation: Forms a bridge between one markup language to another,
       mechanically (transform only data format) and intelligently (transform according
       to device specification).
    • Detection and storage: Store device characteristics in the database and use them
       whenever required.
    • Optimization: Use data compression algorithms to minimize the data transferred
       and thus improve overall performance (a small sketch follows this list).
    • Security: Provide end to end security.
    • Operation support: Provide various tools and utilities to manage wireless devices.
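
As a rough sketch of the optimization task above, the Python snippet below compresses a payload before it crosses the wireless link and expands it again on the other side. The sample record is invented for the example; production middleware would combine this with markup transformation, device detection and security.

import gzip, json

# Invented sample payload representing application data bound for a device.
record = {"customer": "ACME", "orders": [{"id": i, "qty": 5} for i in range(50)]}
raw = json.dumps(record).encode("utf-8")

compressed = gzip.compress(raw)              # what the middleware would transmit
restored = json.loads(gzip.decompress(compressed))

print(len(raw), "->", len(compressed), "bytes")   # noticeably smaller payload
assert restored == record                         # nothing lost in the process
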

Benefits:

   •   Support for multiple wireless devices.
   •   Continuous wireless access to content and application.
   •   Long term cost savings.
   •   Ease up process of transformation.
   •   Focus on application without worrying about underlying network

Conclusion:

Wireless Middleware eases process of transforming markup language, delivering
content and data, providing protocol and device recognition, incorporating and properly
routing business logic through enterprise systems and transforming data formats for
compatibility with different databases. But with changing trends, middleware has to be expanded for corporate EAI.
It may prove of great benefit for some wireless delivery systems, but it is not a universal solution, owing to differences in system platforms, system extensibility, targeted wireless devices, back-end and legacy systems, data storage and workflow systems. These shortcomings can, however, be overcome by combining wireless middleware with an application server, which offers intelligent data transformations and extensibility in logic and business rules.


                                                                              Roli Sharma
                                                                                   IV CSE
                                                                                      VIT
                Cell Lysis Technique Using Magnetic Field


This method is applicable to processes in which the required metabolites are
intracellular i.e. the metabolites that are produced within the cell and not
released. In this method, the cell is exposed to a suitably strong magnetic field which disrupts the functioning of the ion transport pumps. As a result the cell does not remain electrically neutral, and it is torn apart by electrical forces.
As an electrochemical gradient is developed across the lipid bilayer, there is
generation of an electric field in the lipid bilayer. As the lipid bilayer is
hydrophobic, it does not allow the diffusion of water-soluble substances
across it. Because the cell needs to regulate the concentration of ions inside and outside itself, ions are transported across the lipid bilayer via carrier proteins. Once the ions enter the lipid bilayer, they get accelerated
due to the presence of electric field. If a strong magnetic field is applied, a
magnetic force acts on the ion. And if this force is greater than the binding
force between the carrier protein and the ion, then this would prevent the
entry of the ion into the cell.


The magnetic force that acts on the ion is given by the relation,

F = q(V x B).

where,
F is the force vector.
V is the velocity vector.
B is the magnetic field vector.
q is the charge on the ion.




In this case, we are considering a negative ion that is entering the lipid
bilayer, where,
d - thickness of the lipid bilayer (5nm).
r - the radius of curvature of the path taken by the negative ion.

The magnetic force that acts on the ion is equal to the centripetal force that it
experiences.
F = mv²/r,                                                                   1
where,
F - the magnitude of the magnetic force that the ion experiences.
m - mass of the ion.

But,
F = qvB sin ө,                                                               2
ө = 90˚
where,
ө - angle between velocity vector & magnetic field vector.

Equating 1 & 2, we get,
r = mv/(qB).

The condition that has to be obeyed for this method is r<<d. So if the
magnetic force is stronger than the binding force between the carrier protein
and the ion, then the ion will not be able to enter the cell. The magnitude of
the magnetic field should be calculated with respect to the ion which has the
highest m/q ratio. This would be the minimum magnitude of the magnetic
field required for the functioning of the process.
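
The relations above can be evaluated numerically. The sketch below uses illustrative assumptions (a potassium ion moving at its thermal speed at body temperature) to compute the field strength at which the gyration radius r equals the bilayer thickness d, i.e. the scale that the r << d condition refers to.

import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 310.0                 # body temperature, K (assumed)
m = 39 * 1.66054e-27      # mass of a K+ ion, kg (illustrative choice of ion)
q = 1.602e-19             # charge of a singly charged ion, C
d = 5e-9                  # lipid bilayer thickness, m

v = math.sqrt(3 * k_B * T / m)     # thermal speed of the ion, roughly 450 m/s
B_for_r_equal_d = m * v / (q * d)  # from r = mv/(qB), setting r = d

print(f"v = {v:.0f} m/s, B for r = d is about {B_for_r_equal_d:.2e} T")
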

So as the ions are not allowed to enter the cell, the cell no longer remains
electrically neutral and it is torn apart due to the electrical forces. And thus
the required intracellular metabolite is released. Using this process, the cell
can be lysed very easily.


                                                                     Chetan Dhawan
                                                                  II Bio-Technology
                                                                               VIT
         Submission of Articles For Forthcoming Swasti Issues


      Swasti is a monthly newsletter of IEEE Student Branch, VIT. It
covers the report of the Latest Events organized by the IEEE Student
Branch, VIT, Achievements of IEEE Student Members of our university
during the month, as well as Technical Articles and Quizzes.

       The Faculty, Students, and Alumni of Vellore Institute of
Technology (Deemed University) can contribute Technical Articles,
Technical Abstracts, and Technical Quizzes etc. for the forthcoming
issues of Swasti.

      The articles should be submitted as a soft copy to the IEEE Student
Branch    Committee      Members      or    should    be    e-mailed   to
zutshiboy@yahoo.com or kashyap.reddy@gmail.com .

       The articles submitted on or before 20th of the current month will be
included in the newsletter to be released on the 1st of next month, else will
be included in the next issue.

      We look forward to active participation from faculty members,
students and alumni in our endeavour of making the IEEE Student Branch,
VIT as one of the most active branches.


                        We Value Your Comments


At IEEE Student Branch, VIT, we look forward to having your comments
and your views on the functioning of IEEE Student Branch, VIT. We
welcome your comments and would try our level best to keep up to the
expectations of the faculty, students and alumni of Vellore Institute of
Technology (Deemed University).
Please click on the link below to sign the guestbook.

                    http://www.vit.ac.in/ieee/guest.html
      IEEE Student Branch, VIT Executive Committee 2005-2006


Branch Counselor

Mr. R. Saravana Kumar, Senior Lecturer EEE Dept.

Chairman

Mr. Aditya Zutshi, IV ECE

Vice Chairman

Mr. Rahul Pratyush Mohanty, III EIE

Secretary

Mr. Anupam Singh, III CSE

Treasurer

Miss. Aditi Sharma, III IT

Web Administrator

Mr. Kashyap Reddy, II CSE

Public Relations Officer

Mr. Abhinav Bisen, II EIE

Editors

Mr. Aditya Zutshi, IV ECE
Mr. Kashyap Reddy, II CSE

				