Intro Photonic Computing II


This file is provided free of charge by the Rocky Mountain Research Center as part of its public educational program. Copyright 1996 by the Rocky Mountain Research Center. All rights reserved.

Please note: The discussions in this section have been phrased so that both the novice and the expert can understand the principles involved. The novice will have to pay close attention so that a clear understanding comes through. The expert must also pay close attention, even though much of what is taught is old hat, because we are accomplishing some very new things here. It is easy to slide over important points when reading too quickly. A missed point can even lead one to imagine that things which have been physically demonstrated in the lab, or are taught in texts the reader may not be familiar with, are 'impossible'. We are presenting a step-by-step learning process for everyone on this road to the global conversion from electronics to photonics.

What is Photonic Computing?
Well, electronic computing uses electrons to perform the logic that makes computing work. Photonic computing uses photons of laser light to do the same job, only thousands of times faster. Electronic transistors are etched into silicon wafers to make modern computer chips, but today's technology is pushing the electron to its physical limits. As a result, manufacturing processes are becoming increasingly expensive while producing only minor improvements. Photons, however, are manipulated using inexpensive computer-generated holograms made of plastic or glass. Photonic computers, therefore, will be far more valuable than their slower electronic counterparts, and far less expensive to manufacture.

Interestingly, most telephone companies have been investing heavily in the global conversion from copper wire to optical fiber because light does a better job of carrying information than electricity does. This is because photons (the basic units of light) travel faster, and have a higher bandwidth, than electrons do. Thus, photons are inherently more valuable than electrons. If we can just get them to accomplish the logic tasks that make computing work, they will become the next logical computing upgrade. Over 65 major companies have invested heavily in the search for an inexpensive "nonlinear" crystal able to make one light beam turn another light beam on and off, a prerequisite for the production of a completely photonic (optical) computer. As explained below, their quest has yet to provide a practical elementary logic component. Photonic logic, based on a different physical principle, is proving to be the key to the production of a completely optical computing system. Such a system would completely replace the start-and-stop surges of electrons with tiny light beams that simply blink on and off, carrying information and performing the logic of computing in light-speed photonic computers... without slowing the photons down in some crystal or enslaving them to some electro-optical process.

Short History of the Photonic Transistor:
In 1989 the Photonic Transistor was invented at the Rocky Mountain Research Center, and then tested in the laboratories of the University of Montana and Montana State University. In 1992, U.S. Patent 5,093,802 was issued to the Rocky Mountain Research Center. Since that time, the entire field of interference-based photonic computing has continued to grow. Even that crude first example was able to accomplish what the 'experts' said was impossible: making one light beam turn another light beam on and off without the use of some electronic gizmo in the middle.

Ever since the 1930s, the advantages of light for carrying information within the newly emerging computer science have been recognized. The problem was that, back then, researchers lacked the tools needed to make light compute. As a result, the task fell to electrons, and the electronic computer age was born. Since then, three major events have laid the groundwork for the present effort at producing fully photonic (optical) digital computers.

The first was the invention of the laser. Without the laser it is easy to see why researchers rejected light as a viable computing medium. Ordinary optical signals are a noisy mish-mash of difficult-to-control electromagnetic energy, rather akin to the noisy old spark-gap radio transmitters from the beginning of the 20th century. Lasers, on the other hand, produce much cleaner continuous-wave signals, more like those of modern radio transmitters, which can be used to convey complex information.

The next discovery was the computer-generated hologram. Unlike lenses, mirrors and so on, holograms can be calculated into existence using the laws of optical physics. We can take any known optical signal and calculate the hologram(s) needed to direct and manipulate it in any way the laws of physics permit. When laser light is directed through the resulting hologram, the light does exactly what we have calculated it to do. In a photonic computer there will be many such optical signals. Each one will have its own characteristics and will need to be redirected into some other configuration in concert with many other such signals. Holograms allow us to do just that. They allow us to interconnect optical components, and even to create optical components. What's more, they can be mass-manufactured from inexpensive glass, plastic and aluminum, just like the holograms on credit cards.

The third background element, the one that has brought forth the photonic transistor now rather than twenty-some years ago when lasers first became readily available, has a more human character.
Over the years a considerable investment was made by those 65 companies, and a number of universities, in the optical computing effort. This effort began long before lasers and holograms, and the work concentrated on electro-optical and nonlinear-optical methods of optical computing. Problems soon arose, problems that have proven insurmountable. First of all, any system that uses electrons in the process cannot be any faster than the electrons themselves. Electronic signals are able to traverse a microscopic electronic transistor and accomplish their assigned task in a little less than a nanosecond. Light, on the other hand, travels 30 cm (about 11 3/4 inches) in that same nanosecond (a billionth of a second). A whole series of photonic transistors can be placed within that same 30 cm, accomplishing entire computing tasks in the time it takes a single electronic transistor just to switch from off to on. Therefore, every electro-optical device will always be hamstrung by the 'electro' part, and there is not much reason to use light if it is hung up on electrons.

Nonlinear optics exploits the optical properties of certain (rather expensive) materials that slow light down to two different speeds at the same time. Two problems occur. The first is obvious: the light is slowed down. The second is that in order to get light to respond within such crystals so as to perform computing tasks, the lasers have to be so powerful that the components get fried whenever one puts enough of them together to do anything useful. The effort to produce inexpensive nonlinear crystals that switch fast at reasonable power levels has not been successful; in fact, such a substance has been dubbed "Unobtainium" by those in the field. Their multi-million-dollar effort has failed to produce marketable optical computers. After all, how would you like to be the head of research at some big company or university who has to go to the board of directors and tell them that things just didn't work out, and all the money's gone? Worse yet, that some little outfit in Montana found the secret right under their noses, by examining optical effects that they had rejected decades ago under entirely different technological circumstances? It's no wonder that both research and thinking get entrenched when budgets are at stake. So over the years, whenever anyone suggested using optical interference, the idea was rejected out of hand, based on antiquated technology and a strong desire to maintain the status quo. However, there is one overriding thing to remember: interference-based Photonic Transistors work! They have been tested in the lab, and we are continuing to make rapid progress toward the goal of replacing nearly all electronics with photonics on a global scale.

How the first Photonic Transistor works:
A "photonic" computer should use photons.
Photons are the basic units of electromagnetic energy, just as electrons are the basic units of electricity. Unlike the nonlinear optical materials that require a large supply of photons to bias them up to some switching level, Photonic Transistors need only signal levels of photons to work. Just as we can see certain things at night that emit only a handful of photons per second, so too the Photonic Transistor must be able to operate using small amounts of energy. The next desirable attribute is that they should always operate at the full speed of light. Unlike the so-called SEEDs (Self Electro-optic Effect Devices) that are popular with the big-budget people, Photonic Transistors do not use electricity in any way, shape or form. The fundamental physical control and manipulation processes used do not slow down the light. The only retardation occurs during the very short time that the energy must pass through a dense medium such as a thin hologram. The Photonic Transistor is also vacuum compatible, meaning that it can be operated in air, or even in a vacuum where light moves at the universal speed limit.

Optical interference is a process of energy rearrangement that occurs when two laser beams pass through the same point at the same time. (This is not the only such circumstance, but it is the one under discussion at the moment.) The energy pattern forms an interference image that depends upon the wavefront pattern, input energy level, and phase of each of the two input beams, along with the geometry of the encounter. Interference has another very important property: if we accurately know all of the parameters of all of the inputs, the output interference image can be calculated by a process called the "linear addition of amplitudes," or the "vector addition of amplitudes." Since digital computers operate at discrete energy levels (two levels in the case of binary logic), each two-input photonic logic gate will have 4 possible combinations of its inputs being either high or low... on or off. As a result, 4 different images need to be calculated, one for each input combination. During high-speed computing, the interference image will switch continually among this set of images. Taken together, they form a "Dynamic Image." Therefore, at any given location within the dynamic image, the amplitude, and thus the energy level (which is proportional to the square of the amplitude), will change among 4 different static states as determined by the optical arrangement of the transistor.
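For those who want to see the "vector addition of amplitudes" in numbers, here is a minimal sketch in Python (our illustration, not part of the laboratory apparatus). A unit amplitude for each beam and a 180-degree relative phase at the chosen point are assumed values for the example:

```python
import cmath

def energy_at_point(beam1_on, beam2_on, phase_diff_rad):
    """Energy at one point of the dynamic image, found by linear
    (vector) addition of the two complex input amplitudes.
    Energy is proportional to the squared magnitude of the sum."""
    a1 = 1.0 if beam1_on else 0.0                             # beam 1, unit amplitude
    a2 = cmath.exp(1j * phase_diff_rad) if beam2_on else 0.0  # beam 2, phase-shifted
    return abs(a1 + a2) ** 2

# The 4 static states of the dynamic image at a point where the two
# beams arrive 180 degrees out of phase:
for b1 in (False, True):
    for b2 in (False, True):
        print(b1, b2, round(energy_at_point(b1, b2, cmath.pi), 6))
```

Read at an in-phase point instead (phase difference 0), the same four-state table gives energies 0, 1, 1 and 4: the pattern put to work later for the OR and the amplifier.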
If we place an image component separator, such as a mask with a hole in it, at any location within the dynamic image, then any energy that shows up at the hole goes through the mask into the output. Any energy that does not show up at the hole is prevented by the mask from contributing to the output of the transistor. If we select a location for the hole that switches through a set of energy patterns matching what we want this particular transistor to do, then the transistor will do what we want. The output will have a modulated waveform that is dictated by the optical arrangement and controlled by the states of the input beams. The transistor has one other feature of overriding value: the calculated image set defines the output energy in all of its states. This precise description of the output signal can then be used to accurately calculate the next transistor, lens, hologram, or whatever. By mathematically stepping from one device to the next, we will be calculating entire photonic computers into existence. Usually there are multiple sites in the Dynamic Image that have compatible energy-state series. Holes can be placed at these locations too, and their energy combined with that from the other holes to produce composite transistors with all different kinds of attributes. Now that we have a method of calculating and manipulating known light signals so as to make them do all sorts of things:
  

What are some of the basic things that can be done? What difficulties arise with each component? How do we compensate for complexities that arise?


The first logic components:
The first photonic logic components, built using laboratory test models of Photonic Transistors, produced the Boolean logic functions XOR (pronounced 'exclusive-or'), NOT, and OR. According to the rules of Boolean switching logic, various combinations of only two of these basic functions are all that are needed in order to construct all of computing! That first transistor was also used to produce an elementary photonic signal amplifier for increasing the amount of energy in a modulated input beam. Amplifiers are also needed in order to interconnect working logic functions into functioning computers. Of these, the component with the fewest complications in its output configuration was the NOT function. In electronics, the NOT is called an 'inverter.' In photonics, when its input beam is on, it produces an off output... and an off input produces an on output. Since at least one of the transistor's inputs must be on in order to produce energy in the output, the 2nd input beam accomplishes exactly what all the 'experts' said was impossible: one beam was used to turn another beam on and off without some electronic doohickey in the middle.

The OR function works like the dome light switches in your car. When one door, OR the other door, OR both doors are open, the dome light is on. All of the doors have to be closed in order to turn off the overhead light. The XOR is like the light over your basement stairwell. When the upstairs switch is off (down), along with the downstairs switch, the light is off. When they are both up (on), the light is still off. Only when one is on and the other off (in opposite positions) does the light go on. (Unless, of course, your electrician put one of them in upside-down.) With light, one of the most curious things is that darkness (at the minima) can be produced by beams that are on. That is, light makes darkness. It is this basic quality of light that allows us to make an XOR, and to do all sorts of other neat things. To understand how it works, recall that the photonic transistor is based on three basic principles:



1. Given the optical characteristics of a beam of laser light, the laws of optical physics can be used to calculate a hologram (or part of a hologram) that will reconfigure that energy into any other physically allowable optical configuration that we will need for interconnecting information-carrying light beams in holographic integrated circuits. Thus, computer-generated holograms are capable of implementing photonic digital computing on a grand scale.

2. Holographically controlled laser beams can thus be configured so as to produce photonic transistor processes, just as has already been demonstrated using discrete-element optics.

3. The patented foundation of photonic computing provides the theoretical basis for organizing multiple light beams into completely photonic digital computers that are able to imitate electronic functions at light speed.

The Original Patent: The best way to really begin understanding photonic transistors is by reading and studying (with an open mind) the various patents that are available through this web site or directly from the Patent Office. The brief examination here is cursory, and is by no means an exhaustive explanation meant to answer all of the many pertinent questions that come up. We will cover some of the frequently asked questions, however.

It is commonly believed that photonic computing cannot be accomplished using optical interference because it is a "linear" process. That is, when two or more tiny laser beams are superimposed onto the same spot at the same time, the energy redistributions that occur follow the laws of linear (algebraic, i.e., + and -) addition of amplitudes. However, as will be shown, the holographic photonic transistor affords us the opportunity to use all of the various combinations of electromagnetic energy that the laws of physics allow. And while complications do occur, we can keep track of them from process to process, and provide the needed corrections while designing the computer-generated holograms. These holograms will then be manufactured into commercially available photonic computers in the very near future. To begin, we need to start somewhere, and the place to start is at the elementary Boolean logic level: the basic photonic switches that are the heart of digital computing. Many electronic-imitating Boolean logic devices, which are the basic transistor circuits used to build digital computers, have two inputs that at various times are either on or off. This produces 4 different possible configurations:
   

1. Both light beams on.
2. The 1st one on and the 2nd one off.
3. The 2nd one on and the 1st one off.
4. Both beams off.

In one of the simplest arrangements, the two inputs are two slits side by side, just as Thomas Young used to demonstrate the physics of optical interference nearly 200 years ago. By switching the light that goes through each slit on and off independently, we have produced a two-input photonic device that produces a dynamic image from which we are able to extract energy to form our outputs. (Note: This discussion assumes that when each beam is on, it has characteristics exactly like those of a simple two-slit experiment, regardless of the source of the light. That is, the light arrives at the back of the slits as a plane wave of coherent, in-phase laser light, just as if the laser were simply shone on the back of a mask with the slits in it. One can also perform such an experiment by blocking the light to each slit separately to illustrate the various input states.)

When the 1st beam is on by itself, a generally consistent distribution of energy occurs over the cross section of the beam, having a phase at each location that depends upon the geometry of the optics. The actual pattern has a distribution of energy wherein the amplitude and phase may be precisely calculated for every location in the Dynamic Image. Thus, every possible waveform for every location can be determined quite accurately. When the 2nd beam is on by itself, a similar energy distribution occurs; however, its phase distribution is different from that of the 1st beam's image, even if we align the images so that their amplitude variations match. This is because the 2nd slit is not at the same location as the 1st one, so the wavelength-unit distances from each location in the two inputs to each location in the output image are geometrically different. It is these physical distances that are used to calculate the phase of a ray expected to show up at a particular location. Then we can sum its amplitude in with the amplitudes of all of the other rays that are supposed to arrive at that location at the same time.

When both input beams are on together, an energy redistribution occurs so that the energy becomes concentrated in the areas where energy from the two beams is naturally in phase due to that geometry. The places with the greatest concentration of energy are called the "maxima," and the places with a minimum amount of energy are naturally called the "minima." The maxima are said to be produced by "Constructive Interference" (which I often abbreviate as CI). The minima are said to be caused by "Destructive Interference" (which I abbreviate as DI). In between the maxima and minima there is a range of energy distributions, from weak low-level signals near the minima to strong signals near the maxima. The optics can be designed so that there is little or no phase shift between the various states at the location where the maxima shows up when both inputs are on.
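This phase-from-path-length calculation can be sketched numerically. The Python below is our own illustration; the slit separation and screen distance are assumed values, chosen only to make the fringes easy to scan:

```python
import math

WAVELENGTH = 632.8e-9   # red HeNe laser light, metres
SLIT_SEP = 50e-6        # assumed slit separation, metres
SCREEN_D = 0.1          # assumed slit-to-screen distance, metres

def amplitude_at(x, slit_y):
    """Complex amplitude contributed at screen position x by one slit:
    the phase comes from the geometric path length in wavelength units."""
    path = math.hypot(SCREEN_D, x - slit_y)
    phase = 2 * math.pi * path / WAVELENGTH
    return complex(math.cos(phase), math.sin(phase))

def energy_at(x, beam1_on=True, beam2_on=True):
    """Sum the amplitudes of the rays arriving at x, then square."""
    total = 0j
    if beam1_on:
        total += amplitude_at(x, +SLIT_SEP / 2)
    if beam2_on:
        total += amplitude_at(x, -SLIT_SEP / 2)
    return abs(total) ** 2

# Scan 2 mm of the screen: the pattern swings between near-dark minima
# (~0) and bright maxima (~4 units, versus 1 unit for either beam alone)
energies = [energy_at(i * 1e-5) for i in range(200)]
print(round(min(energies), 4), round(max(energies), 4))
```

Either beam alone delivers one unit of energy everywhere in this idealized scan; only when both are on does the energy rearrange itself into maxima and minima.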
At the location where the minima shows up, the phase shifts by 180 degrees between the two single-beam input states, and the output goes dark when both inputs are on. The places in between the maxima location and the minima location can have all sorts of phase fluctuations that we will put to work in later devices.

The terminology used to describe the continually changing patterns of the dynamic image can get confusing. If we choose a naming convention defined during the state when both inputs are on, we can call those locations 'maxima' and 'minima.' If we place a mask with a hole at one such location, we can thus describe where it is. The problem is that when we modulate the inputs, other images are formed. The energy distributions in these images may change considerably at the location of the hole, and thus the amount of energy that makes it out through the hole changes. But once we have placed a hole in one place, and given it a name that comes from the two-beam state, we don't move the hole around. It stays in the same place, and to keep track of it we keep the same name, even though it may not be a maxima or minima in those other states. So, by placing an output hole in the separating mask that encompasses the maxima and its vicinity, the output will be very nearly like that at the precise center of the maxima. And one placed at the minima and its vicinity will very nearly match the activities at the exact minima... when both inputs are on.

The XOR: An output hole placed at the location of the minima provides an output when either of the input beams is on by itself. However, because 'light makes darkness,' a minima occurs over the output hole when both inputs are on. No energy arrives at that location, so nothing goes through the hole. Thus, the output when both inputs are on is off. In Boolean computer switching logic terms, the device is an XOR, just like the stairwell illustration. There is, however, a 180-degree phase shift that occurs between the two single-beam states. This introduces a phase modulation component that must be compensated for in succeeding components. It is important to note, though, that in spite of the phase shift, the needed logic information has been extracted by a combination of the inputs. How we use that information depends on what is needed in succeeding logic stages. However, without interference, and without energy separation from the components of the Dynamic Image, as detailed in our first patent, no XOR information is extracted! Such steps are vital for the creation of photonic computing.

The NOT: As with any XOR, if one of the beams is kept on all the time, the device becomes a NOT. That is, when the 2nd input is on, the output is off, and vice versa. With the NOT, there is no adverse phase-modulated component, because the reverse phase state is not used.
Since that one beam is kept on as a power supply to the device, and the other beam causes its energy either to exit the hole or not, that 2nd input beam is actually turning the power beam on and off.

The OR:

What happens if the hole is moved over to the maxima, the CI position? Now energy appears in the output whenever any of the beams is on. While there is a variation in output amplitude that must be compensated for, this Boolean device is an OR, just like the car door illustration.

The All-Photonic Amplifier: Only two basic Boolean devices are required in order to produce ALL OF COMPUTING, and we have three of them in that first patent alone. However, in order to make up for energy loss from one device to another, one needs an amplifier. If one beam is kept on all the time as a sort of photonic power supply, and the 2nd beam is switched on, the output through the maxima-positioned hole jumps from the single-beam level to 4 times that level. (The reasons for this are discussed at some length in our other articles on the basics of interference.) Thus, the information-carrying portion of the output has 3 times as much energy as the original modulated input, making the invention a light-speed amplifier as well. If two such amplifiers are interconnected, just as in electronics, the result is a flip-flop: a light-speed binary information storage device. Two of these are described in that first patent.
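The hole-placement logic above can be summarized in a few lines of illustrative Python (our own sketch, not part of the patent). At the minima the two beams arrive 180 degrees out of phase; at the maxima, in phase; the output energy is the square of the summed amplitudes, with unit amplitudes assumed:

```python
def output_energy(b1, b2, at_minima):
    """Idealized energy through an output hole.  At the minima the two
    beams arrive 180 degrees out of phase (amplitudes +1 and -1);
    at the maxima they arrive in phase (+1 and +1)."""
    a1 = 1.0 if b1 else 0.0
    a2 = (-1.0 if at_minima else 1.0) if b2 else 0.0
    return (a1 + a2) ** 2

# XOR: hole at the minima -- when both beams are on, light makes darkness
xor = {(b1, b2): output_energy(b1, b2, at_minima=True) > 0
       for b1 in (0, 1) for b2 in (0, 1)}

# NOT: same hole, with beam 1 held on as a photonic power supply
not_gate = {b2: output_energy(1, b2, at_minima=True) > 0 for b2 in (0, 1)}

# OR: hole at the maxima -- energy appears whenever any beam is on
or_gate = {(b1, b2): output_energy(b1, b2, at_minima=False) > 0
           for b1 in (0, 1) for b2 in (0, 1)}

# Amplifier: beam 1 held on; switching beam 2 on takes the output from
# the single-beam level (1) to 4 times that level -- a 3x gain in the
# information-carrying portion of the signal
print(output_energy(1, 0, at_minima=False),
      output_energy(1, 1, at_minima=False))  # -> 1.0 4.0
```

The phase and amplitude fluctuations discussed in the text are deliberately left out here; this captures only the on/off logic of the three devices and the amplifier.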

Beyond the first transistors: As mentioned above, there are certain limits to the operation of some of the devices based on this first patent. The first is the existence of phase and amplitude fluctuations in the output of all but the NOT device. As a result, a number of means and methods have been devised either to accomplish the same job a different way, or to compensate the output so that the logic information produced can still be used without causing problems in succeeding devices. Interference is not something that is easily accomplished on the macro scale. But then, we are usually not interested in making big transistors; little ones are what we want. So how small can they be made? The 3M company has demonstrated the ability to produce 20,000 independent holographic-like lenses on a single square centimeter of material. While there's no reason to imagine that this is the limit for making small-scale devices, it is certainly a fine start. Unlike the economic vitality (or lack of it) in the electronics world, which depends upon the ever-increasing cost of silicon real estate, the inexpensive glass, plastic and aluminum that will be used to make photonic computers permits one to use as much material as is needed. Certainly there's no reason why photonic computers cannot eventually be made even smaller than today's laptops. When they are, they will certainly have a lot more horsepower. Most of the physical limitations are addressed through other parts of our intellectual property. Some of these will be discussed below under "Patents and Patents Pending."

Speed Trials of Working Transistors: Light really moves. In one second, electromagnetic energy can circle the earth seven and a half times. In one nanosecond (one billionth of a second), light goes 30 cm, or about 11 3/4 inches. By measuring the dimensions of the smallest working model of our photonic transistor, we can calculate the amount of time it takes light to pass through the device in order to accomplish the photonic logic and amplification functions described above. In a working photonic computer, these will be the switching times that determine how fast we will be able to make a photonic supercomputer go. First, by way of comparison, so that we can appreciate the import of what has been accomplished: the electronic transistors that perform logic in a $5,000,000 Cray III supercomputer are able to switch in about 0.25 nanoseconds, using expensive gallium arsenide transistors. That translates into a 2-nanosecond clock cycle time, which in more familiar terms is 500 MHz. So the new 200 MHz Pentium is getting up there. Of course, it's tough to make $2000 Pentium machines out of the materials needed to build a $5,000,000 Cray. Our test transistor was made using a piece of glass, so that we could easily hold the image component separator still in relation to the beam-combining optics. By calculating the distances through the glass and the attached mask, the transit times could be calculated. Each one we built was progressively smaller, until we reached the point where the transit time for 632.8 nm red laser light is 0.007 nanoseconds!
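These figures are easy to check with a little arithmetic. The sketch below (Python, for illustration only) uses the reported 0.007 ns transit time; the glass refractive index of 1.5 is our own assumption, used only to estimate the implied optical path length:

```python
C = 2.998e8                  # speed of light in vacuum, m/s

# Light covers about 30 cm per nanosecond in vacuum:
print(C * 1e-9, "m per ns")  # ~0.3 m

# Reported transit time of our smallest test transistor vs. the
# reported switching time of a Cray III electronic transistor:
photonic_ns = 0.007
cray_ns = 0.25
print(cray_ns / photonic_ns, "times faster")  # ~35.7

# Implied optical path through glass (n ~ 1.5 is an assumption):
n_glass = 1.5
path_m = photonic_ns * 1e-9 * C / n_glass
print(path_m * 1000, "mm of glass")           # ~1.4 mm
```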

The Cray reportedly has transistors that switch in 0.25 ns; in that case, our device is roughly 35 times faster than the Cray's. Some will call such a device 'crude.' Well, have you ever seen a picture of the first electronic transistor? It looks as crude as can be. However, it had one overriding quality... IT WORKED! That first example of semiconductor-based amplification was never used to do anything other than demonstrate how transistors work. The crude examples that we have produced likewise have one overriding feature: THEY WORK! They work exceedingly well. They are real and functioning transistors that do what no prior photonic device is capable of doing without expensive and pulse-time-consuming processes: they use light beams to turn other light beams on and off. A thing that all the so-called experts said was 'impossible!'

For a long time after the electronic transistor was invented, AT&T couldn't get a dime out of the major electronic companies, all of which had a considerable investment in vacuum tubes. They had loads of people with all sorts of 'proofs' that transistor computers would never work. Not until AT&T contacted a little-known company in Texas that made electronic instruments... Texas Instruments, which had no stake in vacuum tubes... did the technology take off. I guess people are always that way. When Thomas Edison invented the phonograph, he took it to France to demonstrate it before the French Academy of Sciences. After the demonstration, two of the most prominent members of the society stood up to pronounce the phonograph they had just witnessed working a hoax. They said that it was impossible to make a machine talk, and that the American doing the demonstrating was a ventriloquist! But, back to the subject. What really is the Photonic Transistor's speed limit?

Pipelined Pulses: If the transit time through an electronic transistor is one nanosecond, the input must remain either completely on or completely off for that full nanosecond.
Otherwise considerable noise will be introduced into the system. The Photonic transistor, however, is able to operate using pipelined pulses.

That is, a continuous stream of very short pulses can be introduced into a single transistor, pulses much shorter than the transit time of the device, and they will all be processed independently without any noise buildup. Just as information pipelining is an important part of the architecture of the Pentium and many supercomputers, so too pipelining information into the various light beams that make up a photonic computer can greatly increase its throughput. But how fast? What is the theoretical limit? The theoretical limit for the shortest pipelined pulse would be equal to the period of oscillation of a one-wavelength-long pulse. Now, I didn't say that this limit is easy to reach; I merely said that it seems to be a limit. If it can be reached, the switching time for that same red laser light would be 2.1 femtoseconds! A 'femtosecond' is one millionth of a nanosecond. If a shorter wavelength is used, the pulse time is shorter: with 300 nm ultraviolet light, the switching time is 1 femtosecond! However, such switching-time comparisons to today's electronic computers do not take into account light's capacity for massively parallel computing. That is, by doing millions of things at the same time, far more work can be accomplished. Channelizing the visible part of the spectrum provides over 4 billion separate channels. Photonic transistors are capable of operating using all of them, individually and all together. They can be manipulated as easily as forming the right kind of dynamic images and separating the appropriate energy patterns from them.
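The one-wavelength limit is simply the wavelength divided by the speed of light, which anyone can verify (illustrative Python):

```python
C = 2.998e8   # speed of light in vacuum, m/s

def one_wavelength_pulse_fs(wavelength_m):
    """Period of a one-wavelength-long pulse, in femtoseconds
    (1 fs = one millionth of a nanosecond)."""
    return wavelength_m / C * 1e15

print(one_wavelength_pulse_fs(632.8e-9))  # ~2.1 fs for red HeNe light
print(one_wavelength_pulse_fs(300e-9))    # ~1.0 fs for ultraviolet
```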
(end of file 2, continued in file 3)

Photonic computing has the ability to surpass electronic capabilities on a grand scale, physically as well as economically. In 1994, Cyber Dyne Computer Corp. was formed as a for-profit corporation that purchased our photonic technology. Since then, Cyber Dyne and the Rocky Mountain Research Center (RMRC) have been engaged in a massive patenting program so as to lock together all of the basic photonic computing technology into a single body of intellectual property, which is now owned by, and is being developed by, All Optical Networks, Inc. (AON), the successor to Cyber Dyne. The first ten patents were issued in the name of RMRC, while those following are being issued directly to AON.

Unlike the rest of the industry, our approach doesn't just tweak the same set of technological tools in an attempt to come up with some slight improvement for another me-too product. Rather, each of our patents covers an important cornerstone of an entire photonic technology, rather than just a few sporadic inventions or incremental improvements. Each addresses a separate issue in the progressive development of the photonic revolution, making a unified body of intellectual property capable of supporting a real photonic revolution. Below is a list of those first ten patents:

U.S. patent 5,093,802. Optical Computing Method Using Interference Fringe Component Regions. Broad fundamental patent of the basic Photonic Transistor, covering the use of optical interference to accomplish basic Boolean switching functions and signal amplification. The photonic transistor imitates electronic transistor functions at light speed. As of this writing, photonic transistors have been produced that switch signals in 1.5 femtoseconds (10^-15 seconds).

U.S. patents 5,623,366 and 5,644,123. Photonic Signal Processing, Amplification, and Computing Using Special Interference. Fundamental methods patent covering the production of, and all applications of, "Special Interference," wherein light is deflected from a destructive interference point to another location. Special interference enables precision control of light. The patent includes 30 electronic-component-imitating photonic circuits. While special interference requires special optics, it performs the Boolean AND function, among others, because it is based on the law of energy conservation rather than the conventional methods of manipulating light using interference.

U.S. patent 5,466,925. Amplitude to Phase Conversion Logic. A fundamental patent covering vector addition of amplitudes as used in interference-based optical computing.
It includes three light-speed logic devices that have amplitude-modulated inputs and phase-modulated outputs: the multi-input threshold device, the multi-input Boolean AND, and the multi-input OR. It also covers the NAND, which has amplitude inputs and outputs. Amplitude to Phase Conversion Logic is an excellent example of how the most fundamental operations of optics have been incorporated into our intellectual property in order to broaden their scope and increase their value. Light beams can carry binary information using phase modulation (binary phase shifts between 0 and 180 degrees) just as they are able to carry amplitude-modulated information by blinking on and off. This patent is essentially the direct application of the principle of vector summation of waves to provide a phase-modulated output from amplitude- (or even phase-) modulated inputs.

As in the arrangement described above, optical interference is used to produce an image, such as is produced by a hologram. This time, three or more beams are brought together to produce the interference image. One of the beams is kept on all the time as a bias beam. The other beams are aligned so that they are out of phase with the bias beam but in phase with each other at the location where vector summation is to take place. The phase of the output will change suddenly, flipping by 180 degrees, whenever the sum of the amplitude-modulated inputs is greater than the bias beam.

So, how can this be used to produce logic functions? If the amplitude of the bias beam is set greater than that of the individual input beams, then the sum of all inputs is required to overcome the threshold established by the bias and flip the output phase 180 degrees. This logic function is a multi-input AND. Now suppose that the bias level is smaller than any of the single modulated beams. In this case it requires only one... any one... of the modulated inputs to flip the output phase. This is a multi-input Boolean OR. This patented process can have as many inputs as one would like, and as many input/output combinations as one would like, by this simple manipulation of vector wave combinations. By combining the two inventions, the scope of photonic computing technology is spreading to cover every practical photonic configuration that uses interference images.

U.S. patent 5,555,126. Dual Phase Amplification with Phase Logic. Fundamental patent covering phase-modulated logic that uses conventional interference to produce signal amplification, and information storage on circulating light beams.

U.S. patent 5,726,804. Wavetrain Stabilization and Sorting. A means and method of producing long continuous wavetrains and very narrow linewidth laser light from generally less expensive broad-lined lasers, including laser diodes. In addition, wavetrains can be sorted and distributed as desired.

U.S. patent 5,617,249. Frequency Multiplexed Logic, Amplification, and Energy Beam Control. Fundamental use of multiple frequencies (colors) simultaneously and independently within a single device.
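The bias-and-threshold behavior described above can be illustrated with a toy numerical model. This is only a sketch of the vector-summation arithmetic, not the actual optical implementation: the inputs are treated as phasors exactly 180 degrees out of phase with the bias beam, so summation reduces to subtraction, and the output phase flips when the inputs outweigh the bias. All amplitude values are made up for illustration.

```python
def output_phase_deg(bias_amp: float, input_amps: list[float]) -> float:
    """Toy phasor model: bias beam at phase 0, inputs at phase 180.
    Returns the phase (degrees) of the vector sum."""
    total = bias_amp - sum(input_amps)  # inputs oppose the bias
    return 0.0 if total >= 0 else 180.0

# Multi-input AND: bias larger than any single input, so ALL inputs
# must be on before the output phase flips.
print(output_phase_deg(2.5, [1.0, 1.0, 0.0]))  # two of three on -> 0.0
print(output_phase_deg(2.5, [1.0, 1.0, 1.0]))  # all three on -> 180.0

# Multi-input OR: bias smaller than any single input, so ANY ONE
# input flips the output phase.
print(output_phase_deg(0.5, [1.0, 0.0, 0.0]))  # one input on -> 180.0
```

The same function serves as AND or OR purely by the choice of bias amplitude, mirroring the point made in the text.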
For example, a single bistable element that would hold one bit in its electronic equivalent would be able to store a megabyte or more of information using different colors of the visible spectrum. Millions of basic logic interactions ordinarily done one at a time can be accomplished simultaneously using this important invention. This invention enables photonic devices to calculate and store more information in a single light beam than can be stored in a comparably sized silicon chip... by several orders of magnitude.

U.S. patent 5,691,532. Photonic Heterodyning Using an Image Component Separator. Fundamental patent on heterodyning light in a manner similar to the way heterodyning in radio and television is used to select separate frequency channels. Along with the highly selective filter included in 5,623,366, this invention enables light waves to be manipulated just like radio waves.
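The frequency-multiplexing idea above can be pictured as one beam carrying an independent bit on each wavelength channel. The following is only a toy bookkeeping sketch of that concept; the channel wavelengths are illustrative and are not taken from the patent.

```python
# Toy model of frequency-multiplexed storage: one "beam" holding an
# independent bit on each wavelength (color) channel.
beam: dict[float, int] = {}  # wavelength in nm -> stored bit

def write_bit(wavelength_nm: float, bit: int) -> None:
    """Store a bit on one color channel of the beam."""
    beam[wavelength_nm] = bit

def read_bit(wavelength_nm: float) -> int:
    """Read the bit on one color channel; unused channels read as 0."""
    return beam.get(wavelength_nm, 0)

write_bit(633.0, 1)  # red channel
write_bit(532.0, 0)  # green channel
write_bit(450.0, 1)  # blue channel
print(read_bit(633.0), read_bit(532.0), read_bit(450.0))  # 1 0 1
```

Each channel is written and read independently, which is the essential claim the text makes for frequency-multiplexed logic and storage.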

U.S. patent 5,835,246. Addressable Imaging. The operation of optical memory requires quick access and rapid address decoding. This basic patent covers the entire optical mechanism required to produce light-speed optical memory, both RAM and ROM.

U.S. patent 5,770,854. Pattern Recognition Computing. Optical images are essentially just patterns of energy distributions. Such images contain information at different locations within the images, which changes continually as the inputs change. This extremely valuable patent covers the mechanism for producing pattern-recognizable images from arbitrary input patterns. Nearly all computing processes are basically some form of pattern recognition. Thus, this patent covers the fundamentals of nearly all optical computing.

It doesn't do any good to build it if one doesn't own it. Patents determine what it is that one owns. Narrow patents have limited coverage and limited value. Broad patents provide their owners with a government-granted monopoly for nearly 20 years. Many of the above patents are now being issued in country after country around the world, so that overseas markets can also be opened up to All Optical Networks. End of Article
