With his sister’s perspective in mind, Eby quickly realized that relatively cheap CubeSat technology could provide a solution to the current bottleneck in Mars-bound probes. Thus, the concept for a low-cost Mars lander was born.

“She’s always gotten me interested in Mars,” Eby says of his sister, who is a senior research scientist at the Tucson-based Planetary Science Institute. “She used to do some of the on-orbit camera-targeting for Mars — taking high-resolution pictures. This was about a decade ago and I was visiting her and it was my birthday. As a gift, she let me select where on Mars I wanted to take a picture. So, I picked this one crater and drew a narrow swath across it, and a few days later, it came back: a great high-resolution image. And I spent hours looking at that one image because it was the only high-resolution image of that one spot on Mars. And it was so spectacular. And it’s amazing how much on Mars we’ve never explored because we’ve only been to a handful of spots.”

It is with that intrepid sense of discovery and wonder that Eby and Mechanical Systems Department support staff members Ash Peltz and Vivian Churchill — supported by the Aerospace Independent Research and Development (IRAD) program — set about designing an economical vehicle for Martian exploration as part of the MarsDrop project.

MarsDrop’s basic goal is to repurpose Aerospace’s Reentry Breakup Recorder (REBR) vehicle as a planetary micro-probe for use in a Mars-based mission. The current aeroshell that houses a given REBR vehicle is aerodynamically stable and well-suited for Mars entry. Eby and his team have developed a landing system for the REBR vehicle that leaves volume within the spacecraft for scientific experimentation.

The MarsDrop, which would hitch a ride to the red planet as a secondary payload aboard a Mars mission, would also make use of a style of gliding parachute known as a parawing that was developed in the 1960s for the Apollo and Gemini missions. The parawing’s lift creates less vertical speed than a standard parachute, which enhances steering capacity and enables long-distance gliding. The gliding capability of the MarsDrop is one of its most exciting features, allowing for potentially stunning flyovers of the Martian surface.

The large, wildly expensive Mars rovers (Curiosity, Opportunity) are incredibly complex and remarkable in their ability to collect data and execute numerous functions while exploring the planet, but they are hamstrung by budget limitations and the sheer physical size of the vehicles. The MarsDrop, though technologically rudimentary by comparison, offers a low-cost means of exploring Mars that can be utilized in a host of different mission plans. Eby sees potential for solo scientific missions, flyover missions, and even fleet missions that spread a group of landers over a spectrum of the Martian landscape.

In May of this year, Eby and Churchill performed an initial test of the technology in the Nevada desert by sending a MarsDrop lander up to 80,000 feet in a weather balloon and then cutting it free — forcing an earthly descent. Unlike the expensive rovers, MarsDrop landers can be tested under representative conditions on Earth without fear of financial catastrophe due to a potential crash or malfunction. “You could test this in a wind tunnel,” says Eby, “but the problem with a wind tunnel is that you are at sea level. It doesn’t match what the atmosphere on Mars is like. It’s not cold. It’s not a near-vacuum. But in high altitudes on Earth the atmosphere is very thin, very cold, and almost a near-vacuum, which is about what the Mars surface atmosphere is like.”

In its initial test, the MarsDrop lander performed well, though an electrical short prevented its parawing from deploying, forcing the lander to use one of its backup parachutes. “This first flight was a lot about just seeing whether we could get the experiment to work. It was a great learning experience,” says Eby.

Over the next year, Eby and his team plan to test the lander a few more times in order to prove the concept. Once all of the kinks are ironed out, the group will have developed a compact, cost-effective vessel for future scientific exploration on Mars. “You can’t replace a three billion dollar Mars rover with a million dollar, six-pound lander,” says Eby. “The rovers can do amazing things. But the lander will be a good complement. You can send them out very inexpensively, scout out high-risk areas and do some simple science.”

One of the more fanciful applications that the Aerospace team has dreamed up is a flyover of Mars’ magnificent canyon system, Valles Marineris. Aside from its scientific utility, Eby sees the lander — and its potential for theatrical flyovers — as a wonderful PR tool for current and future Mars missions. “To say this is the first time we’ve ever flown over Mars or flown on Mars, I think it would capture the interest of the public,” says Eby. “On the last mission that landed on Mars, they had a descent imager that captured a minute or two of video. It’s really quite spectacular. Imagine doing something similar to that over ten minutes of imagery. That would be quite interesting.”

With a mere six pounds of entry mass, the MarsDrop would be the first micro-probe to ever land on Mars and the first vehicle to ever fly within the confines of the red planet. That seems quite interesting, indeed.


Go back to the Spring 2011 Table of Contents

**Robert G. Pettit, IV** (left), Senior Project Leader, Software Engineering Subdivision, has more than 20 years of experience in software engineering. He is widely recognized in the fields of model-based software engineering, real-time software systems, and the Ada programming language. He coleads the Flight Software and Embedded Systems Office, which is tasked with continuous improvement for software-related technologies. He is a senior member of IEEE and is an adjunct professor at George Mason University and Virginia Tech. He has a Ph.D. in information technology/software engineering from George Mason University.

**Elisabeth A. Nguyen** (center), Engineering Specialist, Software Systems Engineering Department, joined Aerospace in 2006. She has provided technical support to a number of programs in the areas of software systems reliability and dependability. Nguyen currently leads research efforts in assurance cases and model checking. She has a Ph.D. in computer science from the University of Virginia.

**Myron J. Hecht** (right), Senior Project Leader, Software Acquisition and Process Department, has supported GPS, SBIRS, AEHF, milsatcom, and civil and commercial programs in the areas of dependability, reliability, safety, and aviation certification. He is a senior member of IEEE and a consultant to the Nuclear Regulatory Commission and has served on standards committees for computers in nuclear power plants. Hecht has authored 90 publications on reliability as well as multiple Aerospace technical reports. He has an M.B.A., M.S. in nuclear engineering, and J.D. from UCLA.

**Bruce H. Weiller** (left), Senior Scientist, Micro/Nanotechnology Department, joined Aerospace in 1989, working in the Aerophysics Laboratory on chemical lasers, spectroscopy, and time-resolved kinetics. His technical interests include the development of nanostructured materials for chemical sensors and contamination issues in optics and high-powered lasers. Weiller is the author or coauthor of 78 scientific publications. He has a Ph.D. in physical chemistry from Cornell University.

**Alan R. Hopkins** (middle), Engineering Manager, Polymers, completed his postdoctoral training in chemistry at Caltech and the University of Florida before joining Aerospace. He is active in the American Chemical Society’s Division of Polymer Chemistry and cochaired a symposium on nanostructured polymers at the society’s 2010 national meeting; he also chaired a conference at Aerospace on carbon nanotubes for space applications. His research interests include electrically conducting materials, carbon nanotubes, structure/property relationships in blends, and nanocomposites. He has a Ph.D. in macromolecular science and engineering from the University of Michigan.

**Frank E. Livingston** (right), Research Scientist, Micro/Nanotechnology Department, studies the photophysics and chemistry of laser-material interactions, with particular expertise in laser-structured nanomaterials and photosensitive glass ceramics. Since joining Aerospace in 2001, he has supported many civil and commercial programs and is currently principal investigator of a multidisciplinary research program focused on new fabrication methods for uncooled infrared sensors and frequency-agile communication systems. He has coauthored more than 65 papers and book chapters. He has a Ph.D. in physical chemistry from UCLA.

**Erica Deionno** (left), Senior Member of the Technical Staff, Microelectronics Reliability and Radiation Effects, has designed, built, and tested polymer-based molecular electronic devices, conducted radiation testing of memristor-based memory devices, and developed experimental facilities for life-testing of MEMS spatial light modulators. She has a Ph.D. in chemistry from UCLA.

**Jon V. Osborn** (middle), Laboratory Manager, Microelectronics Reliability and Radiation Effects, has worked at Aerospace for more than 25 years. He was colead for the PicoSat-I and PicoSat-II missions, two of the smallest active networked Earth satellites to fly, and has worked in the field of radiation hardness by design. Most recently, he helped lead the development of the High Reliability Electronics Virtual Center for use in national security space systems. Osborn is a registered Professional Engineer in California and has an M.S. in electrical engineering from the University of Southern California.

**Adam W. Bushmaker** (right), Member of the Technical Staff, Microelectronics Reliability and Radiation Effects, received his Ph.D. in electrical engineering from the University of Southern California. His research interests include novel nanomaterials-based devices, terahertz electronics and photonics, and space science and technology. He received his B.S. in engineering physics from the University of Wisconsin, Platteville.

**William T. Lotshaw**, Laboratory Manager, Lidar, Atomic Clocks, and Laser Applications Section, joined Aerospace in 2005 and works in the Photonics Technology Department. He is a physical chemist with 26 years of experience in the technology and applications of ultrashort-pulse and solid-state lasers. He has a Ph.D. from the University of Chicago.

**Ching-Yao (Tony) Tang** (left), Member of the Technical Staff, Mechanics Research Department, joined Aerospace as an intern in 2001. Since becoming a regular employee in 2008, Tang has provided technical support in experimental and computational mechanics for national security space programs and is serving as coinvestigator on multiple commercial and DARPA programs involving armor, tire technology, novel materials, and underwater acoustic sources. He has a Ph.D. in aerospace engineering (propulsion) from Purdue University.

**Gary F. Hawkins** (right), Principal Director, Space Materials Laboratory, has recently been investigating the manufacture of composites with unique properties by embedding small, simple machines in a matrix material. His technical accomplishments are in the fields of materials sciences, nondestructive testing, rocket nozzle design, and manufacturing engineering. Hawkins has been granted 12 patents and has published more than 40 papers. He received his Ph.D. in physics from Wayne State University.

**Christopher P. Silva**, Senior Engineering Specialist, Communications and Networking Division, began working at Aerospace in 1989 as a member of the Electronics Research Laboratory. Throughout his career at Aerospace, Silva has supported many projects, including private/secure communications, chaotic radar, analysis of nonlinear circuits, and multicarrier satellite communication channels. In 1999, he received the Aerospace President’s Achievement Award. He is a fellow of IEEE and a senior member of AIAA. Silva has a Ph.D. in electrical engineering from the University of California, Berkeley.

**Samuel D. Gasster**, Senior Scientist, Software Engineering Environments Computer Systems Research Department, joined Aerospace in 1988. He has supported a wide range of defense and civilian programs and agencies, including DMSP, NPOESS, DARPA, NASA, and NOAA. He has taught remote sensing and computer science at UCLA Extension and has been a judge at the California State Science Fair, software and mathematics section. He has a Ph.D. in physics from the University of California, Berkeley. He is a member of the American Physical Society, IEEE, the American Geophysical Union and the International Council on Systems Engineering (INCOSE).


In 2000, David DiVincenzo of IBM published a set of criteria or requirements for assessing the viability of any physical implementation for a quantum information processing system. They can be described as follows:

A scalable physical system with well-characterized qubits. The stable quantum state could entail the spin (up or down) of an electron or the polarization (vertical or horizontal) of a photon. It requires accurate knowledge of physical parameters, internal energy, and coupling between qubits.

The ability to initialize the state of the qubits to a simple trusted state, such as |00…〉. Technology-dependent approaches would need to be developed to initialize the quantum registers, including the cooling or measurement operations; an important question involves how long this would take. Quantum error correction requires ancillary qubits in known states.

Long decoherence times. Decoherence (the collapse of the probability wave) must take longer than 10^{5} times the quantum computer clock time. The dynamics of the qubit interacting with its environment will need to be better understood and controlled. Faulty control mechanisms lead to faulty gates. Even worse, noise is essentially analog, acting on the 2^{n} complex numbers in superposition. Also, as a result of entanglement with the environment, coherent superposition becomes incoherent. Decoherence may be the biggest obstacle to quantum computing.

A universal set of quantum gates. Quantum algorithms describe a sequence of unitary transformations. It is difficult in some cases to create these operations for two and three qubits, and difficult to control the on/off interactions for these gates as a result of imperfect implementation. Not all gates would be available in each technology.

A qubit-specific measurement capability. A technology-dependent readout mechanism is required to read specific qubits without perturbing other qubits. Current techniques are much less than 100 percent efficient.

The ability to convert stationary and flying qubits. Flying qubits (e.g., photons) can be used to store and transport information; doing this at will has yet to be achieved.

The ability to faithfully transmit flying qubits between specified locations. In a real-world implementation, transmission losses could affect computation.


Back to main article: Qubit By Qubit: Advancing the State of Quantum Information Science and Technology


The first half of the twentieth century gave rise to two significant accomplishments that have had a profound impact well into the twenty-first century: the development of the quantum theory of matter, and Alan Turing’s foundational work on the universal computer. Quantum theory provided physicists with a deep understanding of matter on the atomic scale and has proven to be one of the most accurate theories ever developed in terms of predicting the behavior of physical systems. Turing’s work laid the groundwork for stored-program computers. The developments of quantum theory led to the invention in 1947 of the transistor, a semiconductor device that could amplify and switch electrical signals. Independently, the theory of digital computing and the implementation of full-scale general-purpose digital computers took shape in the 1950s and 1960s.

In the 1980s, physicist Richard Feynman published several papers in which he examined the ability of current computing technology to simulate physics—in particular, quantum physics. He highlighted the fact that to adequately describe a system of *n* particles, a computer would need to keep track of 2^{n} real numbers, and thus there would be an exponential scaling of the storage requirements. So, for even a small number of particles, say 50, a computer must track 2^{50} (roughly 10^{15}) real numbers. If each number consumes about 64 bits of memory, the overall memory requirement would be on the order of 10^{17} bits, or about 100,000 petabits. That would be a stressing requirement even for current supercomputers. Feynman also discussed the challenges of computational efficiency and how long it might take to complete such calculations. He argued that given the complexity of simulating quantum systems with classical computers, and the inadequacy of current technology and approximations, why not use one quantum system to simulate another? Starting with a quantum system that is well understood and characterized, it could be possible to simulate the behavior and properties of another quantum system that is not so well understood. Thus was born the basic concept of quantum computing.
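Feynman's storage argument is easy to verify with back-of-the-envelope arithmetic; the short sketch below simply restates the numbers in the paragraph above:

```python
# Back-of-the-envelope check of the storage needed to track a
# 50-particle quantum state classically (one amplitude per basis state).
n_particles = 50
amplitudes = 2 ** n_particles          # ~1.1e15 numbers to track
bits_per_number = 64                   # double-precision storage
total_bits = amplitudes * bits_per_number

print(f"{amplitudes:.2e} numbers, {total_bits:.2e} bits")
# total_bits comes out near 10^17 bits, matching the estimate above
```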

This early work has spawned the field of quantum information science and technology, which deals with the manipulation, storing, and transmission of information by taking advantage of the quantum mechanical properties of light and matter. One of the key distinctions between quantum and classical information and computation is that quantum information processing deals with direct manipulation of individual quanta (single quantum objects with well-defined quantum states), whereas classical devices rely on the macroscopic behavior of a large number of quanta.

Classical computers use electric voltage levels to represent the logic states of binary digits (bits) and gates that implement Boolean logical operations that transform the bit values (0 and 1) as part of the computation. Early digital computers used vacuum tubes, and then transistors, to create the voltage levels and implement the logic gates. Eventually, these technologies gave way to very-large-scale integrated circuits, in which transistors and other components were directly patterned onto a silicon die along with the electronic pathways. To squeeze more transistors into the same amount of space, engineers successively reduced the size of these circuit elements, with the most recent chips incorporating gate sizes on the order of 30 nanometers (for comparison, the read/write head in a hard drive floats about 10 nanometers above the disk surface, and the size of a silicon atom is on the order of 0.22 nanometers). At some point, it will not be possible to shrink the gate size and increase the packaging density without having to fundamentally change the way these computer chips are designed with respect to the inherent quantum nature of matter (to say nothing about the ability to control the voltages, currents, and heat within these dense structures).

A quantum computer, on the other hand, uses individual quanta and their states as quantum bits, or *qubits*, providing the logical representation of binary information. The physical realization of a qubit is a physical system with two quantum states that can be used to represent the 0 and 1 bit states. Using Dirac notation, these states are represented as |0〉 and |1〉 . Quantum computation can be defined as the application of a unitary transformation on a set of qubits followed by some type of measurement on at least one of the qubits to obtain a classical number. A common model of quantum computation, based on classical computation, is the circuit model. Quantum computations are represented as quantum “logic circuits” whose elements are representations of qubits and quantum gates, including measurement.
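The circuit model can be made concrete with a few lines of code. The sketch below is an illustration only, not a model of any real hardware: it represents a single qubit as its pair of complex amplitudes and applies the Hadamard gate, a standard 2×2 unitary, then reads out measurement probabilities via the Born rule:

```python
import math

# A qubit is a pair of complex amplitudes (alpha, beta) for |0> and |1>.
ket0 = (1 + 0j, 0 + 0j)

def hadamard(q):
    """Apply the Hadamard gate, a 2x2 unitary, to one qubit."""
    a, b = q
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def prob(q):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    a, b = q
    return (abs(a) ** 2, abs(b) ** 2)

plus = hadamard(ket0)          # equal superposition (|0> + |1>)/sqrt(2)
print(prob(plus))              # ~ (0.5, 0.5)
print(prob(hadamard(plus)))    # H is its own inverse: back to ~ (1.0, 0.0)
```

A classical bit has no analogue of the intermediate state `plus`; measuring it yields 0 or 1 with equal probability.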

The fact that qubits obey the laws of quantum theory and the associated probabilistic interpretation has several important consequences that differentiate quantum and classical computation. Some of the unique aspects that give quantum computing its power include:

**Superposition.** Unlike classical bits, qubits can exist as a superposition of basis states; thus, a qubit |ψ〉 = α|0〉 + β|1〉 can be created, with (α,β) complex numbers called “probability amplitudes,” from the 0/1 basis qubits. A classical bit can only represent a 0 or 1, never any intermediate or superposition value. The ability to create qubits that are a superposition of the basis states is what gives many quantum algorithms their power over their classical counterpart—for example, allowing the simultaneous evaluation of a function over a large number of possible values.

**Measurement.** Quantum measurement involves interacting with a qubit in such a way that the state of the qubit will be different after the measurement. Results of quantum measurement are based on expectation values that depend on the square of the probability amplitudes for different quantum states.

**Interference.** While waves in classical physics may interfere, this is not a phenomenon that is exploited in classical computing. In quantum computing, however, individual quanta can interact with a relative phase, so the interference among a set of qubits is quite important. Quantum interference results from the relative phase of probability amplitudes and has a direct impact on the performance of two important quantum algorithms, the quantum Fourier transform and quantum search. These quantum algorithms achieve their speed as a result of superposition and interference, such that certain states of interest for the particular problem end up having larger probability amplitudes within the superposition, providing a quantum parallelism that examines all possible solutions simultaneously.

**Entanglement.** This uniquely quantum mechanical phenomenon results in highly nonclassical correlations between qubits. Essentially, when two particles are entangled, the act of measuring one immediately determines the state of the other, regardless of separation distance. This does not violate special relativity because a classical, finite-speed communication channel is required to actually transfer information using entangled states. Consider a simple example of two qubits |A〉 and |B〉. One possible joint state of these two qubits is given by the simple product |A〉|B〉; however, it is also possible to create the entangled state (|0〉|0〉 + |1〉|1〉)/√2. It is not possible to write this state as the product of two separate quantum states. In this example, if qubit |A〉 is measured to find state |0〉, then this immediately determines that qubit |B〉 will be observed in the state |0〉.

**No copying of qubits.** Unlike classical bits, it is not possible to create perfect, independent copies of qubits. Creating a copy of a qubit requires knowledge of the complete state of the qubit; however, to obtain such knowledge requires performing measurements on the qubit, destroying its state. This fact has a profound impact on both quantum computing and cryptanalysis.
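The measurement statistics behind entanglement can be illustrated with a toy simulation. The sketch below does not model any quantum dynamics; it simply samples joint outcomes of the entangled state (|0〉|0〉 + |1〉|1〉)/√2, whose only possible results are 00 and 11:

```python
import random

# Sample joint measurement outcomes of the entangled state
# (|00> + |11>)/sqrt(2): outcomes 00 and 11 each occur with
# probability 1/2, while 01 and 10 never occur.
def measure_bell_pair(rng):
    outcome = rng.choice(["00", "11"])   # equal amplitudes 1/sqrt(2)
    return outcome[0], outcome[1]

rng = random.Random(0)
results = [measure_bell_pair(rng) for _ in range(1000)]
assert all(a == b for a, b in results)   # qubit A always matches qubit B
```

Each qubit alone looks like a fair coin, yet the two results never disagree — the nonclassical correlation described above.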

The realization of a quantum computer will require advances in scientific understanding of how to create and control the quantum state of individual qubits and collections of these qubits. This understanding will have to be translated into the engineering advances needed to design and implement a reliable quantum computer. While there have been several proof-of-principle demonstrations of the basic operation of simple quantum computer circuits using a small number of qubits (fewer than 15), no one currently understands how to realistically scale up to the thousands or millions of qubits that may be necessary for useful computations.

In 2000, David DiVincenzo of IBM published a set of criteria for assessing the viability of any physical implementation for a quantum information processing system (see sidebar, Criteria for a Quantum Computer). It has been challenging to find physical implementations that satisfy all of the criteria. As a result of the interactions of qubits with each other and their environment, their states can change, making them somewhat fragile and prone to errors. Thus, it has been necessary to develop approaches to correct for these errors as much as possible. The qubits in a quantum system will be entangled with one another, and with their environment, which may result in an error in their quantum state that can lead to a failure in the quantum circuit. If the rate at which these errors occur is faster than the time to complete a computation, or if the errors grow exponentially, then the quantum computer will fail. One of the early successes with assessing the feasibility of quantum computing was the discovery by mathematician Peter Shor of specific approaches for quantum error correction.

At present, researchers are exploring a variety of physical implementations for qubits and quantum gates, including trapped ions, superconducting circuits, linear optics/photonics, quantum dots, and nitrogen valence centers in synthetic diamond, to name a few. There is no clear winner, and one should expect that quantum computers of the future would, like classical computers, require a variety of technologies that are each suited for specific purposes.

A quantum algorithm is a mathematical description of how to perform certain computational tasks using quantum resources, such as qubits and quantum gates. There was only moderate interest in quantum computing until Shor published his now famous factoring algorithm in 1994. The ability to efficiently factor large numbers is an important capability that affects many areas of mathematics and cryptography.

For example, computing the prime factors of large numbers (hundreds to thousands of digits long) is extremely difficult. If one wanted to use the most efficient classical algorithm on the fastest supercomputer to factor a 2048-bit number, it would take longer than the age of the universe (the best known classical algorithms scale super-polynomially in the size of the input). However, Shor was able to show that because factoring could be reduced to determining the period of a modular function, one could apply the quantum Fourier transform to compute this period, and thus determine the prime factors with polynomial efficiency. The theoretical algorithm requires on the order of *L* qubits and *L*^{3} computational time steps, where *L* is the number of bits of the integer being factored.
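The classical side of Shor's reduction can be sketched in a few lines. In the toy code below the period is found by brute force; that is precisely the step the quantum Fourier transform performs efficiently. The post-processing that turns a period into a factor follows Shor's argument (function names are illustrative, not from any library):

```python
from math import gcd

def period(a, n):
    """Find the period r of f(x) = a^x mod n by brute force (a must be
    coprime to n). This is the step a quantum computer accelerates."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_style_factor(n, a):
    """Recover a factor of n from the period of a^x mod n, following
    Shor's classical post-processing. Some choices of a fail; retry."""
    r = period(a, n)
    if r % 2 == 1:
        return None                      # odd period: pick a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                      # trivial square root: retry
    return gcd(y - 1, n)

print(shor_style_factor(15, 7))   # period of 7^x mod 15 is 4 -> factor 3
```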

Another important quantum algorithm deals with data searching. Given a set of *n* objects, the problem of determining whether *x* is a member of that set will in general require *n* queries. In 1997, Lov Grover proposed a quantum algorithm that could compute the result in only about √*n* queries.
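Grover's square-root speedup shows up even in a small state-vector simulation. The sketch below stores all the amplitudes classically (which itself takes exponential memory for real problem sizes) purely to illustrate the query count:

```python
import math

def grover_search(n_items, target):
    """Toy state-vector simulation of Grover's search over n_items entries.
    The marked item's amplitude is amplified in ~sqrt(n_items) iterations."""
    amp = [1 / math.sqrt(n_items)] * n_items      # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amp[target] = -amp[target]                # oracle: flip marked item's sign
        mean = sum(amp) / n_items                 # diffusion: invert about the mean
        amp = [2 * mean - a for a in amp]
    best = max(range(n_items), key=lambda i: amp[i] ** 2)
    return best, iterations

found, queries = grover_search(64, target=42)
print(found, queries)    # finds item 42 after 6 oracle queries, not ~64
```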

Another important application of quantum information science involves secure communications. Consider the following scenario in which Alice wants to send a message to Bob without letting Eve (the eavesdropping spy) know the content of the message. Alice and Bob may communicate over an open channel, so it’s possible that Eve could intercept their messages. Alice and Bob therefore need to encrypt their messages before sending.

If they use symmetric key cryptography, they each need to share the same cryptographic key, which must be kept secret and out of the hands of Eve. They must have some method of securely creating, sharing, and safeguarding this key. They then use this key with an encryption algorithm to encrypt their messages before sending them over the open channel. The only classical encryption algorithm that is mathematically proven to be secure is known as the one-time pad, which uses a unique random key for each message that is the same length as the message. Once used, that key is discarded. For short messages, this approach might be feasible, but as the message size increases, it becomes impractical. Another consideration is the practicality of creating and sharing a new key for each message.
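The one-time pad itself takes only a few lines; the hard part, as noted above, is generating, sharing, and safeguarding the key. A minimal sketch:

```python
import secrets

# One-time pad: XOR the message with a random key of the same length.
# Reusing the key breaks the security proof, so it is discarded after use.
message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))          # fresh random key, same length

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message
```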

Thus, Alice and Bob are faced with the challenge of managing their encryption keys and keeping them out of the hands of Eve. Quantum key distribution might be the solution to their problem. It uses the quantum properties of photons as part of an overall protocol for the secure generation and sharing of cryptographic keys between two parties during a single communication session. Each new session will result in a new, unique key being created and exchanged. For an ideal implementation, the security of this approach rests not on the computational complexity of some mathematical algorithm, but on the physical laws of quantum theory (in particular, the fact that measurement alters the quantum state, and that it is not possible to make a perfect copy of a quantum state).

Several possible protocols have been proposed for quantum key distribution. One such protocol, BB84 (named after its inventors, Bennett and Brassard), uses polarized photons. This protocol involves six basic steps that employ both a quantum channel and a classical communication channel.

Authentication is the first step, in which Alice and Bob must verify that they are in fact communicating with each other and not someone else. This authentication step is a security measure designed to establish the validity of a transmission, message, or originator, or a means of verifying an individual’s authorization to receive specific categories of information. It may be accomplished using classical protocols, and only needs to be done at the beginning of the session.

In the next step, Alice and Bob use the quantum channel to send and receive photons. Alice transmits a stream of photons, each of which is given one of four randomly generated polarization states. The basis of these polarization states can be measured either rectilinearly or diagonally. Bob randomly sets his basis measurement device and measures each photon according to the protocol. On average, Bob will only use the correct basis 50 percent of the time.

Next, Alice and Bob use an open classical communication channel to exchange information regarding the basis used to transmit each photon. When the transmitted and received bases agree, Alice and Bob will retain the corresponding bit, discarding about half the candidate bits on average, as mentioned above. The bit values associated with the retained data comprise the “sifted bits,” which will contain additional sources of error that must be corrected in the next step, known as reconciliation, which also results in a smaller set of bits.

Even though Alice and Bob share an identical set of bits after the sifting and reconciliation steps, an eavesdropper may have gained some information about these bits. Thus, the next step is to generate the secret key through a process known as privacy amplification, which can be described as the art of distilling highly secret shared information from a larger body of shared information that is only partially secret. This will allow Alice and Bob to start with a shared random bit sequence (about which Eve may have some information) and create a shorter shared random key (about which Eve has essentially no information).

Lastly, as part of the session, Alice and Bob can agree to save some of the shared secret bits so that they can be used as part of the authentication step for the next session.
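The transmit-and-sift arithmetic of the protocol can be sketched as a toy simulation (ideal devices, no Eve, no reconciliation; the function name is illustrative):

```python
import random

# Toy BB84 sifting sketch: Alice sends random bits in random bases,
# Bob measures in random bases, and sifting keeps only the bits where
# the two bases happened to agree (about half of them on average).
def bb84_sift(n_photons, rng):
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("+x") for _ in range(n_photons)]   # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n_photons)]
    # With ideal hardware, Bob's bit equals Alice's whenever bases match;
    # mismatched-basis results are random and are discarded in sifting.
    return [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

rng = random.Random(7)
key_bits = bb84_sift(1000, rng)
print(len(key_bits))   # close to 500: roughly half the bases agree
```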

The theoretical BB84 protocol assumes perfect devices, such as single-photon sources and noise-free detectors; however, engineered systems will have to be constructed from realistic, noisy devices and propagate photons through lossy media (optical fiber or Earth’s atmosphere). The impact of these factors on the overall security of any given quantum key distribution implementation is critical to its applicability for national security space. Depending on hardware performance and the effects of the propagation media (and possibly Eve), there can be a significant reduction in the number of usable bits in going from the number of transmitted photons to the final secure key bits. The secure key rate and quantum bit error rate are two important system performance parameters that must be optimized for any implementation.

While quantum information technology appears to offer many potential benefits for certain computational problems and secure communications, it cannot be transitioned into national security space without a detailed assessment of the underlying technologies and system implementations. Also, in evaluating any new technology, it is important for stakeholders to understand both the potential benefits for users as well as the threats that might arise, should an adversary obtain such a capability.

Aerospace has been working to develop a detailed understanding of both the benefits and threats posed by quantum information processing in order to advise customers, provide the necessary subject matter expertise, advance the state of the art (as with any technology relevant to national security space), and support transition planning.

**Potential Benefits**

Space-system development presents problems that are computationally complex, and operational needs may require the acceptance of approximate solutions. Examples include optimizing the design of a satellite constellation to suit a given set of constraints, optimizing the priority tasking of a given set of assets with specified constraints, or fusing data and extracting information from multiple sources. Solving these problems often requires state-of-the-art algorithms running on supercomputers. There is considerable research under way in evaluating those classes of problems for which known or new quantum algorithms may provide a more efficient solution than classical algorithms and computers.

One interesting problem that Aerospace is studying involves possible application of quantum computing to improve how classical programs are compiled and executed on distributed and clustered computers. Software developed for these classical systems must be compiled to run effectively and efficiently. These compilers perform various types of optimization and instruction scheduling based on knowledge of the target hardware; they employ (mostly) heuristics to arrive at a tractable solution within an acceptable period of time. It may be possible to use a quantum computer to find better solutions for the classical compiler optimizations and scheduling to allow improved use of the classical supercomputers.
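To give the flavor of such a formulation, the toy sketch below casts a tiny scheduling decision in the quadratic unconstrained binary optimization (QUBO) form that several quantum optimization approaches, such as quantum annealing, accept as input. The three-instruction, two-unit conflict set is hypothetical, and the minimization here is classical brute force; this is not the Aerospace method, only an illustration of the problem shape.

```python
from itertools import product

# Toy compiler decision cast as a QUBO: assign 3 instructions to 2
# functional units. x[i] in {0, 1} is the unit chosen for instruction i;
# conflicting pairs should not share a unit.
conflict = {(0, 1), (1, 2)}   # hypothetical dependence-derived conflicts

def cost(x):
    # Penalty 1 whenever a conflicting pair lands on the same unit.
    # Note x_i == x_j is the quadratic form 1 - x_i - x_j + 2*x_i*x_j,
    # so this objective is a genuine QUBO.
    return sum(1 for i, j in conflict if x[i] == x[j])

best = min(product([0, 1], repeat=3), key=cost)
print(best, cost(best))  # (0, 1, 0) with cost 0
```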

Finally, Feynman’s original motivation for studying quantum computation—the prospect of quantum simulation—offers further potential. Many of the computational problems of interest to national security space involve the development of better materials or a better understanding of material properties and behavior in adverse conditions. This might be an area where quantum simulation could provide advantages over classical techniques through a more direct and accurate simulation of the physical properties and behaviors of these materials.

**Potential Threats**

Technology is a two-edged sword, and the advantages of quantum computing are tempered by the possible drawbacks. The most obvious threat is that an adversary will apply the resources of a full-scale quantum computer as a cryptanalysis tool. The ability to implement Shor’s factoring algorithm could directly put at risk several classes of encryption algorithms. The application of Grover’s quantum search algorithm provides some, albeit modest, acceleration in brute-force search that could also be applied to cryptanalysis.
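The modest nature of Grover's speedup is easy to quantify: searching a keyspace of size N takes roughly (π/4)√N oracle queries instead of about N/2 classical guesses, which effectively halves the bit strength of a symmetric key rather than breaking it outright. A quick sketch of the arithmetic:

```python
import math

def grover_iterations(key_bits):
    """Approximate Grover iterations to search a 2**key_bits keyspace:
    about (pi/4) * sqrt(N) oracle calls, versus roughly N/2 classical
    guesses on average -- a quadratic, not exponential, speedup."""
    n = 2 ** key_bits
    return (math.pi / 4) * math.sqrt(n)

for bits in (128, 256):
    print(bits, "classical ~2^%d" % (bits - 1),
          "Grover ~2^%.0f" % math.log2(grover_iterations(bits)))
```

In effect, a 256-bit symmetric key retains roughly 128-bit strength against a Grover-equipped adversary.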

Clearly, it would be prudent to start implementing encryption technology that would be immune to attack from a quantum computer; however, adversaries are probably storing information that could be decrypted by a future quantum computing capability, so it is also important to perform a lifetime assessment, delineating the ramifications of having secret information compromised by a quantum computer at a later date.

Many countries have research and development programs in quantum information technology. In Europe, several quantum key distribution implementations have been deployed for secure banking and voting transactions. Japan has announced a metropolitan-scale quantum key distribution network in Tokyo. The European Union has even proposed a ground-to-space demonstration of quantum key distribution, sending a key from a ground station to a receiver on the International Space Station.

Quantum computing and key distribution are important technologies with possibly significant ramifications for future space missions. Researchers at Aerospace have been identifying near-term (5–10 years) and long-term (beyond 10 years) challenges in the area of national security space that might be addressed through quantum computing and quantum key distribution. In addition, Aerospace has been tracking general trends and conducting targeted research in anticipation of increased interest within the space system community.

**Creating Qubits with Ultracold Molecules**

One recent project focused on developing a quantum information processing testbed using ultracold molecules as physical qubits. Rubidium and cesium were selected because the laser cooling of these species is well understood, as are the quantum states of the rubidium cesium molecule (Aerospace has extensive experience in the laser cooling of these atoms as a result of the corporation’s work in atomic clocks, such as those used in navigation satellites). The project demonstrated the formation of ultracold rubidium cesium polar molecules by photoassociation. The researchers developed a practical quantum transition scheme to efficiently produce ultracold rubidium cesium molecules in the lowest quantum states. A carbon dioxide laser was used to trap and store the ultracold atoms and molecules. The next step will be to fully implement the quantum transition scheme and demonstrate qubit operation with ultracold rubidium cesium molecules in an optical trap or lattice.

**Systems Analysis and Engineering of a Quantum Computer**

In 2006, Aerospace researchers completed a study entitled, “The Effects of Quantum Information Technology on the Security of Space Systems.” The study was one of the first to assess the impact of quantum computers on space information security and the possibility of retroactive data decryption.

In 2008, researchers embarked on a project to explore possible quantum computer architectures and components. The research focused on applying a rigorous systems engineering process to the analysis of a quantum computer system to meet user requirements based on a fictitious but representative cryptanalysis mission. It examined how top-level system requirements, based on user needs, affected the subsystem-level requirements and how these compared with current and projected technology capabilities. This work also explored how classical concepts of reliability and fault-tolerance could improve the design of a quantum computer system. It also developed a quantum programming language (similar to high-level classical programming languages), a quantum computer compiler, and associated analysis tools to estimate the resource requirements (physical qubits and gates) needed to achieve reliable computation.
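The flavor of such resource estimation can be conveyed with a back-of-envelope sketch. The scaling law and every constant below are illustrative surface-code-style assumptions, not outputs of the Aerospace tools: a code distance d is chosen so an assumed logical error rate A(p/p_th)^((d+1)/2) meets a target, and the physical qubit count scales roughly as d² per logical qubit.

```python
def choose_distance(p_phys, p_target, p_th=1e-2, A=0.03):
    """Smallest odd code distance d whose assumed logical error rate
    A * (p_phys / p_th) ** ((d + 1) / 2) is at or below p_target.
    The scaling law and constants are illustrative assumptions."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(n_logical, d, overhead=2.0):
    """Rough count: about overhead * d**2 physical qubits per logical qubit."""
    return int(n_logical * overhead * d * d)

d = choose_distance(p_phys=1e-3, p_target=1e-12)
print(d, physical_qubits(2000, d))  # 21 and 1764000 under these assumptions
```

Even this crude model shows why overhead estimation matters: two thousand logical qubits balloon to millions of physical qubits once error correction is included.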

More recently, the researchers began a project to improve the tools and methods for evaluating quantum computer system designs. The work involves extending the Aerospace quantum computer compiler into a complete quantum computer design and analysis toolbox that will enable better prediction of the resources and overhead needed to support the necessary quantum error correction and control protocols. These tools will provide a foundation for the development and evaluation of improved error correction and control protocols that will help minimize resource overhead in realistic quantum computers.

The next step will be to extend the Aerospace quantum computer compiler to include other models of quantum computation, additional quantum error correction and control protocols, and additional compiler passes to optimize the quantum assembly code and minimize the number of steps required to implement the quantum program. Using the quantum computer design and analysis toolbox, researchers will be able to analyze the performance of a given quantum program as a function of the quantum error correction and control protocols and resource overhead. The compiler backend code generation (quantum instruction-set architecture) will be extended to incorporate a well-defined interface to accommodate other quantum instruction-set architectures based on different physical quantum computer architectures.

Finally, it will be necessary to develop methods to verify and validate the quantum computer designs as well as the analysis toolbox software and resource estimates. The correctness of these methods must be assessed before the results can be applied to more complex quantum computer system designs.

**Quantum Key Distribution Test and Evaluation Facility**

Another project is focused on developing a complete test and evaluation facility to assess the information assurance aspects of specific quantum key distribution implementations. The facility will allow assessment of both security and system performance in terms of secure key rate and quantum bit-error rate. Researchers will assess the security of specific hardware implementations of quantum key distribution protocols, including their confidentiality, integrity, and availability. The impact of side-channel attacks will also be evaluated. The facility will also provide quantitative data for the development of performance models that can be used to assess the potential for long-distance (ground-to-space) quantum key distribution.
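A minimal performance model of the kind such a facility could calibrate might look like the sketch below. Every constant (mean photon number, detector efficiency, fiber loss) is an arbitrary assumption chosen for illustration, not a measured value from any hardware.

```python
def sifted_key_rate(pulse_rate_hz, loss_db, detector_eff=0.2,
                    mean_photons=0.1):
    """Back-of-envelope sifted-key rate for a weak-pulse QKD link: half
    the detected pulses survive basis sifting. All constants here are
    illustrative assumptions, not from any specific hardware."""
    transmittance = 10 ** (-loss_db / 10)
    detected = pulse_rate_hz * mean_photons * transmittance * detector_eff
    return 0.5 * detected

# A 1 GHz source over 50 km of fiber at an assumed 0.2 dB/km:
print(round(sifted_key_rate(1e9, loss_db=50 * 0.2)))  # 1000000 bits/s
```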

The technology development and field-testing of quantum key distribution has progressed faster than that of quantum computing. Many of the technical challenges for scaling up quantum key distribution are reasonably well understood; however, the task of designing and implementing a secure quantum key distribution system and evaluating its security in an operational context is extremely challenging and could take many years. During this time, it will be necessary to also develop a better understanding of where, and how, to employ quantum key distribution systems to enhance space mission assurance.

Early computer systems from the late 1950s up to the 1980s were dominated by large mainframe systems and minicomputers. The early mainframes required a full-time staff to operate and maintain them. Users did not typically interact directly with these systems, but rather submitted their jobs—manually at first, and eventually through automated job submission and scheduling tools for batch processing.

Given the current rate of technological progress in quantum computing, early quantum computers may employ relatively few qubits—on the order of tens to a few hundred. Programming these systems and maintaining availability will be challenging. These early implementations will provide fertile ground for experimentation with scaling these systems to large numbers of qubits and gates (on the order of millions). These large-scale quantum computers may resemble the early mainframe systems and look like large-scale quantum physics experiments in which a quantum core will be controlled by a complex classical computer network.

Users will not interact directly with quantum computers, but will rely on an intermediary classical system to load, execute, and interpret quantum programs. When the job is complete, the user would receive a message with a link to the results. Other configurations may also be implemented—not as general-purpose quantum computers, but as special-purpose systems to solve specific problems. Initially, users will need a solid understanding of quantum information, quantum computing, and quantum programming to use these systems (there may be an analogy to the early machine language programming of classical computers, compared to the ubiquitous high-level programming languages seen today). Such systems will also require a full-time staff to maintain.

As small quantum computers become a reality, one might expect the field of quantum computer science to expand at an increasing rate due to the availability of actual hardware on which to experiment and test new ideas. Understanding quantum information science requires some degree of expertise in a variety of fields, including physics and computer science. Many universities have interdisciplinary programs in quantum information science, training the future generation of quantum computer scientists. These new researchers may uncover additional quantum algorithms and expose as yet additional undiscovered power for quantum computing.

As scientific knowledge of how to control and manipulate quantum systems improves and the implementation technologies are refined, one might expect that these quantum computer systems might someday follow an evolutionary path similar to that of the classical computer, in which engineers develop the techniques to create highly integrated quantum circuits that result in relatively compact physical implementations of quantum computers, ultimately resulting in a quantum computer in space.

Quantum information is a multidisciplinary field requiring skills from physics, mathematics, computing science, and engineering. The author would like to acknowledge discussions and collaboration with Tzvetan Metodi, Leo Marcus, He Wang (primary investigator for the ultracold molecule qubit research), and Benjamin Bowes (primary investigator for the quantum key distribution test and evaluation facility).

- D. Bacon and W. van Dam, “Recent Progress in Quantum Algorithms: What Quantum Algorithms Outperform Classical Computation and How Do They Do It?” *Communications of the ACM*, Vol. 53, No. 2, pp. 84–93 (2010).
- D. DiVincenzo, “The Physical Implementation of Quantum Computation,” *Fortschritte der Physik*, Vol. 48, No. 9–11, pp. 771–783 (2000).
- R. Feynman, “Quantum Mechanical Computers,” *Foundations of Physics*, Vol. 16, pp. 507–531 (1986).
- R. Feynman, “Simulating Physics with Computers,” *International Journal of Theoretical Physics*, Vol. 21, No. 6/7, pp. 467–488 (1982).
- M. Nielsen and I. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, 2000).
- J. Nordholt and R. Hughes, “A New Face for Cryptography,” *Los Alamos Science*, Vol. 27, pp. 69–85 (2002).
- P. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” *SIAM Journal on Computing*, Vol. 26, No. 5, pp. 1484–1509 (1997).
- “Ultracold Molecules,” *Crosslink*, Vol. 6, No. 1, p. 31.

Back to the Spring 2011 Table of Contents

The word “fractal” derives from the Latin “fractus” (“broken”), from the verb “frangere,” “to break.” The field marks a whole new geometrical modeling paradigm that can remarkably capture the complexity of shapes and textures found in nature, such as clouds, forests, trees, flowers, galaxies, leaves, feathers, rocks, mountains, coastlines, and even blood vessels. The study of fractals involves the notions of self-similarity, repetitive iteration, and fractional dimension. In particular, fractal geometry generalizes ordinary notions of length, scale, and dimension in interesting and subtle ways. For example, the 1-D length of a coastline depends on how finely it is measured, a 2-D spiral can be of either infinite or finite length yet occupy a finite area, and a 3-D space can be densely filled without using an ordinary solid.
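The coastline effect is easy to reproduce with the Koch curve: each refinement replaces every segment with four segments one-third as long, so the measured length grows without bound as the ruler shrinks.

```python
# Measured length of the Koch curve at successively finer rulers: with a
# ruler of size (1/3)**n, the measured length is (4/3)**n, which grows
# without bound as the ruler shrinks -- the coastline paradox in miniature.
for n in range(6):
    ruler = (1 / 3) ** n
    length = (4 / 3) ** n
    print(f"ruler {ruler:.4f}  measured length {length:.3f}")
```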

Fractals require the generalization of the concept of dimension (allowing it to be noninteger) that in turn is intimately tied to the notion of scale. Fractal objects are typically generated through some form of iterative feedback employing simple geometrical or dynamic rules. At each iteration, there may be a set of rules to choose from at random, or a parameter that can take a chosen random value. Under general conditions, continued iteration will converge on a final set that is the fractal.
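A classic instance of such random-rule iteration is the "chaos game" sketched below: repeatedly apply one of three contraction maps chosen at random, and the orbit converges onto the Sierpinski triangle from an arbitrary starting point.

```python
import random

# "Chaos game": iterate a randomly chosen contraction map; the orbit
# converges onto the Sierpinski triangle regardless of the start point.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
rng = random.Random(0)
x, y = 0.3, 0.3                        # arbitrary starting point
points = []
for i in range(10000):
    cx, cy = rng.choice(corners)       # random rule selection
    x, y = (x + cx) / 2, (y + cy) / 2  # move halfway toward that corner
    if i > 100:                        # skip the transient before convergence
        points.append((x, y))
print(len(points))  # points lying (nearly) on the fractal
```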

The study of fractals was revolutionized and popularized by the publication in 1982 of Benoit Mandelbrot’s book, *The Fractal Geometry of Nature.* Since then, the field has grown immensely, with many fruitful applications and even commercialization in areas such as data/video compression, frequency-independent antennas/arrays, computer-generated imagery, random process and probability modeling, image half-toning, and cluster and crack propagation analysis.

The field of dynamics concerns the study of systems whose internal parameters, called states, obey a set of temporal rules, essentially encompassing all observable phenomena. Over its long history, the field developed into three distinct subdisciplines: applied dynamics, which concerns the modeling process that transforms actual system observations into an idealized mathematical dynamical system; mathematical dynamics, which focuses on the qualitative analysis of the model; and experimental dynamics, which ranges from controlled laboratory experiments to numerical simulation of state equations on computers.

The study of dynamics dates back to at least Galileo (1564–1642), who essentially founded it as a branch of natural philosophy now called physics. Galileo established the close interplay between theory and experiment and was one of the first to fully study the concept of acceleration. Galileo and others, including Johannes Kepler (1571–1630), addressed the concepts of change, rate of change, and rate of rate of change as they were ubiquitously observed in natural phenomena. These researchers found that nature is fraught with basic laws relating to changes in physical states that could be observed and measured.

Research on dynamics further developed with Isaac Newton (1642–1727) and Gottfried Leibniz (1646–1716), who independently formalized the notion of derivatives that served to couch these dynamical laws or systems in a mathematical form, after which flourished the development of a whole spectrum of formal qualitative analysis tools. These early contributions were of a purely analytical form that dealt with classical differential equations and eventually gave rise to more geometrical and topological methods that have dominated the field ever since. Next, with the rapid advancement of measurement and computational technologies, the area of experiments (at first physical and then numerical) became an important component of the overall study of dynamics.

Dynamics was exclusively studied in the domain of classical physics until the 1920s, when applications to biological and social sciences began to appear. This expansion has continued to today, where dynamics is now studied in virtually all areas of science. As an example, such a perspective can be fruitfully applied to the electrical, mechanical, orbital, and propulsion subsystems found in spacecraft vehicles. It may come as a surprise that the complex behavior called chaos did not formally arrive on the scene until around 1950.

*Traditionally, engineers sought to minimize noise and distortion in system designs. Today, Aerospace scientists are seeking to explicitly harness these effects for useful engineering purposes.*

The vast discipline of nonlinear engineering divides into two complementary practices: one that pursues the elimination of undesired nonlinear effects, and one that seeks to harness nonlinear effects for useful engineering purposes. The first practice involves either the characterization and elimination of unwanted nonlinear distortion, or the prevention of unwanted and possibly damaging anomalous behavior in nonlinear circuits and systems. This practice primarily affects current or near-term systems. The second practice seeks to develop whole new design methodologies and technologies, which in the long term may lead to future advanced systems that may be quite different from what now exists.

The most studied nonlinear phenomenon is the complex, random-like behavior called chaos, which is being addressed in fields ranging from astronomy to zoology. The Aerospace Corporation is researching how to harness the effects of chaotic signals and apply them to communication systems for military applications. Some of the most pressing issues involve privacy and security for processing communication signals. There are also potential applications involving radar and sonar, with important implications for addressing such challenges as urban warfare and the remote detection of improvised explosive devices and suicide bombers. The foundation for this new engineering practice is the dynamical system perspective, with its accompanying set of powerful analysis machinery. Overall, this practice is evolving on many fronts and levels, reaching a state of maturity where it can be applied to real-world problems.

The practice of electrical engineering has been dominated by a linear paradigm that has well served the needs of communications signal processing functions. The techniques are well established and mature and solve a large class of problems, such as linear filtering. These techniques are based on the classical superposition principle, which states that the response of a given system to a sum of stimuli is the sum of the responses to each stimulus acting alone. This view provides a first-order approximation of a naturally nonlinear world. Hence, an engineer can create designs that are intentionally linear, knowing they will obey these simple principles. Established practice has also dictated that any higher-order nonlinear effects resulting from violation of the superposition principle could be safely ignored. Such effects were called noise and distortion, and have traditionally been treated more like an oddity and nuisance than an inherent and possibly useful feature of nature. The practice of working within a linear paradigm, however, does not allow for important, common, and explicitly nonlinear signal functions such as frequency generation, frequency synthesis, and power amplification. It is these required functions that made up the first elements of nonlinear engineering (see sidebar, Dynamical Systems).
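The superposition principle, and its violation, can be checked directly. The sketch below contrasts a simple linear moving-average filter, which responds to a sum of inputs with the sum of its responses, against a squaring nonlinearity, which does not.

```python
# Superposition check: a linear system (a 2-tap moving-average filter)
# obeys it; a nonlinear element (a squarer) does not.
def linear(sig):
    return [(a + b) / 2 for a, b in zip(sig, [0.0] + sig[:-1])]

def nonlinear(sig):
    return [s * s for s in sig]

u = [1.0, 2.0, 3.0]
v = [0.5, -1.0, 4.0]
both = [a + b for a, b in zip(u, v)]
print(linear(both) == [a + b for a, b in zip(linear(u), linear(v))])          # True
print(nonlinear(both) == [a + b for a, b in zip(nonlinear(u), nonlinear(v))]) # False
```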

The further development of nonlinear techniques primarily occurred in academia, and new discoveries found little practical application. However, within the last two decades, there has been a revolution of sorts that stems from three fundamental factors. These factors have synergistically acted to radically evolve and change the practice of nonlinear engineering.

The first factor arises from the demand for increased performance in limited-bandwidth channels in communication systems. As a consequence, nonlinear effects can no longer be ignored in the design of these systems, requiring techniques that are not simple extensions of linear theory. A showcase example is wideband communications for advanced military satellites. Here, there is often a “bent-pipe” architecture in which the transponder high-power amplifier is operated in or near its nonlinear saturated region to maximize power efficiency. But this gain in efficiency is offset by increased distortion in the modulated signals that pass through the amplifier. The distortion is exacerbated by the complexity of the modulations needed to attain high bandwidth efficiencies, because they often contain amplitude variations that elicit these added distortions. In this case, an accurate and formal identification of the amplifier must be accomplished before an effective nonlinear compensation strategy can be developed. There is a significant amount of activity in this arena fueled by the demand for personal wireless communications.
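The distortion mechanism can be illustrated with a memoryless soft limiter standing in for a saturated amplifier; the tanh model, drive level, and tone placement below are assumptions for illustration, not a model of any particular transponder. A two-tone input emerges with third-order intermodulation products that are absent from the input.

```python
import math

# A memoryless tanh soft limiter as a stand-in for a saturated amplifier:
# a two-tone input acquires odd-order intermodulation products.
def amplifier(v, drive=2.0):
    return math.tanh(drive * v)

n = 1024
f1, f2 = 50, 58   # two input tones, placed on bins of an n-point DFT
x = [0.5 * math.cos(2 * math.pi * f1 * k / n) +
     0.5 * math.cos(2 * math.pi * f2 * k / n) for k in range(n)]
y = [amplifier(v) for v in x]

def bin_power(sig, f):
    # Power in one DFT bin, computed directly.
    re = sum(s * math.cos(2 * math.pi * f * k / n) for k, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * f * k / n) for k, s in enumerate(sig))
    return (re * re + im * im) / n

# Third-order intermod at 2*f1 - f2 = bin 42: negligible in, significant out.
print(bin_power(x, 2 * f1 - f2), bin_power(y, 2 * f1 - f2))
```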

The second factor involves several seminal discoveries of nonlinear effects that prompted a flurry of research in applying them to communications signal processing, as well as to many other disciplines. In essence, a new modeling and analysis language emerged that captured a much larger portion of the complexity of nature. This was an about-face from the dominant engineering mindset in that these nonlinear effects were now being sought for their application potential. It marked the beginning of the second branch of nonlinear engineering, in which whole new designs would be sought based on these new effects. These discoveries have primarily occurred in the active fields of chaos, fractals, and wavelets (see sidebar, Fractals). Unlike the linear case, this activity is still relatively immature and presents a wide-open frontier for new practitioners. Because the nonlinear methodology provides a higher-order view of nature in which the superposition principle does not hold, it is typically a large leap beyond linear thinking, involving much more complex analysis on small classes of problems. This difficulty is offset by the tremendous application potential and importance of nonlinear effects.

The third factor has been the rapid development of computational power that is imperative for nonlinear study and application. The nonlinear field is characterized by complex problems, most of which do not have closed-form solutions and must be addressed qualitatively and numerically. Coupled with the qualitative arsenal of tools from the discipline of nonlinear dynamics, the computer offers a means to perform nonlinear experiments on the desktop, thereby providing the insight and knowledge needed to reduce nonlinearity to a beneficial practice.

**Chaos and Bifurcation**

One of the most well known and potentially useful nonlinear dynamical effects is the bounded, random-like behavior called chaos—in essence, deterministic noise. Chaos has been found in a myriad of dynamical systems and in frequency ranges from baseband to optical. This phenomenon, along with its closely related cousin called the fractal and the mathematical tool called wavelets, offers a new paradigm for understanding and modeling the world. It stems from the underlying principle of self-similarity at different scales, which appears to be a ubiquitous property of nature.

There are three basic dynamical properties that collectively characterize chaotic behavior. First, it exhibits an essentially continuous and possibly banded frequency spectrum that resembles random noise. Second, it is sensitive to initial conditions—that is, nearby orbits in the phase space (a geometrical perspective in which the dynamical states are plotted against each other so that time becomes implicit) diverge rapidly. Third, it exhibits ergodicity and mixing of the dynamical orbits, which in essence means that the chaotic behavior eventually visits the entire attracting region of the phase space, with a loss of information resulting from the loss of predictability.
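The second property, sensitive dependence on initial conditions, is easy to demonstrate with the logistic map (the parameter 3.99 is chosen to sit in the chaotic regime): two orbits that start a hair's breadth apart separate to order one within a few dozen iterations.

```python
# Sensitive dependence in the logistic map x -> r*x*(1 - x) with r = 3.99:
# two orbits starting 1e-10 apart diverge exponentially until their
# separation saturates at the size of the attractor.
x, y = 0.2, 0.2 + 1e-10
max_sep = 0.0
for n in range(60):
    x, y = 3.99 * x * (1 - x), 3.99 * y * (1 - y)
    max_sep = max(max_sep, abs(x - y))
print(max_sep)  # typically of order one, despite the tiny initial offset
```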

Chaotic behavior can only arise in dynamical systems that are nonlinear, although these systems may be continuous or discrete, with or without dissipation. The behavior can also be transient or steady-state in nature and typically arises after a sequence of qualitative changes in behavior as a function of one or more parameters (termed “bifurcations”). A predominant manifestation of chaos occurs in the steady state of dissipative systems and is termed a “strange attractor” because its topological structure is complex and it attracts outside orbits.
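The period-doubling route to chaos can be seen numerically in the logistic map by counting the distinct long-run values of the orbit as the parameter increases; the parameter values below are illustrative points along the bifurcation sequence.

```python
# Period doubling in the logistic map x -> r*x*(1 - x): the number of
# distinct long-run values (the attractor period) doubles repeatedly as
# r increases, until the orbit becomes chaotic.
def attractor_size(r, transient=1000, sample=128):
    x = 0.5
    for _ in range(transient):          # discard the transient
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):             # collect steady-state values
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return len(seen)

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, attractor_size(r))  # 1, 2, 4, then many (chaos)
```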

**Chaotic Synchronization**

The classical synchronization or entrainment of periodic oscillators has been known since at least the seventeenth century, when Christiaan Huygens observed the coupled form of this phenomenon in adjacent clocks on a wall. The driven or injection form of synchronization was discovered later with the observation that a small periodic forcing signal could cause the large natural resonance of a system to lock to it. What was unexpected was that a similar phenomenon could be had with chaotic signals, especially given their distinctive bounded instability character. The discovery of the driven form of chaotic synchronization was announced in 1990, marking a turning point in the investigation of chaos for communication systems, for it allowed chaos to be modulated and demodulated like a generalized carrier.
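The driven form can be sketched with the Lorenz system, in the spirit of the 1990 Pecora-Carroll result; the Euler integration and parameter values below are illustrative assumptions. A receiver copy of the (y, z) subsystem, driven only by the transmitter's x signal, converges to the transmitter's state from a distant starting point.

```python
# Driven (master-slave) chaotic synchronization of Lorenz systems: the
# receiver's x variable is replaced by the transmitter's, and its (y, z)
# states converge to the transmitter's despite a different start.
def lorenz_step(x, y, z, dt=0.001, s=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

x, y, z = 1.0, 1.0, 1.0    # transmitter state
ys, zs = -5.0, 20.0        # receiver (y, z) starts far away
for _ in range(200000):
    xn, yn, zn = lorenz_step(x, y, z)
    # Receiver: same (y, z) equations, but driven by the transmitter's x.
    ys, zs = (ys + 0.001 * (x * (28.0 - zs) - ys),
              zs + 0.001 * (x * ys - (8.0 / 3.0) * zs))
    x, y, z = xn, yn, zn
print(abs(y - ys) + abs(z - zs))  # near zero: the subsystems synchronize
```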

There are five basic chaotic synchronization techniques, all of which relate to communication applications that are generic across national security space programs:

**Master-slave synchronization.** This was the earliest discovered version of chaotic synchronization. It occurs when an autonomous (that is, unforced) system unidirectionally drives a stable subsystem.

**Nonautonomous synchronization.** Here, a nonautonomous (that is, forced) system unidirectionally drives a stable identical nonautonomous system. This form is known to be quite robust against link interference.

**Inverse system synchronization.** In contrast to nonautonomous synchronization, inverse system synchronization occurs when the receiver is a formal dynamical inverse of the transmitter that will reproduce the latter’s forcing function.

**Adaptive control synchronization.** By far the most prolific class of synchronization approaches, this is based on the numerous variants of adaptive control for chaotic systems (also known as “control chaos”). In fact, these techniques have demonstrated some capability (although easily defeated) of extracting information from unknown systems, or even making distinctly different dynamical systems synchronize, thereby possibly weakening the security claims often made for chaos-based communications. These techniques can also make the other forms of chaotic synchronization more suitable for practical implementation—for example, where there are link degradations and parameter mismatches.

**Coupled synchronization.** This consists of bidirectionally coupled identical systems and is a simple generalization of the traditional classical form involving sinusoidal oscillators.

The first four forms of chaotic synchronization are suitable for standard communications purposes, while the fifth is suitable for network communications. It is also preferable that the linking signal between the component systems be of the scalar variety. Because of the newness of these discoveries, many studies are still needed to address important engineering and operational issues, and to compare findings with traditional synchronization approaches.

**Implications for Engineering Applications**

Until about the mid-1980s, chaos was primarily studied by physicists and mathematicians in a theoretical sense. These studies sought to determine, understand, and report the many unique properties of chaos and to analyze and predict its behavior. During this period, and especially after the pivotal discovery of chaotic synchronization in 1990, researchers began to pursue practical applications. These investigations naturally focused on applications that could exploit the deterministic, yet random-like behavior of chaos, particularly with regard to inherently secure signal processing and transmission.

Over the years, many other unique effects with their own engineering implications have been discovered and investigated in chaotic systems (e.g., nonlinear amplification with signal enhancing/extracting capabilities). The bifurcation aspects of nonlinear systems provide a profound and critical insight into current circuit and system designs, especially as it applies to their fundamental stability. As a consequence, the investigation of the engineering applications of nonlinear dynamics and chaos has become a vast, rapidly maturing, and multidisciplinary undertaking, especially on an international scale.

**Modulation Approaches**

A whole range of methods have been proposed for implementing and demonstrating analog or digital modulation of information using a chaotic carrier. Such modulations range from simple addition to more complex combinations of information with the carrier that are much more indirect and subtle than the traditional amplitude, phase, or frequency modulation of a classical sinusoidal carrier. Because of the complexity of the carrier and the modulation, these approaches can provide privacy and security for communications without even encrypting the information. In essence, the information is hidden in the “noise” in transmission and can be extracted in the receiver using the inherent determinism of chaos.

Major chaos-based modulation methods being investigated and developed internationally for communication applications include:

- **Additive chaotic masking.** This was the earliest form of modulation, wherein the information is added to the carrier as a small perturbation and usually demodulated using a cascaded form of master-slave synchronization.
- **Chaotic switching.** In this chaos-based version of traditional digital modulation, an analog signal of finite duration represents a digital symbol consisting of one or more bits. In this case, the digital symbol is uniquely mapped to an analog waveform segment coming from distinct strange attractors (also known as “attractor-shift keying”), or to an analog waveform segment from a distinct region of a single strange attractor, thereby forming a chaotic signal constellation.
- **Forcing function modulation.** In this approach, a sinusoidal forcing function in a nonautonomous chaotic system is analog or digitally modulated with the information in a classical manner, with the transmitted signal being some other state variable. This modulation typically involves the nonautonomous or inverse synchronization methods and is the basis for the Aerospace development effort addressing high-data-rate, chaos-based communications.
- **Multiplicative chaotic mixing.** This can be considered the chaos-based version of the traditional direct-sequence spread-spectrum approach, except in this case, the receiver actually divides by the chaotic carrier to extract the original information.
- **Parametric modulation.** In this case, the information directly modulates a circuit parameter value (such as resistance, capacitance, or inductance), and some state variable from the chaotic system is sent that contains the information in a complex manner. As with forcing function modulation, this is an indirect modulation approach that typically offers higher levels of privacy and security and can also provide chaotic multiplexing capabilities, wherein two or more messages can modulate different circuit parameters and be sent and recovered using one transmission signal.
- **Independent source modulation.** This is another indirect modulation form, where the information becomes an independent voltage/current source that is inserted in the chaotic transmitter circuit.
- **Generalized modulation.** This form involves a generalization of additive masking/multiplicative modulation, where the information and chaotic carrier are combined in a more general invertible manner.
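The additive masking idea above can be made concrete with a short simulation. The sketch below is a minimal illustration using the well-known Lorenz equations (it is not the Aerospace system, and all parameter, message, and step-size choices are illustrative): a receiver copy driven by the transmitted signal regenerates the chaotic carrier by synchronization, and subtraction recovers the hidden message.

```python
import numpy as np

# Lorenz parameters in the classic chaotic regime; simple Euler integration
sigma, r, b = 10.0, 28.0, 8.0 / 3.0
dt, n = 0.002, 60000
t = np.arange(n) * dt
m = 0.1 * np.sin(2 * np.pi * 0.5 * t)    # small message buried well below the carrier

x = np.empty(n); y = np.empty(n); z = np.empty(n)
xr = np.empty(n); yr = np.empty(n); zr = np.empty(n)
x[0] = y[0] = z[0] = 1.0
xr[0], yr[0], zr[0] = 5.0, -5.0, 20.0    # receiver starts unsynchronized

for k in range(n - 1):
    s = x[k] + m[k]                      # transmitted signal: chaos + message
    # transmitter (master)
    x[k+1] = x[k] + dt * sigma * (y[k] - x[k])
    y[k+1] = y[k] + dt * (r * x[k] - y[k] - x[k] * z[k])
    z[k+1] = z[k] + dt * (x[k] * y[k] - b * z[k])
    # receiver (slave): the received signal replaces x in the coupling terms,
    # so the receiver regenerates the chaotic carrier by synchronization
    xr[k+1] = xr[k] + dt * sigma * (yr[k] - xr[k])
    yr[k+1] = yr[k] + dt * (r * s - yr[k] - s * zr[k])
    zr[k+1] = zr[k] + dt * (s * yr[k] - b * zr[k])

m_hat = (x + m) - xr                     # subtract the regenerated carrier
burn = n // 2                            # discard the synchronization transient
kern = np.ones(250) / 250.0              # crude low-pass to suppress residual sync error
m_rec = np.convolve(m_hat[burn:], kern, mode="same")
corr = float(np.corrcoef(m[burn:], m_rec)[0, 1])
```

The transmitted signal `x + m` looks like broadband noise in the channel, yet the correlation `corr` between the original and recovered message is high, which is the essence of the masking scheme (and, as the text notes later, also its weakness against de-embedding attacks).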

**Survey of Baseband to Optical Systems**

As chaos-based communication options were being explored in the early 1990s, a whole series of baseband communication links were demonstrated by simulation and experiment. These were primarily proof-of-concept exercises using simple modulation signals ranging from tones to speech, and were based on the various forms of chaotic synchronization and modulation. The baseband nature of these developments was driven primarily by the abundance of chaotic generators in the low-frequency range and the ease with which practical circuits can be implemented in this regime.

Some of the advantages and features of chaos-based communications proposed during these early studies included:

- digital and analog implementations that synchronize more rapidly, robustly, and simply because of their natural dynamical properties;
- unique analog communications capabilities, such as privacy, low probability of intercept (LPI), low probability of detection (LPD), and frequency reuse, that are of interest to the military;
- other unique signaling functions not possible with digital techniques, such as indirect chaotic modulation for enhanced security and multiplexing, chaotic signal constellations allowing for direct high-power transmitters, noise reduction with cascaded receivers, and spatial security using a ring of transmitters.

These development efforts are still at an early exploratory stage.

The first reported example of chaos-based communications used a cascaded form of master-slave synchronization and additive chaotic modulation. The cascading was needed to locally and coherently regenerate the chaotic carrier. This regeneration proved quite resilient to noise and interference added to the linking channel, as would be needed for a practical communication system. In this case, the chaotic carrier was modulated by adding a voice message at a much lower level, which was recoverable because of the regenerated chaotic carrier. The message was buried in the “noise” when viewed in the communications channel, suggesting how this approach might provide for private transmissions. One must be careful about making such claims, however, because it was later shown that the additive modulation scheme is easily deciphered using so-called de-embedding techniques. These techniques have yet to be applied to traditional digital encryption schemes, which can be thought of as sophisticated mappings of the plaintext.

The more sophisticated example of parametric chaotic modulation cannot be imitated by traditional modulation approaches and is much more secure. It occurs when the message modulates a chosen circuit parameter in the system, which in turn influences the state variables of the system in a complex manner. Because the state variables, or combinations thereof, are the signals sent across the communication channel, the manner in which the original message is embedded in this signal is extremely complex and thus provides a first-tier level of security without encrypting the message.
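A discrete-time caricature of parameter-based modulation helps show why the embedding is indirect. In the hypothetical sketch below (a stand-in for the circuit-parameter schemes described above, not a real implementation), each bit selects one of two parameter values of a chaotic logistic map, only the state sequence is transmitted, and an informed receiver that knows the two candidate parameters demodulates by one-step prediction error. The channel is assumed noiseless.

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 32)
r0, r1 = 3.91, 3.99        # two candidate map parameters, both in the chaotic regime
L = 50                     # map iterations transmitted per bit

# Transmitter: the bit sets the map parameter; only the chaotic state is sent
x = 0.41
tx = []
for bit in bits:
    r = r1 if bit else r0
    for _ in range(L):
        x = r * x * (1.0 - x)
        tx.append(x)
tx = np.array(tx)

# Informed receiver: score each symbol's one-step prediction error under
# each candidate parameter and choose the better-fitting model
rx_bits = []
for i in range(len(bits)):
    seg = tx[i * L : (i + 1) * L]
    e0 = np.sum((seg[1:] - r0 * seg[:-1] * (1 - seg[:-1])) ** 2)
    e1 = np.sum((seg[1:] - r1 * seg[:-1] * (1 - seg[:-1])) ** 2)
    rx_bits.append(int(e1 < e0))
```

An eavesdropper sees only a noiselike sequence; without knowledge of the generating dynamics and candidate parameters, the bit values are not apparent in the waveform, which is the first-tier security the text describes.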

Some design forms of chaos-based communication can be used for baseband communications, whereas for radio-frequency (RF) or microwave communications, these schemes must be combined with traditional carriers and modems. In both cases, the bandwidth of the information is limited to tens of kilohertz; in the latter case, there is an additional loss of LPI capability. As with synchronization, several important engineering issues needed to be addressed before operational application could be considered for these new communication approaches.

By the early 2000s, there was a rich set of results for chaos-based communication using more sophisticated techniques (such as adaptive receivers, pulse-position modulation, analog code-division multiple access and spread spectrum), providing more capabilities (such as supporting multiple users and suppressing multipath and jamming interference), and addressing engineering concerns (such as filtering, delay, parameter mismatch, and added channel noise). Despite these advances, the data throughput of such systems remained relatively low because of the bandwidth limitations of their constituent chaotic generators. However, since the early 2000s, the evolution of chaos-based communications has steadily continued, with advances in frequency range, data throughput, and synchronization/modulation techniques.

One example of this evolution involves the progression of chaos-based communications from the RF/microwave arena into the optical range. The motivations for the optical case were similar to those for the RF/microwave regime—namely, bandwidth efficiency, multiuser capabilities, natural large-signal operation, privacy, and security. High-dimensional chaotic behavior is quite easily generated in optical systems using optical injection, opto-electronic feedback, or optical cavities. However, the range of synchronization and modulation methods is more limited than in the RF/microwave case, as was made evident by a 2001–2004 European project called OCCULT (Optical Chaos Communications Using Laser-Diode Terminals). The initial laboratory demonstration sustained a data rate of 3 gigabits per second (Gbps) with a respectable 7 × 10⁻⁹ bit-error rate using a high-dimensional chaotic additive masking modulation. The demonstration was later repeated over a large commercial fiber-optic network in Athens, Greece, in 2005, with sustainable data rates of 2.4 Gbps and a similar acceptable bit-error rate.

The second example represents a commercially developed application of chaos-based techniques to the rapidly developing ultrawideband radio services in the 3.1–10.6 gigahertz (GHz) frequency band. In this standard, the minimum or typical communications bandwidth is 500 megahertz (MHz) to 2 GHz, with only a power-spectral-density mask specified and not the carrier or modulator type. The low-data-rate version requires low power consumption, low complexity, low cost, location awareness, high reliability, ad hoc networking capability, and a range of less than 100 meters or so. The dominant implementation for such radios uses a complementary metal-oxide semiconductor (CMOS) system-on-a-chip architecture. The candidate signal sources include impulse, chirp, and chaotic. A 2007 study showed that a direct chaotic approach could be simpler and less expensive than any conventional approach, with comparable bit-error-rate performance. As a consequence, Samsung in South Korea developed a chaotic ultrawideband radio transceiver on a 0.18-micron CMOS integrated circuit. The transceiver used a tunable chaotic signal source, which allowed agile changes in bandwidth and center frequency, and was based on the summation of noncommensurate triangular pulses. The unit was successfully demonstrated at a data rate of up to 15 megabits per second (Mbps).

More recently, in December 2009, the U.S. Army Research Laboratory funded an effort to explore chaos-based communications for satellite applications. Initial work is focused on developing an LPI/LPD chaotic modem system that will be suitable for satellite communications in the X, Ku, and Ka frequency bands.

**Aerospace RF/Microwave System**

Aerospace has been investigating high-frequency, high-capacity, chaos-based communications systems as alternatives to classical digital systems. The research strategy has consisted of three development phases focusing on oscillation, synchronization, and modulation.

**Oscillation**

The first research phase sought to develop a high-frequency, broadband chaotic oscillator that would be the building block of the project. This phase was challenging because of the frequency-dependent issues that naturally arise in creating such a broadband oscillator, and because there were relatively few systematic approaches for designing such oscillators at the time.

The first successful realization employed the simple baseband circuit known as Chua’s oscillator. This circuit has become a paradigm for chaos because of its generality and simplicity; its generality stems from its ability to formally realize a whole spectrum of qualitative behaviors, while its simplicity derives from the fact that it is third-order (the minimum for a continuous system) and completely linear except for a nonlinear resistor with a piecewise-linear current-voltage characteristic (the simplest form of nonlinearity). The oscillator consists of a passive portion, which is easy to scale up in frequency, and an active portion, called a negative-resistance generator, which was needed to realize the piecewise-linear resistor. The negative-resistance generator was synthesized in such a way as to allow for the tuning of the breakpoints and slopes of the resistor—an important feature needed for synchronization purposes that would be much more difficult if a general nonlinear characteristic were used instead (for example, as found in a tunnel diode). The attempt to realize the negative-resistance generator at high frequencies brought out frequency-dependent parasitic and delay effects that transformed the intended piecewise-linear resistor into a partially reactive element. This transformation essentially destroyed the strange attractor observed at baseband. Subsequent studies found that this implementation approach could not tolerate even small delays, so that an alternative methodology was required for the desired high-frequency operating range.
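The dimensionless, textbook form of Chua's oscillator is easy to simulate, which makes the circuit's appeal as a paradigm clear. The sketch below (a standard academic formulation, not the high-frequency Aerospace realization) integrates the three state equations, whose only nonlinearity is the three-segment piecewise-linear resistor characteristic h(x); the parameter values are the commonly quoted double-scroll set.

```python
import numpy as np

# Dimensionless Chua's oscillator with the classic double-scroll parameters
alpha, beta = 15.6, 28.0
m0, m1 = -8.0 / 7.0, -5.0 / 7.0   # inner and outer slopes of the PWL resistor

def h(x):
    """Three-segment piecewise-linear resistor characteristic."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

def f(v):
    x, y, z = v
    return np.array([alpha * (y - x - h(x)),
                     x - y + z,
                     -beta * y])

# Fourth-order Runge-Kutta integration of the third-order system
dt, n = 0.005, 40000
traj = np.empty((n, 3))
v = np.array([0.7, 0.0, 0.0])
for k in range(n):
    traj[k] = v
    k1 = f(v); k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
    v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Plotting `traj[:, 0]` against `traj[:, 2]` reveals the familiar double-scroll strange attractor; changing the slopes or breakpoints of h(x) illustrates the tuning sensitivity that motivated the negative-resistance synthesis described above.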

This alternative strategy led to a robust solution, and marked a radical shift from an autonomous to a nonautonomous approach. It also led to a U.S. patent for the so-called Young-Silva chaotic oscillator (YSCO). Two implementation topologies were developed for this oscillator—a series topology based on a controlled voltage source, and a dual parallel topology based on a controlled current source. There are several advantages of this new oscillator implementation. First, the oscillator designs are more forgiving with respect to delays and parasitics, because it is not necessary to realize a negative resistance as in the former, unforced case. Second, the unforced part of the circuit can be second order and hence easy to realize in the microwave regime (using an inductor/capacitor or cavity resonator). Third, the nonautonomous form of synchronization is also quite robust at baseband against interference in the channel—a desirable feature for communications applications. Finally, the system naturally provides for phase modulation of the forcing functions, which again translates into a complicated modulation of the chaotic carrier and hence potentially enhances message security. This implementation also provides several unique and useful features. For example, the shape of the frequency spectrum can be readily controlled by varying the amplitude and frequency of the forcing function. Future development plans include implementing these designs in an integrated circuit to make them more robust and reduce propagation delays, the latter of which leads to higher chaotic bandwidths.

**Synchronization**

The second research phase examined an inverse system approach to synchronization. This is a less commonly studied approach that extends the well-known concept of inverse linear systems in control and filter theory. In contrast to many of the other synchronization approaches, the receiver here is quite different from the transmitter, with the goal of producing a faithful replica of its forcing function. For communications applications, the forcing function is modulated in a classical style as a subcarrier, with a chosen state variable signal sent across the channel. The reproduced modulated forcing function is then demodulated to arrive at the transmitted message. The synchronization process is again a dynamical one, and general design and circuit implementation methodologies exist for the inverse system (continuous or discrete). This approach is also quite suitable for analog or digital data encryption, corresponding to more conventional self-synchronizing stream ciphering. Inverse chaotic synchronization was demonstrated for the series YSCO (baseband and RF versions) using high-fidelity SPICE (Simulation Program with Integrated Circuit Emphasis) simulations, and both versions of the inverse series YSCO have been constructed, with initial success demonstrated in their hardware synchronization.
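The inverse-system idea can be caricatured in discrete time (this is a generic illustration of the principle, not the YSCO implementation): the transmitter is an expanding, chaotic map driven by a forcing sequence, only the state is transmitted, and the structurally different receiver algebraically inverts the map to reconstruct the drive from the received signal alone. Here the forcing sequence `s` plays the role of the modulated forcing function described above.

```python
import numpy as np

rng = np.random.default_rng(1)
s = 0.05 * rng.random(5000)          # the forcing/message sequence
f = lambda y: (2.0 * y) % 1.0        # expanding (Bernoulli-type) nonlinearity

# Transmitter: a nonautonomous map driven by s; only y is transmitted
y = np.empty(len(s))
prev = 0.3
for n, sn in enumerate(s):
    y[n] = sn + f(prev)
    prev = y[n]

# Inverse-system receiver: quite different from the transmitter, it
# reconstructs the drive directly from the received signal
s_hat = np.empty_like(s)
s_hat[0] = y[0] - f(0.3)             # assumes the initial transmitter state is known
s_hat[1:] = y[1:] - f(y[:-1])
```

The recovery is exact up to floating-point rounding because the receiver applies the same nonlinearity to the same received samples; in the continuous-time circuit case, the inverse is realized dynamically and the same general design methodologies mentioned in the text apply.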

**Modulation**

The third research phase seeks to demonstrate a working communications link using either analog or digital information with a substantial data bandwidth or rate. A phase or frequency modulation of the forcing function has been chosen. Progress in this phase has included a successful MATLAB-based simulation of a complete YSCO-based communications link, which included amplitude, frequency, and phase modulation of the forcing function, as well as the development and field demonstration of a series RF YSCO frequency-modulated transceiver operating at a 1 GHz center frequency that showed the covert nature of the transmitted signal spectrum under simple tonal modulation. The DOD has expressed interest in the latter demonstration because of its suitability for covert battlefield links, such as remote physiological monitoring and drug delivery for soldiers and autonomous tracking of enemy vehicles. More recently, there have been several additional successful experimental demonstrations of analog and digital modulation using a series RF YSCO.

**Basic Principles and Unique Advantages**

The basic principle of radar is to bounce an electromagnetic signal off a target to determine its location, direction of movement, and other properties. The range or distance of the target is determined from the delay between the transmitted and received signals, while the Doppler shift in these signals indicates the velocity of the target relative to the radar. The target’s direction of movement is found using continuous illumination and the angle of arrival of the return signal’s wavefront.
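The two relations in this paragraph reduce to one-liners: range follows from the round-trip delay, and radial velocity from the Doppler shift. The numerical values below are purely illustrative.

```python
# Basic radar relations: range from round-trip delay, radial velocity
# from Doppler shift. The signal covers the distance 2R in the delay tau.
c = 299_792_458.0                 # speed of light, m/s

def range_from_delay(tau_s):
    """Target range R = c * tau / 2."""
    return c * tau_s / 2.0

def velocity_from_doppler(f_doppler_hz, f_carrier_hz):
    """Radial target velocity v = c * f_d / (2 * f_0)."""
    return c * f_doppler_hz / (2.0 * f_carrier_hz)

R = range_from_delay(66.7e-6)             # ~10 km round trip
v = velocity_from_doppler(2.0e3, 10e9)    # 2 kHz shift on an X-band carrier -> ~30 m/s
```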

There are two basic modes of radar transmission, continuous and pulsed. Because the power of the received signal is inversely proportional to the fourth power of the range, pulsed transmission is often used so that coherent signal averaging can be done to reduce the effects of noise. Various classes of designed waveforms have been developed, ranging from simple sinusoids to modulated forms to more complex signals. Each type of waveform provides different target information and resolution. For example, only velocity is provided by a sinusoidal waveform, with an accuracy that is set by that of the sinusoidal source.

Motivated by many of the unique characteristics offered by chaotic signals, researchers naturally began to consider their use for radar applications. In this case, the use of such a noiselike signal would serve as a deterministic alternative to what is called random-signal or noise radar, which has been under development since the late 1960s. For both deterministic and random noise, the goal is to arrive at stealthy, LPI/LPD radar that also provides range and rate resolutions closely matching those for the ideal, but impractical, white-noise signal. These aspects are clearly advantageous for military applications because enemy targets are being continuously scanned with noise, yet their position and velocity are being determined quite accurately. The range and rate resolution properties of a given radar waveform are often illustrated in ambiguity diagrams for traditional (nonrandom), chaotic, and ideal-noise radar signals.

Why use chaotic signals instead of random noise for radar, given that they both produce desired LPI and resolution features? The answer lies in the inherent determinism of chaos that gives rise to several distinct advantages. First, simple, low-power, lightweight, compact, broadband implementations can be readily developed because such complex waveforms can be easily generated. Second, one can go well beyond the dominant correlation-based receiver signal processing found in traditional radars by exploiting the synchronization capability of chaotic signals. Such processing could very well be improved with respect to its throughput, not to mention the possibility of securely modulating the radar waveform with information that can be used to better probe the nature of the target. Third, through the natural and rapid loss of correlation found in chaotic systems, multiple radars can operate in the same frequency bands and physical locations, yet still have the return signals separable for individual processing (termed “electromagnetic compatibility”). This signal separation could be quite advantageous for processing the returns from an array of radar transmitters, like those proposed for traditional radar signals, and will also ensure multipath immunity for the radar return signals, as has been demonstrated for chaos-based communications. In the same way, some immunity to intentional jamming or other interference could be gained—similar to what a spread-spectrum system does for communications. Finally, there is great flexibility in the design of the radar waveform if it is based on a continuous or discrete chaotic dynamical system. Such flexibility can be used to arrive at very flat spectra for the time-domain waveform and sharp peaks in the ambiguity diagram at the desired target range. For example, it has been demonstrated that with a properly optimized chaotic map, one can achieve better range ambiguity properties than by using filtered Gaussian noise. 
For national security space systems, this has several implications for improved radar performance.
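The correlation-based processing common to noise and chaotic radar can be illustrated with a toy range estimator. The sketch below makes several idealizing assumptions (one stationary point target, additive white noise, sample-spaced delay) and uses a logistic-map sequence as a stand-in for the radar's chaotic source; its sharp, thumbtack-like autocorrelation is what lets the correlator pull the round-trip delay out of a noisy return.

```python
import numpy as np

rng = np.random.default_rng(2)
N, true_delay = 4096, 137

# Noiselike chaotic reference waveform from a logistic map, mean removed
x = np.empty(N)
x[0] = 0.37
for k in range(N - 1):
    x[k+1] = 3.99 * x[k] * (1.0 - x[k])
x -= x.mean()

# Return signal: attenuated, delayed echo of the transmission plus noise
ret = np.zeros(N)
ret[true_delay:] = 0.2 * x[:N - true_delay]
ret += 0.1 * rng.standard_normal(N)

# Sliding correlator: the lag with maximum correlation estimates the delay
lags = np.arange(512)
corr = np.array([float(np.dot(ret[l:], x[:N - l])) for l in lags])
est_delay = int(lags[np.argmax(corr)])
```

An eavesdropper monitoring the channel sees only low-level noise, yet the matched correlator recovers the delay (and hence the range) cleanly, which is the LPI/LPD advantage described above.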

**Survey of Radar/Sonar Developments**

The notion of applying chaotic signals to radar applications dates to the mid-1990s, when the generation of such waveforms was already well evolved. Two separate paths developed on how to approach the transmitting side of the system. In one approach, the chaotic signal was generated at low power levels using an analog or digital circuit and then amplified using a traditional high-power amplifier of the solid-state or traveling-wave-tube variety. The other approach pursued the development of direct high-power and high-frequency chaotic sources, up to the millimeter-wave region, using a wide variety of vacuum electronic devices.

Researchers working on these efforts developed a chaotic radar system using a self-mixing or autodyne effect in the chaotic generator that gave rise to a novel return-signal processing method. Beyond these developments, optical chaos coming from nonlinear laser dynamics has been proposed and investigated for developing a chaotic LIDAR (light detection and ranging) system with bandwidths over 10 GHz. In essentially all of these systems, the return signal processing did not exploit the determinism of the chaotic signal via some synchronization approach, but instead used traditional correlation-based processing. In addition to these over-the-air radar systems, there has been some activity since the mid-2000s proposing and investigating the naval application of chaotic sonar.

There has been a rich panorama of applications for these radar systems. Examples include:

- **Vehicular collision avoidance and ranging.** This application area seeks to take advantage of the compactness, efficiency, and low cost of chaotic signal generation, as well as its natural electromagnetic compatibility and multipath reduction capabilities. The latter benefit has been shown to be superior to that obtained from conventional direct-sequence spread-spectrum approaches.
- **Imaging radar for security surveillance.** This application area harnesses the broadband nature that chaos can readily provide, thereby delivering such features as good penetration into walls, high range resolution, and discrimination of closely spaced targets. Imaging performance for such systems has been proven to surpass that of more conventional time-modulated ultrawideband radars by reducing false alarms and imaging closely spaced targets obstructed by walls.
- **Other potential applications.** Other possibilities include navigation systems where high range or velocity resolutions are required, obstacle approach or intrusion sensor systems, and forward-looking aircraft radar to allow for all-weather flying and landing.

**Aerospace Wideband Radar System**

Encouraged by successful research into chaos-based communications, Aerospace started investigating chaos-based radar. The basic objectives have been to develop a wideband, continuous-wave system with time-domain correlation processing and to determine its LPI and resolution capabilities. The effort has consisted of two successive undertakings: an initial study, which used the series YSCO as the analog chaotic generator, and a more mature study, which used an optimized discrete chaotic map. In both cases, proof-of-concept demonstrations were carried out, including a successful field demonstration that is undergoing external follow-up for product development and operational field testing.

The first effort was a 2005–2006 Harvey Mudd College Engineering Clinic project supported by the Aerospace Corporate University Affiliates Program. It focused on the use of a series YSCO that provided a robust chaotic signal with a bandwidth of around 150 MHz. The chaotic signal was filtered to improve its autocorrelation properties and to remove the forcing function signature in the frequency spectrum. The subsequent signal was frequency modulated onto a sinusoidal carrier in the range of 1–3 GHz, and the transmitter and receiver were switched on and off to pulse the radar signal and prevent damage to the receiver when transmitting. The determination of a known cable delay using the system without frequency modulation was demonstrated, followed by some initial attempts with the frequency modulation turned back on.

The next effort—a three-year independent research project—began with an evaluation of the characteristics of the transmitted frequency-modulated signal for the Harvey Mudd effort, which ranged in bandwidth from 380 to 520 MHz. The study revealed several shortcomings associated with the hardware implementation. As a result, attention next turned to the use of discrete chaotic maps as the signal generator, which offered much more flexibility in the waveform design while significantly easing bandwidth implementation issues. This also took advantage of the ready availability of commercial high-speed arbitrary waveform generators and digital storage oscilloscopes to provide the critical transmitter/receiver functions. A series of iterations led to a mature, bistatic, continuous-wave radar prototype having a bandwidth of more than 5 GHz with a 21 GHz carrier. It used a classical Bernoulli map—whose output was conditioned like that for the series YSCO (albeit much more easily using the software filtering capabilities of the arbitrary waveform generator)—to arrive at desired spectral and autocorrelation properties.
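As a rough illustration of waveform design with a discrete chaotic map (the actual Aerospace conditioning chain is not reproduced here), the sketch below generates a Bernoulli-type sequence and applies a simple nonlinear conditioning step as a stand-in for the filtering described in the text. The map slope is set just below 2 because the exact doubling map degenerates to zero in binary floating point; the figure of merit is the autocorrelation peak-to-sidelobe ratio that governs range resolution.

```python
import numpy as np

# Bernoulli-type expanding map as a radar waveform generator
N = 8192
x = np.empty(N)
x[0] = 0.234
for k in range(N - 1):
    x[k+1] = (1.99 * x[k]) % 1.0

# Conditioning: a smooth function of the state decorrelates successive
# samples, flattening the spectrum (the raw map output is correlated)
w = np.cos(2.0 * np.pi * x)
w -= w.mean()

# Thumbtack autocorrelation: a sharp zero-lag peak with low sidelobes
ac = np.correlate(w, w, mode="full")[N - 1:]   # lags 0 .. N-1
ac /= ac[0]
psl = 1.0 / np.max(np.abs(ac[1 : N // 2]))     # peak-to-sidelobe ratio
```

Changing the map, the conditioning function, or any digital filtering applied afterward reshapes both the spectrum and the ambiguity behavior, which is exactly the waveform-design flexibility that motivated the move from analog oscillators to discrete maps.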

Another area where chaos-based techniques are being explored at Aerospace involves the challenges of countering improvised explosive devices (IEDs), which have been quite successful despite their low-tech nature. IEDs are often placed at the side of roads or embedded in them, with numerous techniques for setting them off, including simple command wires, wireless phones and remotes, and pressure-sensitive detonators. Aerospace research showed that chaos-based techniques could be used against these threats—for example, with the application of chaotic radar to detect IEDs, pressure plates, and their command wires. Under the direct support of a special corporate initiative (administered by Aerospace Intellectual Property Programs), the work has focused on an application-specific demonstration that targets the detection of command wires and suicide bombers. This chaotic radar system was upgraded to include a pan/tilt scanning capability, detection algorithms to reduce the probability of false alarm, and a graphical user interface. Field-testing of this system was conducted twice in special exercises at Camp Roberts in Paso Robles, California. The system was found to be highly effective in detecting command wires under various conditions, such as lying on gravel, hard-baked dirt, and asphalt roads. This detection continued to exhibit a low probability of false alarm even for the relatively high level of clutter or interference found with gravel. Follow-on efforts now focus on the commercial development of an operational-quality demonstration suitable for hosting on unmanned aerial or ground vehicles. Aerospace is also planning to further refine the system to exploit the full benefits of the chaos-based approach, addressing such topics as chaotic source optimization and high-speed circuit implementation, synchronization-based signal processing, the addition of signal modulation, and the use of phased-array or multiple transmitter/receiver configurations. 
There is also interest in applying this system to 2-D/3-D remote imaging for urban warfare environments.

Besides the two basic application areas of communications and radar reviewed here, there exists a whole range of other important nonlinear-effect-based applications that have been, and continue to be, investigated. Many of these have critical implications for national security space, with some representing disruptive capability or performance improvements, and all are being actively pursued on an international level. Examples include:

- pseudorandom-sequence generation for traditional digital cryptography systems, spread-spectrum systems, and multiuser communication systems;
- analog and digital encryption for 1-D signals, images, and real-time video;
- stochastic resonance for signal-in-noise enhancement (noise can be beneficial);
- cellular nonlinear networks for analog signal/image processing (essentially an analog computer with the equivalent of terabits-per-second processing capability);
- bifurcation engineering involving analysis, control, and exploitation (which encompasses the nonlinear stability issue and whole new signal-processing designs);
- control and anticontrol of chaos to eliminate or create chaos in systems (e.g., improved chemical mixing via anticontrol);
- chaos-based electronic measures for jamming wireless transmissions and damaging enemy circuits.

A technology survey, status, and implications evaluation for these other application areas may be presented in a follow-up article in *Crosslink*.

The field of nonlinear dynamics provides an important new framework and perspective for the design and analysis of circuits and systems, offering a vast set of powerful qualitative and quantitative techniques. The many unique features and capabilities offered by nonlinear effects such as chaos provide numerous opportunities for the development of whole new technology areas. The exploitation of nonlinear effects has evolved from a state of mere knowledge advancement to product and commercial development and insertion, yielding mature products competitive with, or better than, current technologies. However, recent years have seen a troubling trend toward a dominant international role in nonlinear engineering, despite the many important U.S. national security implications that these approaches provide. This trend cuts across the whole engineering cycle, from research to product development and insertion. Motivated by these trends, Aerospace has begun research and development in the main areas of communications and radar. The importance of maintaining and continuing such investments cannot be overemphasized.

The author would like to acknowledge corporate support of this work through the Aerospace Sponsored Research program and the Mission Oriented Investigation and Experimentation program. Thanks also go to Herbert Wintroub (in whose memory this article is dedicated), Rich Haas, Al Geiger, André Montoya, Joe Straus, Andy Quintero, and Grant Aufderhaar for their advocacy and support. The author also thanks his colleagues, especially Albert Young (co-inventor of the Young-Silva chaotic oscillator) and Samuel Osofsky (project leader on chaos-based radar), who made it possible to carry out the basic development efforts described herein. The author would also like to thank his management for support, especially Jerry Michaelson, Allyson Yarbrough, Keith Soo Hoo, Robert Frueholz, Diana Johnson, Samuel Osofsky, and Yat Chan.


Back to the Spring 2011 Table of Contents


*Embedding simple machines in a polymer matrix yields complex materials suitable for applications ranging from launch vehicle fairings to golf clubs.*

In response to a request by the California Department of Transportation (Caltrans) in 2000, a team of materials scientists from The Aerospace Corporation considered burying shock absorbers in the rubber dampers located on top of bridge columns. This simple concept—burying a mechanism in a material—was later refined to develop materials useful for applications as diverse as launch vehicles and sports equipment.

Caltrans had asked Aerospace to help wrap composite materials around the columns to ensure they would remain standing after an earthquake. The tops of the bridge columns already had huge alternating layers of rubber and lead to help dampen earthquake vibrations. The scientists proposed burying automotive shock absorbers in the rubber to control the damping. The remedy could have worked, but more important, it prompted the scientists to consider what would happen if mechanisms were buried in a material on a much smaller scale. What if many small (millimeter-sized) mechanisms were buried in a flexible matrix? Would this material yield properties that could not be obtained any other way?

The team began developing this new concept. Traditional composites are made with a matrix material that holds together many fibers. Such composite materials are stiff and strong because the fibers they contain are stiff and strong. But, the team postulated, if many small, simple machines were embedded in a matrix, the resulting composite would have properties like the machines. These machines could augment the properties of the composite; thus, the scientists named their new concept the machine-augmented composite (MAC).

Rather than start with shock absorbers, which are quite complicated, the team decided to start with a very simple mechanism: a four-bar linkage. This simple machine converts compressive (perpendicular) forces into shear (tangential or parallel) forces and vice versa. Most normal materials simply compress when subjected to a compressive load, but a MAC made with embedded four-bar linkages would, in theory, generate substantial shear motion when compressed.
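The compression-to-shear conversion can be sketched with a minimal rigid-link idealization. In the snippet below, the link length, starting angle, and compression are illustrative values (not from the article); the point is only that shortening an inclined link vertically forces its tip to travel sideways:

```python
import math

def shear_from_compression(link_len, theta0_deg, dy):
    """Sideways travel of a rigid inclined link (the diagonal member of an
    idealized four-bar linkage) when its height is reduced by dy."""
    theta0 = math.radians(theta0_deg)
    y0 = link_len * math.sin(theta0)   # initial height of the top bar
    x0 = link_len * math.cos(theta0)   # initial horizontal offset
    theta1 = math.asin((y0 - dy) / link_len)  # link angle after compression
    x1 = link_len * math.cos(theta1)   # new horizontal offset
    return x1 - x0                     # shear displacement of the top bar

# A 10 mm link starting at 60 degrees, compressed vertically by 1 mm:
dx = shear_from_compression(10.0, 60.0, 1.0)
print(f"{dx:.3f} mm of shear per 1 mm of compression")
```

Note that the conversion is strongly geometry-dependent: the shallower the link angle becomes, the more shear each increment of compression produces.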

Using Aerospace’s rapid prototyping machine, the team made the first proof-of-concept samples with four-bar compliant mechanisms buried in a polyurethane matrix. They performed simple tests to prove that the material would respond as expected, based on mechanics. They used the data in a proposal to obtain their first Aerospace research and development funds, which allowed them to create mathematical models to describe the material, manufacture more realistic samples, test the accuracy of those models, and explore potential applications to space systems.

In essence, this Z-MAC—named because of the shape of the machines—diverts forces from one direction to another. The team first focused on applications where a preexisting shear force can be used to clamp down on a part. They found the clamping could be useful for locking down components during launch.

The models and experience gained in building and testing the Z-MAC inspired development of a MAC with a more complicated machine—a fluid-filled shock absorber shaped like an hourglass that would collapse during compression. The volume inside of the hourglass would be filled with a variety of fluids, each with a different viscosity, ranging from water to high-viscosity silicone. An outside vendor was contracted to manufacture these machines. The samples were tested to determine the damping capability of the different versions. The results were promising when compared with naturally good dampers (e.g., rubber), but more work was needed to mature this concept to a point where a particular application would specify this material over existing technologies.

DARPA (the Defense Advanced Research Projects Agency), which was looking for technology similar to what the Aerospace scientists had already accomplished, granted the team funding for further research. Joining a group from Texas A&M University, the team expanded research into more complicated designs.

At the same time, researchers from the University of Wisconsin, Madison, published a paper showing how they had achieved a “negative” stiffness (or negative modulus), taking advantage of an unusual phase change in a magnetic material. Unlike a typical material that deforms in the direction of the applied force, a material with negative stiffness deforms in the direction opposite that of the applied force. The Aerospace team took it as a challenge to make a more useful negative-modulus material by designing a set of machines that reverse the direction of the applied force. Pushing on the front surface of this MAC causes the reverse side to deflect toward the observer (rather than away, as a normal material would).

The negative-modulus MAC turned out to have a useful application. Sound waves passing through it would be shifted in phase by 180 degrees. If a plate combined normal material with these MACs, some of the sound waves would be phase shifted and some would not. When designed correctly, the two waves would cancel each other out through destructive interference. An entire wall of material could be made that worked like sound-canceling headphones without any electronics. A launch vehicle fairing made of the material could help reduce acoustic loads on spacecraft during launch.
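The cancellation principle can be illustrated with a toy superposition. The frequency and sample rate below are arbitrary illustrative values; the sketch simply adds a wave to a copy shifted in phase by 180 degrees, as the negative-modulus MAC would produce, and shows the residual is essentially zero:

```python
import numpy as np

# One second of a 100 Hz tone, sampled at 10 kHz (illustrative values).
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
direct = np.sin(2 * np.pi * 100 * t)           # path through normal material
shifted = np.sin(2 * np.pi * 100 * t + np.pi)  # path through negative-modulus MAC

combined = direct + shifted  # the two waves recombine beyond the plate
print(f"residual amplitude: {np.max(np.abs(combined)):.2e}")
```

In a real plate the two paths would also differ in attenuation and dispersion, so the cancellation would be partial rather than the numerically perfect result above.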

While the sound material was under development, the scientists continued developing samples of Z-MACs in the laboratory. To observe how one such material would respond to impact loading, they dropped a golf ball on it. Instead of bouncing off at an angle as expected, the ball bounced off with a surprisingly high rate of spin. The top of the Z-MAC was shifting sideways from the impact and putting a torque on the ball, and that torque caused the spin. By changing the stiffness of the machines, the team could control the magnitude and direction of the spin. Besides applications in golf clubs, this material should be able to put a torque on the nose of a bullet as it enters an armor plate; tumbling bullets are easier for armor to stop.

The team wrote a series of invention disclosures, and as the patent applications wound their way through the U.S. Patent and Trademark Office, Aerospace’s Intellectual Property Programs office started marketing the MAC material—or “MACterial.” This generated interest from several golf club manufacturers as well as armor manufacturers, car companies (for bumpers), high-end bicycle manufacturers, and automotive tire companies.

DARPA issued another request, this time for a material that could change shape on command. The material would act similarly to heliotropic plants, such as sunflowers, that follow the sun as it passes overhead (the biology term is “nastic motion”). The team looked at making little machines that would change shape inside a material. They explored using battery technology to generate hydrogen gas in a cell to change its internal pressure (plants move by changing pressure in their cells), but calculations showed the limitations of this concept. Some colleagues then suggested generating hydrogen and oxygen by electrolysis of water and igniting the gas mixture to produce the pressure increase. When hydrogen and oxygen burn, they create water, so the researchers could make a closed system that repeats the cycle again and again.

The team proposed this idea for DARPA’s nastic program and offered to make a generic material that contained many small cells that generated an explosive gas mixture that could be ignited on command. DARPA awarded them the contract, this time surprising the team with full funding. The team’s surprise turned to panic, however, when they realized that to fulfill the contract, they would have to change the shape (or morph) of a helicopter blade. What started as a general open-ended development of a new material turned into a point design. The design of a typical helicopter blade involves a compromise between its performance in hover, where a high blade twist is desired for improved lift, and in forward flight, where a low blade twist is needed for high speed. A morphing blade could effectively overcome this trade-off by adapting its shape to each of the two flight regimes.

After locating a group that knew about helicopter blades, the team was able to demonstrate changing the shape of a one-quarter-scale V-22 blade at the Bell Helicopter factory in Dallas. The work for DARPA on helicopter blades was successful. On the way to this milestone, the team manufactured the industry’s most efficient actuator, which produced 160 horsepower per pound. This success led to a follow-on contract to build a material that changed shape and generated large acoustic pressure waves in water. The Aerospace team constructed and instrumented a large cistern outside its laboratory to test this acoustic source.

The researchers are now working to develop more mature versions of the MACterials into useful applications as they design more complicated machines with exotic properties. A recent proposal involves combining negative and positive springs. In theory, when these two springs are attached in series, a material with potentially infinite modulus would result. The researchers are confident the material modulus won’t be infinite, but they are working to see how large a modulus can be obtained.
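The series-spring arithmetic behind this proposal is easy to check. For two springs in series (same force, summed deflections), the effective stiffness is k₁k₂/(k₁ + k₂); as the negative spring’s magnitude approaches the positive spring’s, the denominator vanishes and the magnitude of the combined stiffness diverges. The values below are illustrative, not from the article:

```python
def series_stiffness(k1, k2):
    """Effective stiffness of two springs in series: same force through both,
    deflections add, so compliances (1/k) add."""
    return (k1 * k2) / (k1 + k2)

k_pos = 100.0  # N/mm, ordinary positive spring (illustrative value)

# As the negative spring approaches -k_pos, |k_series| grows without bound:
for k_neg in (-50.0, -90.0, -99.0, -99.9):
    k = series_stiffness(k_pos, k_neg)
    print(f"k2 = {k_neg:7.1f} N/mm -> series stiffness = {k:10.1f} N/mm")
```

At exactly k₂ = -k₁ the expression is singular, which is why the researchers expect a very large, but not truly infinite, modulus in any real material.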

- G. Hawkins, “Augmenting the Mechanical Properties of Materials by Embedding Simple Machines,” *Journal of Advanced Materials,* Vol. 34, No. 3, pp. 16–20 (July 2002).
- G. Hawkins, “Embedding Micromachines in Materials,” *14th International Conference on Composite Materials* (San Diego, CA, July 2003).
- G. Hawkins and M. O’Brien, “Embedding Micromachines in Materials,” *Composites in Manufacturing, Society of Manufacturing Engineers,* Vol. 20, No. 3 (2004).
- G. Hawkins, M. O’Brien, R. Zaldivar, J. Schurr, and H. von Bremen, “Composites Containing Embedded Simple Machines,” *47th International SAMPE Symposium,* Vol. 1, pp. 124–130 (May 12–16, 2002).
- G. Hawkins, H. von Bremen, and M. O’Brien, “Using Embedded Micromachines to Enhance the Properties of Materials,” *Advances in Plastic Components, Society of Automotive Engineers International,* SP-1763, pp. 109–113 (2003).
- R. Lakes et al., “Extreme Damping in Composite Materials with Negative Stiffness Inclusions,” *Nature,* Vol. 410, pp. 565–567 (2001).
- D. McCutcheon, J. Reddy, M. O’Brien, T. Creasy, and G. Hawkins, “Damping Composite Materials by Machine Augmentation,” *Journal of Sound and Vibration,* Vol. 294, p. 828 (2006).
- C.-Y. Tang, M. O’Brien, and G. Hawkins, “Embedding Simple Machines to Add Novel Dynamic Functions to Composites,” *Journal of Materials,* p. 25 (March 2005).


*Ultrashort-pulse lasers exhibit exotic, fantastic characteristics. Aerospace scientists and engineers are researching diverse applications that can take advantage of the broad spectrum and high power delivered by these devices.*

Ultrashort-pulse lasers (USPLs) are defined by the duration of the pulses they emit, which range from a few femtoseconds (10⁻¹⁵ seconds) to a few picoseconds (10⁻¹² seconds). Because of their pulse duration, USPLs have two novel characteristics: the laser spectrum they produce is broad (up to or even greater than 100 nanometers), and the power in the pulses can be high.

The broad laser spectrum is a consequence of the Fourier-transform relationship between time and frequency, and each pulse results from the coherent superposition of many frequencies. The high peak power results from temporal confinement of the laser energy. A laser operating with a 50-femtosecond pulse and a 100-megahertz pulse rate will have a peak power that is 200,000 times higher than a continuous-wave laser operating at the same average power.
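The 200,000 figure follows directly from the duty cycle, as a quick check shows:

```python
pulse_duration = 50e-15  # 50 femtoseconds
rep_rate = 100e6         # 100 MHz pulse rate

# Fraction of the time the laser is actually emitting:
duty_cycle = pulse_duration * rep_rate  # 5e-6

# At equal average power, all the energy is confined to the pulses,
# so peak power exceeds average power by the inverse duty cycle.
peak_to_average = 1 / duty_cycle
print(f"peak/average power ratio: {peak_to_average:,.0f}")  # 200,000
```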

The Aerospace Corporation is researching applications that have current or anticipated high value to military and civilian aerospace interests. For example, it is easy to envision USPLs as high-data-rate transmitters in free-space optical communications. Using 50-femtosecond pulses, a message containing 10,000 bits could occupy as little as one nanosecond in time. Alternately, even a small amount of energy in an ultrashort pulse generates high power. One millijoule in a 30-femtosecond pulse represents 30 gigawatts of instantaneous power. Because of these characteristics, USPLs could potentially support a variety of military and civilian applications, such as the transmission of large amounts of data or high power to distant locations.
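The quoted figures can be checked with back-of-envelope arithmetic. Note the 100-femtosecond bit slot below is an assumption consistent with the article’s “10,000 bits in one nanosecond,” not a value the article states explicitly:

```python
# Instantaneous power of a single ultrashort pulse:
energy = 1e-3      # one millijoule of pulse energy
duration = 30e-15  # 30-femtosecond pulse
power = energy / duration
print(f"instantaneous power: {power / 1e9:.0f} GW")  # ~33 GW (article rounds to 30)

# Packing bits: at one bit per assumed 100-fs slot,
# a 10,000-bit message spans about a nanosecond.
slot = 100e-15
message = 10_000 * slot
print(f"10,000 bits occupy {message / 1e-9:.1f} ns")
```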

The penetration of USPL technology into military applications such as hyperspectral sensing or secure optical communications will be, to a significant extent, determined by the capacity to control how ultrashort pulses interact with their operational environment, so that the desired functionalities can be realized where and when they are wanted. It is not clear how or if such control may be achieved outside the laboratory; but in well-controlled laboratory environments, the use of USPLs has been growing. In particular, these devices have enabled diagnostic tools for quantitative characterization of material properties, performance testing and calibration of prototype devices, evaluation of device vulnerabilities, and exploration of new device architectures and operating schemes. USPLs are certainly exhibiting the potential to serve well-established and emerging space technologies.

In the early 1990s, microscopy using USPLs was investigated as a means to simulate the effects of space radiation on semiconductor devices in satellite payloads. Space radiation can give rise to so-called single-event effects, which can degrade the on-orbit performance of electronic devices or render them inoperable. Single-event effects can be triggered when an ionized particle penetrates an electronic component, leaving an ionized trail that can cause current or voltage transients that disrupt normal operation. Scientists have devoted significant effort to evaluating the susceptibility of devices to radiation-induced anomalies and determining their suitability for space missions.

Aerospace and the U.S. Naval Research Laboratory pioneered the use of USPLs to simulate radiation-induced current and voltage transients by generating conduction-band carriers from the absorption of photons having energy equal to or greater than the semiconductor bandgap. The bandgap defines the energy required to elevate electrons from the low-energy valence band, in which they are tightly bound to their nucleus, to the higher-energy conduction band, in which they are mobile, like electrons in a metal. The technique is a variation of laser microscopy using an integrated circuit mounted so that the pinout signals can be monitored by equipment that records the device’s response to laser illumination.

The development of laser techniques for testing single-event effects has significantly expanded and improved the capabilities needed to evaluate payload electronics for space missions. Previously, such evaluations required testing at accelerator facilities, where parts can be subjected to regulated exposures of particles known to be prevalent on common orbits.

While accelerator testing establishes a “gold standard” to qualify parts, it is limited by the type of information it can provide and by its cost and availability. Accelerators are fairly extravagant multiple-user facilities that must be scheduled months in advance and can impose many usage restrictions. They mostly provide information about whole-device susceptibility to specific particles and energies. The spatial resolution needed to identify sensitive microscopic substructures is difficult to achieve because of the large cross section and relatively low flux density of particle beams. Accelerator testing is also typically destructive—the device being tested often fails irreversibly from catastrophic material damage, making iterative and recursive tests unlikely or impossible. Such recursive tests are needed to analyze failure mechanisms and support quick design modifications that explore mitigation alternatives.

Although USPL testing of single-event effects cannot be used as a qualification standard, it is highly valued as a screening tool prior to accelerator testing. It significantly reduces the time and costs of final qualification and has several diagnostic advantages over particle testing. For example, the spatial resolution of laser testing is on the order of a fraction of the laser wavelength, enabling raster-scan images of susceptible spots. The generation of carriers via laser excitation is fast relative to their lifetimes and closely matches the temporal characteristics of a particle strike—but with precise information on event initiation (the timing of accelerator-induced events is often chaotic and not well correlated with any clock). Unlike particle testing, laser testing can be completely nondestructive—experiments can be recursive with little or no analysis latency. This testing of mechanisms at specific circuit nodes can be investigated and characterized to provide empirical data that can be used to benchmark device performance simulations and support fast design-and-fabrication cycles that explore improvements to the radiation hardness of devices.

The laser testing techniques developed in parallel at The Aerospace Corporation and the Naval Research Laboratory used picosecond dye-laser pulses at wavelengths between 600 and 800 nanometers, which are above bandgap (i.e., they have a photon energy higher than the energy gap) in silicon and gallium arsenide and are therefore suitable for testing most integrated circuit technologies. Laser testing at these above-bandgap wavelengths (wavelengths shorter than the bandgap wavelength) is predicated on linear optical absorption and requires an unobstructed line of sight to the device or circuit node being tested. In practice, the line of sight is often obstructed by the ubiquitous metal interconnections of modern integrated circuits as well as by the device mounting and packaging, which prevent above-bandgap light from reaching the device. Despite this limitation, above-bandgap laser testing has established the correspondence between laser and particle test results, thereby validating the technique.

In addition to the line-of-sight problem, above-bandgap testing only probes near the surface—carriers are typically deposited just a few microns deep. Because modern micro- and nanoelectronic fabrication technology is moving rapidly toward higher feature densities and multiple layers in monolithic and heterogeneous architectures, buried circuit nodes may not be accessible or resolvable by above-bandgap, linear laser-testing techniques. This problem may, however, be resolved by invoking the more extravagant performance properties of a newer generation of USPLs.

At the same time that picosecond dye-lasers were being applied to the development of techniques for testing single-event effects, a generational change in USPL technology was occurring, represented by the emergence of solid-state laser materials such as titanium-doped sapphire. These materials supported dramatically shorter pulses (in the range of 5 to 100 femtoseconds), while a new amplification scheme allowed for much higher pulse energy at much higher efficiency than dye-laser systems. The new femtosecond solid-state laser technology supports wavelength and energy conversion techniques that enable relatively simple access to wavelength sources between about 500 and 3000 nanometers, as well as new sources of coherent ultraviolet and far-infrared radiation, x-rays, and pulsed electron beams.

The ready availability of tunable femtosecond laser sources in the shortwave infrared below the bandgap of silicon permitted the development of nonlinear optical techniques for microelectronics testing and measurement. In the late 1990s, Bell Laboratories introduced two-photon optical beam-induced current as a functional imaging technique. In 2002, laser single-event effects techniques based on nonlinear absorption were developed at the Naval Research Laboratory. The critical distinction between the linear and nonlinear techniques is the mechanism of material interaction with the laser pulse. Semiconductors are essentially transparent to wavelengths below their bandgap, but can be induced to absorb two (or more) subbandgap photons simultaneously if the pulse irradiance is high and the sum of their energy exceeds the bandgap.
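The two-photon condition is simple to evaluate numerically. This sketch uses the textbook room-temperature silicon bandgap of about 1.12 eV and an illustrative 1250-nanometer shortwave-infrared wavelength (the specific wavelength is an assumption, chosen to sit below the silicon bandgap):

```python
def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts: E = hc / lambda, with hc ~ 1239.84 eV*nm."""
    return 1239.84 / wavelength_nm

SI_BANDGAP_EV = 1.12  # silicon bandgap at room temperature

wavelength = 1250.0  # nm, shortwave infrared, below the silicon bandgap
e1 = photon_energy_ev(wavelength)

# One photon cannot bridge the gap, so silicon is linearly transparent here,
# but two photons absorbed simultaneously can:
print(f"one photon:  {e1:.2f} eV -> linear absorption?     {e1 >= SI_BANDGAP_EV}")
print(f"two photons: {2 * e1:.2f} eV -> two-photon absorption? {2 * e1 >= SI_BANDGAP_EV}")
```

This is exactly why subbandgap light can pass through the silicon substrate unimpeded, yet still deposit carriers where the focused irradiance is high.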

This can be readily exploited by focusing the USPL beam with a fast, strongly convergent objective: two-photon absorption and the resultant generation of conduction-band carriers occur only at the beam focus, where the irradiance is high. Using nonlinear absorption, USPL testing can overcome line-of-sight limitations by addressing the circuit nodes of a device through its substrate. Additionally, a three-dimensional capability can be obtained by controlling the depth of the beam focus in the part, with the result being volumetric images of single-event effect susceptibility or other performance attributes.

In 1998, before the Naval Research Laboratory started developing nonlinear techniques to solve the line-of-sight problem, Aerospace launched an alternative approach based on work pioneered at the University of California, San Diego. In this approach, high-energy femtosecond lasers were used to generate hard x-ray pulses at photon energies sufficient to penetrate the interconnection metallizations of integrated microelectronics. The technique uses x-ray photons capable of penetrating the obstructions and launching conduction-band carriers in the semiconductor; in contrast, the nonlinear absorption techniques evade the obstruction rather than penetrate it. The USPL mediates x-ray generation through an energetic plasma initiated by laser ablation of a metal target. Hot electrons from the laser-induced plasma interact with the target to produce K-band x-rays. If sufficient x-rays can be generated, collected, and focused, a laser capability for x-ray testing of single-event effects could be developed.

Since 2005, Aerospace’s research and development in USPL applications has been balanced to include nonlinear optical and x-ray approaches to single-event effects testing and has also broadened to address wider applications of core techniques from laser spectroscopy to testing and measurement, as well as reliability assessments and performance analyses of integrated electronic and photonic device technologies. Aerospace has also closely monitored research and development in the external peer community, where USPL technology has been actively explored for a much wider range of applications, some of which would take advantage of the extreme bandwidth of these lasers. Examples include high-capacity and secure optical communications, all-optical time and frequency standards that could be used at the core of communication and navigation systems, and adaptive ultrawideband waveform generators and signal analyzers. Other applications being investigated in the United States and abroad involve exercising the high power and irradiance of USPLs in a variety of active remote-sensing or situational-awareness schemes.

Because many of these techniques are relatively new and unoptimized and have potentially high value as performance and reliability diagnostics for emerging microelectronic and integrated photonic technologies, the scope of Aerospace research has expanded to address the physics of these techniques and the investigation of their diagnostic potential. For example, the long-term value of nonlinear optical techniques in testing single-event effects and more general device performance diagnostics will be determined by the ability to control the way that ultrashort laser pulses propagate through and interact with the materials and structures of devices undergoing testing.

This kind of control requires a detailed knowledge of the linear and nonlinear optical properties of device materials and their responses to excitation by ultrashort laser pulses. While approximate information from theoretical models and previous experiments is available, updated experimental measurements are needed to advance the theory and support the development of practical diagnostic tools. Aerospace is addressing these needs with quantitative USPL measurements of nonlinear refraction and nonlinear absorption in elemental and compound semiconductors using techniques adapted from peer literature, as these properties determine how a focused USPL beam will propagate through a device structure and where carriers will be generated.

USPLs are also ideally suited to time-resolved probes of other photonic material properties important to new technology development. The lifetimes and relaxation dynamics of conduction-band carriers establish fundamental limits on the performance of semiconductor electronic and photonic devices. Carrier lifetimes are quantified by the decay of the luminescence emitted when electrons and holes recombine across the bandgap (an interband transition), while intraband carrier relaxation can be probed by time-resolved absorption spectroscopy. USPLs provide a means of instantly generating excited electron-hole pairs and a “clock” to measure the recombination rate and evolution of the excited electron-hole population.
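As a sketch of what such a lifetime measurement’s data reduction might look like, the following fits a simulated, noiseless, single-exponential luminescence decay (an idealization; the 250-picosecond lifetime is an illustrative value) to recover the carrier lifetime:

```python
import numpy as np

# Simulated time-resolved luminescence: single-exponential decay, 250 ps lifetime.
tau_true = 250.0                        # picoseconds (illustrative)
t = np.linspace(0.0, 2000.0, 200)       # time axis in picoseconds
signal = np.exp(-t / tau_true)          # idealized noiseless decay trace

# Recover the lifetime from the decay with a log-linear fit:
slope, _intercept = np.polyfit(t, np.log(signal), 1)
tau_fit = -1.0 / slope
print(f"fitted lifetime: {tau_fit:.0f} ps")  # 250 ps
```

Real data would include noise and often multiple decay components, so practical analyses use weighted or multi-exponential fits rather than this bare log-linear regression.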

Two salient examples of photonic technologies that immediately benefit from carrier lifetime diagnostics are diode lasers and solar photovoltaic cells. Semiconductor quantum-well lasers are at the core of any solid-state laser system that will be used for on-orbit optical communications or satellite-based active remote sensing. These are in continual development to improve their efficiency and noise properties, and to establish new wavelengths of operation. The materials and junction structures of photovoltaic cells are similarly in constant development. One path being explored involves the use of semiconductor quantum dots, in which the nanoscale size of the material structure (the “dot”) alters the electronic structure and behavior of the material. Quantum-dot solar cells may enhance the action spectrum and efficiency of current multijunction devices, which could significantly affect the power budgets of all satellite programs. In both of these technologies, carrier lifetimes are a critical performance indicator.

Aerospace has built a time-resolved luminescence spectrometer capable of measuring lifetimes as short as tens of picoseconds and as long as a microsecond. Additionally, a time-resolved absorption probe for measuring intraband carrier dynamics has also been configured and tested. Photovoltaic devices enhanced by quantum dots represent a technology development topic for which USPL-based diagnostics provide critical support. For example, the phenomenon of carrier multiplication in quantum dots could significantly enhance solar-cell efficiency if it can be controlled in a device configuration, but there is still some controversy about the reality and efficacy of carrier multiplication.

Carrier multiplication in quantum dots is believed to result from the optical excitation of “hot” carriers into the upper conduction-band levels by photons having energy greater than twice the bandgap. Such highly excited carriers can “cool” by a process in which excess conduction-band carrier energy greater than the bandgap energy is coupled to the generation of additional electron-hole pairs, with the result that multiple conduction-band carriers are generated and available for electrical work from the absorption of a single photon. Some recent measurements suggest that this process is much more efficient in quantum dots than in bulk semiconductor material; however, these results are controversial, and more research is needed to validate these claims.

Aerospace is working with the National Renewable Energy Laboratory to make quantitative measurements of carrier dynamics and yields in quantum dots excited by photons with energy greater than twice the bandgap to resolve the uncertainties surrounding carrier multiplication and establish screening diagnostics for feedback to material optimization and performance evaluations of prototype devices. The Rochester Institute of Technology is designing and fabricating multijunction solar cells containing quantum-dot layer structures intended to extend the absorption spectrum of a junction and increase the use of the solar spectrum. In support of the Rochester program, Aerospace will conduct USPL studies of the optical and photonic properties of quantum-dot test structures and junctions.

Aerospace is also investigating the use of USPLs to evaluate the spatial resolution of sensors in focal-plane arrays. This performance parameter is measured by the modulation-transfer function, which quantifies the spatial frequencies that the focal-plane sensors can distinguish by measuring the array’s response to laser illumination. For visible and near-infrared focal-plane arrays, visible continuous-wave lasers at a set of discrete above-bandgap wavelengths are typically used to measure a single pixel point-spread function, which can be used to generate the two-dimensional modulation-transfer function for the array. The lasers can be focused to a spot smaller than the pixel size, which allows electronic structures within the pixel to be correlated to the measured point-spread and modulation-transfer functions. By taking multiple measurements with different lasers, researchers can build up the sensor performance over the entire wavelength range of operation.
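The relationship between the point-spread function and the modulation-transfer function is a Fourier transform, which can be sketched in a few lines. The Gaussian PSF and its width below are illustrative placeholders, not measured data:

```python
import numpy as np

# A single pixel's point-spread function (PSF), idealized here as a Gaussian.
x = np.linspace(-50e-6, 50e-6, 1024, endpoint=False)  # meters across the array
sigma = 5e-6                                          # illustrative PSF width
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                                      # normalize total response

# The MTF is the magnitude of the Fourier transform of the PSF,
# normalized to one at zero spatial frequency.
mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]

freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])  # spatial frequency, cycles/meter
print(f"MTF at {freqs[3] / 1e3:.0f} cycles/mm: {mtf[3]:.2f}")
```

A broader PSF (worse spatial resolution) produces an MTF that falls off at lower spatial frequencies, which is exactly what the laser point-spread measurements quantify.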

This technique has some disadvantages that can compromise the modulation-transfer function diagnostic. For example, the spot size, particularly at long wavelengths, begins to approach the size of a single pixel. Additionally, the above-bandgap excitation cannot probe the pixel response in three dimensions, which could be important in CMOS (complementary metal-oxide semiconductor) active pixel designs where the pixels can have complex, multilayered layouts with very high aspect ratios.

One way of potentially resolving these problems is to use the nonlinear optical technique for carrier injection. For USPL sources, the area over which carriers are deposited by nonlinear absorption contracts according to the order of the nonlinear interaction, and the laser focusing geometry can vary the depth at which carriers are generated in a pixel. For example, in the case of two-photon absorption, carrier injection is proportional to the square of the beam irradiance, and the effective beam area for carrier injection should contract by a factor of two (a factor of √2 in width) relative to that of linear absorption. Furthermore, carrier injection occurs only at the beam focus, where the irradiance is sufficient for multiphoton absorption. The development of the below-bandgap modulation-transfer function diagnostic shares nearly all the optimization criteria of nonlinear absorption laser single-event effects testing. However, there is a need for detailed information on the nonlinear optical properties of the materials from which these devices are made, along with advanced laser microscopy techniques to exploit these properties—and, in many cases, a way to manipulate and control the laser-material interactions.
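The two-photon spot contraction can be verified numerically for a Gaussian beam: squaring the irradiance profile narrows its width by a factor of √2 (halving the effective area). The beam waist and grid below are arbitrary illustrative values:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked profile."""
    half = y.max() / 2
    above = x[y >= half]
    return above[-1] - above[0]

x = np.linspace(-5.0, 5.0, 100_001)  # microns across the focal spot
irradiance = np.exp(-2 * x**2)       # Gaussian beam, 1/e^2 radius of 1 micron
two_photon = irradiance**2           # carrier injection ~ irradiance squared

ratio = fwhm(x, irradiance) / fwhm(x, two_photon)
print(f"linear/two-photon spot width ratio: {ratio:.3f}")  # ~1.414 = sqrt(2)
```

Higher-order multiphoton processes narrow the effective spot further (by √n in width for an n-photon process with a Gaussian beam), which is the basis for sub-diffraction-limited carrier injection.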

There is a distinct advantage in being able to manipulate the temporal shape of the laser pulse or the order of wavelengths within the pulse, and a pulse-shaper subsystem capable of these manipulations is in development. Because USPLs possess a wavelength bandwidth determined by the Fourier transform of the temporal pulse, a 30-femtosecond pulse at a center wavelength of 800 nanometers will have approximately 35 nanometers of bandwidth. The pulse-shaper decomposes the bandwidth of a single USPL into discrete spectral “bins” that can be independently delayed or attenuated, and then recombines these bins to generate a new single pulse with a modified temporal shape, a specific time-ordering of wavelengths, or a series of pulses (a pulse “burst”). The pulse-shaper makes it possible to investigate alternative ways of presenting USPL energy to a material for the purpose of optimizing a desired effect or investigating the material’s response to USPL illumination. For example, in the nonlinear optical techniques under development for testing microelectronics and measuring the modulation transfer function of focal-plane arrays, temporal shaping of ultrashort pulses may enable the generation of conduction-band carriers in areas significantly smaller than the diffraction-limited spot size of the laser and allow for enhanced control over carrier deposition depth.
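The bandwidth figure quoted above can be checked with the Gaussian time-bandwidth product, and the shaper principle can be modeled by applying a spectral phase in the Fourier domain. The sketch below is a simplified model, not a description of the actual instrument: it computes the transform-limited bandwidth of a 30-femtosecond, 800-nanometer pulse (the Gaussian constant 0.441 gives roughly 31 nanometers; other pulse shapes give somewhat different constants) and then applies an assumed quadratic spectral phase, the kind of per-bin manipulation a pulse-shaper performs, to stretch the pulse in time:

```python
import numpy as np

c = 2.998e8          # speed of light, m/s
lam0 = 800e-9        # center wavelength, m
tau_fwhm = 30e-15    # transform-limited pulse duration (FWHM), s

# Time-bandwidth product for a Gaussian pulse: dnu * dtau ≈ 0.441
dnu = 0.441 / tau_fwhm
dlam = lam0**2 * dnu / c
print(dlam * 1e9)    # spectral bandwidth in nm, ~31 nm

# Minimal shaper model: FFT to the spectral domain, apply a quadratic
# spectral phase (a linear chirp), and transform back. A real shaper
# applies per-bin delays and attenuations with a spatial light modulator.
n, dt = 2**14, 0.5e-15
t = (np.arange(n) - n // 2) * dt
sigma_t = tau_fwhm / (2 * np.sqrt(2 * np.log(2)))
field = np.exp(-t**2 / (4 * sigma_t**2))           # field envelope
spec = np.fft.fftshift(np.fft.fft(field))
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, dt))
gdd = 500e-30                                      # 500 fs^2 chirp (assumed)
shaped = np.fft.ifft(np.fft.ifftshift(spec * np.exp(0.5j * gdd * omega**2)))
intensity = np.abs(shaped)**2

def fwhm(y, x):
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

print(fwhm(intensity, t) * 1e15)  # chirped duration in fs: > 30, pulse stretched
```

The same spectral-phase mechanism, with the sign of the phase reversed, underlies the precompensation scheme discussed below for delivering the highest peak power to a target.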

Another approach to improving the spatial resolution of a laser probe involves controlling the spatial characteristics of the laser beam through the curvature of the pulse spatial wavefront. Aerospace is investigating the use of adaptive optics for this purpose. Control over the effective area in which a USPL interacts with a material or device structure is critically important to the diagnostic utility of USPLs in the emerging nanoscale material and device technologies, where the length and separation of device structures is approaching a few tens of nanometers, and for which there is a shortage of nondestructive measurement tools capable of resolving the structures or their time/frequency performance characteristics.

Several other areas of research on USPL applications exist where control over the spatiotemporal formatting of the laser is known to be critical and where adaptive control of such formatting based on active feedback may dramatically enhance capability. In the x-ray technique discussed earlier for testing single-event effects, the results have shown that enhancement of x-ray yield is a complicated function of experimental and environmental parameters that significantly affects the evolution of pulse characteristics along the propagation path. A USPL with pulse-shaping capability can “precompensate” the laser pulse to invert temporal distortions and ensure that the highest-peak power is delivered where it can have its greatest effect—in this case, the highest x-ray yield. Additionally, the temporal format in which the laser energy is delivered to the metal target may have a significant effect on the generation of a plasma with optimal thermophysical characteristics for x-ray generation.
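The precompensation idea can be illustrated with a toy dispersion model: if the propagation path imposes a known quadratic spectral phase (group-delay dispersion, or GDD), applying the opposite phase with the shaper beforehand restores the transform-limited pulse, and hence the peak power, at the target. The dispersion value below is an arbitrary assumption for illustration:

```python
import numpy as np

# Toy model of precompensation: propagation imposes a quadratic spectral
# phase; applying the opposite phase before propagation cancels it, so
# the pulse arrives transform-limited with its full peak power.
n, dt = 2**14, 0.5e-15
t = (np.arange(n) - n // 2) * dt
sigma = 30e-15 / (2 * np.sqrt(2 * np.log(2)))      # 30 fs FWHM pulse
field = np.exp(-t**2 / (4 * sigma**2))
omega = 2 * np.pi * np.fft.fftfreq(n, dt)
gdd_path = 800e-30                                 # path dispersion (assumed)

def propagate(e, gdd):
    """Apply a quadratic spectral phase (GDD) to a field envelope."""
    return np.fft.ifft(np.fft.fft(e) * np.exp(0.5j * gdd * omega**2))

uncorrected = propagate(field, gdd_path)
precompensated = propagate(propagate(field, -gdd_path), gdd_path)

peak0 = np.abs(field).max()**2
print(np.abs(uncorrected).max()**2 / peak0)        # < 1: peak power degraded
print(np.abs(precompensated).max()**2 / peak0)     # ~1: peak power restored
```

A real propagation path adds nonlinear and environment-dependent distortions on top of this linear dispersion, which is why active feedback on the pulse shape matters.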

Laser remote sensing and free-space optical communications using USPLs are also likely to benefit from adaptive spatiotemporal formatting. These potential space applications have attracted considerable attention, in part because USPLs have demonstrated unique signal-propagation characteristics. For example, parameter regimes have been identified in which the laser beam propagates without spatial diffraction (beam expansion) and in which the beam is able to penetrate obscurants such as clouds, fog, and suspended particles.

The prospects of USPL-based hyperspectral schemes for meteorological lidar and active remote sensing for chemical, biological, and hazardous-material detection or target characterization will depend significantly on the ability to manipulate and control these propagation characteristics, which appear to arise from a balance of linear and nonlinear interactions with the atmosphere along the beam propagation path. Because these interactions are not fully understood, the fragility of this balance and the difficulty of controlling the interaction of the laser pulses with the atmosphere and the target represent a critical risk that must be retired before a practical technology can be realized. Adaptive pulse-shaping that accommodates fluctuating atmospheric conditions would provide this control, but it requires a detailed understanding of the underlying physics that is unlikely to emerge without a sustained research commitment.

The Aerospace research program in USPL applications is an interdepartmental team effort. The contributions and participation of David Cardoza, Kevin Gaab, Nathan Wells, Stephen LaLumondiere, and Paul Belden have been critical to the establishment and progress of the program. The author also wishes to acknowledge the support and encouragement of Steven Beck, Dean Marvin, Steven Moss, Gina Galasso, Bernardo Jaduszliwer, Rami Razouk, and Sherrie Zacharius.

- M. Beard and R. Ellingson, “Quantum Dot Solar Photovoltaics: Multiple Exciton Generation in Semiconductor Nanocrystals: Toward Efficient Solar Energy Conversion,” *Laser and Photonics Reviews,* Vol. 2, pp. 377–399 (2008).
- T. Guo, C. Spielmann, B. Walker, et al., “USPL X-ray Generation: Generation of Hard X Rays by Ultrafast Terawatt Lasers,” *Review of Scientific Instruments,* Vol. 74, pp. 41–47 (2001).
- J. Kasparian and J. Wolf, “USPL Remote Sensing and Atmospheric Propagation: Physics and Applications of Atmospheric Nonlinear Optics and Filamentation,” *Optics Express,* Vol. 16, pp. 466–492 (2008).
- D. McMorrow, W. Lotshaw, J. Melinger, et al., “Subbandgap Laser-Induced Single Event Effects: Carrier Generation via Two-Photon Absorption,” *IEEE Transactions on Nuclear Science,* Vol. 49, pp. 3002–3008 (2002).
- J. Melinger, S. Buchner, D. McMorrow, et al., “Pulsed Laser SEE Testing: Critical Evaluation of the Pulsed Laser Method for Single Event Effects Testing and Fundamental Studies,” *IEEE Transactions on Nuclear Science,* Vol. 41, pp. 2574–2584 (1994).
- S. Moss, S. LaLumondiere, J. Scarpulla, et al., “Correlation of Picosecond Laser-Induced Latchup and Energetic Particle-Induced Latchup in CMOS Test Structures,” *IEEE Transactions on Nuclear Science,* Vol. 42, pp. 1948–1956 (1995).
- A. Redo-Sanchez and X-C. Zhang, “USPLs in THz Technology and Electron Microscopy: Terahertz Science and Technology Trends,” *IEEE Journal of Selected Topics in Quantum Electronics,* Vol. 14, pp. 260–269 (2008).
- A. Rundquist, C. Durfee, Z. Chang, et al., “Femtosecond Ti:sapphire USPLs: Ultrafast Laser and Amplifier Sources,” *Applied Physics B,* Vol. 65, pp. 161–174 (1997).
- E. Van Stryland, M. Sheik-Bahae, A. Said, et al., “Material Property Characterization: Characterization of Nonlinear Absorption and Refraction,” *Progress in Crystal Growth and Characterization,* Vol. 27, pp. 279–311 (1993).
- A. Weiner, “USPL Pulse Shaping: Femtosecond Pulse Shaping Using Spatial Light Modulators,” *Review of Scientific Instruments,* Vol. 71, pp. 1929–1960 (2000).
- C. Xu and W. Denk, “Nonlinear Optical Laser SEE: Two-photon Optical Beam Induced Current Imaging Through the Backside of Integrated Circuits,” *Applied Physics Letters,* Vol. 71, pp. 2578–2581 (1997).
- A. Yurtsever and A. Zewail, “4D Nanoscale Diffraction Observed by Convergent-Beam Ultrafast Electron Diffraction,” *Science,* Vol. 326, pp. 708–712 (2009).

Return to the Spring 2011 Table of Contents
