Monday, February 22, 2010

AC RECTIFIER

A rectifier is an electrical device that converts alternating current (AC) to direct current (DC), a process known as rectification. Rectifiers have many uses including as components of power supplies and as detectors of radio signals. Rectifiers may be made of solid state diodes, vacuum tube diodes, mercury arc valves, and other components.
A device which performs the opposite function (converting DC to AC) is known as an inverter.
When only one diode is used to rectify AC (by blocking the negative or positive portion of the waveform), the difference between the term diode and the term rectifier is merely one of usage, i.e., the term rectifier describes a diode that is being used to convert AC to DC. Almost all rectifiers comprise a number of diodes in a specific arrangement for more efficiently converting AC to DC than is possible with only one diode. Before the development of silicon semiconductor rectifiers, vacuum tube diodes and copper(I) oxide or selenium rectifier stacks were used.
Early radio receivers, called crystal radios, used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or "crystal detector". Rectification may occasionally serve in roles other than to generate direct current per se. For example, in gas heating systems flame rectification is used to detect the presence of a flame: two metal electrodes in the outer layer of the flame provide a current path, and rectification of an applied alternating voltage will happen in the plasma, but only while the flame is present to generate it.
Applications
A rectifier diode (silicon controlled rectifier) and associated mounting hardware. The heavy threaded stud helps remove heat.
The primary application of rectifiers is to derive DC power from an AC supply. Virtually all electronic devices require DC, so rectifiers find uses inside the power supplies of virtually all electronic equipment.
Converting DC power from one voltage to another is much more complicated. One method of DC-to-DC conversion first converts the power to AC (using a device called an inverter), then uses a transformer to change the voltage, and finally rectifies the power back to DC.
Rectifiers also find a use in the detection of amplitude modulated radio signals. The signal may or may not be amplified before detection, but if unamplified, a diode with a very low voltage drop must be used. When using a rectifier for demodulation, the capacitor and load resistance must be carefully matched: too low a capacitance lets the high-frequency carrier pass through to the output, and too high a capacitance leaves the capacitor simply charged and unable to follow the envelope.
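As a rough sketch of that matching constraint, the detector's RC time constant must be long compared with the carrier period but short compared with the fastest modulation period. The carrier frequency, audio bandwidth, and load resistance below are assumed example values, not figures from the text:

    import math

    f_carrier = 1e6   # Hz, AM broadcast-band carrier (assumed)
    f_audio = 5e3     # Hz, highest modulating frequency (assumed)
    R_load = 10e3     # ohms, detector load resistance (assumed)

    # The RC product must satisfy 1/f_carrier << R*C << 1/f_audio.
    C_min = 1.0 / (2 * math.pi * f_carrier * R_load)  # below this, carrier ripple leaks through
    C_max = 1.0 / (2 * math.pi * f_audio * R_load)    # above this, the envelope cannot be followed

    print(f"choose C between {C_min * 1e12:.0f} pF and {C_max * 1e9:.2f} nF")
    print(f"geometric-mean choice: {math.sqrt(C_min * C_max) * 1e9:.2f} nF")

A capacitance near the geometric mean of the two bounds keeps both failure modes comfortably at bay.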
Output voltage of a full-wave rectifier with controlled thyristors.
Rectifiers are also used to supply polarised voltage for welding. In such circuits control of the output current is required, and this is sometimes achieved by replacing some of the diodes in a bridge rectifier with thyristors, whose voltage output can be regulated by means of phase fired controllers.
Thyristors are used in various classes of railway rolling stock so that fine control of the traction motors can be achieved. Gate turn-off thyristors (GTOs) are used to produce alternating current from a DC supply, for example on the Eurostar trains to power the three-phase traction motors.

VACUUM FLASK

A vacuum flask, colloquially called a thermos after a ubiquitous brand, is a storage vessel which provides thermal insulation by interposing a partial vacuum between the contents and the ambient environment. The evacuated region of the partial vacuum removes material that could serve as a heat conductor or carrier, enabling the flask to keep its contents hotter or cooler than its surroundings. Vacuum flasks are commonly used as insulated shipping containers. The vacuum flask was invented by Scottish physicist and chemist Sir James Dewar in 1892 and is sometimes referred to as a Dewar flask after its inventor. Dewar came from a small village in Fife called Kincardine where there is now a street named after him. The first vacuum flasks for commercial use were made in 1904 when a German company, Thermos GmbH, was formed. Thermos, their tradename for their flasks, remains a registered trademark in some countries but was declared a genericized trademark in the U.S. in 1963 as it is colloquially synonymous with vacuum flasks in general.
Theory of operation
A practical vacuum flask is a bottle made of metal, glass, or plastic with hollow walls; the narrow region between the inner and outer wall is evacuated of air. It can also be considered to be two thin-walled bottles nested one inside the other and sealed together at their necks. Using a vacuum as an insulator avoids heat transfer by conduction or convection. Radiative heat loss can be minimized by applying a reflective coating to surfaces: Dewar used silver. The contents of the flask reach thermal equilibrium with the inner wall; the wall is thin and has low thermal capacity, so it exchanges little heat with the contents and affects their temperature only slightly. At the temperatures for which vacuum flasks are used (usually below the boiling point of water), and with the use of reflective coatings, there is little infrared (radiative) transfer.
The flask must, in practice, have an opening for contents to be added and removed. A vacuum cannot be maintained at the opening; therefore, a stopper made of insulating material must be used, originally cork, later plastics. Inevitably, most heat loss takes place through this stopper.
Purpose and uses
Food and drink
Vacuum flasks are used to maintain their contents (often but not always liquid) at a temperature higher or lower than ambient temperature, while retaining the ambient pressure of approximately 1 atmosphere (14.7 psi). Domestically and in the food industry, they are often used to keep food and drink either cold or hot. A typical domestic vacuum flask will keep liquid cool for about 24 hours, and warm for up to 8 hours.
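Those hold times can be pictured with a simple exponential model (Newton's law of cooling). The starting temperature and the ten-hour time constant below are assumed illustrative values, not manufacturer data:

    import math

    T_ambient = 20.0   # deg C, room temperature (assumed)
    T_start = 95.0     # deg C, freshly poured hot drink (assumed)
    tau_hours = 10.0   # flask time constant in hours (assumed)

    def temperature(t_hours):
        """Contents temperature after t_hours of exponential decay toward ambient."""
        return T_ambient + (T_start - T_ambient) * math.exp(-t_hours / tau_hours)

    for t in (0, 4, 8, 24):
        print(f"after {t:2d} h: {temperature(t):5.1f} deg C")

With a ten-hour time constant the drink is still above 50 degrees Celsius at the eight-hour mark, consistent with the "warm for up to 8 hours" figure.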
Laboratory and industry
In laboratories and industry, vacuum flasks are often used to store liquids which become gaseous at well below ambient temperature, such as oxygen and nitrogen; in this case, the leakage of heat into the extremely cold interior of the bottle results in a slow "boiling-off" of the liquid so that a narrow unstoppered opening, or a stoppered opening protected by a pressure relief valve, is necessary to prevent pressure from building up and shattering the flask. The excellent insulation of the Dewar flask results in a very slow "boil", and thus the contents remain liquid for a long time without the need for expensive refrigeration equipment.
Modifications
Several applications rely on the use of double Dewar flasks, such as NMR and MRI machines. These flasks have two vacuum sections. The flasks contain liquid helium in the inside flask and liquid nitrogen in the outer flask, with one vacuum section in between. The loss of expensive helium is limited in this way.
Other improvements to the Dewar flask include the vapor-cooled radiation shield and the vapor-cooled neck, which both help to reduce evaporation from the flask.

ELECTRIC MOTOR

An electric motor uses electrical energy to produce mechanical energy, through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Many types of electric motors can be run as generators, and vice versa.
Electric motors are found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the millions of watts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application.
The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks.
By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor.
The principle
The conversion of electrical energy into mechanical energy by electromagnetic means was demonstrated by the British scientist Michael Faraday in 1821. A free-hanging wire was dipped into a pool of mercury, on which a permanent magnet was placed. When a current was passed through the wire, the wire rotated around the magnet, showing that the current gave rise to a circular magnetic field around the wire. This motor is often demonstrated in school physics classes, though brine (salt water) is sometimes used in place of the toxic mercury. This is the simplest form of a class of devices called homopolar motors. A later refinement is the Barlow's wheel. These were demonstration devices only, unsuited to practical applications due to their primitive construction.
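The force at work in these demonstrations is the one on a current-carrying conductor in a magnetic field, F = BIL when the field is perpendicular to the wire. The field strength, current, and wire length below are assumed classroom-scale numbers:

    # Force on a current-carrying wire in a magnetic field: F = B * I * L
    B = 0.05   # tesla, field near a small permanent magnet (assumed)
    I = 2.0    # amperes through the wire (assumed)
    L = 0.10   # metres of wire inside the field (assumed)

    F = B * I * L
    print(f"force on the wire: {F * 1000:.1f} mN")   # prints 10.0 mN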
Jedlik's "lightning-magnetic self-rotor", 1827. (Museum of Applied Arts, Budapest.)
In 1827, Hungarian Ányos Jedlik started experimenting with electromagnetic rotating devices he called "lightning-magnetic self-rotors". He used them for instructive purposes in universities, and in 1828 demonstrated the first device which contained the three main components of practical direct current motors: the stator, rotor and commutator. Both the stationary and the revolving parts were electromagnetic, employing no permanent magnets. Again, the devices had no practical application.
Uses
Electric motors are used in many, if not most, modern machines. Obvious uses are in rotating machines such as fans, turbines, drills, the wheels on electric cars, locomotives and conveyor belts. Also, in many vibrating or oscillating machines, an electric motor spins an irregular weight with more mass on one side of the axle than the other, causing the machine to shake up and down.
Electric motors are also popular in robotics. They are used to turn the wheels of vehicular robots, and servo motors are used to turn arms and legs in humanoid robots. In flying robots, along with helicopters, a motor causes a propeller or wide, flat blades to spin and create lift force, allowing vertical motion.
Electric motors are replacing hydraulic cylinders in airplanes and military equipment.
In industrial and manufacturing businesses, electric motors are used to turn saws and blades in cutting and slicing processes, and to spin gears and mixers (the latter very common in food manufacturing). Linear motors are often used to push products into containers horizontally.
Many kitchen appliances also use electric motors to accomplish various jobs. Food processors and grinders spin blades to chop and break up foods. Blenders use electric motors to mix liquids, and microwave ovens use motors to turn the tray food sits on. Toaster ovens also use electric motors to turn a conveyor to move food over heating elements.

Friday, February 19, 2010

Electron Microscope

An electron microscope is a type of microscope that produces an electronically-magnified image of a specimen for detailed observation. The electron microscope (EM) uses a particle beam of electrons to illuminate the specimen and create a magnified image of it. The microscope has a greater resolving power than a light-powered optical microscope because it uses electrons, which have wavelengths about 100,000 times shorter than those of visible light (photons); it can achieve magnifications of up to 1,000,000x, whereas light microscopes are limited to about 1,000x magnification.
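The wavelength figure follows from the de Broglie relation. A quick check, using standard physical constants and an assumed but typical 100 kV accelerating voltage:

    import math

    h = 6.626e-34   # Planck constant, J*s
    m = 9.109e-31   # electron rest mass, kg
    e = 1.602e-19   # elementary charge, C
    c = 2.998e8     # speed of light, m/s

    V = 100e3       # accelerating voltage in volts (assumed typical TEM value)

    # Relativistically corrected de Broglie wavelength of the beam electrons
    lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
    visible = 550e-9   # mid-visible green light, m

    print(f"electron wavelength at {V / 1e3:.0f} kV: {lam * 1e12:.2f} pm")
    print(f"visible light is {visible / lam:,.0f} times longer")

At 100 kV the electron wavelength comes out near 3.7 pm, roughly 150,000 times shorter than green light, in line with the "about 100,000 times" figure above.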
The electron microscope uses electrostatic and electromagnetic "lenses" to control the electron beam and focus it to form an image. These lenses are analogous to, but different from, the glass lenses of an optical microscope that form a magnified image by focusing light on or through the specimen.
Electron microscopes are used to observe a wide range of biological and inorganic specimens including microorganisms, cells, large molecules, biopsy samples, metals, and crystals. Industrially, the electron microscope is primarily used for quality control and failure analysis in semiconductor device fabrication.
History
Electron microscope constructed by Ernst Ruska in 1933.
In 1931, the German engineers Ernst Ruska and Max Knoll constructed the prototype electron microscope, capable of four-hundred-power magnification; the apparatus was a practical application of the principles of electron microscopy. Two years later, in 1933, Ruska built an electron microscope that exceeded the resolution attainable with an optical (lens) microscope. Moreover, Reinhold Rudenberg, the scientific director of Siemens-Schuckertwerke, obtained the patent for the electron microscope in May 1931. Family illness compelled the electrical engineer to devise an electrostatic microscope, because he wanted to make the poliomyelitis virus visible.
In 1937, the Siemens company financed the development work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska (Ernst's brother) to develop applications for the microscope, especially with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. The first practical electron microscope was constructed in 1938, at the University of Toronto, by Eli Franklin Burton and students Cecil Hall, James Hillier, and Albert Prebus; and Siemens produced the first commercial transmission electron microscope (TEM) in 1939. Although contemporary electron microscopes are capable of two million-power magnification, as scientific instruments they remain based upon Ruska's prototype.
Disadvantages
False-color SEM image of the filter setae of an Antarctic krill. (Raw electron microscope images carry no color information.) Pictured: first-degree filter setae with V-shaped second-degree setae pointing towards the inside of the feeding basket. The purple ball is 1 µm in diameter.
Electron microscopes are expensive to build and maintain, though the capital and running costs of confocal light microscope systems now overlap with those of basic electron microscopes. They are dynamic rather than static in their operation, requiring extremely stable high-voltage supplies, extremely stable currents to each electromagnetic coil/lens, continuously-pumped high- or ultra-high-vacuum systems, and a cooling water supply circulating through the lenses and pumps. As they are very sensitive to vibration and external magnetic fields, microscopes designed to achieve high resolutions must be housed in stable buildings (sometimes underground) with special services such as magnetic-field-cancelling systems. Some desktop low-voltage electron microscopes have TEM capabilities at very low voltages (around 5 kV) without stringent voltage supply, lens coil current, cooling water or vibration isolation requirements; as such they are much less expensive to buy and far easier to install and maintain, but do not have the same ultra-high (atomic scale) resolution capabilities as the larger instruments.
The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the electrons. One exception is the environmental scanning electron microscope, which allows hydrated samples to be viewed in a low-pressure (up to 20 Torr/2.7 kPa), wet environment.
Scanning electron microscopes usually image conductive or semi-conductive materials best. Non-conductive materials can be imaged by an environmental scanning electron microscope. A common preparation technique is to coat the sample with a several-nanometer layer of conductive material, such as gold, from a sputtering machine; however, this process has the potential to disturb delicate samples.
Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres, for example) require no special treatment before being examined in the electron microscope. Samples of hydrated materials, including almost all biological specimens, have to be prepared in various ways to stabilize them, reduce their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These processes may result in artifacts, but these can usually be identified by comparing the results obtained by using radically different specimen preparation methods. Scientists working in the field generally hold that, because results from various preparation techniques have been compared and there is no reason they should all produce similar artifacts, it is reasonable to believe that electron microscopy features correspond to those of living cells. In addition, higher-resolution work has been directly compared to results from X-ray crystallography, providing independent confirmation of the validity of this technique. Since the 1980s, analysis of cryofixed, vitrified specimens has also become increasingly used by scientists, further confirming the validity of this technique.

LASER DIODE

A laser diode is a laser where the active medium is a semiconductor similar to that found in a light-emitting diode. The most common and practical type of laser diode is formed from a p-n junction and powered by injected electric current. These devices are sometimes referred to as injection laser diodes to distinguish them from (optically) pumped laser diodes, which are more easily manufactured in the laboratory.

Theory of operation
A laser diode, like many other semiconductor devices, is formed by doping a very thin layer on the surface of a crystal wafer. The crystal is doped to produce an n-type region and a p-type region, one above the other, resulting in a p-n junction, or diode.
Laser diodes form a subset of the larger classification of semiconductor p-n junction diodes. As with any semiconductor p-n junction diode, forward electrical bias causes the two species of charge carrier - holes and electrons - to be "injected" from opposite sides of the p-n junction into the depletion region, situated at its heart. Holes are injected from the p-doped, and electrons from the n-doped, semiconductor. (A depletion region, devoid of any charge carriers, forms automatically and unavoidably as a result of the difference in chemical potential between n- and p-type semiconductors wherever they are in physical contact.)
As charge injection is a distinguishing feature of diode lasers as compared to all other lasers, diode lasers are traditionally and more formally called "injection lasers." (This terminology differentiates diode lasers, e.g., from flashlamp-pumped solid state lasers, such as the ruby laser. Interestingly, whereas the term "solid-state" was extremely apt in differentiating 1950s-era semiconductor electronics from earlier generations of vacuum electronics, it would not have been adequate to convey unambiguously the unique characteristics defining 1960s-era semiconductor lasers.) When an electron and a hole are present in the same region, they may recombine or "annihilate" with the result being spontaneous emission — i.e., the electron may re-occupy the energy state of the hole, emitting a photon with energy equal to the difference between the electron and hole states involved. (In a conventional semiconductor junction diode, the energy released from the recombination of electrons and holes is carried away as phonons, i.e., lattice vibrations, rather than as photons.) Spontaneous emission gives the laser diode below lasing threshold similar properties to an LED. Spontaneous emission is necessary to initiate laser oscillation, but it is one among several sources of inefficiency once the laser is oscillating.
The difference between the photon-emitting semiconductor laser (or LED) and conventional phonon-emitting (non-light-emitting) semiconductor junction diodes lies in the use of a different type of semiconductor, one whose physical and atomic structure confers the possibility of photon emission. These photon-emitting semiconductors are the so-called "direct bandgap" semiconductors. Silicon and germanium, which are single-element semiconductors, have bandgaps that do not align in the way needed to allow photon emission and are not considered "direct". Other materials, the so-called compound semiconductors, have crystalline structures virtually identical to those of silicon or germanium but use alternating arrangements of two different atomic species in a checkerboard-like pattern to break the symmetry. The transition between the materials in the alternating pattern creates the critical "direct bandgap" property. Gallium arsenide, indium phosphide, gallium antimonide, and gallium nitride are all examples of compound semiconductor materials that can be used to create junction diodes that emit light.
In the absence of stimulated emission (e.g., lasing) conditions, electrons and holes may coexist in proximity to one another, without recombining, for a certain time, termed the "upper-state lifetime" or "recombination time" (about a nanosecond for typical diode laser materials), before they recombine. Then a nearby photon with energy equal to the recombination energy can cause recombination by stimulated emission. This generates another photon of the same frequency, travelling in the same direction, with the same polarization and phase as the first photon. This means that stimulated emission causes gain in an optical wave (of the correct wavelength) in the injection region, and the gain increases as the number of electrons and holes injected across the junction increases. The spontaneous and stimulated emission processes are vastly more efficient in direct bandgap semiconductors than in indirect bandgap semiconductors; therefore silicon is not a common material for laser diodes.
As in other lasers, the gain region is surrounded with an optical cavity to form a laser. In the simplest form of laser diode, an optical waveguide is made on that crystal surface, such that the light is confined to a relatively narrow line. The two ends of the crystal are cleaved to form perfectly smooth, parallel edges, forming a Fabry-Perot resonator. Photons emitted into a mode of the waveguide will travel along the waveguide and be reflected several times from each end face before they are emitted. As a light wave passes through the cavity, it is amplified by stimulated emission, but light is also lost due to absorption and by incomplete reflection from the end facets. Finally, if there is more amplification than loss, the diode begins to "lase".
Some important properties of laser diodes are determined by the geometry of the optical cavity. Generally, in the vertical direction, the light is contained in a very thin layer, and the structure supports only a single optical mode in the direction perpendicular to the layers. In the lateral direction, if the waveguide is wide compared to the wavelength of light, then the waveguide can support multiple lateral optical modes, and the laser is known as "multi-mode". These laterally multi-mode lasers are adequate in cases where one needs a very large amount of power, but not a small diffraction-limited beam; for example in printing, activating chemicals, or pumping other types of lasers.
In applications where a small focused beam is needed, the waveguide must be made narrow, on the order of the optical wavelength. This way, only a single lateral mode is supported and one ends up with a diffraction-limited beam. Such single spatial mode devices are used for optical storage, laser pointers, and fiber optics. Note that these lasers may still support multiple longitudinal modes, and thus can lase at multiple wavelengths simultaneously.
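The spacing of those longitudinal modes is set by the optical length of the Fabry-Perot cavity. A minimal sketch, assuming illustrative values for an 850 nm GaAs-like device (the index, cavity length, and wavelength below are not from the text):

    # Longitudinal mode spacing of a Fabry-Perot laser diode cavity
    n = 3.5          # group refractive index of the semiconductor (assumed)
    L = 300e-6       # cavity length in metres (assumed)
    lam = 850e-9     # emission wavelength in metres (assumed)
    c = 2.998e8      # speed of light, m/s

    delta_nu = c / (2 * n * L)         # frequency spacing between adjacent modes
    delta_lam = lam**2 / (2 * n * L)   # equivalent wavelength spacing

    print(f"mode spacing: {delta_nu / 1e9:.0f} GHz ({delta_lam * 1e9:.2f} nm)")

For these values the modes sit roughly 0.34 nm apart, so several can fall under the gain peak at once.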
The wavelength emitted is a function of the band-gap of the semiconductor and the modes of the optical cavity. In general, the maximum gain will occur for photons with energy slightly above the band-gap energy, and the modes nearest the gain peak will lase most strongly. If the diode is driven strongly enough, additional side modes may also lase. Some laser diodes, such as most visible lasers, operate at a single wavelength, but that wavelength is unstable and changes due to fluctuations in current or temperature.
Due to diffraction, the beam diverges (expands) rapidly after leaving the chip, typically at 30 degrees vertically by 10 degrees laterally. A lens must be used in order to form a collimated beam like that produced by a laser pointer. If a circular beam is required, cylindrical lenses and other optics are used. For single spatial mode lasers, using symmetrical lenses, the collimated beam ends up being elliptical in shape, due to the difference in the vertical and lateral divergences. This is easily observable with a red laser pointer.


Applications of laser diodes
Laser diodes can be arrayed to produce very high power (continuous wave or pulsed) outputs. Such arrays may be used to efficiently pump solid state lasers for inertial confinement fusion or high average power drilling or burning applications.
Laser diodes are numerically the most common type of laser, with 2004 sales of approximately 733 million diode lasers, as compared to 131,000 of other types of lasers. Laser diodes find wide use in telecommunication as easily modulated and easily coupled light sources for fiber optics communication. They are used in various measuring instruments, such as rangefinders. Another common use is in barcode readers. Visible lasers, typically red but later also green, are common as laser pointers. Both low and high-power diodes are used extensively in the printing industry, both as light sources for scanning (input) of images and for very high-speed and high-resolution printing plate (output) manufacturing. Infrared and red laser diodes are common in CD players, CD-ROMs and DVD technology. Violet lasers are used in HD DVD and Blu-ray technology. Diode lasers have also found many applications in laser absorption spectrometry (LAS) for high-speed, low-cost assessment or monitoring of the concentration of various species in gas phase. High-power laser diodes are used in industrial applications such as heat treating, cladding, seam welding and for pumping other lasers, such as diode-pumped solid-state lasers.
Applications of laser diodes can be categorized in various ways. Most applications could be served by larger solid state lasers or optical parametric oscillators, but the low cost of mass-produced diode lasers makes them essential for mass-market applications. Diode lasers can be used in a great many fields; since light has many different properties (power, wavelength and spectral quality, beam quality, polarization, etc.) it is interesting to classify applications by these basic properties.
Many applications of diode lasers primarily make use of the "directed energy" property of an optical beam. In this category one might include the laser printers, bar-code readers, image scanning, illuminators, designators, optical data recording, combustion ignition, laser surgery, industrial sorting, industrial machining, and directed energy weaponry. Some of these applications are emerging while others are well-established.
Laser medicine: medicine and especially dentistry have found many new applications for diode lasers. The shrinking size of the units and their increasing user friendliness make them very attractive to clinicians for minor soft tissue procedures. The 800 nm - 980 nm units are strongly absorbed by hemoglobin, which makes them ideal for soft tissue applications where good hemostasis is necessary.
Applications which may today or in the future make use of the coherence of diode-laser-generated light include interferometric distance measurement, holography, coherent communications, and coherent control of chemical reactions.
Applications which may make use of "narrow spectral" properties of diode lasers include range-finding, telecommunications, infra-red countermeasures, spectroscopic sensing, generation of radio-frequency or terahertz waves, atomic clock state preparation, quantum key cryptography, frequency doubling and conversion, water purification (in the UV), and photodynamic therapy (where a particular wavelength of light would cause a substance such as porphyrin to become chemically active as an anti-cancer agent only where the tissue is illuminated by light).
Applications where the desired quality of laser diodes is their ability to generate ultra-short pulses of light by the technique known as "mode-locking" include clock distribution for high-performance integrated circuits, high-peak-power sources for laser-induced breakdown spectroscopy sensing, arbitrary waveform generation for radio-frequency waves, photonic sampling for analog-to-digital conversion, and optical code-division-multiple-access systems for secure communication.


Diode

In electronics, a diode is a two-terminal electronic component that conducts electric current in only one direction. The term usually refers to a semiconductor diode, the most common type today, which is a crystal of semiconductor connected to two electrical terminals, a P-N junction. A vacuum tube diode, now little used, is a vacuum tube with two electrodes: a plate and a cathode.
The most common function of a diode is to allow an electric current in one direction (called the diode's forward direction) while blocking current in the opposite direction (the reverse direction). Thus, the diode can be thought of as an electronic version of a check valve. This unidirectional behavior is called rectification, and is used to convert alternating current to direct current, and remove modulation from radio signals in radio receivers.
However, diodes can have more complicated behavior than this simple on-off action, due to their complex non-linear electrical characteristics, which can be tailored by varying the construction of their P-N junction. These are exploited in special purpose diodes that perform many different functions. Diodes are used to regulate voltage (Zener diodes), electronically tune radio and TV receivers (varactor diodes), generate radio frequency oscillations (tunnel diodes), and produce light (light emitting diodes).
Diodes were the first semiconductor electronic devices. The discovery of crystals' rectifying abilities was made by German physicist Ferdinand Braun in 1874. The first semiconductor diodes, called cat's whisker diodes were made of crystals of minerals such as galena. Today most diodes are made of silicon, but other semiconductors such as germanium are sometimes used.
Thermionic and gaseous state diodes
Thermionic diodes are thermionic-valve devices (also known as vacuum tubes, tubes, or valves), which are arrangements of electrodes surrounded by a vacuum within a glass envelope. Early examples were fairly similar in appearance to incandescent light bulbs.
In thermionic valve diodes, a current through the heater filament indirectly heats the cathode, another internal electrode treated with a mixture of barium and strontium oxides, which are oxides of alkaline earth metals; these substances are chosen because they have a small work function. (Some valves use direct heating, in which a tungsten filament acts as both heater and cathode.) The heat causes thermionic emission of electrons into the vacuum. In forward operation, a surrounding metal electrode called the anode is positively charged so that it electrostatically attracts the emitted electrons. However, electrons are not easily released from the unheated anode surface when the voltage polarity is reversed. Hence, any reverse flow is negligible.
For much of the 20th century, thermionic valve diodes were used in analog signal applications, and as rectifiers in many power supplies. Today, valve diodes are only used in niche applications such as rectifiers in electric guitar and high-end audio amplifiers as well as specialized high-voltage equipment.
Semiconductor diodes
A modern semiconductor diode is made of a crystal of semiconductor like silicon that has impurities added to it to create a region on one side that contains negative charge carriers (electrons), called n-type semiconductor, and a region on the other side that contains positive charge carriers (holes), called p-type semiconductor. The diode's terminals are attached to each of these regions. The boundary within the crystal between these two regions, called a PN junction, is where the action of the diode takes place. The crystal conducts conventional current in a direction from the p-type side (called the anode) to the n-type side (called the cathode), but not in the opposite direction.
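This one-way behavior is commonly summarized by the ideal-diode (Shockley) equation, I = Is(exp(V/nVT) - 1). A minimal sketch, with an assumed saturation current and ideality factor for a small silicon diode:

    import math

    I_s = 1e-12      # saturation current in amperes (assumed)
    n = 1.0          # ideality factor (assumed ideal)
    V_T = 0.02585    # thermal voltage at about 300 K, volts

    def diode_current(v):
        """Current through the diode at bias v volts (negative v = reverse bias)."""
        return I_s * (math.exp(v / (n * V_T)) - 1.0)

    for v in (-1.0, 0.0, 0.3, 0.6, 0.7):
        print(f"V = {v:5.2f} V -> I = {diode_current(v):.3e} A")

The exponential shows why the forward current is tiny below a few tenths of a volt, rises to milliamperes around 0.6 V, and why the reverse current saturates at the negligible -Is.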
Another type of semiconductor diode, the Schottky diode, is formed from the contact between a metal and a semiconductor rather than by a p-n junction.

Thursday, February 18, 2010

LED lamp

A light-emitting-diode lamp is a solid-state lamp that uses light-emitting diodes (LEDs) as the source of light. Since the light output of individual light-emitting diodes is small compared to incandescent and compact fluorescent lamps, multiple diodes are used together. LED lamps can be made interchangeable with other types. Most LED lamps must also include internal circuits to operate from standard AC voltage. LED lamps offer long life and high efficiency, but initial costs are higher than those of fluorescent lamps.
Application
LED lamps are used for both general lighting and special purpose lighting. Where colored light is required, LEDs come in multiple colors, which are produced without the need for filters. This improves the energy efficiency over a white light source that generates all colors of light then discards some of the visible energy in a filter.
White-light light-emitting diode lamps have the characteristics of long life expectancy and relatively low energy consumption. The LED sources are compact, which gives flexibility in designing lighting fixtures and good control over the distribution of light with small reflectors or lenses. LED lamps have no glass tubes to break, and their internal parts are rigidly supported, making them resistant to vibration and impact. With proper driver electronics design, an LED lamp can be made dimmable over a wide range; there is no minimum current needed to sustain lamp operation. LEDs using the color-mixing principle can produce a wide range of colors by changing the proportions of light generated in each primary color. This allows full color mixing in lamps with LEDs of different colors. LED lamps contain no mercury.
However, some current models are not compatible with standard dimmers. It is not currently economical to produce high levels of lighting. As a result, current LED screw-in light bulbs offer either low levels of light at a moderate cost, or moderate levels of light at a high cost. In contrast to other lighting technologies, LED light tends to be directional. This is a disadvantage for most general lighting applications, but can be an advantage for spot or flood lighting.
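Returning to the drive electronics mentioned above: the simplest current-setting element is a series resistor, sized as R = (Vsupply - Vforward)/Iforward. A sketch with assumed example values (a 5 V supply and a generic 2 V, 20 mA LED):

    # Sizing the series resistor that sets an LED's operating current
    V_supply = 5.0     # volts (assumed)
    V_forward = 2.0    # volts, typical red LED forward drop (assumed)
    I_forward = 0.020  # amperes, 20 mA target current (assumed)

    R = (V_supply - V_forward) / I_forward
    P = (V_supply - V_forward) * I_forward   # power dissipated in the resistor

    print(f"series resistor: {R:.0f} ohms, dissipating {P * 1000:.0f} mW")

This yields 150 ohms dissipating 60 mW; in practice the nearest standard resistor value at or above the result is chosen. Mains-voltage lamps replace the resistor with a switching driver, but the current-setting goal is the same.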

The LED light bulb
As of 2010, only a few LED light bulb options are available as replacements for the ordinary household incandescent or compact fluorescent light bulb. One drawback of the existing LED bulbs is that they offer limited brightness, with the brightest bulbs equivalent to a 45-60 W incandescent bulb. Most LED bulbs are not able to be dimmed, and their brightness retains some directionality. The bulbs are also expensive, costing $40–50 per bulb, whereas the ordinary incandescent bulb costs less than a dollar. However, these bulbs are slightly more power efficient than the compact fluorescent bulbs and offer extraordinary lifespans of 30,000 or more hours. An LED light bulb can be expected to last 25–30 years under normal use. LED bulbs will not dim over time and they are mercury free, unlike the compact fluorescent bulbs. Recent research has made bulbs available with a variety of color characteristics, much like the incandescent bulb. With the savings in energy and maintenance costs, these bulbs can be attractive. It is expected that with additional development and growing popularity, the cost of these bulbs will eventually decline.
Fluorescent tubes with modern electronic ballasts commonly average 50 to 67 lumens/W overall. Most compact fluorescents rated at 13 W or more with integral electronic ballasts achieve about 60 lumens/W, comparable to the LED bulb. A 60 W incandescent bulb offers about 850 lumens, or 14 lumens/W.
Several companies offer LED lamps for general lighting purposes. The C. Crane Company has a product called "Geobulb". The GeoBulb II uses only 7.5 W (59 lumens/W). In October 2009, the GeoBulb II was superseded by the GeoBulb-3, which is brighter and longer lasting. The company also offers wedge-base lamps for replacement in low voltage fixtures. In the Netherlands, a company called Lemnis Lighting offers a dimmable LED lamp called Pharox. The company Eternleds Inc. offers a bulb called HydraLux-4 which uses liquid cooling of the LED chips.

Light-emitting diode

A light-emitting diode (LED) is a semiconductor light source. LEDs are used as indicator lamps in many devices, and are increasingly used for lighting. Introduced as a practical electronic component in 1962, early LEDs emitted low-intensity red light, but modern versions are available across the visible, ultraviolet and infrared wavelengths, with very high brightness.
The LED is based on the semiconductor diode. When a diode is forward biased (switched on), electrons are able to recombine with holes within the device, releasing energy in the form of photons. This effect is called electroluminescence, and the color of the light (corresponding to the energy of the photon) is determined by the energy gap of the semiconductor. An LED is usually small in area (less than 1 mm2), and integrated optical components are used to shape its radiation pattern and assist in reflection. LEDs present many advantages over incandescent light sources including lower energy consumption, longer lifetime, improved robustness, smaller size, faster switching, and greater durability and reliability. However, they are relatively expensive and require more precise current and heat management than traditional light sources. Current LED products for general lighting are more expensive to buy than fluorescent lamp sources of comparable output.
They also enjoy use in applications as diverse as replacements for traditional light sources in automotive lighting (particularly indicators) and in traffic signals. The compact size of LEDs has allowed new text and video displays and sensors to be developed, while their high switching rates are useful in advanced communications technology.
Technology
Physics
Like a normal diode, the LED consists of a chip of semiconducting material doped with impurities to create a p-n junction. As in other diodes, current flows easily from the p-side, or anode, to the n-side, or cathode, but not in the reverse direction. Charge-carriers—electrons and holes—flow into the junction from electrodes with different voltages. When an electron meets a hole, it falls into a lower energy level, and releases energy in the form of a photon.
The wavelength of the light emitted, and therefore its color, depends on the band gap energy of the materials forming the p-n junction. In silicon or germanium diodes, the electrons and holes recombine by a non-radiative transition which produces no optical emission, because these are indirect band gap materials. The materials used for the LED have a direct band gap with energies corresponding to near-infrared, visible or near-ultraviolet light.
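That band-gap dependence can be made concrete with lambda = hc/E_gap. The band-gap energies below are approximate room-temperature values for a few common LED materials:

    # Emission wavelength from band gap energy: lambda = h*c / E_gap
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    eV = 1.602e-19  # joules per electronvolt

    materials = {
        "GaAs (infrared)": 1.42,   # eV, approximate
        "GaAsP (red)": 1.9,        # eV, approximate
        "InGaN (blue)": 2.8,       # eV, approximate
    }

    for name, E_gap in materials.items():
        lam = h * c / (E_gap * eV)
        print(f"{name}: {lam * 1e9:.0f} nm")

This reproduces the familiar progression: GaAs near 870 nm in the infrared, GaAsP in the red around 650 nm, and InGaN in the blue around 440 nm.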
LED development began with infrared and red devices made with gallium arsenide. Advances in materials science have made possible the production of devices with ever-shorter wavelengths, producing light in a variety of colors.
LEDs are usually built on an n-type substrate, with an electrode attached to the p-type layer deposited on its surface. P-type substrates, while less common, occur as well. Many commercial LEDs, especially GaN/InGaN, also use a sapphire substrate.
Most materials used for LED production have very high refractive indices. This means that much light will be reflected back into the material at the material/air surface interface. Therefore, light extraction in LEDs is an important aspect of LED production, subject to much research and development.
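A back-of-the-envelope picture of that extraction problem: only rays striking the surface inside the critical angle can escape the chip. Assuming a GaAs-like refractive index of 3.5 (an illustrative value) and ignoring Fresnel losses:

    import math

    n_semiconductor = 3.5   # refractive index of the chip (assumed, GaAs-like)
    n_air = 1.0

    theta_c = math.asin(n_air / n_semiconductor)   # critical angle for total internal reflection
    fraction = (1 - math.cos(theta_c)) / 2         # share of isotropic emission inside the
                                                   # escape cone of one face

    print(f"critical angle: {math.degrees(theta_c):.1f} deg")
    print(f"extracted through top face: {fraction * 100:.1f} %")

Only about 2 % of the light escapes through a single flat face under these assumptions, which is why shaped dies, roughened surfaces and encapsulants are the subject of so much development work.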

Sunday, February 14, 2010

ELECTRIC INVERTER

An inverter is an electrical device that converts direct current (DC) to alternating current (AC); the converted AC can be at any required voltage and frequency with the use of appropriate transformers, switching, and control circuits.
Static inverters have no moving parts and are used in a wide range of applications, from small switching power supplies in computers, to large electric utility high-voltage direct current applications that transport bulk power. Inverters are commonly used to supply AC power from DC sources such as solar panels or batteries.
The electrical inverter is a high-power electronic oscillator. It is so named because early mechanical AC to DC converters were made to work in reverse, and thus were "inverted", to convert DC to AC.
The inverter performs the opposite function of a rectifier.
Circuit description
Top: Simple inverter circuit shown with an electromechanical switch, and its automatic equivalent: an auto-switching device implemented with two transistors and a split-winding auto-transformer in place of the mechanical switch.
Square waveform with fundamental sine wave component, 3rd harmonic and 5th harmonic
Basic designs
In one simple inverter circuit, DC power is connected to a transformer through the centre tap of the primary winding. A switch is rapidly switched back and forth to allow current to flow back to the DC source following two alternate paths through one end of the primary winding and then the other. The alternation of the direction of current in the primary winding of the transformer produces alternating current (AC) in the secondary circuit.
The electromechanical version of the switching device includes two stationary contacts and a spring supported moving contact. The spring holds the movable contact against one of the stationary contacts and an electromagnet pulls the movable contact to the opposite stationary contact. The current in the electromagnet is interrupted by the action of the switch so that the switch continually switches rapidly back and forth. This type of electromechanical inverter switch, called a vibrator or buzzer, was once used in vacuum tube automobile radios. A similar mechanism has been used in door bells, buzzers and tattoo guns.
As they became available with adequate power ratings, transistors and various other types of semiconductor switches have been incorporated into inverter circuit designs.
Output waveforms
The switch in the simple inverter described above produces a square voltage waveform as opposed to the sinusoidal waveform that is the usual waveform of an AC power supply. Using Fourier analysis, periodic waveforms are represented as the sum of an infinite series of sine waves. The sine wave that has the same frequency as the original waveform is called the fundamental component. The other sine waves, called harmonics, that are included in the series have frequencies that are integral multiples of the fundamental frequency.
The quality of the inverter output waveform can be expressed by using the Fourier analysis data to calculate the total harmonic distortion (THD). The total harmonic distortion is the square root of the sum of the squares of the harmonic voltages divided by the fundamental voltage:

THD = √(V2² + V3² + V4² + …) / V1
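For the ideal square wave produced by the simple inverter above, the n-th harmonic amplitude is V1/n for odd n and zero for even n, so the THD can be computed directly. A minimal sketch:

    import math

    N = 10001   # highest harmonic included in the sum (truncation choice)

    # Sum the squared relative amplitudes (1/n) of the odd harmonics above the fundamental
    sum_sq = sum((1.0 / n) ** 2 for n in range(3, N + 1, 2))
    thd = math.sqrt(sum_sq)   # relative to the fundamental V1 = 1

    print(f"THD of an ideal square wave: {thd * 100:.1f} %")

The sum converges to about 48.3 %, which is why unfiltered square-wave inverters are considered poor supplies for sensitive loads.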
The quality of output waveform that is needed from an inverter depends on the characteristics of the connected load. Some loads need a nearly perfect sine wave voltage supply in order to work properly. Other loads may work quite well with a square wave voltage.

JET ENGINE

A jet engine is a reaction engine that discharges a fast moving jet of fluid to generate thrust in accordance with Newton's laws of motion. This broad definition of jet engines includes turbojets, turbofans, rockets, ramjets, pulse jets and pump-jets. In general, most jet engines are internal combustion engines but non-combusting forms also exist.
In common parlance, the term 'jet engine' loosely refers to an internal combustion duct engine. These typically consist of an engine with a rotary (rotating) air compressor powered by a turbine ("Brayton cycle"), with the leftover power providing thrust via a propelling nozzle. These types of jet engines are primarily used by jet aircraft for long distance travel. Early jet aircraft used turbojet engines which were relatively inefficient for subsonic flight. Modern subsonic jet aircraft usually use high-bypass turbofan engines which give high speeds, as well as (over long distances) better fuel efficiency than many other forms of transport.
Uses
Jet engines are usually used as aircraft engines for jet aircraft. They are also used for cruise missiles and unmanned aerial vehicles.
In the form of rocket engines they are used for fireworks, model rocketry, spaceflight, and military missiles.
Jet engines have also been used to propel high-speed cars, particularly drag racers, with the all-time record held by a rocket car. A turbofan-powered car, ThrustSSC, currently holds the land speed record.
Jet engine designs are frequently modified to turn them into gas turbine engines which are used in a wide variety of industrial applications. These include electrical power generation; powering water, natural gas, or oil pumps; and providing propulsion for ships and locomotives. Industrial gas turbines can create up to 50,000 shaft horsepower. Many of these engines are derived from older military turbojets such as the Pratt & Whitney J57 and J75 models. There is also a derivative of the P&W JT8D low-bypass turbofan that creates up to 35,000 HP.


General physical principles
All jet engines are reaction engines that generate thrust by emitting a jet of fluid rearwards at relatively high speed. The forces on the inside of the engine needed to create this jet give a strong thrust on the engine which pushes the craft forwards.
Jet engines make their jet either from propellant carried in tanks attached to the engine (as in a rocket) or, in duct engines (those commonly used on aircraft), by ingesting an external fluid (very typically air) and expelling it at higher speed.
Thrust
The motion impulse of the engine is equal to the fluid mass multiplied by the speed at which the engine emits this mass:
I = mc
where m is the fluid mass per second and c is the exhaust speed. In other words, a vehicle gets the same thrust if it outputs a lot of exhaust very slowly, or a little exhaust very quickly. (In practice parts of the exhaust may be faster than others, but it is the average momentum that matters, and thus the important quantity is called the effective exhaust speed - c here.)
However, when a vehicle moves with certain velocity v, the fluid moves towards it, creating an opposing ram drag at the intake:
mv
Most types of jet engine have an intake, which provides the bulk of the fluid exiting the exhaust. Conventional rocket motors, however, do not have an intake, the oxidizer and fuel both being carried within the vehicle. Therefore, rocket motors do not have ram drag; the gross thrust of the nozzle is the net thrust of the engine. Consequently, the thrust characteristics of a rocket motor are different from that of an air breathing jet engine, and thrust is independent of speed.
The jet engine with an intake duct is only useful if the velocity of the gas from the engine, c, is greater than the vehicle velocity, v, as the net engine thrust is the same as if the gas were emitted with the velocity c − v. So the thrust is actually equal to
S = m(c − v)
This equation shows that as v approaches c, a greater mass of fluid must go through the engine to continue to accelerate at the same rate, but all engines have a designed limit on this. Additionally, the equation implies that the vehicle can't accelerate past its exhaust velocity as it would have negative thrust.
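Putting numbers to S = m(c − v): the mass flow and exhaust speed below are assumed illustrative figures, not data for any particular engine:

    # Net thrust of an air-breathing engine, S = m * (c - v)
    m_dot = 100.0   # kg/s of air through the engine (assumed)
    c = 600.0       # m/s effective exhaust speed (assumed)

    for v in (0.0, 250.0, 500.0, 600.0):
        S = m_dot * (c - v)
        print(f"flight speed {v:5.0f} m/s -> net thrust {S / 1000:6.1f} kN")

The printed values show the thrust falling from 60 kN at rest to zero as the flight speed approaches the exhaust speed, exactly as the equation predicts.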
Energy efficiency


Dependence of the energy efficiency (η) upon the vehicle speed/exhaust speed ratio (v/c) for air-breathing jet and rocket engines
Energy efficiency (η) of jet engines installed in vehicles has two main components: cycle efficiency (ηc), how efficiently the engine can accelerate the jet, and propulsive efficiency (ηp), how much of the energy of the jet ends up in the vehicle body rather than being carried away as kinetic energy of the jet.
The overall energy efficiency η is simply:
η = ηpηc
For all jet engines the propulsive efficiency is highest when the engine emits an exhaust jet at a speed that is the same as, or nearly the same as, the vehicle velocity as this gives the smallest residual kinetic energy.
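For an air-breathing engine this trade-off is captured by the Froude propulsive efficiency, ηp = 2/(1 + c/v), which reaches 1 when the exhaust speed c matches the flight speed v. A sketch with assumed speeds:

    # Froude propulsive efficiency of an air-breathing jet
    def eta_p(v, c):
        """Propulsive efficiency at flight speed v and exhaust speed c (m/s)."""
        return 2.0 / (1.0 + c / v)

    c = 600.0   # m/s exhaust speed (assumed)
    for v in (150.0, 300.0, 450.0, 600.0):
        print(f"v = {v:3.0f} m/s: eta_p = {eta_p(v, c):.2f}")

Efficiency climbs from 0.40 at a quarter of the exhaust speed to 1.00 when the two speeds are equal, which is why high-bypass engines that move more air more slowly are favoured for subsonic flight.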
In addition to propulsive efficiency, another factor is cycle efficiency; essentially, a jet engine is a form of heat engine. Heat engine efficiency is determined by the ratio of the temperature reached in the engine to the temperature at which the exhaust leaves the nozzle, which in turn is limited by the overall pressure ratio that can be achieved. Cycle efficiency is highest in rocket engines (~60+%), as they can achieve extremely high combustion temperatures and can have very large, energy-efficient nozzles. Cycle efficiency in turbojets and similar engines is nearer to 30%, because the practical combustion temperatures and nozzle efficiencies are much lower.

Thursday, February 11, 2010

Space-based solar power

Space-based solar power (SBSP) (or historically space solar power (SSP)) is a system for the collection of solar power in space, for use on Earth. SBSP differs from the usual method of solar power collection in that the solar panels used to collect the energy would reside on a satellite in orbit, often referred to as a solar power satellite (SPS), rather than on Earth's surface. In space, collection of the Sun's energy is unaffected by the day/night cycle, weather, seasons, or the filtering effect of Earth's atmospheric gases.
The World Radiation Centre's 1985 standard extraterrestrial spectrum for solar irradiance is 1367 W/m2. The integrated total terrestrial solar irradiance is 950 W/m2. Therefore, extraterrestrial solar irradiance is 144% of the maximum terrestrial irradiance. A major interest in SBSP stems from the length of time the solar collection panels can be exposed to a consistently high amount of solar radiation. For most of the year, a satellite-based solar panel can collect power 24 hours per day, whereas a land-based station can collect for only 12 hours per day, with lower power collection rates around the sunrise and sunset hours.
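A rough daily-energy comparison implied by those figures, treating the 950 W/m2 terrestrial value as a constant daytime peak (a deliberately optimistic simplification; real ground averages are lower because of sun angle and weather):

    # Daily energy collected per square metre, orbit vs ground
    orbit_irradiance = 1367.0    # W/m2, extraterrestrial standard
    ground_irradiance = 950.0    # W/m2, peak terrestrial value

    orbit_daily = orbit_irradiance * 24     # Wh per m2 per day, round the clock
    ground_daily = ground_irradiance * 12   # Wh per m2 per day, optimistic upper bound

    print(f"orbit:  {orbit_daily / 1000:.1f} kWh/m2/day")
    print(f"ground: {ground_daily / 1000:.1f} kWh/m2/day (upper bound)")
    print(f"ratio:  {orbit_daily / ground_daily:.1f}x")

Even against this generous ground figure, the orbital panel collects roughly three times the daily energy per square metre.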
The collection of solar energy in space for use on Earth introduces the new problem of transmitting energy from the collection point, in space, to the place where the energy would be used, on Earth's surface. Since wires extending from Earth's surface to an orbiting satellite would be impractical, many SBSP designs have proposed the use of microwave beams for wireless power transmission. The collecting satellite would convert solar energy into electrical energy, which would then be used to power a microwave emitter directed at a collector on the Earth's surface. Dynamic solar thermal power systems are also being investigated.
Some problems normally associated with terrestrial solar power collection would be eliminated by such a design, such as dependence on meteorological and weather conditions, and the panels being prone to corrosion. Other problems may take their place though, such as cumulative radiation damage or micrometeoroid impacts.
Advantages
The SBSP concept is attractive because space has several major advantages over the Earth's surface for the collection of solar power. There is no air in space, so the collecting surfaces would receive much more intense sunlight, unaffected by weather. In geostationary orbit, an SPS would be illuminated over 99% of the time. The SPS would be in Earth's shadow on only a few days at the spring and fall equinoxes; and even then for a maximum of 75 minutes late at night when power demands are at their lowest. This characteristic of SBSP avoids the expense of storage facilities (dams, oil storage tanks, coal dumps) necessary in many Earth-based power generation systems. Additionally, SBSP would have fewer or none of the ecological (or political) consequences of fossil fuel systems.
SBSP would also be applicable on a global scale. Nuclear power especially is something many governments are reluctant to sell to developing nations, where political pressures might lead to proliferation of nuclear weapons technology. SBSP poses no known potential threat.

Hydraulic Hybrid Vehicles

In today's day and age we need to consider the alternatives when it comes to the environment, and to realize how important it is to choose the more environmentally friendly options available to us. The ozone layer is what keeps us alive, and once that is destroyed, we are toast. Therefore we need to consider what we can do to save and preserve this earth rather than let it become unlivable. A hydraulic hybrid vehicle is one answer that can make a difference.
People always dispute whether doing something small actually makes a difference. The answer is that every small deed counts. If everyone on this planet did something every day to preserve the earth, be it buying environmentally friendly spray cans or taking the bus to work instead of driving, those small deeds would add up to the biggest difference.
The best alternative to a standard car would be the hydraulic hybrid vehicle. These vehicles are a new way to drive without causing damage to the ozone layer. Hybrid cars use fluid as an additional source of power. A hydraulic hybrid vehicle also compares favorably with the battery-powered car, whose batteries can be hazardous to the environment. In hydraulic hybrid vehicles, the engine of the car, run by diesel or gas, drives the hydraulic pump, which charges the high-pressure accumulator.
As the years drag on we become more and more concerned about the earth and what we are doing to it. This is why technologists keep looking for alternatives to hazardous inventions, and for new ways we can save money and enjoy a better quality of life. To that end, they came up with the hydraulic hybrid, which is not only cleaner for the environment but also saves you money on fuel. When a hybrid car stops or runs in idle mode, it automatically shuts off the gasoline engine while the car continues running on its secondary motor. If everyone could buy this kind of car, the air would be a lot less polluted in a couple of years' time.
Hydraulic pump drives have a highly important role in different kinds of pumping systems because they determine how much fluid can be delivered each time the pump is operated. Over the years hydraulic pumps have developed, and since 2009 have been electronically operated; they are used not only in producing and distributing water, but also in extracting oil and fuel.
It is important to be environmentally conscious of how you spend your day. Always try to help by doing something small, as small deeds go a long way. Hydraulic hybrid cars are the way of the future, and technologists are coming up with new ways every day to save our precious earth.

Wednesday, February 10, 2010

INTEGRATED CIRCUIT (IC)

In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured on the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.
A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.
Introduction
Synthetic detail of an integrated circuit through four layers of planarized copper interconnect, down to the polysilicon (pink), wells (greyish), and substrate (green)
Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.
There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Furthermore, much less material is used to construct a circuit as a packaged IC die than as a discrete circuit. Performance is high since the components switch quickly and consume little power (compared to their discrete counterparts) because the components are small and close together. As of 2006, chip areas range from a few square millimeters to around 350 mm², with up to 1 million transistors per mm².
Invention
Jack Kilby's original integrated circuit
The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence, Geoffrey W.A. Dummer (1909-2002), who published it at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952. He presented the idea publicly at many symposia to propagate it.
Dummer unsuccessfully attempted to build such a circuit in 1956.
The integrated circuit can be credited as being invented by both Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor working independently of each other. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”
Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. Robert Noyce also came up with his own idea of an integrated circuit, half a year after Kilby. Noyce's chip solved many practical problems that the microchip developed by Kilby had not. Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.
Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate arranged in a 2-stage amplifier arrangement. Jacobi discloses small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.
A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
The aforementioned Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.
Generations
SSI, MSI and LSI
The first integrated circuits contained only a few transistors. Called "small-scale integration" (SSI), digital circuits of this era contained transistors numbering in the tens and provided a few logic gates, while early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The term large-scale integration was first used by IBM scientist Rolf Landauer when describing the theoretical concept; from it came the terms SSI, MSI, VLSI, and ULSI.
SSI circuits were crucial to early aerospace projects, and vice versa. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated integrated-circuit technology, while the Minuteman missile forced it into mass production.
These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements that brought costs from $1000 per circuit (in 1960 dollars) to merely $25 per circuit (in 1963 dollars). Integrated circuits began to appear in consumer products at the turn of the decade, a typical application being FM intercarrier sound processing in television receivers.
The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).
They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.
Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had fewer than 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
VLSI
The final step in the development process, starting in the 1980s and continuing through the present, was "very large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2009.
There was no single breakthrough that allowed this increase in complexity, though many factors helped. Manufacturing moved to smaller design rules and cleaner fabs, allowing the production of chips with more transistors at adequate yield, as summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Among other factors, better texts, such as the landmark textbook by Mead and Conway, helped schools educate more designers.
In 1986 the first one-megabit RAM chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.
ULSI, WSI, SOC and 3D-IC
To reflect further growth in complexity, the term ULSI, which stands for "ultra-large-scale integration", was proposed for chips with a complexity of more than 1 million transistors.
Wafer-scale integration (WSI) is a system of building very-large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging).
A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.
Advances in integrated circuits
Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces which in turn allows low power logic (such as CMOS) to be used at fast switching speeds.
ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality—see Moore's law which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves—the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).
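As a rough illustration of that doubling rule, the short Python sketch below projects transistor counts forward from the million-transistor microprocessors of 1989 mentioned earlier. The baseline values are assumptions chosen only to match figures in this article, not industry data.

```python
# A naive sketch of Moore's law as stated above: transistor count
# doubling every two years, starting from roughly 1 million
# transistors on a microprocessor in 1989 (a figure from this article).

def transistors(year, base_year=1989, base_count=1_000_000):
    """Projected per-chip transistor count if doubling occurs every 2 years."""
    return base_count * 2 ** ((year - base_year) / 2)

for y in (1989, 1999, 2005, 2009):
    print(y, f"{transistors(y):.2e}")
# This naive curve projects only ~2.6e8 transistors in 2005; the
# billion-transistor chips that actually shipped that year show real
# scaling ran ahead of strict doubling, as die sizes also grew.
```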
Classification
Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).
Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.
Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, mixing, etc. Analog ICs ease the burden on circuit designers by having expertly designed analog circuits available instead of designing a difficult analog circuit from scratch.
ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.
Manufacturing
Fabrication
Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.
Schematic structure of a CMOS chip, as built in the early 2000s. The graphic shows LDD-MISFET's on an SOI substrate with five metallization layers and solder bump for flip-chip bonding. It also shows the section for FEOL (front-end of line), BEOL (back-end of line) and first parts of back-end process.
The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid state vacuum tube by researchers like William Shockley at Bell Laboratories starting in the 1930s. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for integrated circuits (ICs) although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.
Semiconductor ICs are fabricated in a layer process which includes these key process steps:
- Imaging
- Deposition
- Etching
The main process steps are supplemented by doping and cleaning.
Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium) tracks deposited on them.
- Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.
- In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.
- Capacitive structures, very much like the parallel conducting plates of a traditional electrical capacitor, are sized according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs.
- Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance (see the sketch after this list).
- More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
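The resistor and capacitor relations just described reduce to two simple formulas: R = (sheet resistance) x (length/width), and the parallel-plate capacitance C = eps0 * eps_r * A / d. Here is a minimal Python sketch; the component dimensions and the SiO2 dielectric are illustrative assumptions, not values from this article.

```python
# Sketch of the two on-chip component relations described above.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def onchip_resistance(sheet_res_ohm_sq, length_um, width_um):
    """Resistance of a meandering stripe: the number of 'squares'
    (length/width) times the sheet resistance (ohms per square)."""
    return sheet_res_ohm_sq * (length_um / width_um)

def plate_capacitance(area_um2, gap_nm, eps_r=3.9):
    """Parallel-plate capacitance; eps_r = 3.9 is SiO2, a common
    on-chip insulator (an assumed choice here)."""
    area_m2 = area_um2 * 1e-12
    gap_m = gap_nm * 1e-9
    return EPS0 * eps_r * area_m2 / gap_m

# 100 squares of 50 ohm/sq polysilicon -> 5000 ohms
print(onchip_resistance(50, length_um=500, width_um=5))
# A 100 um^2 plate over a 10 nm oxide -> ~0.35 pF
print(plate_capacitance(100, 10))
```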
Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.
A random access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.
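The wavelength limit mentioned above can be made concrete with the Rayleigh resolution criterion, R = k1 * lambda / NA, which relates the smallest printable feature to the exposure wavelength. The sketch below uses assumed, textbook-typical k1 and NA values, not figures from this article.

```python
# Why visible light cannot "expose" modern layers: the Rayleigh
# resolution criterion, R = k1 * lambda / NA.

def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.4):
    """Smallest printable half-pitch for a given exposure wavelength.
    k1 and NA here are assumed typical values."""
    return k1 * wavelength_nm / numerical_aperture

print(min_feature_nm(550, 0.93))  # green visible light: ~237 nm, far too coarse
print(min_feature_nm(193, 0.93))  # deep-UV excimer line: ~83 nm; aggressive k1
                                  # and immersion push this toward the 65 nm and
                                  # 45 nm processes named below
```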
Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are welded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Test cost can account for over 25% of the cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or higher cost devices.
As of 2005, a fabrication facility (commonly known as a fab) costs over a billion US dollars to construct, because much of the operation is automated. The most advanced processes employ the following techniques:
- The wafers are up to 300 mm in diameter (wider than a common dinner plate).
- Use of a 65 nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD are using 45 nanometers for their CPU chips, and IBM and AMD are developing a 45 nm process using immersion lithography.
- Copper interconnects, where copper wiring replaces aluminium.
- Low-K dielectric insulators.
- Silicon on insulator (SOI).
- Strained silicon, in a process used by IBM known as strained silicon directly on insulator (SSDOI).

GEO-ENGINEERING POSSIBLE SOLUTIONS

The modern concept of geoengineering (or climate engineering) is usually taken to mean proposals to deliberately manipulate the Earth's climate to counteract the effects of global warming from greenhouse gas emissions. The National Academy of Sciences defined geoengineering as
"options that would involve large-scale engineering of our environment in order to combat or counteract the effects of changes in atmospheric chemistry." Geoengineering accompanies mitigation and adaptation to form a three-stranded "MAG" approach to tackling global warming, notably advocated by the Institution of Mechanical Engineers.
Some geoengineering techniques are based on carbon sequestration. These seek to reduce greenhouse gases in the atmosphere directly, by direct methods (e.g. carbon dioxide air capture) or indirect methods (e.g. ocean iron fertilization), and can be regarded as mitigation of global warming. Alternatively, solar radiation management techniques (e.g. stratospheric sulfur aerosols) do not reduce greenhouse gas concentrations and can only address the warming effects of carbon dioxide and other gases; they cannot address problems such as ocean acidification, which is expected as a result of rising carbon dioxide levels.
Examples of proposed geoengineering techniques include the production of stratospheric sulfur aerosols, suggested by Paul Crutzen, and cloud reflectivity enhancement. Most techniques have at least some side effects. To date, no large-scale geoengineering projects have been undertaken. Some limited tree planting and cool roof projects are already underway, ocean iron fertilization is at an advanced stage of research, with small-scale research trials and global modelling having been completed, and field research into sulfur aerosols has also started.
Some commentators have suggested that consideration of geoengineering presents a moral hazard because it threatens to reduce the political and popular pressure for emissions reduction. Typically, the scientists and engineers proposing geoengineering strategies do not suggest them as an alternative to emissions control, but rather as an accompanying strategy. Reviews of geoengineering techniques have emphasised that they are not substitutes for emission controls and have identified potentially stronger and weaker schemes.
POSSIBLE GEO-ENGINEERING SOLUTIONS
- STRATOSPHERIC AEROSOLS
Spray reflective sulphur compounds into the high atmosphere to reflect sunlight. Possible side effects include changes to global rainfall.
- ARTIFICIAL TREES
Devices that use a chemical process to soak up carbon dioxide from the air. Technically possible, but very expensive on a meaningful scale.
- SPACE MIRROR
A collection of millions or even trillions of small mirrors, rather than one giant orbiting parasol, placed in space to block the sun. Very expensive and impractical with current technology.
- OCEAN FERTILISATION
Dump iron into the sea to boost plankton growth and soak up carbon dioxide from the atmosphere. Doubts about how deep the plankton would sink raise questions about how long the carbon would stay locked away.
- CLOUD WHITENING
Fleets of sailing ships strung across the world's oceans could spray seawater into the sky, where it would evaporate and leave behind shiny salt crystals that brighten clouds, reflecting sunlight back into space. Might interfere with wind and rain patterns.

Friday, February 5, 2010

AIDS

Acquired immune deficiency syndrome or acquired immunodeficiency syndrome (AIDS) is a disease of the human immune system caused by the human immunodeficiency virus (HIV).
This condition progressively reduces the effectiveness of the immune system and leaves individuals susceptible to opportunistic infections and tumors. HIV is transmitted through direct contact of a mucous membrane or the bloodstream with a bodily fluid containing HIV, such as blood, semen, vaginal fluid, preseminal fluid, and breast milk.
This transmission can involve anal, vaginal or oral sex, blood transfusion, contaminated hypodermic needles, exchange between mother and baby during pregnancy, childbirth, breastfeeding or other exposure to one of the above bodily fluids.
AIDS is now a pandemic. In 2007, it was estimated that 33.2 million people lived with the disease worldwide, and that AIDS killed an estimated 2.1 million people, including 330,000 children. Over three-quarters of these deaths occurred in sub-Saharan Africa, retarding economic growth and destroying human capital.
Genetic research indicates that HIV originated in west-central Africa during the late nineteenth or early twentieth century. AIDS was first recognized by the U.S. Centers for Disease Control and Prevention in 1981, and its cause, HIV, was identified in the early 1980s.
Although treatments for AIDS and HIV can slow the course of the disease, there is currently no vaccine or cure. Antiretroviral treatment reduces both the mortality and the morbidity of HIV infection, but these drugs are expensive and routine access to antiretroviral medication is not available in all countries. Due to the difficulty in treating HIV infection, preventing infection is a key aim in controlling the AIDS pandemic, with health organizations promoting safe sex and needle-exchange programmes in attempts to slow the spread of the virus.
Symptoms
The symptoms of AIDS are primarily the result of conditions that do not normally develop in individuals with healthy immune systems. Most of these conditions are infections caused by bacteria, viruses, fungi and parasites that are normally controlled by the elements of the immune system that HIV damages.
Opportunistic infections are common in people with AIDS. These infections affect nearly every organ system.
People with AIDS also have an increased risk of developing various cancers such as Kaposi's sarcoma, cervical cancer and cancers of the immune system known as lymphomas. Additionally, people with AIDS often have systemic symptoms of infection like fevers, sweats (particularly at night), swollen glands, chills, weakness, and weight loss. The specific opportunistic infections that AIDS patients develop depend in part on the prevalence of these infections in the geographic area in which the patient lives.
Cause
AIDS is the most severe stage of infection with HIV. HIV is a retrovirus that primarily infects vital organs of the human immune system such as CD4+ T cells (a subset of T cells), macrophages and dendritic cells. It directly and indirectly destroys CD4+ T cells.
Once HIV has killed so many CD4+ T cells that there are fewer than 200 of these cells per microliter (µL) of blood, cellular immunity is lost. Acute HIV infection progresses over time to clinically latent HIV infection, then to early symptomatic HIV infection, and later to AIDS, which is identified on the basis of the number of CD4+ T cells remaining in the blood and/or the presence of certain infections, as noted above.
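As a minimal sketch of the count-based criterion just described, the hypothetical helper below checks the 200 cells/µL threshold; it deliberately ignores the infection-based criteria that real clinical staging also weighs.

```python
# Sketch (assumed helper, not a clinical tool) of the CD4-count
# criterion above: fewer than 200 CD4+ T cells per microliter of
# blood marks the progression to AIDS.

AIDS_CD4_THRESHOLD = 200  # CD4+ T cells per microliter (uL)

def meets_cd4_criterion(cd4_per_ul):
    """True when the CD4 count alone satisfies the AIDS definition;
    real staging also weighs specific opportunistic infections."""
    return cd4_per_ul < AIDS_CD4_THRESHOLD

for count in (650, 350, 180):
    label = "AIDS-defining" if meets_cd4_criterion(count) else "not by count alone"
    print(count, "->", label)
```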
In the absence of antiretroviral therapy, the median time from HIV infection to AIDS is nine to ten years, and the median survival time after developing AIDS is only 9.2 months. However, the rate of clinical disease progression varies widely between individuals, from two weeks up to 20 years.
Many factors affect the rate of progression. These include factors that influence the body's ability to defend against HIV such as the infected person's general immune function. Older people have weaker immune systems, and therefore have a greater risk of rapid disease progression than younger people.
Poor access to health care and the existence of coexisting infections such as tuberculosis may also predispose people to faster disease progression. The infected person's genetic inheritance plays an important role, and some people are resistant to certain strains of HIV; for example, people with the homozygous CCR5-Δ32 variation are resistant to infection with certain strains. HIV is genetically variable and exists as different strains, which cause different rates of clinical disease progression.
New AIDS treatment
Herbal immune system tea: a mix of traditional Chinese herbs said to activate the immune system to fight viruses.
This herbal immune system tea was developed in southern China, where HIV/AIDS has become a serious epidemic, having spread years ago through migrant workers to Thailand. There, in northern Thailand, the tea was first tested and has since become popular. The tea may help restore the body's immune function, and a large number of viral secondary infections seem to be addressed at the same time. While this is a new way of looking at the treatment of HIV, the formula itself is a sophisticated mixture of traditional Chinese herbs reputed to have positive effects on the immune system. The makers have also developed a new formulation called i-drops, designed as honey droplets; these are easier to take, and the honey increases resorption of the ingredients.

Thursday, February 4, 2010

CANCER

Cancer (medical term: malignant neoplasm) is a class of diseases in which a group of cells display uncontrolled growth (division beyond the normal limits), invasion (intrusion on and destruction of adjacent tissues), and sometimes metastasis (spread to other locations in the body via lymph or blood). These three malignant properties of cancers differentiate them from benign tumors, which are self-limited and do not invade or metastasize. Most cancers form a tumor but some, like leukemia, do not. The branch of medicine concerned with the study, diagnosis, treatment, and prevention of cancer is oncology. Cancer affects people at all ages, with the risk for most types increasing with age. Cancer caused about 13% of all human deaths in 2007 (7.6 million).
Cancers are caused by abnormalities in the genetic material of the transformed cells. These abnormalities may be due to the effects of carcinogens, such as tobacco smoke, radiation, chemicals, or infectious agents. Other cancer-promoting genetic abnormalities may randomly occur through errors in DNA replication, or are inherited, and thus present in all cells from birth. The heritability of cancers is usually affected by complex interactions between carcinogens and the host's genome. Genetic abnormalities found in cancer typically affect two general classes of genes. Cancer-promoting oncogenes are typically activated in cancer cells, giving those cells new properties, such as hyperactive growth and division, protection against programmed cell death, loss of respect for normal tissue boundaries, and the ability to become established in diverse tissue environments. Tumor suppressor genes are then inactivated in cancer cells, resulting in the loss of normal functions in those cells, such as accurate DNA replication, control over the cell cycle, orientation and adhesion within tissues, and interaction with protective cells of the immune system.
Definitive diagnosis requires the histologic examination of a biopsy specimen, although the initial indication of malignancy can be symptomatic or radiographic imaging abnormalities. Most cancers can be treated and some cured, depending on the specific type, location, and stage. Once diagnosed, cancer is usually treated with a combination of surgery, chemotherapy and radiotherapy. As research develops, treatments are becoming more specific for different varieties of cancer. There has been significant progress in the development of targeted therapy drugs that act specifically on detectable molecular abnormalities in certain tumors and minimize damage to normal cells. The prognosis of cancer patients is most influenced by the type of cancer, as well as the stage, or extent, of the disease. In addition, histologic grading and the presence of specific molecular markers can also be useful in establishing prognosis, as well as in determining individual treatments.
Classification
Cancers are classified by the type of cell that the tumor resembles and, therefore, the tissue presumed to be the origin of the tumor. These are the histology and the location, respectively. Examples of general categories include:
- Carcinoma: Malignant tumors derived from epithelial cells. This group represents the most common cancers, including the common forms of breast, prostate, lung and colon cancer.
- Sarcoma: Malignant tumors derived from connective tissue, or mesenchymal cells.
- Lymphoma and leukemia: Malignancies derived from hematopoietic (blood-forming) cells.
- Germ cell tumor: Tumors derived from totipotent cells. In adults most often found in the testicle and ovary; in fetuses, babies, and young children most often found on the body midline, particularly at the tip of the tailbone; in horses most often found at the poll (base of the skull).
- Blastic tumor or blastoma: A tumor (usually malignant) which resembles an immature or embryonic tissue. Many of these tumors are most common in children.
Malignant tumors (cancers) are usually named using -carcinoma, -sarcoma or -blastoma as a suffix, with the Latin or Greek word for the organ of origin as the root. For instance, a cancer of the liver is called hepatocarcinoma; a cancer of the fat cells is called liposarcoma. For common cancers, the English organ name is used. For instance, the most common type of breast cancer is called ductal carcinoma of the breast or mammary ductal carcinoma. Here, the adjective ductal refers to the appearance of the cancer under the microscope, resembling normal breast ducts. Benign tumors (which are not cancers) are named using -oma as a suffix with the organ name as the root. For instance, a benign tumor of the smooth muscle of the uterus is called leiomyoma (the common name of this frequent tumor is fibroid). Unfortunately, some cancers also use the -oma suffix, examples being melanoma and seminoma.
Signs and symptoms
Symptoms of cancer metastasis depend on the location of the tumor. Roughly, cancer symptoms can be divided into three groups:
- Local symptoms: unusual lumps or swelling (tumor), hemorrhage (bleeding), pain and/or ulceration. Compression of surrounding tissues may cause symptoms such as jaundice (yellowing of the eyes and skin).
- Symptoms of metastasis (spreading): enlarged lymph nodes, cough and hemoptysis, hepatomegaly (enlarged liver), bone pain, fracture of affected bones and neurological symptoms. Although advanced cancer may cause pain, it is often not the first symptom.
- Systemic symptoms: weight loss, poor appetite, fatigue and cachexia (wasting), excessive sweating (night sweats), anemia and specific paraneoplastic phenomena, i.e. specific conditions that are due to an active cancer, such as thrombosis or hormonal changes.
Every symptom in the above list can be caused by a variety of conditions (a list of which is referred to as the differential diagnosis). Cancer may be a common or uncommon cause of each item.
Causes
Cancer is a diverse class of diseases which differ widely in their causes and biology. Any organism, even plants, can acquire cancer. Nearly all known cancers arise gradually, as errors build up in the cancer cell and its progeny. Anything which replicates (our cells) will probabilistically suffer from errors (mutations). Unless error correction and prevention is properly carried out, the errors will survive, and might be passed along to daughter cells. Normally, the body safeguards against cancer via numerous methods, such as apoptosis, helper molecules (some DNA polymerases), and possibly senescence. However, these error-correction methods often fail in small ways, especially in environments that make errors more likely to arise and propagate. Such environments can include the presence of disruptive substances called carcinogens, periodic injury (physical, heat, etc.), or environments that cells did not evolve to withstand, such as hypoxia. Cancer is thus a progressive disease, and these progressive errors slowly accumulate until a cell begins to act contrary to its function in the animal.
The errors which cause cancer are often self-amplifying, eventually compounding at an exponential rate. For example:
- A mutation in the error-correcting machinery of a cell might cause that cell and its children to accumulate errors more rapidly.
- A mutation in the signaling (endocrine) machinery of the cell can send error-causing signals to nearby cells.
- A mutation might cause cells to become neoplastic, causing them to migrate and disrupt more healthy cells.
- A mutation may cause the cell to become immortal (see telomeres), causing it to disrupt healthy cells indefinitely.
Thus cancer often explodes in something akin to a chain reaction caused by a few errors, which compound into more severe errors. Errors which produce more errors are effectively the root cause of cancer, and also the reason that cancer is so hard to treat: even if there were 10,000,000,000 cancerous cells and one killed all but 10 of those cells, those cells (and other error-prone precancerous cells) could still self-replicate or send error-causing signals to other cells, starting the process over again (the back-of-envelope sketch below shows how quickly). This rebellion-like scenario is an undesirable survival of the fittest, where the driving forces of evolution itself work against the body's design and enforcement of order. In fact, once cancer has begun to develop, this same force continues to drive the progression of cancer towards more invasive stages, and is called clonal evolution.
Research about cancer causes often falls into the following categories:
- Agents (e.g. viruses) and events (e.g. mutations) which cause or facilitate genetic changes in cells destined to become cancer.
- The precise nature of the genetic damage, and the genes which are affected by it.
- The consequences of those genetic changes on the biology of the cell, both in generating the defining properties of a cancer cell, and in facilitating additional genetic events which lead to further progression of the cancer.
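As a back-of-envelope companion to the relapse argument above, this sketch computes how many doublings it would take 10 surviving cells to rebuild a population of 10,000,000,000. The doubling time is an assumed illustrative value, not a clinical figure.

```python
# Exponential regrowth from a handful of surviving cancer cells.

import math

surviving = 10
original = 10_000_000_000
doublings_needed = math.log2(original / surviving)  # ~29.9 doublings

doubling_time_days = 60  # assumed for illustration only
years = doublings_needed * doubling_time_days / 365
print(f"{doublings_needed:.1f} doublings, about {years:.1f} years "
      "to regrow at this assumed rate")
```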