Física Contemporânea Cap 2B

Chapter 2

From Contemporaneity to Materials

Part B

2.5 Materials Characterization Techniques


2.5.a X-ray
2.5.b Infrared Spectroscopy
2.5.c Raman Spectroscopy
2.5.d Nuclear Magnetic Resonance Spectroscopy
2.5.e Electron Paramagnetic Resonance
2.5.f Absorption Spectroscopy
2.5.g Photoluminescence
2.5.h Microscopy
2.5.h1 Electron Microscope
2.5.h2 Atomic-Force Microscopy
2.5.h3 Transmission Electron Microscopy
2.5.h4 Phase-Contrast Microscopy
2.5.h5 Fluorescence Microscopy
2.5.h6 Confocal Microscopy
2.6 Crystallography
2.6.a X-ray Crystallography
2.6.b Crystallization
2.6.c Thin film
2.6.d Epitaxy
2.6.e Crystal Growth
2.6.f Protein Crystallization
2.6.f1 Protein Data Bank (PDB)
2.7 Density Functional Theory
2.8 Complex Systems
2.8.a Chaos Theory
2.8.b Fractal
2.8.c Econophysics
2.9 What Is Life?

Characterization (materials science)


From Wikipedia, the free encyclopedia

Characterization, when used in materials science, refers to the broad and general process by which a material's
structure and properties are probed and measured.[1][2] It is a fundamental process in the field of materials
science, without which no scientific understanding of engineering materials could be ascertained. The scope of the
term often differs; some definitions limit the term's use to techniques which study the microscopic structure and
properties of materials,[3] while others use the term to refer to any materials analysis process including
macroscopic techniques such as mechanical testing, thermal analysis and density calculation.[4] The scale of the
structures observed in materials characterization ranges from angstroms, such as in the imaging of individual
atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals.

[Figure: The characterization technique optical microscopy showing the micron scale dendritic microstructure of a bronze alloy.]

While many characterization techniques have been


practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly
emerging. In particular, the advent of the electron microscope and secondary ion mass spectrometry in the 20th
century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much
smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why
different materials show different properties and behaviors.[5] Over the last 30 years, atomic force microscopy
has further increased the maximum possible resolution for the analysis of certain samples.[6]

Contents
1 Microscopy
2 Spectroscopy
3 Macroscopic testing
4 See also
5 References

Microscopy
Microscopy is a category of characterization techniques which probe and map the surface and sub-surface
structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to
gather data about a sample's structure on a range of length scales. Some common examples of microscopy
instruments include:

Optical Microscope
Scanning Electron Microscope (SEM)
Transmission Electron Microscope (TEM)
Field Ion Microscope (FIM)
Scanning Tunneling Microscope (STM)

Scanning probe microscopy (SPM)


Atomic Force Microscope (AFM)
X-ray diffraction topography (XRT)

Spectroscopy
This group of techniques uses a range of principles to reveal the chemical composition, composition variation,
crystal structure and photoelectric properties of materials. Some common instruments include:

Energy-dispersive X-ray spectroscopy (EDX)
Wavelength dispersive X-ray spectroscopy (WDX)
X-ray diffraction (XRD)
Mass spectrometry
Nuclear magnetic resonance spectroscopy (NMR)
Secondary ion mass spectrometry (SIMS)
Electron energy loss spectroscopy (EELS)
Auger electron spectroscopy
X-ray photoelectron spectroscopy (XPS)
Ultraviolet-visible spectroscopy (UV-vis)
Fourier transform infrared spectroscopy (FTIR)
Thermoluminescence (TL)
Photoluminescence (PL)
Photon correlation spectroscopy/Dynamic light scattering (DLS)
Terahertz spectroscopy
Small-angle X-ray scattering (SAXS)
Small-angle neutron scattering (SANS)
X-ray photon correlation spectroscopy (XPCS)[7]

[Figure: Image of a graphite surface at an atomic level obtained by an STM.]
[Figure: First X-ray diffraction view of Martian soil - CheMin analysis reveals feldspar, pyroxenes, olivine and more (Curiosity rover at "Rocknest", October 17, 2012).]

Macroscopic testing

A huge range of techniques are used to characterize various macroscopic properties of materials, including:

Mechanical testing, including tensile, compressive, torsional, creep, fatigue, toughness and hardness testing
Differential thermal analysis (DTA)
Dielectric thermal analysis
Thermogravimetric analysis (TGA)
Differential scanning calorimetry (DSC)
Impulse excitation technique (IET)
Ultrasound techniques, including resonant ultrasound spectroscopy and time domain ultrasonic testing methods[9]

[Figure: Material characterization using terahertz spectroscopy: the counterfeit IC is detected by revealing the differences between the constituent materials.[8]]

See also
Minatec

References


1. Zhang, Sam; Li, Lin; Kumar, Ashok (2009). Materials Characterization Techniques. Boca Raton: CRC Press.
ISBN 1420042947.
2. ISBN 978-0-470-82299-9.
3. Leng, Yang (2009). Materials Characterization: Introduction to Microscopic and Spectroscopic Methods. Wiley.
ISBN 978-0-470-82299-9.
4. Zhang, Sam (2008). Materials Characterization Techniques. CRC Press. ISBN 1420042947.
5. Mathys, Daniel, Zentrum für Mikroskopie, University of Basel: Die Entwicklung der Elektronenmikroskopie vom
Bild über die Analyse zum Nanolabor, p. 8
6. Patent US4724318 - Atomic force microscope and method for imaging surfaces with atomic resolution - Google
Patents (https://www.google.com/patents/US4724318?dq=4724318&hl=en&sa=X&ei=RByQU8KZFe_Q7AbK_4Fg
&ved=0CCoQ6AEwAA)
7. "What is X-ray Photon Correlation Spectroscopy (XPCS)?" (http://sector7.xray.aps.anl.gov/~dufresne/UofM/xpcs.ht
ml). sector7.xray.aps.anl.gov. Retrieved 2016-10-29.
8. Ahi, Kiarash (May 26, 2016). "Advanced terahertz techniques for quality control and counterfeit detection" (https://
www.researchgate.net/publication/303563438_Advanced_terahertz_techniques_for_quality_control_and_counterfeit
_detection). Proc. SPIE 9856, Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and
Defense, 98560G. doi:10.1117/12.2228684 (https://doi.org/10.1117%2F12.2228684). Retrieved May 26, 2016.
9. R. Truell, C. Elbaum and C.B. Chick., Ultrasonic methods in solid state physics New York, Academic Press Inc.,
1969.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Characterization_(materials_science)&oldid=786582569".
Text is available under the Creative Commons Attribution-ShareAlike License.

X-ray
From Wikipedia, the free encyclopedia

X-radiation (composed of X-rays) is a form of electromagnetic radiation. Most X-rays have a wavelength ranging
from 0.01 to 10 nanometers, corresponding to frequencies in the range 30 petahertz to 30 exahertz (3×10^16 Hz to
3×10^19 Hz) and energies in the range 100 eV to 100 keV. X-ray wavelengths are shorter than those of UV rays and
typically longer than those of gamma rays. In many languages, X-radiation is referred to with terms meaning
Röntgen radiation, after the German scientist Wilhelm Röntgen,[1] who usually is credited as its discoverer, and
who had named it X-radiation to signify an unknown type of radiation.[2] Spelling of X-ray(s) in
the English language includes the variants x-ray(s), xray(s), and X ray(s).[3]

[Figure: X-rays are part of the electromagnetic spectrum, with wavelengths shorter than visible light. Different applications use different parts of the X-ray spectrum.]
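The quoted ranges are tied together by f = c/λ and E = hf. A minimal numeric check (a Python sketch using
standard constants; not part of the original article) confirms that the 0.01–10 nm band corresponds to the
frequency and energy ranges given above:

# Consistency check of the X-ray wavelength/frequency/energy ranges quoted above.
h = 6.626e-34    # Planck constant (J*s)
c = 2.998e8      # speed of light (m/s)
eV = 1.602e-19   # joules per electronvolt
for wavelength_nm in (0.01, 10.0):
    lam = wavelength_nm * 1e-9        # wavelength in metres
    frequency = c / lam               # f = c / lambda
    energy_eV = h * frequency / eV    # E = h * f
    print(f"{wavelength_nm} nm -> {frequency:.1e} Hz, {energy_eV:.3g} eV")
# 0.01 nm -> ~3e19 Hz and ~124 keV; 10 nm -> ~3e16 Hz and ~124 eV.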

Contents
1 Energy ranges
1.1 Soft and hard X-rays
1.2 Gamma rays
2 Properties
3 Interaction with matter
3.1 Photoelectric absorption
3.2 Compton scattering
3.3 Rayleigh scattering
4 Production
4.1 Production by electrons
4.2 Production by fast positive ions
5 Detectors
6 Medical uses
6.1 Projectional radiographs
6.2 Computed tomography
6.3 Fluoroscopy
6.4 Radiotherapy
7 Adverse effects
8 Other uses
9 History
9.1 Discovery
9.2 Early research
9.3 Wilhelm Röntgen
9.4 Advances in radiology
9.5 Hazards discovered
9.6 20th century and beyond
10 Visibility
11 Units of measure and exposure
12 See also
13 References
14 External links

Energy ranges
Soft and hard X-rays

X-rays with high photon energies (above 5–10 keV, below 0.2–0.1 nm wavelength) are called hard X-rays,
while those with lower energy are called soft X-rays.[4] Due to their penetrating ability, hard X-rays are widely
used to image the inside of objects, e.g., in medical radiography and airport security. The term X-ray is
metonymically used to refer to a radiographic image produced using this method, in addition to the method
itself. Since the wavelengths of hard X-rays are similar to the size of atoms they are also useful for determining
crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation
length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer.[5]
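An attenuation length L is the thickness over which the intensity drops by a factor of e, so transmission through
a thickness x is I/I0 = exp(-x/L). A minimal sketch, assuming L = 1 µm for the 600 eV case quoted above (an
order-of-magnitude figure, not a tabulated value), shows why soft X-rays are stopped by even thin layers of water:

import math
# Beer-Lambert attenuation of soft X-rays in water, assuming an attenuation length of ~1 micrometre.
L_um = 1.0
for thickness_um in (1, 5, 10):
    transmission = math.exp(-thickness_um / L_um)   # I/I0 = exp(-x/L)
    print(f"{thickness_um} um of water -> transmission {transmission:.1e}")
# 10 um of water already reduces the beam by a factor of ~2e4, which is why soft-X-ray work is done in vacuum.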

Gamma rays

There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is
to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while
gamma rays are emitted by the atomic nucleus.[6][7][8][9] This definition has several problems: other processes
also can generate these high-energy photons, or sometimes the method of generation is not known. One
common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently,
frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10^−11 m (0.1 Å),
defined as gamma radiation.[10] This criterion assigns a photon to an unambiguous category, but is only
possible if wavelength is known. (Some measurement techniques do not distinguish between detected
wavelengths.) However, these two definitions often coincide since the electromagnetic radiation emitted by X-
ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive
nuclei.[6] Occasionally, one term or the other is used in specific contexts due to historical precedent, based on
measurement (detection) technique, or based on their intended use rather than their wavelength or source. Thus,
gamma-rays generated for medical and industrial uses, for example radiotherapy, in the ranges of 6–20 MeV,
can in this context also be referred to as X-rays.

Properties
X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes X-radiation a type of
ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time
causes radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical
imaging this increased cancer risk is generally greatly outweighed by the benefits of the examination. The
ionizing capability of X-rays can be utilized in cancer treatment to kill malignant cells using radiation therapy.
It is also used for material characterization using X-ray spectroscopy.

[Figure: Ionizing radiation hazard symbol]
Hard X-rays can traverse relatively thick objects without being much absorbed or
scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most
often seen applications are in medical radiography and airport security scanners, but similar techniques are also
important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g. small animal
CT). The penetration depth varies with several orders of magnitude over the X-ray spectrum. This allows the
photon energy to be adjusted for the application so as to give sufficient transmission through the object and at
the same time good contrast in the image.


X-rays have much shorter wavelengths than visible light, which makes
it possible to probe structures much smaller than can be seen using a
normal microscope. This property is used in X-ray microscopy to
acquire high resolution images, and also in X-ray crystallography to
determine the positions of atoms in crystals.

Interaction with matter


X-rays interact with matter in three main ways, through photoabsorption, Compton scattering, and Rayleigh
scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition
of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical
binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft
X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates.

[Figure: Attenuation length of X-rays in water, showing the oxygen absorption edge at 540 eV, the energy^−3 dependence of photoabsorption, as well as a leveling off at higher photon energies due to Compton scattering. The attenuation length is about four orders of magnitude longer for hard X-rays (right half) compared to soft X-rays (left half).]

Photoelectric absorption
The probability of a photoelectric absorption per unit mass is
approximately proportional to Z^3/E^3, where Z is the atomic number and
E is the energy of the incident photon.[11] This rule is not valid close to inner shell electron binding energies
where there are abrupt changes in interaction probability, so called absorption edges. However, the general
trend of high absorption coefficients and thus short penetration depths for low photon energies and high atomic
numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy where
Compton scattering takes over. For higher atomic number substances this limit is higher. The high amount of
calcium (Z=20) in bones together with their high density is what makes them show up so clearly on medical
radiographs.
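A rough numeric illustration of the Z^3/E^3 rule (a sketch only: the effective atomic number of soft tissue,
taken as about 7.5, is an assumed textbook value, and absorption edges are ignored):

# Relative photoelectric absorption per unit mass using the approximate Z^3/E^3 scaling described above.
def photoelectric_relative(Z, E_keV):
    return Z**3 / E_keV**3

Z_calcium, Z_soft_tissue = 20, 7.5   # 7.5 is an assumed effective Z for soft tissue
print(photoelectric_relative(Z_calcium, 30) / photoelectric_relative(Z_soft_tissue, 30))  # ~19x, hence bone contrast
print(photoelectric_relative(Z_calcium, 30) / photoelectric_relative(Z_calcium, 60))      # ~8x when photon energy is halved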

A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to
which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An
outer electron will fill the vacant electron position and produce either a characteristic photon or an Auger
electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron
spectroscopy.

Compton scattering

Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging.[12]
Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy
of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength
of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is
more likely, especially for high-energy X-rays. The probability for different scattering angles is described by
the Klein–Nishina formula. The transferred energy can be directly obtained from the scattering angle from the
conservation of energy and momentum.
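The transferred energy can be made explicit: a photon of energy E scattered through an angle θ leaves with
E' = E / (1 + (E/511 keV)(1 - cos θ)), the remainder going to the electron. A short sketch (60 keV is chosen here
only as a representative diagnostic energy, not a value from the text):

import math
# Compton kinematics: scattered photon energy and the energy transferred to the electron.
def scattered_energy_keV(E_keV, theta_deg):
    return E_keV / (1 + (E_keV / 511.0) * (1 - math.cos(math.radians(theta_deg))))

E = 60.0  # incident photon energy in keV (illustrative choice)
for theta in (30, 90, 180):
    E_out = scattered_energy_keV(E, theta)
    print(f"theta = {theta} deg: scattered {E_out:.1f} keV, electron receives {E - E_out:.1f} keV")
# Forward scattering transfers almost nothing; even backscatter at 180 deg transfers only ~11 keV of 60 keV.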

Rayleigh scattering

Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime.[13] Elastic forward
scattering gives rise to the refractive index, which for X-rays is only slightly below 1.[14]

Production

Whenever charged particles (electrons or ions) of sufficient energy hit a material, X-rays are produced.

Production by electrons

X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate the electrons
released by a hot cathode to a high velocity. The high velocity electrons collide with a metal target, the anode,
creating the X-rays.[17] In medical X-ray tubes the target is usually tungsten or a more crack-resistant alloy of
rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer
X-rays are needed as in mammography. In crystallography, a copper target is most common, with cobalt often being
used when fluorescence from iron content in the sample might otherwise present a problem.

Characteristic X-ray emission lines for some common anode materials.[15][16]

Anode     Atomic    Photon energy [keV]      Wavelength [nm]
material  number    Kα1        Kβ1           Kα1        Kβ1
W         74        59.3       67.2          0.0209     0.0184
Mo        42        17.5       19.6          0.0709     0.0632
Cu        29        8.05       8.91          0.154      0.139
Ag        47        22.2       24.9          0.0559     0.0497
Ga        31        9.25       10.26         0.134      0.121
In        49        24.2       27.3          0.0512     0.0455

The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal
to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater
than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes:

1. Characteristic X-ray emission (X-ray fluorescence): If the electron has enough energy it can knock an orbital
electron out of the inner electron shell of a metal atom, and as a result electrons from higher energy levels then
fill up the vacancy and X-ray photons are emitted. This process produces an emission spectrum of X-rays at a few
discrete frequencies, sometimes referred to as the spectral lines. The spectral lines generated depend on the
target (anode) element used and thus are called characteristic lines. Usually these are transitions from upper
shells into the K shell (called K lines), into the L shell (called L lines) and so on.
2. Bremsstrahlung: This is radiation given off by the electrons as they are scattered by the strong electric field
near the high-Z (proton number) nuclei. These X-rays have a continuous spectrum. The intensity of the X-rays
increases linearly with decreasing frequency, from zero at the energy of the incident electrons, the voltage on
the X-ray tube.

[Figure: Spectrum of the X-rays emitted by an X-ray tube with a rhodium target, operated at 60 kV. The smooth, continuous curve is due to bremsstrahlung, and the spikes are characteristic K lines for rhodium atoms.]

So the resulting output of a tube consists of a continuous bremsstrahlung spectrum falling off to zero at the tube
voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from
roughly 20 kV to 150 kV and thus the highest energies of the X-ray photons range from roughly 20 keV to 150
keV.[18]
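The sharp high-energy cutoff described above is the Duane-Hunt limit: a tube at voltage V emits no photon above
E_max = eV, i.e. none below λ_min = hc/(eV). A minimal sketch over the diagnostic voltage range just quoted:

# Maximum photon energy and short-wavelength cutoff of an X-ray tube spectrum (Duane-Hunt limit).
hc_eV_nm = 1239.84   # h*c expressed in eV*nm
for tube_kV in (20, 80, 150):
    E_max_keV = tube_kV                          # one keV of maximum photon energy per kV of tube voltage
    lambda_min_nm = hc_eV_nm / (tube_kV * 1e3)   # lambda_min = h*c / (e*V)
    print(f"{tube_kV} kV tube -> E_max = {E_max_keV} keV, lambda_min = {lambda_min_nm:.4f} nm")
# The anode's characteristic lines (see the table above) appear only if the tube voltage exceeds their excitation energy.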

Both of these X-ray production processes are inefficient, with a production efficiency of only about one
percent, and thus most of the electric power consumed by the tube is released as waste heat. When producing a
usable flux of X-rays, the X-ray tube must be designed to dissipate the excess heat.

Short nanosecond bursts of X-rays peaking at 15 keV in energy may be reliably produced by peeling pressure-
sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination
of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient
for it to be used as a source for X-ray imaging.[19]


A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is
generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than
those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization.[20]

Production by fast positive ions

X-rays can also be produced by fast protons or other positive ions. The proton-induced X-ray emission or
particle-induced X-ray emission is widely used as an analytical procedure. For high energies, the production
cross section is proportional to Z1^2·Z2^−4, where Z1 refers to the atomic number of the ion and Z2 to that of the
target atom.[21] An overview of these cross sections is given in the same reference.
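A quick illustration of the Z1^2·Z2^−4 scaling (a sketch only, valid in the high-energy limit stated above and
ignoring every other factor in the cross section):

# Relative X-ray production cross sections for fast ions, using the Z1^2 * Z2^-4 scaling quoted above.
def relative_cross_section(Z_ion, Z_target):
    return Z_ion**2 * Z_target**(-4)

print(relative_cross_section(2, 13) / relative_cross_section(1, 13))  # alpha vs proton on the same target: 4x
print(relative_cross_section(1, 26) / relative_cross_section(1, 13))  # proton on iron vs aluminium: 1/16 as large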

Detectors
X-ray detectors vary in shape and function depending on their purpose. Imaging detectors such as those used
for radiography were originally based on photographic plates and later photographic film, but are now mostly
replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection,
direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the
radiation dose a person has been exposed to. X-ray spectra can be measured either by energy dispersive or
wavelength dispersive spectrometers.

Medical uses
Since Röntgen's discovery that X-rays can identify bone structures, X-
rays have been used for medical imaging. The first medical use was less
than a month after his paper on the subject.[22] Up to 2010, 5 billion
medical imaging examinations had been conducted worldwide.[23]
Radiation exposure from medical imaging in 2006 made up about 50%
of total ionizing radiation exposure in the United States.[24]

Projectional radiographs

Projectional radiography is the practice of producing two-dimensional images using x-ray radiation. Bones contain
much calcium, which due to its relatively high atomic number absorbs x-rays efficiently. This reduces the amount
of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The
lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences
between tissue types are harder to see.

Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting
some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to
identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal x-ray, which can
detect bowel (or intestinal) obstruction, free air (from visceral perforations) and free fluid (in ascites).
X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones
which are often (but not always) visible. Traditional plain X-rays are less useful in the imaging of soft tissues
such as the brain or muscle.

[Figure: A chest radiograph of a female, demonstrating a hiatus hernia.]

Dental radiography is commonly used in the diagnoses of common oral


problems, such as cavities.

In medical diagnostic applications, the low energy (soft) X-rays are


unwanted, since they are totally absorbed by the body, increasing the
radiation dose without contributing to the image. Hence, a thin metal
sheet, often of aluminium, called an X-ray filter, is usually placed over
the window of the X-ray tube, absorbing the low energy part in the
spectrum. This is called hardening the beam since it shifts the center of
the spectrum towards higher energy (or harder) x-rays.

To generate an image of the cardiovascular system, including the


arteries and veins (angiography) an initial image is taken of the
anatomical region of interest. A second image is then taken of the same
region after an iodinated contrast agent has been injected into the blood
vessels within this area. These two images are then digitally subtracted,
leaving an image of only the iodinated contrast outlining the blood
vessels. The radiologist or surgeon then compares the image obtained to
normal anatomical images to determine whether there is any damage or
blockage of the vessel.
[Figure: An arm radiograph, demonstrating broken ulna and radius with implanted internal fixation.]

Computed tomography
Computed tomography (CT scanning) is a medical imaging modality where tomographic images or slices of specific
areas of the body are obtained from a large series of two-dimensional X-ray images taken in different
directions.[25] These cross-sectional images can be combined into a three-dimensional image of the inside of the
body and used for diagnostic and therapeutic purposes in various medical disciplines.

[Figure: Head CT scan (transverse plane) slice, a modern application of medical radiography.]

Fluoroscopy

Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving
images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a
fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However,
modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video camera allowing the images to be
recorded and played on a monitor. This method may use a contrast material. Examples include cardiac
catheterization (to examine for coronary artery blockages) and barium swallow (to examine for esophageal
disorders).

Radiotherapy

The use of X-rays as a treatment is known as radiation therapy and is largely used for the management
(including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower
energy X-ray beams are used for treating skin cancers, while higher energy beams are used for treating cancers
within the body such as brain, lung, prostate, and breast.[26][27]

Adverse effects
Diagnostic X-rays (primarily from CT scans due to the large dose used) increase the risk of developmental
problems and cancer in those exposed.[28][29][30] X-rays are classified as a carcinogen by both the World Health
Organization's International Agency for Research on Cancer and the U.S. government.[23][31] It is estimated


that 0.4% of current cancers in the United States are due to computed
tomography (CT scans) performed in the past and that this may increase to as
high as 1.5-2% with 2007 rates of CT usage.[32]

Experimental and epidemiological data currently do not support the


proposition that there is a threshold dose of radiation below which there is no
increased risk of cancer.[33] However, this is under increasing doubt.[34] It is
estimated that the additional radiation will increase a person's cumulative risk
of getting cancer by age 75 by 0.6–1.8%.[35] The amount of absorbed radiation
depends upon the type of X-ray test and the body part involved.[36] CT and
fluoroscopy entail higher doses of radiation than do plain X-rays.
[Figure: Abdominal radiograph of a pregnant woman, a procedure that should be performed only after proper assessment of benefit versus risk.]

To place the increased risk in perspective, a plain chest X-ray will expose a person to the same amount from
background radiation that people are exposed to (depending upon location) every day over 10 days, while exposure
from a dental X-ray is approximately equivalent to 1 day of environmental background radiation.[37] Each such
X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be the
equivalent to 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest,
increasing the lifetime cancer risk between 1 per 1,000 to 1 per 10,000.[37] This is compared to the roughly 40%
chance of a US citizen developing cancer during their lifetime.[38] For instance, the effective dose to the torso
from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy.[39] A head CT scan (1.5 mSv,
64 mGy)[40] that is performed once with and once without contrast agent, would be equivalent to 40 years of
background radiation to the head. Accurate estimation of effective doses due to CT is difficult, with the
estimation uncertainty range of about ±19% to ±32% for adult head scans depending upon the method used.[41]
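The background-equivalence figures above are simple ratios of an examination's effective dose to the annual
background dose. A back-of-envelope sketch (the 2.4 mSv/yr background and the 0.005 mSv dental dose are assumed
typical values, not taken from this text; actual background varies strongly with location):

# Expressing examination doses as equivalent periods of natural background radiation.
background_mSv_per_year = 2.4   # assumed world-average annual background dose
chest_ct_mSv = 5.0              # effective dose of a chest CT, as quoted above
dental_mSv = 0.005              # assumed typical intraoral dental X-ray dose
print(chest_ct_mSv / background_mSv_per_year)        # ~2.1 years, consistent with the "2-3 years" above
print(dental_mSv / background_mSv_per_year * 365)    # ~0.8 days, the order of the "1 day" quoted above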
[Figure: Deformity of hand due to an X-ray burn. These burns are accidents. X-rays were not shielded when they were first discovered and used, and people received radiation burns.]

The risk of radiation is greater to a fetus, so in pregnant patients, the benefits of the investigation (X-ray)
should be balanced with the potential hazards to the fetus.[42][43] In the US, there are an estimated 62 million
CT scans performed annually, including more than 4 million on children.[36] Avoiding unnecessary X-rays
(especially CT scans) reduces radiation dose and any associated cancer risk.[44]

Medical X-rays are a significant source of man-made radiation exposure. In 1987, they accounted for 58% of
exposure from man-made sources in the United States. Since man-made sources accounted for only 18% of the
total radiation exposure, most of which came from natural sources (82%), medical X-rays only accounted for
10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine)
accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were
contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure
constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is
traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and
to the growth in the use of nuclear medicine.[24][45]

Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or
digital). Depending on the procedure and the technology, a single dental X-ray of a human results in an
exposure of 0.5 to 4 mrem. A full mouth series may therefore result in an exposure of up to 6 (digital) to 18
(film) mrem, for a yearly average of up to 40 mrem.[46][47][48][49][50][51][52]


Other uses
Other notable uses of X-rays include:

X-ray crystallography, in which the pattern produced by the diffraction of X-rays through the closely spaced
lattice of atoms in a crystal is recorded and then analysed to reveal the nature of that lattice. A related
technique, fiber diffraction, was used by Rosalind Franklin to discover the double helical structure of DNA.[53]
X-ray astronomy, an observational branch of astronomy which deals with the study of X-ray emission from celestial
objects.
X-ray microscopic analysis, which uses electromagnetic radiation in the soft X-ray band to produce images of very
small objects.
X-ray fluorescence, a technique in which X-rays are generated within a specimen and detected. The outgoing energy
of the X-ray can be used to identify the composition of the sample.
Industrial radiography uses X-rays for inspection of industrial parts, particularly welds.
Authentication and quality control: X-ray is used for authentication and quality control of packaged items.
Industrial CT (computed tomography) is a process which uses X-ray equipment to produce three-dimensional
representations of components both externally and internally. This is accomplished through computer processing of
projection images of the scanned object in many directions.
Paintings are often X-rayed to reveal underdrawings and pentimenti, alterations in the course of painting or by
later restorers. Many pigments such as lead white show well in radiographs.
X-ray spectromicroscopy has been used to analyse the reactions of pigments in paintings, for example in analysing
colour degradation in the paintings of van Gogh.[55]
Airport security luggage scanners use X-rays for inspecting the interior of luggage for security threats before
loading on aircraft.
Border control truck scanners use X-rays for inspecting the interior of trucks.

[Figure: Each dot, called a reflection, in this diffraction pattern forms from the constructive interference of scattered X-rays passing through a crystal. The data can be used to determine the crystalline structure.]
[Figure: Using X-ray for inspection and quality control: the differences in the structures of the die and bond wires reveal the left chip to be counterfeit.[54]]

X-ray art and fine art photography, artistic use of X-rays, for example the works by Stane Jagodič.
X-ray hair removal, a method popular in the 1920s but now banned by the FDA.[56]
Shoe-fitting fluoroscopes were popularized in the 1920s, banned in the US in the 1960s, banned in the UK in the
1970s, and even later in continental Europe.
Roentgen stereophotogrammetry is used to track movement of bones based on the implantation of markers.
X-ray photoelectron spectroscopy is a chemical analysis technique relying on the photoelectric effect, usually
employed in surface science.
Radiation implosion is the use of high energy X-rays generated from a fission explosion (an A-bomb) to compress
nuclear fuel to the point of fusion ignition (an H-bomb).

[Figure: X-ray fine art photography of needlefish by Peter Dazeley.]

History
Discovery


German physicist Wilhelm Röntgen is usually credited as the discoverer of X-


rays in 1895, because he was the first to systematically study them, though he is
not the first to have observed their effects. He is also the one who gave them the
name "X-rays" (signifying an unknown quantity[57]) though many others
referred to these as "Röntgen rays" (and the associated X-ray radiograms as,
"Röntgenograms") for several decades after their discovery and even to this day
in some languages, including Röntgen's native German.

X-rays were found emanating from Crookes tubes, experimental discharge tubes invented around 1875, by scientists
investigating the cathode rays, that is, energetic electron beams, that were first created in the tubes. Crookes
tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere
between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high
enough velocity that they created X-rays when they struck the anode or the glass wall of the tube. Many of the
early Crookes tubes undoubtedly radiated X-rays, because early researchers noticed effects that were attributable
to them, as detailed below. Wilhelm Röntgen was the first to systematically study them, in 1895.[60]

[Figure: Wilhelm Röntgen]

The discovery of X-rays stimulated a veritable sensation. Röntgen's biographer Otto Glasser estimated that, in
1896 alone, as many as 49 essays and 1044 articles about the new rays were published.[61] This was probably a
conservative estimate, if one considers that nearly every paper around the world extensively reported about the
new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone.[62]
Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and
paranormal theories, such as telepathy.[63][64]

[Figure: Hand mit Ringen (Hand with Rings): print of Wilhelm Röntgen's first "medical" X-ray, of his wife's hand, taken on 22 December 1895 and presented to Ludwig Zehnder of the Physik Institut, University of Freiburg, on 1 January 1896.[58][59]]

Early research

In 1876, Eugen Goldstein proved that the rays in discharge tubes came from the cathode, and named them cathode rays
(Kathodenstrahlen).[65] Both William Crookes (in the 1880s)[66] and German physicist Johann Hittorf,[67] a co-
inventor and early researcher of the Crookes tube, found that paper wrapped photographic plates placed near
the tube became unaccountably fogged or flawed by shadows, although they had not been exposed to light.
Neither found the cause nor investigated this effect.

In 1877 Ukrainian-born Ivan Pulyui, a lecturer in experimental physics at the University of Vienna, constructed
various designs of vacuum discharge tube to investigate their properties.[68] He continued his investigations
when appointed professor at the Prague Polytechnic and in 1886 he found that sealed photographic plates
became dark when exposed to the emanations from the tubes. Early in 1896, just a few weeks after Röntgen
published his first X-ray photograph, Pulyui published high-quality X-ray images in journals in Paris and
London.[68] Although Pulyui had studied with Röntgen at the University of Strasbourg in the years 1873–75,
his biographer Gaida (1997) asserts that his subsequent research was conducted independently.[68]

X-rays were generated and detected by Fernando Sanford (1854–1948), the foundation Professor of Physics at
Stanford University, in 1891. From 1886 to 1888 he had studied in the Hermann Helmholtz laboratory in
Berlin, where he became familiar with the cathode rays generated in vacuum tubes when a voltage was applied

across separate electrodes, as previously studied by Heinrich Hertz and Philipp


Lenard. His letter of January 6, 1893 (describing his discovery as "electric
photography") to The Physical Review was duly published and an article
entitled Without Lens or Light, Photographs Taken With Plate and Object in
Darkness appeared in the San Francisco Examiner.[69]

Starting in 1888, Philipp Lenard, a student of Heinrich Hertz, conducted


experiments to see whether cathode rays could pass out of the Crookes tube into
the air. He built a Crookes tube (later called a "Lenard tube") with a "window"
in the end made of thin aluminum, facing the cathode so the cathode rays would
strike it. He found that something came through, that would expose
photographic plates and cause fluorescence. He measured the penetrating power
of these rays through various materials. It has been suggested that at least some
of these "Lenard rays" were actually X-rays.[70]

Hermann von Helmholtz formulated mathematical equations for X-rays. He


postulated a dispersion theory before Röntgen made his discovery and
announcement. It was formed on the basis of the electromagnetic theory of
light.[71] However, he did not work with actual X-rays.

In 1894 Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments
and began investigating this radiant energy of "invisible" kinds.[72][73] After Röntgen identified the X-ray,
Tesla began making X-ray images of his own using high voltages and tubes of his own design,[74] as well as Crookes
tubes.

[Figure: Pulyui's X-ray images]

Wilhelm Röntgen

On November 8, 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard
and Crookes tubes and began studying them. He wrote an initial report "On a new kind of ray: A preliminary
communication" and on December 28, 1895 submitted it to Würzburg's Physical-Medical Society journal.[75] This was
the first paper written on X-rays. Röntgen referred to the radiation as "X", to indicate that it was an unknown
type of radiation. The name stuck, although (over Röntgen's great objections) many of his colleagues suggested
calling them Röntgen rays. They are still referred to as such in many languages, including German, Danish, Polish,
Swedish, Finnish, Estonian, Russian, Japanese, Dutch, and Norwegian. Röntgen received the first Nobel Prize in
Physics for his discovery.[76]

[Figure: Taking an X-ray image with early Crookes tube apparatus, late 1800s. The Crookes tube is visible in center. The standing man is viewing his hand with a fluoroscope screen. The seated man is taking a radiograph of his hand by placing it on a photographic plate. No precautions against radiation exposure are taken; its hazards were not known at the time.]

There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but
this is a likely reconstruction by his biographers:[77][78] Röntgen was investigating cathode rays from a Crookes
tube which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using
a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen, about 1
meter away. Röntgen realized some invisible rays coming from the tube
were passing through the cardboard to make the screen glow. He found
they could also pass through books and papers on his desk. Röntgen
threw himself into investigating these unknown rays systematically.
Two months after his initial discovery, he published his paper.

Röntgen discovered their medical use when he made a picture of his wife's hand on a photographic plate formed due
to X-rays. The photograph of his wife's hand was the first photograph of a human body part using X-rays. When she
saw the picture, she said "I have seen my death."[79]

[Figure: 1896 plaque published in "Nouvelle Iconographie de la Salpetrière", a medical journal. On the left, a hand deformity; on the right, the same hand seen using radiography. The authors designated the technique as Röntgen photography.]

Advances in radiology

In 1895, Thomas Edison investigated materials' ability to fluoresce
when exposed to X-rays, and found that calcium tungstate was the most
effective substance. Around March 1896, the fluoroscope he developed
became the standard for medical X-ray examinations. Nevertheless,
Edison dropped X-ray research around 1903, even before the death of
Clarence Madison Dally, one of his glassblowers. Dally had a habit of
testing X-ray tubes on his hands, and acquired a cancer in them so
tenacious that both arms were amputated in a futile attempt to save his
life.

[Figure: A simplified diagram of a water-cooled X-ray tube]

The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January
1896, when he radiographed a needle stuck in the hand of an associate.[80] On 14 February 1896 Hall-Edwards was
also the first to use X-rays in a surgical operation.[81] In early 1896, several
weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays,
concluding that the rays "not only photograph, but also affect the living function".[82]

The first medical X-ray made in the United States was obtained using a discharge tube of Pulyui's design. In
January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge
tubes in the physics laboratory and found that only the Pulyui tube produced X-rays. This was a result of
Pulyui's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the
tube. On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost,
professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a
fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates
obtained from Howard Langill, a local photographer also interested in Röntgen's work.[22]

In 1901, U.S. President William McKinley was shot twice in an assassination attempt. While one bullet only
grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. "A
worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the
stray bullet. It arrived but wasn't used." While the shooting itself had not been lethal, "gangrene had developed
along the path of the bullet, and McKinley died of septic shock due to bacterial infection" six days later.[83]

Hazards discovered

With the widespread experimentation with x‑rays after their discovery in 1895 by scientists, physicians, and
inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896,
Professor John Daniel and Dr. William Lofland Dudley of Vanderbilt University reported hair loss after Dr.
Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896.
Before trying to find the bullet an experiment was attempted, for which Dudley "with his characteristic


devotion to science"[84][85][86] volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull
(with an exposure time of one hour), he noticed a bald spot 2 inches (5.1 cm) in diameter on the part of his head
nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin
placed between the skull and the head. The tube was fastened at the other side at a distance of one-half inch
from the hair."[87]

In August 1896 Dr. H. D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an
x-ray demonstration. It was reported in Electrical Review and led to many other reports of problems associated
with x-rays being sent in to the publication.[88] Many experimenters including Elihu Thomson at Edison's lab,
William J. Morton, and Nikola Tesla also reported burns. Elihu Thomson deliberately exposed a finger to an x-
ray tube over a period of time and suffered pain, swelling, and blistering.[89] Other effects were sometimes
blamed for the damage including ultraviolet rays and (according to Tesla) ozone.[90] Many physicians claimed
there were no effects from x-ray exposure at all.[89] On 3 August 1905, in San Francisco, California, Elizabeth
Fleischman, an American X-ray pioneer, died from complications as a result of her work with X-
rays.[91][92][93]

20th century and beyond

The many applications of X-rays immediately generated enormous


interest. Workshops began making specialized versions of Crookes
tubes for generating X-rays and these first-generation cold cathode or
Crookes X-ray tubes were used until about 1920.

Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air) as a current will not
flow in such a tube if they are fully evacuated. However, as time passed, the X-rays caused the glass to absorb
the gas, causing the tube to generate "harder" X-rays until it soon stopped operating. Larger and more frequently
used tubes were provided with devices for restoring the air, known as "softeners". These often took the form of a
small side tube which contained a small piece of mica, a mineral that traps relatively large quantities of air
within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus
restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult
to control.

[Figure: A patient being examined with a thoracic fluoroscope in 1940, which displayed continuous moving images. This image was used to argue that radiation exposure during the X-ray procedure would be negligible.]

In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube. This used a hot
cathode that caused an electric current to flow in a vacuum. This idea was quickly applied to X-ray tubes, and
hence heated-cathode X-ray tubes, called "Coolidge tubes", completely replaced the troublesome cold cathode
tubes by about 1920.

In about 1906, the physicist Charles Barkla discovered that X-rays could be scattered by gases, and that each
element had a characteristic X-ray spectrum. He won the 1917 Nobel Prize in Physics for this discovery.

In 1912, Max von Laue, Paul Knipping, and Walter Friedrich first observed the diffraction of X-rays by
crystals. This discovery, along with the early work of Paul Peter Ewald, William Henry Bragg, and William
Lawrence Bragg, gave birth to the field of X-ray crystallography.

The Coolidge X-ray tube was invented during the following year by William D. Coolidge. It made possible the
continuous emission of X-rays. X-ray tubes similar to this are still in use in 2012.

The use of X-rays for medical purposes (which developed into the field of radiation therapy) was pioneered by
Major John Hall-Edwards in Birmingham, England. Then in 1908, he had to have his left arm amputated
because of the spread of X-ray dermatitis on his arm.[94]

The X-ray microscope was developed during the 1950s.

The Chandra X-ray Observatory, launched on July 23, 1999, has been allowing
the exploration of the very violent processes in the universe which produce X-
rays. Unlike visible light, which gives a relatively stable view of the universe,
the X-ray universe is unstable. It features stars being torn apart by black holes,
galactic collisions, and novae, and neutron stars that build up layers of plasma
that then explode into space.

[Figure: Chandra's image of the galaxy cluster Abell 2125 reveals a complex of several massive multimillion-degree-Celsius gas clouds in the process of merging.]

An X-ray laser device was proposed as part of the Reagan Administration's Strategic Defense Initiative in the
1980s, but the only test of the device (a sort of laser "blaster" or death ray, powered by a thermonuclear
explosion) gave inconclusive results. For technical and political reasons, the overall project (including the
X-ray laser) was de-funded (though was later revived by the second Bush Administration as National Missile Defense
using different technologies).

Phase-contrast X-ray imaging refers to a variety of techniques that use phase information of a coherent x-ray beam
to image soft tissues. It has become an important method for visualizing cellular and histological structures in a
wide range of biological and medical studies. There are several technologies being used for x-ray phase-contrast
imaging, all utilizing different principles to convert phase variations in the x-rays emerging from an object into
intensity variations.[95][96] These include propagation-based phase contrast,[97] Talbot interferometry,[96]
refraction-enhanced imaging,[98] and x-ray interferometry.[99] These methods provide higher contrast compared to
normal absorption-contrast x-ray imaging, making it possible to see smaller details. A disadvantage is that these
methods require more sophisticated equipment, such as synchrotron or microfocus x-ray sources, X-ray optics, and
high resolution x-ray detectors.

[Figure: Phase-contrast x-ray image of a spider]
[Figure: Dog hip x-ray, posterior view]

Visibility
While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in
an experiment a short time after Röntgen's landmark 1895 paper, reported that after dark adaptation and placing
his eye close to an X-ray tube, he saw a faint "blue-gray" glow which seemed to originate within the eye
itself.[100] Upon hearing this, Röntgen reviewed his record books and found he too had seen the effect. When
placing an X-ray tube on the opposite side of a wooden door Röntgen had noted the same blue glow, seeming to
emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he
used one type of tube. Later he realized that the tube which had created the effect was the only one powerful
enough to make the glow plainly visible and the experiment was thereafter readily repeatable. The knowledge
that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is
probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially
harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the
visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct
excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in
the eyeball with conventional retinal detection of the secondarily produced visible light.


Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of
the X-ray beam is high enough. The beamline from the wiggler at the ID11 (https://web.archive.org/web/20081
114111144/http://www.esrf.eu/UsersAndScience/Experiments/MaterialsScience/faisceau) at the European
Synchrotron Radiation Facility is one example of such high intensity.[101]

Units of measure and exposure


The measure of X-rays ionizing ability is called the exposure:

The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure, and it is the amount of
radiation required to create one coulomb of charge of each polarity in one kilogram of matter.
The roentgen (R) is an obsolete traditional unit of exposure, which represented the amount of radiation
required to create one electrostatic unit of charge of each polarity in one cubic centimeter of dry air.
1 roentgen = 2.58×10^−4 C/kg.

However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the
amount of energy deposited into them rather than the charge generated. This measure of energy absorbed is
called the absorbed dose:

The gray (Gy), which has units of (joules/kilogram), is the SI unit of absorbed dose, and it is the amount
of radiation required to deposit one joule of energy in one kilogram of any kind of matter.
The rad is the (obsolete) corresponding traditional unit, equal to 10 millijoules of energy deposited per
kilogram. 100 rad = 1 gray.

The equivalent dose is the measure of the biological effect of radiation on human tissue. For X-rays it is equal
to the absorbed dose.

The Roentgen equivalent man (rem) is the traditional unit of equivalent dose. For X-rays it is equal to the
rad, or, in other words, 10 millijoules of energy deposited per kilogram. 100 rem = 1 Sv.
The sievert (Sv) is the SI unit of equivalent dose, and also of effective dose. For X-rays the "equivalent
dose" is numerically equal to the gray (Gy): 1 Sv = 1 Gy. The "effective dose" of X-rays, however, is usually
not numerically equal to the absorbed dose in gray (Gy).
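A compact restatement of the conversions above (for X-rays the radiation weighting factor is 1, so equivalent dose
in sieverts is numerically equal to absorbed dose in grays); the 350 mrem reading below is invented purely for
illustration:

# Legacy-to-SI conversions for the exposure and dose units defined above.
C_PER_KG_PER_ROENTGEN = 2.58e-4   # exposure: 1 R = 2.58e-4 C/kg
GRAY_PER_RAD = 0.01               # absorbed dose: 100 rad = 1 Gy
SIEVERT_PER_REM = 0.01            # equivalent dose: 100 rem = 1 Sv

dose_rem = 0.350                         # an illustrative legacy reading of 350 mrem
print(dose_rem * SIEVERT_PER_REM * 1e3)  # 3.5 mSv
print(25 * GRAY_PER_RAD)                 # an absorbed dose of 25 rad equals 0.25 Gy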

See also
Abnormal reflection
Backscatter X-ray
Detective quantum efficiency
High-energy X-rays
N ray
Neutron radiation
NuSTAR
Radiographer
Resonant inelastic X-ray scattering (RIXS)
Small-angle X-ray scattering (SAXS)
X-ray absorption spectroscopy
X-ray marker
X-ray nanoprobe
X-ray reflectivity
X-ray vision
X-ray welding
Macintyre's X-Ray Film – 1896 documentary radiography film
The X-Rays – 1897 British short silent comedy film
References
1. "X-Rays" (http://missionscience.nasa.gov/ems/11_xrays.html). NASA. Retrieved November 7, 2012.
2. Novelline, Robert (1997). Squire's Fundamentals of Radiology. Harvard University Press. 5th edition. ISBN 0-674-
83339-2.
3. "X-ray" (http://oed.com/search?searchType=dictionary&q=X-ray). Oxford English Dictionary (3rd ed.). Oxford
University Press. September 2005. (Subscription or UK public library membership (https://global.oup.com/oxforddnb/info/fre
eodnb/libraries/) required.)


4. Attwood, David (1999). Soft X-rays and extreme ultraviolet radiation (http://ast.coe.berkeley.edu/sxreuv/).
Cambridge University. p. 2. ISBN 978-0-521-65214-8.
5. "Physics.nist.gov" (http://physics.nist.gov/cgi-bin/ffast/ffast.pl?Formula=H2O&gtype=5&range=S&lower=0.300&u
pper=2.00&density=1.00). Physics.nist.gov. Retrieved 2011-11-08.
6. Denny, P. P.; Heaton, B. (1999). Physics for Diagnostic Radiology (https://books.google.com/?id=1BTQvsQIs4wC&
pg=PA12). USA: CRC Press. p. 12. ISBN 0-7503-0591-6.
7. Feynman, Richard; Leighton, Robert; Sands, Matthew (1963). The Feynman Lectures on Physics, Vol.1. USA:
Addison-Wesley. pp. 2–5. ISBN 0-201-02116-1.
8. L'Annunziata, Michael; Abrade, Mohammad (2003). Handbook of Radioactivity Analysis (https://books.google.co
m/?id=b519e10OPT0C&pg=PA58&dq=gamma+x-ray). Academic Press. p. 58. ISBN 0-12-436603-1.
9. Grupen, Claus; Cowan, G.; Eidelman, S. D.; Stroh, T. (2005). Astroparticle Physics. Springer. p. 109. ISBN 3-540-
25312-2.
10. Hodgman, Charles, ed. (1961). CRC Handbook of Chemistry and Physics, 44th Ed. USA: Chemical Rubber Co.
p. 2850.
11. Bushberg, Jerrold T.; Seibert, J. Anthony; Leidholdt, Edwin M.; Boone, John M. (2002). The essential physics of
medical imaging. Lippincott Williams & Wilkins. p. 42. ISBN 978-0-683-30118-2.
12. Bushberg, Jerrold T.; Seibert, J. Anthony; Leidholdt, Edwin M.; Boone, John M. (2002). The essential physics of
medical imaging. Lippincott Williams & Wilkins. p. 38. ISBN 978-0-683-30118-2.
13. "RTAB: the Rayleigh scattering database" (http://adg.llnl.gov/Research/scattering/RTAB.html). Lynn Kissel. 2000-
09-02. Retrieved 2012-11-08.
14. Attwood, David (1999). "3". Soft X-rays and extreme ultraviolet radiation (http://ast.coe.berkeley.edu/sxreuv/).
Cambridge University Press. ISBN 978-0-521-65214-8.
15. "X-ray Transition Energies Database" (http://physics.nist.gov/PhysRefData/XrayTrans/Html/search.html). NIST
Physical Measurement Laboratory. 2011-12-09. Retrieved 2016-02-19.
16. "X-Ray Data Booklet Table 1-3" (https://web.archive.org/web/20090423224919/http://xdb.lbl.gov/Section1/Table_1-
3.pdf) (PDF). Center for X-ray Optics and Advanced Light Source, Lawrence Berkeley National Laboratory. 2009-
10-01. Archived from the original (http://xdb.lbl.gov/Section1/Table_1-3.pdf) (PDF) on 23 April 2009. Retrieved
2016-02-19.
17. Whaites, Eric; Cawson, Roderick (2002). Essentials of Dental Radiography and Radiology (https://books.google.co
m/?id=x6ThiifBPcsC&dq=radiography+kilovolt+x-ray+machine). Elsevier Health Sciences. pp. 15–20. ISBN 0-443-
07027-X.
18. Bushburg, Jerrold; Seibert, Anthony; Leidholdt, Edwin; Boone, John (2002). The Essential Physics of Medical
Imaging (https://books.google.com/?id=VZvqqaQ5DvoC&pg=PT33&dq=radiography+kerma+rem+Sievert). USA:
Lippincott Williams & Wilkins. p. 116. ISBN 0-683-30118-7.
19. Camara, C. G.; Escobar, J. V.; Hird, J. R.; Putterman, S. J. (2008). "Correlation between nanosecond X-ray flashes
and stick–slip friction in peeling tape" (http://homepage.usask.ca/~jrm011/X-ray_tape.pdf) (PDF). Nature. 455
(7216): 1089–1092. Bibcode:2008Natur.455.1089C (http://adsabs.harvard.edu/abs/2008Natur.455.1089C).
doi:10.1038/nature07378 (https://doi.org/10.1038%2Fnature07378). Retrieved 2 February 2013.
20. Emilio, Burattini; Ballerna, Antonella (1994). "Preface" (https://books.google.com/books?id=VEld4080nekC&pg=P
A129). Biomedical Applications of Synchrotron Radiation: Proceedings of the 128th Course at the International
School of Physics -Enrico Fermi- 12–22 July 1994, Varenna, Italy. IOS Press. p. xv. ISBN 90-5199-248-3.
21. Helmut Paul and Johannes Muhr, Physics Reports 135 (1986) pp. 47–97
22. Spiegel PK (1995). "The first clinical X-ray made in America—100 years". American Journal of Roentgenology.
Leesburg, VA: American Roentgen Ray Society. 164 (1): 241–243. ISSN 1546-3141 (https://www.worldcat.org/issn/
1546-3141). PMID 7998549 (https://www.ncbi.nlm.nih.gov/pubmed/7998549). doi:10.2214/ajr.164.1.7998549 (http
s://doi.org/10.2214%2Fajr.164.1.7998549).
23. Roobottom CA, Mitchell G, Morgan-Hughes G (2010). "Radiation-reduction strategies in cardiac computed
tomographic angiography". Clin Radiol. 65 (11): 859–67. PMID 20933639 (https://www.ncbi.nlm.nih.gov/pubmed/2
0933639). doi:10.1016/j.crad.2010.04.021 (https://doi.org/10.1016%2Fj.crad.2010.04.021). "Of the 5 billion imaging
investigations performed worldwide..."
24. Medical Radiation Exposure Of The U.S. Population Greatly Increased Since The Early 1980s (http://www.scienced
aily.com/releases/2009/03/090303125809.htm), Science Daily, March 5, 2009
25. Herman, Gabor T. (2009). Fundamentals of Computerized Tomography: Image Reconstruction from Projections (2nd
ed.). Springer. ISBN 978-1-85233-617-2.
26. Advances in kilovoltage x-ray beam dosimetry in doi:10.1088/0031-9155/59/6/R183 (https://dx.doi.org/10.1088%2F
0031-9155%2F59%2F6%2FR183)
27. Back to the future: the history and development of the clinical linear accelerator in doi:10.1088/0031-
9155/51/13/R20 (https://dx.doi.org/10.1088%2F0031-9155%2F51%2F13%2FR20)
28. Hall EJ, Brenner DJ (2008). "Cancer risks from diagnostic radiology". Br J Radiol. 81 (965): 362–78.
PMID 18440940 (https://www.ncbi.nlm.nih.gov/pubmed/18440940). doi:10.1259/bjr/01948454 (https://doi.org/10.1
259%2Fbjr%2F01948454).
29. Brenner DJ (2010). "Should we be concerned about the rapid increase in CT usage?". Rev Environ Health. 25 (1):
63–8. PMID 20429161 (https://www.ncbi.nlm.nih.gov/pubmed/20429161). doi:10.1515/REVEH.2010.25.1.63 (http
s://doi.org/10.1515%2FREVEH.2010.25.1.63).
30. De Santis M, Cesari E, Nobili E, Straface G, Cavaliere AF, Caruso A (2007). "Radiation effects on development".
Birth Defects Res. C Embryo Today. 81 (3): 177–82. PMID 17963274 (https://www.ncbi.nlm.nih.gov/pubmed/17963
274). doi:10.1002/bdrc.20099 (https://doi.org/10.1002%2Fbdrc.20099).
31. "11th Report on Carcinogens" (http://ntp.niehs.nih.gov/ntp/roc/toc11.html). Ntp.niehs.nih.gov. Retrieved
2010-11-08.
32. Brenner DJ, Hall EJ (2007). "Computed tomography—an increasing source of radiation exposure". N. Engl. J. Med.
357 (22): 2277–84. PMID 18046031 (https://www.ncbi.nlm.nih.gov/pubmed/18046031).
doi:10.1056/NEJMra072149 (https://doi.org/10.1056%2FNEJMra072149).
33. Upton AC (2003). "The state of the art in the 1990s: NCRP report No. 136 on the scientific bases for linearity in the
dose-response relationship for ionizing radiation". Health Physics. 85 (1): 15–22. PMID 12852466 (https://www.ncb
i.nlm.nih.gov/pubmed/12852466). doi:10.1097/00004032-200307000-00005 (https://doi.org/10.1097%2F00004032-
200307000-00005).
34. Calabrese EJ, Baldwin LA (2003). "Toxicology rethinks its central belief" (http://www.cerrie.org/committee_papers/I
NFO_9-F.pdf) (PDF). Nature. 421 (6924): 691–2. Bibcode:2003Natur.421..691C (http://adsabs.harvard.edu/abs/2003
Natur.421..691C). PMID 12610596 (https://www.ncbi.nlm.nih.gov/pubmed/12610596). doi:10.1038/421691a (http
s://doi.org/10.1038%2F421691a).
35. Berrington de González A, Darby S (2004). "Risk of cancer from diagnostic X-rays: estimates for the UK and 14
other countries". Lancet. 363 (9406): 345–351. PMID 15070562 (https://www.ncbi.nlm.nih.gov/pubmed/15070562).
doi:10.1016/S0140-6736(04)15433-0 (https://doi.org/10.1016%2FS0140-6736%2804%2915433-0).
36. Brenner DJ, Hall EJ (2007). "Computed tomography- an increasing source of radiation exposure" (http://www.nejm.
org/doi/full/10.1056/NEJMra072149). New England Journal of Medicine. 357 (22): 2277–2284. PMID 18046031 (ht
tps://www.ncbi.nlm.nih.gov/pubmed/18046031). doi:10.1056/NEJMra072149 (https://doi.org/10.1056%2FNEJMra0
72149).
37. Radiologyinfo.org (http://www.radiologyinfo.org/en/safety/index.cfm?pg=sfty_xray), Radiological Society of North
America and American College of Radiology
38. "National Cancer Institute: Surveillance Epidemiology and End Results (SEER) data" (http://seer.cancer.gov/csr/197
5_2006/browse_csr.php?section=2&page=sect_02_table.11.html#table1). Seer.cancer.gov. 2010-06-30. Retrieved
2011-11-08.
39. Caon, M., Bibbo, G. & Pattison, J. (2000). "Monte Carlo calculated effective dose to teenage girls from computed
tomography examinations". Radiation Protection Dosimetry. 90 (4): 445–448.
doi:10.1093/oxfordjournals.rpd.a033172 (https://doi.org/10.1093%2Foxfordjournals.rpd.a033172).
40. Shrimpton, P.C; Miller, H.C; Lewis, M.A; Dunn, M. Doses from Computed Tomography (CT) examinations in the
UK – 2003 Review (http://www.hpa.org.uk/web/HPAwebFile/HPAweb_C/1194947420292) Archived (https://web.ar
chive.org/web/20110922122151/http://www.hpa.org.uk/web/HPAwebFile/HPAweb_C/1194947420292) September
22, 2011, at the Wayback Machine.
41. Gregory KJ, Bibbo G, Pattison JE (2008). "On the uncertainties in effective dose estimates of adult CT head scans".
Medical physics. 35 (8): 3501–10. Bibcode:2008MedPh..35.3501G (http://adsabs.harvard.edu/abs/2008MedPh..35.3
501G). PMID 18777910 (https://www.ncbi.nlm.nih.gov/pubmed/18777910). doi:10.1118/1.2952359 (https://doi.org/
10.1118%2F1.2952359).
42. Giles D, Hewitt D, Stewart A, Webb J (1956). "Preliminary Communication: Malignant Disease in Childhood and
Diagnostic Irradiation In-Utero". Lancet. 271 (6940): 447. PMID 13358242 (https://www.ncbi.nlm.nih.gov/pubmed/
13358242). doi:10.1016/S0140-6736(56)91923-7 (https://doi.org/10.1016%2FS0140-6736%2856%2991923-7).
43. "Pregnant Women and Radiation Exposure" (https://web.archive.org/web/20090123034228/http://emedicinelive.co
m/index.php/Women-s-Health/pregnant-women-and-radiation-exposure.html). eMedicine Live online medical
consultation. Medscape. 28 December 2008. Archived from the original (http://emedicinelive.com/index.php/Wome
n-s-Health/pregnant-women-and-radiation-exposure.html) on January 23, 2009. Retrieved 2009-01-16.
44. Donnelly LF (2005). "Reducing radiation dose associated with pediatric CT by decreasing unnecessary
examinations". American Journal of Roentgenology. 184 (2): 655–7. PMID 15671393 (https://www.ncbi.nlm.nih.go
v/pubmed/15671393). doi:10.2214/ajr.184.2.01840655 (https://doi.org/10.2214%2Fajr.184.2.01840655).
45. US National Research Council (2006). Health Risks from Low Levels of Ionizing Radiation, BEIR 7 phase 2 (https://
books.google.com/?id=Uqj4OzBKlHwC&pg=PA5). National Academies Press. pp. 5, fig.PS–2. ISBN 0-309-09156-
X., data credited to NCRP (US National Committee on Radiation Protection) 1987
46. "ANS / Public Information / Resources / Radiation Dose Calculator" (http://www.new.ans.org/pi/resources/dosechar
t/).
47. The Nuclear Energy Option, Bernard Cohen, Plenum Press 1990 Ch. 5 (http://www.phyast.pitt.edu/~blc/book/chapte
r5.html) Archived (https://web.archive.org/web/20131120115753/http://www.phyast.pitt.edu/~blc/book/chapter5.htm
l) November 20, 2013, at the Wayback Machine.
48. Muller, Richard. Physics for Future Presidents, Princeton University Press, 2010
49. X-Rays (http://www.doctorspiller.com/Dental%20_X-Rays.htm). Doctorspiller.com (2007-05-09). Retrieved on 2011-05-05.
50. X-Ray Safety (http://www.dentalgentlecare.com/x-ray_safety.htm) Archived (https://web.archive.org/web/20070404
222105/http://www.dentalgentlecare.com/x-ray_safety.htm) April 4, 2007, at the Wayback Machine..
Dentalgentlecare.com (2008-02-06). Retrieved on 2011-05-05.
51. "Dental X-Rays" (http://www.physics.isu.edu/radinf/dental.htm). Idaho State University. Retrieved November 7,
2012.
52. D.O.E. – About Radiation (http://www.oakridge.doe.gov/external/PublicActivities/EmergencyPublicInformation/Ab
outRadiation/tabid/319/Default.aspx) Archived (https://web.archive.org/web/20120427175013/http://www.oakridge.
doe.gov/external/PublicActivities/EmergencyPublicInformation/AboutRadiation/tabid/319/Default.aspx) April 27,
2012, at the Wayback Machine.
53. Kasai, Nobutami; Kakudo, Masao (2005). X-ray diffraction by macromolecules (https://books.google.com/books?id=
_y5hM5_vx0EC&pg=PA291). Tokyo: Kodansha. pp. 291–2. ISBN 3-540-25317-3.
54. Ahi, Kiarash (May 26, 2016). "Advanced terahertz techniques for quality control and counterfeit detection" (https://
www.researchgate.net/publication/303563438_Advanced_terahertz_techniques_for_quality_control_and_counterfeit
_detection). Proc. SPIE 9856, Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and
Defense, 98560G. Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and Defense.
9856: 98560G. Bibcode:2016SPIE.9856E..0GA (http://adsabs.harvard.edu/abs/2016SPIE.9856E..0GA).
doi:10.1117/12.2228684 (https://doi.org/10.1117%2F12.2228684). Retrieved May 26, 2016.
55. Monico L, Van der Snickt G, Janssens K, De Nolf W, Miliani C, Verbeeck J, Tian H, Tan H, Dik J, Radepont M,
Cotte M (2011). "Degradation Process of Lead Chromate in Paintings by Vincent van Gogh Studied by Means of
Synchrotron X-ray Spectromicroscopy and Related Methods. 1. Artificially Aged Model Samples". Analytical
Chemistry. 83 (4): 1214–1223. PMID 21314201 (https://www.ncbi.nlm.nih.gov/pubmed/21314201).
doi:10.1021/ac102424h (https://doi.org/10.1021%2Fac102424h). Monico L, Van der Snickt G, Janssens K, De Nolf
W, Miliani C, Dik J, Radepont M, Hendriks E, Geldof M, Cotte M (2011). "Degradation Process of Lead Chromate
in Paintings by Vincent van Gogh Studied by Means of Synchrotron X-ray Spectromicroscopy and Related Methods.
2. Original Paint Layer Samples" (http://www.vangogh.ua.ac.be/ac1025122_part2.pdf) (PDF). Analytical Chemistry.
83 (4): 1224–1231. PMID 21314202 (https://www.ncbi.nlm.nih.gov/pubmed/21314202). doi:10.1021/ac1025122 (htt
ps://doi.org/10.1021%2Fac1025122).
56. Bickmore, Helen (2003). Milady's Hair Removal Techniques: A Comprehensive Manual (https://books.google.com/?i
d=oAFSBBLU_UUC&pg=PT250&lpg=PT250). ISBN 1401815553.
57. "Science Diction: How 'X-Ray' Got Its 'X' " (http://www.npr.org/templates/story/story.php?storyId=127932774).
2010-06-18. Retrieved 2014-03-23.
58. Kevles, Bettyann Holtzmann (1996). Naked to the Bone Medical Imaging in the Twentieth Century. Camden, NJ:
Rutgers University Press. pp. 19–22. ISBN 0-8135-2358-3.
59. Sample, Sharro (2007-03-27). "X-Rays" (http://science.hq.nasa.gov/kids/imagers/ems/xrays.html). The
Electromagnetic Spectrum. NASA. Retrieved 2007-12-03.
60. The history, development, and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI,
DTI: Nature Precedings (http://precedings.nature.com/documents/3267/version/5) doi:10.1038/npre.2009.3267.5 (htt
ps://dx.doi.org/10.1038%2Fnpre.2009.3267.5).
61. Glasser, Otto (1958). Dr. W. C. Röntgen. Springfield: Thomas.
62. Natale, Simone (2011-11-01). "The Invisible Made Visible" (http://dx.doi.org/10.1080/13688804.2011.602856).
Media History. 17 (4): 345–358. ISSN 1368-8804 (https://www.worldcat.org/issn/1368-8804).
doi:10.1080/13688804.2011.602856 (https://doi.org/10.1080%2F13688804.2011.602856).
63. Natale, Simone (2011-08-04). "A Cosmology of Invisible Fluids: Wireless, X-Rays, and Psychical Research Around
1900" (http://www.cjc-online.ca/index.php/journal/article/view/2368). Canadian Journal of Communication. 36 (2).
ISSN 0705-3657 (https://www.worldcat.org/issn/0705-3657).
64. Grove, Allen W. (1997-01-01). "Rontgen's Ghosts: Photography, X-Rays, and the Victorian Imagination" (https://mu
se.jhu.edu/journals/literature_and_medicine/v016/16.2grove.html). Literature and Medicine. 16 (2): 141–173.
ISSN 1080-6571 (https://www.worldcat.org/issn/1080-6571). doi:10.1353/lm.1997.0016 (https://doi.org/10.1353%2
Flm.1997.0016).
65. Thomson, Joseph J. (1903). The Discharge of Electricity through Gasses (https://books.google.com/?id=Ryw4AAA
AMAAJ&pg=PA138). USA: Charles Scribner's Sons. p. 138.
66. Michette, Alan G. and Pfauntsch, Slawka (1996) "X-rays: the first hundred years". John Wiley & Sons. ISBN
0471965022. p.1
67. Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World (https://books.google.com/?id=m
REnwpAqz-YC&pg=PA81). UK: Oxford Univ. Press. p. 79. ISBN 0-19-851997-4.
68. Gaida, Roman; et al. (1997). "Ukrainian Physicist Contributes to the Discovery of X-Rays" (https://web.archive.org/
web/20080528172938/http://www.meduniv.lviv.ua/oldsite/puluj.html). Mayo Foundation for Medical Education and
Research. Archived from the original (http://www.meduniv.lviv.ua/oldsite/puluj.html) on 2008-05-28. Retrieved
2008-04-06.
69. Wyman, Thomas (Spring 2005). "Fernando Sanford and the Discovery of X-rays". "Imprint", from the Associates of
the Stanford University Libraries: 5–15.
70. Thomson, Joseph J. (1903). The Discharge of Electricity through Gasses (https://books.google.com/?id=Ryw4AAA
AMAAJ&pg=PA182). USA: Charles Scribner's Sons. pp. 182–186.
71. Wiedmann's Annalen, Vol. XLVIII
72. Hrabak, M.; Padovan, R. S.; Kralik, M; Ozretic, D; Potocki, K (2008). "Scenes from the past: Nikola Tesla and the
discovery of X-rays". RadioGraphics. 28 (4): 1189–92. PMID 18635636 (https://www.ncbi.nlm.nih.gov/pubmed/186
35636). doi:10.1148/rg.284075206 (https://doi.org/10.1148%2Frg.284075206).
73. Chadda, P. K. (2009). Hydroenergy and Its Energy Potential (https://books.google.com/books?id=Fi18dCHfbLkC&p
g=PT88). Pinnacle Technology. pp. 88–. ISBN 978-1-61820-149-2.
74. From his technical publications, it is indicated that he invented and developed a special single-electrode X-ray tube:
Morton, William James and Hammer, Edwin W. (1896) American Technical Book Co., p. 68., U.S. Patent 514,170 (h
ttps://www.google.com/patents/US514170), "Incandescent Electric Light", and U.S. Patent 454,622 (https://www.go
ogle.com/patents/US454622) "System of Electric Lighting". These differed from other X-ray tubes in having no
target electrode and worked with the output of a Tesla Coil.
75. Stanton, Arthur (1896-01-23). "Wilhelm Conrad Röntgen On a New Kind of Rays: translation of a paper read before
the Würzburg Physical and Medical Society, 1895". Nature. 53 (1369): 274–6. Bibcode:1896Natur..53R.274. (http://
adsabs.harvard.edu/abs/1896Natur..53R.274.). doi:10.1038/053274b0 (https://doi.org/10.1038%2F053274b0). see
also pp. 268 and 276 of the same issue.
76. Karlsson, Erik B. (9 February 2000). "The Nobel Prizes in Physics 1901–2000" (https://www.nobelprize.org/nobel_p
rizes/physics/articles/karlsson/). Stockholm: The Nobel Foundation. Retrieved 24 November 2011.
77. Peters, Peter (1995). "W. C. Roentgen and the discovery of x-rays" (http://www.medcyclopaedia.com/library/radiolo
gy/chapter01.aspx). Textbook of Radiology. Medcyclopedia.com, GE Healthcare. Retrieved 5 May 2008.
78. Glasser, Otto (1993). Wilhelm Conrad Röntgen and the early history of the roentgen rays (https://books.google.com/
books?id=5GJs4tyb7wEC&pg=PA10). Norman Publishing. pp. 10–15. ISBN 0930405226.
79. Markel, Howard (20 December 2012). " 'I Have Seen My Death': How the World Discovered the X-Ray" (http://ww
w.pbs.org/newshour/rundown/2012/12/i-have-seen-my-death-how-the-world-discovered-the-x-ray.html). PBS
NewsHour. PBS. Retrieved 27 April 2013.
80. Meggitt, Geoff (2008). Taming the Rays: a history of radiation and protection. lulu.com. p. 3. ISBN 1409246671.
81. "Major John Hall-Edwards" (https://web.archive.org/web/20120928204852/http://www.birmingham.gov.uk/xray).
Birmingham City Council. Archived from the original (http://www.birmingham.gov.uk/xray) on September 28, 2012.
Retrieved 2012-05-17.
82. Kudriashov, Y. B. (2008). Radiation Biophysics. Nova Publishers. p. xxi. ISBN 9781600212802.
83. National Library of Medicine. "Could X-rays Have Saved President William McKinley? (https://www.nlm.nih.gov/vi
sibleproofs/galleries/cases/mckinley.html)" Visible Proofs: Forensic Views of the Body.
84. Daniel, J. (April 10, 1896). "The X-Rays" (http://www.sciencemag.org/content/3/67/562). Science. 3 (67): 562.
Bibcode:1896Sci.....3..562D (http://adsabs.harvard.edu/abs/1896Sci.....3..562D). doi:10.1126/science.3.67.562 (http
s://doi.org/10.1126%2Fscience.3.67.562).
85. Fleming, Walter Lynwood. The South in the Building of the Nation: Biography A-J. Pelican Publishing. p. 300.
ISBN 1589809467.
86. Ce4Rt (Mar 2014). Understanding Ionizing Radiation and Protection (https://books.google.com/books?id=IioKBAA
AQBAJ&pg=PA174). p. 174.
87. Glasser, Otto (1934). Wilhelm Conrad Röntgen and the Early History of the Roentgen Rays (https://books.google.co
m/?id=5GJs4tyb7wEC&pg=PA294). Norman Publishing. p. 294. ISBN 0930405226.
88. Sansare K, Khanna V, Karjodkar F (2011). "Early victims of X-rays: A tribute and current perception" (https://www.
ncbi.nlm.nih.gov/pmc/articles/PMC3520298). Dentomaxillofacial Radiology. 40 (2): 123–125. PMC 3520298 (http
s://www.ncbi.nlm.nih.gov/pmc/articles/PMC3520298)  . PMID 21239576 (https://www.ncbi.nlm.nih.gov/pubmed/2
1239576). doi:10.1259/dmfr/73488299 (https://doi.org/10.1259%2Fdmfr%2F73488299).
89. Kathern, Ronald L. and Ziemer, Paul L. The First Fifty Years of Radiation Protection (http://www.physics.isu.edu/ra
dinf/50yrs.htm), physics.isu.edu
90. Hrabak M, Padovan RS, Kralik M, Ozretic D, Potocki K (July 2008). "Nikola Tesla and the Discovery of X-rays".
RadioGraphics. 28 (4): 1189–92. PMID 18635636 (https://www.ncbi.nlm.nih.gov/pubmed/18635636).
doi:10.1148/rg.284075206 (https://doi.org/10.1148%2Frg.284075206).
91. California, San Francisco Area Funeral Home Records, 1835–1979. Database with images. FamilySearch. Jacob
Fleischman in entry for Elizabeth Aschheim. 03 Aug 1905. Citing funeral home J.S. Godeau, San Francisco, San
Francisco, California. Record book Vol. 06, p. 1-400, 1904–1906. San Francisco Public Library. San Francisco
History and Archive Center.
92. Editor. (5 August 1905). Aschheim. Obituaries. San Francisco Examiner. San Francisco, California.
93. Editor. (5 August 1905). Obituary Notice. Elizabeth Fleischmann. San Francisco Chronicle. Page 10.

94. Birmingham City Council: Major John Hall-Edwards (http://www.birmingham.gov.uk/xray) Archived (https://web.ar
chive.org/web/20120928204852/http://www.birmingham.gov.uk/xray) September 28, 2012, at the Wayback
Machine.
95. Fitzgerald, Richard (2000). "Phase-sensitive x-ray imaging". Physics Today. 53 (7): 23. Bibcode:2000PhT....53g..23F
(http://adsabs.harvard.edu/abs/2000PhT....53g..23F). doi:10.1063/1.1292471
(https://doi.org/10.1063%2F1.1292471).
96. David, C, Nohammer, B, Solak, H H, & Ziegler E (2002). "Differential x-ray phase contrast imaging using a
shearing interferometer". Applied Physics Letters. 81 (17): 3287–3289. Bibcode:2002ApPhL..81.3287D (http://adsab
s.harvard.edu/abs/2002ApPhL..81.3287D). doi:10.1063/1.1516611 (https://doi.org/10.1063%2F1.1516611).
97. Wilkins, S W, Gureyev, T E, Gao, D, Pogany, A & Stevenson, A W (1996). "Phase-contrast imaging using
polychromatic hard X-rays". Nature. 384 (6607): 335–338. Bibcode:1996Natur.384..335W (http://adsabs.harvard.ed
u/abs/1996Natur.384..335W). doi:10.1038/384335a0 (https://doi.org/10.1038%2F384335a0).
98. Davis, T J, Gao, D, Gureyev, T E, Stevenson, A W & Wilkins, S W (1995). "Phase-contrast imaging of weakly
absorbing materials using hard X-rays". Nature. 373 (6515): 595–598. Bibcode:1995Natur.373..595D (http://adsabs.
harvard.edu/abs/1995Natur.373..595D). doi:10.1038/373595a0 (https://doi.org/10.1038%2F373595a0).
99. Momose A, Takeda T, Itai Y, Hirano K (1996). "Phase-contrast X-ray computed tomography for observing biological
soft tissues". Nature Medicine. 2 (4): 473–475. PMID 8597962 (https://www.ncbi.nlm.nih.gov/pubmed/8597962).
doi:10.1038/nm0496-473 (https://doi.org/10.1038%2Fnm0496-473).
100. Frame, Paul. "Wilhelm Röntgen and the Invisible Light" (http://www.orau.org/ptp/articlesstories/invisiblelight.htm).
Tales from the Atomic Age. Oak Ridge Associated Universities. Retrieved 2008-05-19.
101. Als-Nielsen, Jens; McMorrow, Des (2001). Elements of Modern X-Ray Physics. John Wiley & Sons Ltd. pp. 40–41.
ISBN 0-471-49858-0.

External links
Historical X-ray tubes (http://www.crtsite.com/page5.html)
Röntgen's 1895 article, on line and analyzed on BibNum (https://www.bibnum.education.fr/physique/electricite-electromagnetisme/la-decouverte-des-rayons-x-par-roentgen) [click 'à télécharger' for English analysis]
Example Radiograph: Fractured Humerus (http://www.rtstudents.com/x-rays/broken-humerus-xray.htm)
A Photograph of an X-ray Machine (https://web.archive.org/web/20070810063814/http://www.iuk.edu/~koalhe/img/Equipment/xray.jpg)
X-ray Safety (http://www.x-raysafety.com/)
An X-ray tube demonstration (Animation) (http://www.ionactive.co.uk/multi-media_video.html?m=4)
1896 Article: "On a New Kind of Rays" (https://web.archive.org/web/20070710033139/http://deutsche.nature.com/physics/7.pdf)
"Digital X-Ray Technologies Project" (https://docs.google.com/fileview?id=0B89CZuXbiY7mNmQxYmVlNDktNjBiZS00NjcwLTg0ODgtZjc3NWUwOWUxZDg5&hl=tr)
What is Radiology? (https://web.archive.org/web/20100729022133/http://rad.usuhs.mil/rad/home/whatis.html) a simple tutorial
50,000 X-ray, MRI, and CT pictures (https://web.archive.org/web/20100306044636/http://rad.usuhs.edu/medpix/) MedPix medical image database
Index of Early Bremsstrahlung Articles (http://www.datasync.com/~rsf1/bremindx.htm)
Extraordinary X-Rays (http://www.life.com/image/first/in-gallery/44881/extraordinary-x-rays) – slideshow by Life
X-rays and crystals (http://www.xtal.iqfr.csic.es/Cristalografia/index-en.html)

Retrieved from "https://en.wikipedia.org/w/index.php?title=X-ray&oldid=790138323"

Categories: X-rays Electromagnetic spectrum IARC Group 1 carcinogens Medical physics Radiography
Radiation Wilhelm Röntgen

This page was last edited on 11 July 2017, at 20:43.

https://en.wikipedia.org/wiki/X-ray 19/20
7/19/2017 X-ray - Wikipedia

Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may
apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered
trademark of the Wikimedia Foundation, Inc., a non-profit organization.

https://en.wikipedia.org/wiki/X-ray 20/20
7/19/2017 Infrared spectroscopy - Wikipedia

Infrared spectroscopy
From Wikipedia, the free encyclopedia

Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) involves the interaction of infrared radiation with
matter. It covers a range of techniques, mostly based on absorption spectroscopy. As with all spectroscopic techniques, it can
be used to identify and study chemicals. Samples may be solid, liquid, or gas. The method or technique of infrared
spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) to produce an infrared
spectrum. An IR spectrum is essentially a graph of infrared light absorbance (or transmittance) on the vertical axis vs.
frequency or wavelength on the horizontal axis. Typical units of frequency used in IR spectra are reciprocal centimeters
(sometimes called wave numbers), with the symbol cm−1. Units of IR wavelength are commonly given in micrometers
(formerly called "microns"), symbol μm, which are related to wave numbers in a reciprocal way. A common laboratory
instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible
as discussed below.

The infrared portion of the electromagnetic spectrum is usually divided into three regions: the near-, mid- and far-infrared,
named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14000–4000 cm−1 (0.8–2.5 μm
wavelength) can excite overtone or harmonic vibrations. The mid-infrared, approximately 4000–400 cm−1 (2.5–25 μm) may
be used to study the fundamental vibrations and associated rotational-vibrational structure. The far-infrared, approximately
400–10 cm−1 (25–1000 μm), lying adjacent to the microwave region, has low energy and may be used for rotational
spectroscopy. The names and classifications of these subregions are conventions, and are only loosely based on the relative
molecular or electromagnetic properties.

Contents
1 Theory
1.1 Number of vibrational modes
1.2 Special effects
2 Practical IR spectroscopy
2.1 Sample preparation
2.2 Comparing to a reference
2.3 FTIR
3 Absorption bands
3.1 Badger's rule
4 Uses and applications
5 Isotope effects
6 Two-dimensional IR
7 See also
8 References
9 External links

Theory
Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These
absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The
energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated
vibronic coupling.

In particular, in the Born–Oppenheimer and harmonic approximations, i.e. when the molecular Hamiltonian corresponding to
the electronic ground state can be approximated by a harmonic oscillator in the neighborhood of the equilibrium molecular
geometry, the resonant frequencies are associated with the normal modes corresponding to the molecular electronic ground
state potential energy surface. The resonant frequencies are also related to the strength of the bond and the mass of the atoms
at either end of it. Thus, the frequency of the vibrations is associated with a particular normal mode of motion and a
particular bond type.

Number of vibrational modes


In order for a vibrational mode in a sample to be "IR active", it must be associated with changes in the dipole moment. A
permanent dipole is not necessary, as the rule requires only a change in dipole moment.[1]

[Figure: 3D animation of the symmetric stretching of the C–H bonds of bromomethane.]
[Figure: Sample IR spectrum of bromomethane (CH3Br), showing peaks around 3000, 1300, and 1000 cm−1 on the horizontal axis.]

A molecule can vibrate in many ways, and each way is called a
vibrational mode. For molecules with N number of atoms, linear
molecules have 3N – 5 degrees of vibrational modes, whereas nonlinear molecules have 3N – 6 degrees of vibrational modes
(also called vibrational degrees of freedom). As an example H2O, a non-linear molecule, will have 3 × 3 – 6 = 3 degrees of
vibrational freedom, or modes.
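
A minimal sketch of this counting rule (illustrative only, not from the article; the function name is invented):

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of vibrational modes: 3N - 5 for linear molecules, 3N - 6 otherwise."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=False))  # H2O (bent): 3
print(vibrational_modes(2, linear=True))   # N2 or CO: 1
print(vibrational_modes(3, linear=True))   # CO2: 4
```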

Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the
band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. CO, absorb
in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more
complex, i.e. big molecules have many peaks in their IR spectra.

The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate
in nine different ways. Six of these vibrations involve only the CH2 portion: symmetric and antisymmetric stretching,
scissoring, rocking, wagging and twisting, as shown below. Structures that do not have the two additional X groups attached
have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in
water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H represent simple
rotation of the whole molecule rather than vibrations within it.


Direction     Symmetric               Antisymmetric
Radial        Symmetric stretching    Antisymmetric stretching
Latitudinal   Scissoring              Rocking
Longitudinal  Wagging                 Twisting

These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall
movements of the molecule, are much smaller than the movements of the lighter H atoms.

Special effects

The simplest and most important IR bands arise from the "normal modes," the simplest distortions of the molecule. In some
cases, "overtone bands" are observed. These bands arise from the absorption of a photon that leads to a doubly excited
vibrational state. Such bands appear at approximately twice the energy of the normal mode. Some vibrations, so-called
'combination modes," involve more than one normal mode. The phenomenon of Fermi resonance can arise when two modes
are similar in energy; Fermi resonance results in an unexpected shift in energy and intensity of the bands etc.

Practical IR spectroscopy
The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of
the IR is the same as the vibrational frequency of a bond or collection of bonds, absorption occurs. Examination of the
transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be
achieved by scanning the wavelength range using a monochromator. Alternatively, the entire wavelength range is measured
using a Fourier transform instrument and then a transmittance or absorbance spectrum is generated using a dedicated
procedure.

This technique is commonly used for analyzing samples with covalent bonds. Simple spectra are obtained from samples with
few IR active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more
complex spectra.

Sample preparation

Gaseous samples require a sample cell with a long pathlength to compensate for the diluteness. The pathlength of the sample
cell depends on the concentration of the compound of interest. A simple glass tube with length of 5 to 10 cm equipped with
infrared-transparent windows at both ends of the tube can be used for concentrations down to several hundred ppm.


Sample gas concentrations well below ppm can be measured with a White's cell in
which the infrared light is guided with mirrors to travel through the gas. White's cells
are available with optical pathlengths starting from 0.5 m up to a hundred meters.

Liquid samples can be sandwiched between two plates of a salt (commonly sodium
chloride, or common salt, although a number of other salts such as potassium bromide
or calcium fluoride are also used).[2] The plates are transparent to the infrared light and
do not introduce any lines onto the spectra.

Solid samples can be prepared in a variety of ways. One common method is to crush
the sample with an oily mulling agent (usually mineral oil Nujol). A thin film of the
mull is applied onto salt plates and measured. The second method is to grind a quantity
of the sample with a specially purified salt (usually potassium bromide) finely (to
remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a
translucent pellet through which the beam of the spectrometer can pass.[2] A third technique is the "cast film" technique,
which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of
this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed
on the cell is analysed directly. Care is important to ensure that the film is not too thick, otherwise light cannot pass through.
This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 µm) film from a
solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of
the solid is preserved.

[Figure: Typical IR solution cell. The windows are CaF2.]

In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample
cup which is inserted into the photoacoustic cell which is then sealed for the measurement. The sample may be one solid
piece, powder or basically in any form for the measurement. For example, a piece of rock can be inserted into the sample cup
and the spectrum measured from it.

Comparing to a reference

It is typical to record a spectrum of both the sample and a "reference". This step controls for a number of variables, e.g. the
infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the instrument
influence.

The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply
remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample
is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the
same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source
is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just
show the properties of the solute (at least approximately).

A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample
and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference
measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods,
such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results. The standard
addition method can be used to statistically cancel these errors.

[Figure: Schematics of a two-beam absorption spectrometer. A beam of infrared light is produced, passed through an
interferometer (not shown), and then split into two separate beams. One is passed through the sample, the other passed
through a reference. The beams are both reflected back towards a detector, however first they pass through a splitter, which
quickly alternates which of the two beams enters the detector. The two signals are then compared and a printout is obtained.
This "two-beam" setup gives accurate spectra even if the intensity of the light source drifts over time.]

Nevertheless, among the different absorption-based techniques used for detecting gaseous species, cavity ring-down
spectroscopy (CRDS) can be used as a calibration-free method. Because CRDS is based on measurements of photon
lifetimes (and not the laser intensity), it requires no calibration and no comparison with a reference.[3]
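
As a rough numerical illustration of the reference correction described above (a minimal sketch, not from the article; the array values are invented), the transmittance at each wavenumber is the ratio of the single-beam sample spectrum to the reference spectrum, and the absorbance is its negative base-10 logarithm:

```python
import numpy as np

# Single-beam intensities vs. wavenumber (toy numbers for illustration).
reference = np.array([1.00, 0.98, 0.97, 0.99])  # e.g. empty cell or pure solvent
sample    = np.array([0.80, 0.50, 0.90, 0.95])  # same cell with the sample present

transmittance = sample / reference          # T, dimensionless (0..1)
absorbance = -np.log10(transmittance)       # A = -log10(T)

print(transmittance)
print(absorbance)
```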

FTIR

Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra.
Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the
apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an
"interferogram", represents light output as a function of mirror position. A data-processing technique called Fourier transform
turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or
equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference.

An alternate method for acquiring spectra is the "dispersive" or "scanning monochromator" method. In this approach, the
sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis
spectroscopy, but is less practical in the infrared than the FTIR method. One reason that FTIR is favored is called "Fellgett's
advantage" or the "multiplex advantage": the information at all frequencies is collected simultaneously, improving both speed
and signal-to-noise ratio. Another is called "Jacquinot's throughput advantage": a dispersive measurement requires detecting
much lower light levels than an FTIR measurement.[4] There are other advantages, as well as some disadvantages,[4] but
virtually all modern infrared spectrometers are FTIR instruments.

[Figure: An interferogram from an FTIR measurement. The horizontal axis is the position of the mirror, and the vertical axis
is the amount of light detected. This is the "raw data" which can be Fourier transformed to get the actual spectrum.]
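
To make the interferogram-to-spectrum step concrete, the following minimal numpy sketch (illustrative only, not from the article) simulates an idealized interferogram for a two-line source and recovers the line positions by Fourier transforming it:

```python
import numpy as np

# Optical path difference (OPD) axis in cm, sampled uniformly.
n_points = 4096
opd = np.linspace(0.0, 1.0, n_points, endpoint=False)   # 0 to 1 cm of mirror travel

# Idealized source containing two lines at 1000 and 1300 cm^-1 (wavenumbers).
lines_cm1 = [1000.0, 1300.0]
interferogram = sum(np.cos(2 * np.pi * nu * opd) for nu in lines_cm1)

# Fourier transform of the interferogram gives the spectrum vs. wavenumber.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n_points, d=opd[1] - opd[0])  # in cm^-1

# The strongest peaks sit at (approximately) the input line positions.
peak_positions = wavenumbers[np.argsort(spectrum)[-2:]]
print(sorted(peak_positions))   # ~[1000.0, 1300.0]
```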

Absorption bands
IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of
intensity and position (frequency). The positions of these bands are summarized in correlation tables, with wavenumbers
listed in cm−1.

Badger's rule

For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency.
In such cases further information can be gleaned about the strength of a bond, relying on the empirical guideline called
Badger's Rule. Originally published by Richard Badger in 1934,[5] this rule states that the strength of a bond correlates with
the frequency of its vibrational mode. That is, increase in bond strength leads to corresponding frequency increase and vice
versa.

Uses and applications


Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and
industry. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended
measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers.

It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be
used in determining the blood alcohol content of a suspected drunk driver.

IR-spectroscopy has been successfully used in analysis and identification of pigments in paintings[6] and other art objects[7]
such as illuminated manuscripts.[8]


A useful way of analyzing solid samples without the need for cutting samples uses ATR or attenuated total reflectance
spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes
through the crystal and only interacts with the sample at the interface between the two materials.

With advances in computer filtering and manipulation of the results, samples in solution can now be measured
accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without
this computer treatment).

Some instruments will also automatically identify the substance being measured from a store of thousands of reference
spectra held in storage.

Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the
character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Modern research
instruments can take infrared measurements across the range of interest as frequently as 32 times a second. This can be done
whilst simultaneous measurements are made using other techniques. This makes the observations of chemical reactions and
processes quicker and more accurate.

Infrared spectroscopy has also been successfully utilized in the field of semiconductor microelectronics:[9] for example,
infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide,
amorphous silicon, silicon nitride, etc.

Another important application of Infrared Spectroscopy is in the food industry to measure the concentration of various
compounds in different food products[10][11]

The instruments are now small, and can be transported, even for use in field trials.

Infrared Spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs.[12] These devices detect
hydrocarbon gas leaks in the transportation of natural gas and crude oil.

In February 2014, NASA announced a greatly upgraded database (http://www.astrochem.org/pahdb/), based on IR
spectroscopy, for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20%
of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to
have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and
exoplanets.[13]

Recent developments include a miniature IR-spectrometer that is linked to a cloud-based database and suitable for personal
everyday use,[14] and NIR-spectroscopic chips[15] that can be embedded in smartphones and various gadgets.

Isotope effects
The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O
stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for
ν(16O–16O) and ν(18O–18O), respectively.

By considering the O–O bond as a spring, the wavenumber of absorbance, $\tilde{\nu}$, can be calculated:

$$\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$$

where k is the spring constant for the bond, c is the speed of light, and μ is the reduced mass of the A–B system:

$$\mu = \frac{m_A m_B}{m_A + m_B}$$

($m_i$ is the mass of atom $i$).

The reduced masses for 16O–16O and 18O–18O can be approximated as 8 and 9 respectively. Thus

$$\frac{\tilde{\nu}\left({}^{16}\mathrm{O}{-}{}^{16}\mathrm{O}\right)}{\tilde{\nu}\left({}^{18}\mathrm{O}{-}{}^{18}\mathrm{O}\right)} = \sqrt{\frac{9}{8}} \approx 1.06,$$

where $\tilde{\nu}$ is the wavenumber [wavenumber = frequency/(speed of light)], in agreement with the observed values of 832 and 788 cm−1.
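
A quick numerical check of this isotope shift (an illustrative sketch, not from the article), using the harmonic-oscillator relation above and the approximate reduced masses of 8 and 9:

```python
import math

nu_16 = 832.0            # observed O-O stretch for 16O-16O, in cm^-1
mu_16, mu_18 = 8.0, 9.0  # approximate reduced masses (amu) for 16O-16O and 18O-18O

# For a harmonic oscillator with the same force constant k,
# the wavenumber scales as 1/sqrt(reduced mass).
nu_18_predicted = nu_16 * math.sqrt(mu_16 / mu_18)

print(round(nu_18_predicted))   # ~784 cm^-1, close to the observed 788 cm^-1
```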

The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought.
In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves
the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the
lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to 29Si, the
lifetime increases to 19 ps. In a similar manner, when the silicon atom is changed to 30Si, the lifetime becomes 27 ps.[16]

Two-dimensional IR
Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more
complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution
is enhanced. The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to
a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral
changes at two different wavenumbers.
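
A rough sketch of how a 2D synchronous spectrum can be computed from a series of perturbation-dependent spectra is given below (illustrative only, not from the article; it assumes generalized 2D correlation analysis in the sense of Noda, and the asynchronous spectrum, which requires the Hilbert–Noda transform, is omitted):

```python
import numpy as np

# spectra: one row per perturbation step (e.g. temperature), one column per wavenumber.
spectra = np.array([
    [0.10, 0.40, 0.90, 0.30],
    [0.12, 0.50, 0.80, 0.35],
    [0.15, 0.60, 0.70, 0.40],
    [0.18, 0.70, 0.60, 0.45],
])

dynamic = spectra - spectra.mean(axis=0)          # mean-centered "dynamic" spectra
m = spectra.shape[0]
synchronous = dynamic.T @ dynamic / (m - 1)       # 2D synchronous correlation spectrum

# Positive off-diagonal entries mean two wavenumbers change in the same direction
# under the perturbation; negative entries mean they change in opposite directions.
print(np.sign(synchronous).astype(int))
```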

Nonlinear two-dimensional infrared spectroscopy[17][18] is the infrared version of correlation spectroscopy. Nonlinear
two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared
laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during
which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be
controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from
the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that
was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the
observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to
monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly
popular for fundamental research.

[Figure: Pulse sequence used to obtain a two-dimensional Fourier transform infrared spectrum. The first time period is usually
referred to as the coherence time and the second time period is known as the waiting time. The excitation frequency is
obtained by Fourier transforming along the coherence-time axis.]

As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two
dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In
contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These
excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct
techniques, COSY and NOESY, are frequently used. The cross peaks in the first are related to the scalar coupling, while in the
latter they are related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy,
analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting
time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational
population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been
used for determination of the secondary structure content of proteins.[19]

See also

Applied spectroscopy
Astrochemistry
Atomic and molecular astrophysics
Atomic force microscopy based infrared spectroscopy (AFM-IR)
Cosmochemistry
Far-infrared astronomy
Forensic chemistry
Forensic engineering
Forensic polymer engineering
Infrared astronomy
Infrared microscopy
Infrared multiphoton dissociation
Infrared photodissociation spectroscopy
Infrared spectroscopy correlation table
Infrared spectroscopy of metal carbonyls
Near-infrared spectroscopy
Nuclear resonance vibrational spectroscopy
Photothermal microspectroscopy
Rotational spectroscopy
Rotational-vibrational spectroscopy
Time-resolved spectroscopy
Vibrational spectroscopy of linear molecules


References
1. Atkins, Peter; de Paula, Julio (2009). Elements of physical chemistry (5th ed.). Oxford: Oxford U.P. p. 459. ISBN 978-0-19-922672-6.
2. Laurence M. Harwood; Christopher J. Moody (1989). Experimental organic chemistry: Principles and Practice (Illustrated ed.).
Wiley-Blackwell. p. 292. ISBN 0-632-02017-2.
3. Soran Shadman; Charles Rose; Azer P. Yalin (2016). "Open-path cavity ring-down spectroscopy sensor for atmospheric ammonia".
Applied Physics B. 122: 194. doi:10.1007/s00340-016-6461-5 (https://doi.org/10.1007%2Fs00340-016-6461-5).
4. Chromatography/Fourier transform infrared spectroscopy and its applications, by Robert White, p7 (https://books.google.com/book
s?id=t2VSNnFoO3wC&pg=PA7)
5. Badger, Richard (1934). "A Relation Between Internuclear Distances and Bond Force Constants" (http://dx.doi.org/10.1063/1.174943
3). J Chem Phys. 2: 128. doi:10.1063/1.1749433 (https://doi.org/10.1063%2F1.1749433).
6. Infrared spectroscopy at ColourLex. Retrieved December 11, 2015 (http://colourlex.com/project/infrared-spectroscopy/)
7. Derrick, M.R., Stulik, D. and Landry J.M., Infrared Spectroscopy in Conservation Science, Scientific Tools for Conservation (http://
www.getty.edu/conservation/publications_resources/pdf_publications/pdf/infrared_spectroscopy.pdf), Getty Publications, 2000.
Retrieved December 11, 2015
8. Paola Ricciardi, Unlocking the secrets of illuminated manuscripts. Retrieved December 11, 2015 (http://www.labnews.co.uk/features/
unlocking-the-secrets-of-illuminated-manuscripts-08-11-2012/)
9. Lau, W.S. (1999). Infrared characterization for microelectronics (https://books.google.com/?id=rotNlJDFJWsC&printsec=frontcove
r). World Scientific. ISBN 981-02-2352-8.
10. Osborne, Brian G. (2006). Near-Infrared Spectroscopy in Food Analysis (http://onlinelibrary.wiley.com/doi/10.1002/9780470027318.
a1018/abstract;jsessionid=D750E27B720BC1196422CA9868326D53.f02t02?userIsAuthenticated=false&deniedAccessCustomised
Message=). John Wiley & Sons.
11. Villar, A.; Gorritxategi, E.; Aranzabe, E.; Fernandez, S.; Otaduy, D.; Fernandez, L.A. (2012). "Low-cost visible–near infrared sensor
for on-line monitoring of fat and fatty acids content during the manufacturing process of the milk". Food Chemistry. 135 (4): 2756–
2760. doi:10.1016/j.foodchem.2012.07.074 (https://doi.org/10.1016%2Fj.foodchem.2012.07.074).
12. www.TRMThemes.com, TRM Theme by. "Infrared (IR) / Optical Based Archives - Heath Consultants" (http://heathus.com/product_
category/gas/infrared-ir-optical-based/). Heath Consultants. Retrieved 2016-04-12.
13. Hoover, Rachel (February 21, 2014). "Need to Track Organic Nano-Particles Across the Universe? NASA's Got an App for That" (ht
tp://www.nasa.gov/ames/need-to-track-organic-nano-particles-across-the-universe-nasas-got-an-app-for-that/). NASA. Retrieved
February 22, 2014.
14. "What Happened When We Took the SCiO Food Analyzer Grocery Shopping" (http://spectrum.ieee.org/view-from-the-valley/at-wor
k/start-ups/israeli-startup-consumer-physics-says-its-scio-food-analyzer-is-finally-ready-for-prime-timeso-we-took-it-grocery-shoppi
ng). IEEE Spectrum: Technology, Engineering, and Science News. Retrieved 2017-03-23.
15. "A Review of New Small-Scale Technologies for Near Infrared Measurements" (http://www.americanpharmaceuticalreview.com/Fea
tured-Articles/163573-A-Review-of-New-Small-Scale-Technologies-for-Near-Infrared-Measurements/).
www.americanpharmaceuticalreview.com. Retrieved 2017-03-23.
16. Kohli, K.; Davies, Gordon; Vinh, N.; West, D.; Estreicher, S.; Gregorkiewicz, T.; Izeddin, I.; Itoh, K. (2006). "Isotope Dependence of
the Lifetime of the 1136-cm-1 Vibration of Oxygen in Silicon". Physical Review Letters. 96 (22): 225503.
Bibcode:2006PhRvL..96v5503K (http://adsabs.harvard.edu/abs/2006PhRvL..96v5503K). PMID 16803320 (https://www.ncbi.nlm.ni
h.gov/pubmed/16803320). doi:10.1103/PhysRevLett.96.225503 (https://doi.org/10.1103%2FPhysRevLett.96.225503).
17. P. Hamm; M. H. Lim; R. M. Hochstrasser (1998). "Structure of the amide I band of peptides measured by femtosecond nonlinear-
infrared spectroscopy". J. Phys. Chem. B. 102 (31): 6123. doi:10.1021/jp9813286 (https://doi.org/10.1021%2Fjp9813286).
18. S. Mukamel (2000). "Multidimensional Femtosecond Correlation Spectroscopies of Electronic and Vibrational Excitations". Annual
Review of Physical Chemistry. 51 (1): 691–729. Bibcode:2000ARPC...51..691M (http://adsabs.harvard.edu/abs/2000ARPC...51..691
M). PMID 11031297 (https://www.ncbi.nlm.nih.gov/pubmed/11031297). doi:10.1146/annurev.physchem.51.1.691 (https://doi.org/10.
1146%2Fannurev.physchem.51.1.691).
19. N. Demirdöven; C. M. Cheatum; H. S. Chung; M. Khalil; J. Knoester; A. Tokmakoff (2004). "Two-dimensional infrared
spectroscopy of antiparallel beta-sheet secondary structure". Journal of the American Chemical Society. 126 (25): 7981–90.
PMID 15212548 (https://www.ncbi.nlm.nih.gov/pubmed/15212548). doi:10.1021/ja049811j (https://doi.org/10.1021%2Fja049811j).

External links
Infrared gas spectroscopy (http://www.spectralcalc.com)
A useful gif animation of different vibrational modes: SHU.ac.uk (http://www.shu.ac.uk/schools/sci/chem/tutorials/molspec/irspec1.htm)
Illustrated guide to basic IR spectra interpretation (http://www.jkwchui.com/201
0/09/pictorial-guide-to-interpreting-infrared-spectra/)
Infrared spectroscopy for organic chemists (http://www2.dq.fct.unl.pt/qoa/jas/ir.html)
Organic compounds spectrum database (http://riodb01.ibase.aist.go.jp/sdbs/cgi-bin/cre_index.cgi?lang=eng)
Gas phase infrared spectra, organic and inorganic compounds (http://ansyco.de/tools/ftir-spectrum-library)

Retrieved from "https://en.wikipedia.org/w/index.php?title=Infrared_spectroscopy&oldid=790062196"

Categories: Infrared spectroscopy

https://en.wikipedia.org/wiki/Infrared_spectroscopy 8/9
7/19/2017 Infrared spectroscopy - Wikipedia

This page was last edited on 11 July 2017, at 10:37.


Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using
this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia
Foundation, Inc., a non-profit organization.

https://en.wikipedia.org/wiki/Infrared_spectroscopy 9/9
7/19/2017 Raman spectroscopy - Wikipedia

Raman spectroscopy
From Wikipedia, the free encyclopedia

Raman spectroscopy (/ˈrɑːmən/; named after Sir C. V. Raman) is a spectroscopic technique used to observe vibrational,
rotational, and other low-frequency modes in a system.[1] Raman spectroscopy is commonly used in chemistry to provide a
structural fingerprint by which molecules can be identified.

It relies on inelastic scattering, or Raman scattering, of monochromatic light, usually from a laser in the visible, near infrared,
or near ultraviolet range. The laser light interacts with molecular vibrations, phonons or other excitations in the system,
resulting in the energy of the laser photons being shifted up or down. The shift in energy gives information about the
vibrational modes in the system. Infrared spectroscopy yields similar, but complementary, information.

[Figure: Energy-level diagram showing the states involved in Raman spectra.]

Typically, a sample is illuminated with a laser beam. Electromagnetic radiation from the illuminated spot is
collected with a lens and sent through a monochromator. Elastic scattered radiation at the wavelength
corresponding to the laser line (Rayleigh scattering) is filtered out by either a notch filter, edge pass filter, or a
band pass filter, while the rest of the collected light is dispersed onto a detector.

Spontaneous Raman scattering is typically very weak, and as a result the main difficulty of Raman
spectroscopy is separating the weak inelastically scattered light from the intense Rayleigh scattered laser light.
Historically, Raman spectrometers used holographic gratings and multiple dispersion stages to achieve a high
degree of laser rejection. In the past, photomultipliers were the detectors of choice for dispersive Raman setups,
which resulted in long acquisition times. However, modern instrumentation almost universally employs notch
or edge filters for laser rejection and spectrographs either axial transmissive (AT), Czerny–Turner (CT)
monochromator, or FT (Fourier transform spectroscopy based), and CCD detectors.

There are a number of advanced types of Raman spectroscopy, including surface-enhanced Raman, resonance
Raman, tip-enhanced Raman, polarized Raman, stimulated Raman (analogous to stimulated emission),
transmission Raman, spatially offset Raman, and hyper Raman.

Contents
1 Theoretical basis
2 History
3 Raman shift
4 Applications
5 Microspectroscopy
6 Polarized analysis
7 Variants
8 See also
9 References
10 External links

Theoretical basis
The Raman effect occurs when electromagnetic radiation interacts with a solid, liquid, or gaseous molecule’s
polarizable electron density and bonds. The spontaneous effect is a form of inelastic light scattering, where
a photon excites the molecule in either the ground (lowest energy) or excited rovibronic state (a rotational and
vibrational energy level within an electronic state). This excitation puts the molecule into a virtual energy
state for a short time before the photon scatters inelastically. Inelastic scattering means that the scattered photon
can be of either lower or higher energy than the incoming photon, compared to elastic, or Rayleigh, scattering
where the scattered photon has the same energy as the incoming photon. After interacting with the photon, the
molecule is in a different rotational or vibrational state. This change in energy between the initial and final
rovibronic states causes the scattered photon's frequency to shift away from the excitation wavelength (that of
the incoming photon), called the Rayleigh line.

For the total energy of the system to remain constant after the molecule moves to a new rovibronic state, the
scattered photon shifts to a different energy, and therefore a different frequency. This energy difference is equal
to that between the initial and final rovibronic states of the molecule. If the final state is higher in energy than
the initial state, the scattered photon will be shifted to a lower frequency (lower energy) so that the total energy
remains the same. This shift in frequency is called a Stokes shift, or downshift. If the final state is lower in
energy, the scattered photon will be shifted to a higher frequency, which is called an anti-Stokes shift, or
upshift.

For a molecule to exhibit a Raman effect, there must be a change in its electric dipole-electric dipole
polarizability with respect to the vibrational coordinate corresponding to the rovibronic state. The intensity of
the Raman scattering is proportional to this polarizability change. Therefore, the Raman spectrum, scattering
intensity as a function of the frequency shifts, depends on the rovibronic states of the molecule.

The Raman effect is based on the interaction between the electron cloud of a sample and the external electrical
field of the monochromatic light, which can create an induced dipole moment within the molecule based on its
polarizability. Because the laser light does not excite the molecule there can be no real transition between
energy levels.[2] The Raman effect should not be confused with emission (fluorescence or phosphorescence),
where a molecule in an excited electronic state emits a photon and returns to the ground electronic state, in
many cases to a vibrationally excited state on the ground electronic state potential energy surface. Raman
scattering also contrasts with infrared (IR) absorption, where the energy of the absorbed photon matches the
difference in energy between the initial and final rovibronic states. The dependence of Raman on the electric
dipole-electric dipole polarizability derivative also differs from IR spectroscopy, which depends on the electric
dipole moment derivative, the atomic polar tensor (APT). This contrasting feature allows rovibronic transitions
that might not be active in IR to be analyzed using Raman spectroscopy, as exemplified by the rule of mutual
exclusion in centrosymmetric molecules. Transitions which have large Raman intensities often have weak IR
intensities and vice versa. A third vibrational spectroscopy technique, inelastic incoherent neutron scattering
(IINS), can be used to determine the frequencies of vibrations in highly symmetric molecules that may be both
IR and Raman inactive. The IINS selection rules, or allowed transitions, differ from those of IR and Raman, so
the three techniques are complementary. They all give the same frequency for a given vibrational transition, but
the relative intensities provide different information due to the different types of interaction between the
molecule and the incoming particles, photons for IR and Raman, and neutrons for IINS.

History

Although the inelastic scattering of light was predicted by Adolf Smekal in 1923,[3] it was not until 1928 that it
was observed in practice. The Raman effect was named after one of its discoverers, the Indian scientist Sir C. V.
Raman who observed the effect by means of sunlight (1928, together with K. S. Krishnan and independently by
Grigory Landsberg and Leonid Mandelstam).[1] Raman won the Nobel Prize in Physics in 1930 for this
discovery accomplished using sunlight, a narrow band photographic filter to create monochromatic light, and a
"crossed filter" to block this monochromatic light. He found that a small amount of light had changed
frequency and passed through the "crossed" filter.

Systematic pioneering theory of the Raman effect was developed by Czechoslovak physicist George Placzek
between 1930 and 1934.[4] The mercury arc became the principal light source, first with photographic detection
and then with spectrophotometric detection.

In the years following its discovery, Raman spectroscopy was used to provide the first catalog of molecular
vibrational frequencies. Originally, heroic measures were required to obtain Raman spectra due to the low
sensitivity of the technique. Typically, the sample was held in a long tube and illuminated along its length with
a beam of filtered monochromatic light generated by a gas discharge lamp. The photons that were scattered by
the sample were collected through an optical flat at the end of the tube. To maximize the sensitivity, the sample
was highly concentrated (1 M or more) and relatively large volumes (5 mL or more) were used. Consequently,
the use of Raman spectroscopy dwindled when commercial IR spectrophotometers became available in the
1940s. However, the advent of the laser in the 1960s resulted in simplified Raman spectroscopy instruments
and also boosted the sensitivity of the technique. This has revived the use of Raman spectroscopy as a common
analytical technique.

Raman shift
Raman shifts are typically reported in wavenumbers, which have units of inverse length, as this value is directly
related to energy. In order to convert between spectral wavelength and wavenumbers of shift in the Raman
spectrum, the following formula can be used:

Δν = 1/λ0 − 1/λ1

where Δν is the Raman shift expressed in wavenumber, λ0 is the excitation wavelength, and λ1 is the Raman spectrum wavelength. Most commonly, the unit chosen for expressing wavenumber in Raman spectra is inverse centimeters (cm−1). Since wavelength is often expressed in units of nanometers (nm), the formula above can scale for this unit conversion explicitly, giving

Δν (cm−1) = ( 1/λ0 (nm) − 1/λ1 (nm) ) × 10^7
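As a quick numerical illustration of this conversion, the following Python sketch (the function name and the 532 nm / 563 nm example values are illustrative, not taken from the article) turns excitation and scattered wavelengths in nm into a Raman shift in cm−1:

```python
def raman_shift_cm1(excitation_nm, scattered_nm):
    """Raman shift in cm^-1 from excitation and scattered wavelengths given in nm."""
    # 1/lambda in nm^-1 becomes cm^-1 after multiplying by 10^7 nm per cm.
    return (1.0 / excitation_nm - 1.0 / scattered_nm) * 1e7

# Example: 532 nm excitation with a Stokes line observed at 563 nm
# corresponds to a shift of roughly 1035 cm^-1.
print(round(raman_shift_cm1(532.0, 563.0), 1))
```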

Applications
Raman spectroscopy is used in chemistry to identify molecules and study chemical bonding. Because
vibrational frequencies are specific to a molecule’s chemical bonds and symmetry (the fingerprint region of
organic molecules is in the wavenumber range 500–1500 cm−1),[5] Raman provides a fingerprint to identify
molecules. For instance, Raman and IR spectra were used to determine the vibrational frequencies of SiO,
Si2O2, and Si3O3 on the basis of normal coordinate analyses.[6] Raman is also used to study the addition of a
substrate to an enzyme.

In solid-state physics, Raman spectroscopy is used to characterize materials, measure temperature, and find the
crystallographic orientation of a sample. As with single molecules, a solid material can be identified by
characteristic phonon modes. Information on the population of a phonon mode is given by the ratio of the
Stokes and anti-Stokes intensity of the spontaneous Raman signal. Raman spectroscopy can also be used to
observe other low frequency excitations of a solid, such as plasmons, magnons, and superconducting
gap excitations. Distributed temperature sensing (DTS) uses the Raman-shifted backscatter from laser pulses to
determine the temperature along optical fibers. The orientation of an anisotropic crystal can be found from
the polarization of Raman-scattered light with respect to the crystal and the polarization of the laser light, if the
crystal structure’s point group is known.
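The use of the Stokes/anti-Stokes ratio as a thermometer, mentioned above, can be sketched with the commonly used ideal-scattering relation below. This is a minimal sketch: the fourth-power prefactor and Boltzmann factor are standard textbook expressions rather than details from this article, the 520 cm−1 silicon mode and 532 nm excitation are only example values, and a real measurement would also have to correct for the instrument response.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light, cm/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def anti_stokes_to_stokes_ratio(shift_cm1, excitation_cm1, temperature_k):
    """Ideal anti-Stokes/Stokes intensity ratio for a phonon mode.

    Combines the fourth-power scattering prefactor with the Boltzmann
    population factor; detector and instrument response are ignored.
    """
    prefactor = ((excitation_cm1 + shift_cm1) / (excitation_cm1 - shift_cm1)) ** 4
    population = math.exp(-H * C * shift_cm1 / (KB * temperature_k))
    return prefactor * population

# Example: the 520 cm^-1 silicon phonon with 532 nm excitation at room
# temperature gives a ratio of roughly 0.1; the ratio rises with temperature,
# which is what makes it usable as a thermometer.
print(round(anti_stokes_to_stokes_ratio(520.0, 1e7 / 532.0, 295.0), 3))
```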

In nanotechnology, a Raman microscope can be used to analyze nanowires to better understand their structures,
and the radial breathing mode of carbon nanotubes is commonly used to evaluate their diameter.

Raman active fibers, such as aramid and carbon, have vibrational modes that show a shift in Raman frequency
with applied stress. Polypropylene fibers exhibit similar shifts.

In solid state chemistry and the bio-pharmaceutical industry, Raman spectroscopy can be used to not only
identify active pharmaceutical ingredients (APIs), but to identify their polymorphic forms, if more than one
exist. For example, the drug Cayston (aztreonam), marketed by Gilead Sciences for cystic fibrosis,[7] can be
identified and characterized by IR and Raman spectroscopy. Using the correct polymorphic form in bio-
pharmaceutical formulations is critical, since different forms have different physical properties, like solubility
and melting point.

Raman spectroscopy has a wide variety of applications in biology and medicine. It has helped confirm the
existence of low-frequency phonons[8] in proteins and DNA,[9][10][11][12] promoting studies of low-frequency
collective motion in proteins and DNA and their biological functions.[13][14] Raman reporter molecules
with olefin or alkyne moieties are being developed for tissue imaging with SERS-labeled antibodies.[15] Raman
spectroscopy has also been used as a noninvasive technique for real-time, in situ biochemical characterization
of wounds. Multivariate analysis of Raman spectra has enabled development of a quantitative measure for
wound healing progress.[16] Spatially offset Raman spectroscopy (SORS), which is less sensitive to surface
layers than conventional Raman, can be used to discover counterfeit drugs without opening their packaging,
and to non-invasively study biological tissue.[17] Raman spectroscopy is particularly useful in biological applications because water is only a weak Raman scatterer, so spectra of aqueous samples are largely free of interference from the water itself.[18] Raman spectroscopy also has a wide usage for
studying biominerals.[19] Lastly, Raman gas analyzers have many practical applications, including real-time
monitoring of anesthetic and respiratory gas mixtures during surgery.

Raman spectroscopy is an efficient and non-destructive way to investigate works of art.[20] Identifying
individual pigments in paintings and their degradation products provides insight into the working method of the
artist. It also gives information about the original state of the painting in cases where the pigments degraded
with age.[21] In addition to paintings, Raman spectroscopy can be used to investigate the chemical composition
of historical documents (such as the Book of Kells), which can provide insight about the social and economic
conditions when they were created.[22] It also offers a noninvasive way to determine the best method of
preservation or conservation of such materials.

Raman spectroscopy has been used in several research projects as a means to detect explosives from a safe
distance using laser beams.[23][24][25]

Raman spectroscopy is being further developed for use in clinical settings. Raman4Clinic is a European organization working on incorporating Raman spectroscopy techniques into the medical field. One of its current projects is monitoring cancer using easily accessible bodily fluids such as urine and blood samples. This technique would be less stressful for patients than repeated biopsies, which are not always risk-free.[26]

Microspectroscopy
Raman spectroscopy offers several advantages for microscopic analysis.
Since it is a scattering technique, specimens do not need to be fixed or
sectioned. Raman spectra can be collected from a very small volume (<
1 µm in diameter); these spectra allow the identification of species
present in that volume. Water does not generally interfere with Raman
spectral analysis. Thus, Raman spectroscopy is suitable for the
microscopic examination of minerals, materials such as polymers and
ceramics, cells, proteins and forensic trace evidence. A Raman
microscope begins with a standard optical microscope, and adds an
excitation laser, a monochromator, and a sensitive detector (such as a
charge-coupled device (CCD), or photomultiplier tube (PMT)). FT-
Raman has also been used with microscopes. Ultraviolet microscopes
and UV enhanced optics must be used when a UV laser source is used
for Raman microspectroscopy.

In direct imaging, the whole field of view is examined for scattering


over a small range of wavenumbers (Raman shifts). For instance, a
wavenumber characteristic for cholesterol could be used to record the
distribution of cholesterol within a cell culture.

The other approach is hyperspectral imaging or chemical imaging, in


which thousands of Raman spectra are acquired from all over the field
of view. The data can then be used to generate images showing the
location and amount of different components. Taking the cell culture
example, a hyperspectral image could show the distribution of
Comparison of topographical (AFM,
cholesterol, as well as proteins, nucleic acids, and fatty acids.
top) and Raman images of GaSe.
Sophisticated signal- and image-processing techniques can be used to
ignore the presence of water, culture media, buffers, and other Scale bar is 5 μm.[27]
interference.

Raman microscopy, and in particular confocal microscopy, has very high spatial resolution. For example, the
lateral and depth resolutions were 250 nm and 1.7 µm, respectively, using a confocal Raman microspectrometer
with the 632.8 nm line from a helium–neon laser with a pinhole of 100 µm diameter. Since the objective lenses
of microscopes focus the laser beam to several micrometres in diameter, the resulting photon flux is much
higher than achieved in conventional Raman setups. This has the added benefit of enhanced fluorescence
quenching. However, the high photon flux can also cause sample degradation, and for this reason some setups
require a thermally conducting substrate (which acts as a heat sink) in order to mitigate this process.

Another approach called global Raman imaging[28] uses complete monochromatic images instead of
reconstruction of images from acquired spectra. This technique is being used for the characterization of large
scale devices, mapping of different compounds and dynamics study. It has already been used for the
characterization of graphene layers,[29] J-aggregated dyes inside carbon nanotubes[30] and multiple other 2D
materials such as MoS2 and WSe2. Since the excitation beam is dispersed over the whole field of view, those
measurements can be done without damaging the sample.

By using Raman microspectroscopy, in vivo time- and space-resolved Raman spectra of microscopic regions of
samples can be measured. As a result, the fluorescence of water, media, and buffers can be removed.
Consequently, in vivo time- and space-resolved Raman spectroscopy is suitable to examine proteins, cells and
organs.

Raman microscopy for biological and medical specimens generally uses near-infrared (NIR) lasers (785 nm
diodes and 1064 nm Nd:YAG are especially common). The use of these lower energy wavelengths reduces the
risk of damaging the specimen. However, the intensity of NIR Raman is low (owing to the ω⁴ dependence of Raman scattering intensity), and most detectors require very long collection times. Recent advances have nevertheless allowed changes in cytochrome c structure occurring during electron transport and ATP synthesis to be observed without any destructive effect on the mitochondria.[31]

Sensitive detectors have become available, making the technique better suited to general use. Raman
microscopy of inorganic specimens, such as rocks and ceramics and polymers, can use a broader range of
excitation wavelengths.[32]

Polarized analysis
The polarization of the Raman scattered light also contains useful information. This property can be measured
using (plane) polarized laser excitation and a polarization analyzer. Spectra acquired with the analyzer set at
both perpendicular and parallel to the excitation plane can be used to calculate the depolarization ratio. Study of
the technique is useful in teaching the connections between group theory, symmetry, Raman activity, and peaks
in the corresponding Raman spectra.[33] Polarized light only gives access to some of the Raman active modes.
By rotating the polarization you can gain access to the other modes. Each mode is separated according to its
symmetry.[34]
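A minimal sketch of the depolarization-ratio calculation mentioned above is shown below; the intensity values are hypothetical, and the 0.75 threshold quoted in the comments is the usual rule of thumb for linearly polarized excitation rather than a figure from this article.

```python
def depolarization_ratio(i_perpendicular, i_parallel):
    """Depolarization ratio rho = I_perp / I_parallel for a Raman band."""
    return i_perpendicular / i_parallel

# Hypothetical band intensities measured with the analyzer set perpendicular
# and parallel to the excitation plane.
rho = depolarization_ratio(12.0, 240.0)
# With linearly polarized excitation, rho well below 0.75 indicates a polarized
# (totally symmetric) mode, while rho near 0.75 indicates a depolarized mode.
print(round(rho, 3))
```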

The spectral information arising from this analysis gives insight into molecular orientation and vibrational
symmetry. In essence, it allows the user to obtain valuable information relating to the molecular shape, for
example in synthetic chemistry or polymorph analysis. It is often used to understand macromolecular
orientation in crystal lattices, liquid crystals or polymer samples.[35]

It is convenient in polarised Raman spectroscopy to describe the propagation and polarisation directions using
Porto's notation,[36] described by and named after Brazilian physicist Sergio Pereira da Silva Porto.

Variants
Several variations of Raman spectroscopy have been developed. The usual purpose is to enhance the sensitivity
(e.g., surface-enhanced Raman), to improve the spatial resolution (Raman microscopy), or to acquire very
specific information (resonance Raman).

Spontaneous Raman spectroscopy – Term used to describe Raman spectroscopy without enhancement of
sensitivity.
Surface-enhanced Raman spectroscopy (SERS) – Normally done in a silver or gold colloid or a substrate
containing silver or gold. Surface plasmons of silver and gold are excited by the laser, resulting in an
increase in the electric fields surrounding the metal. Given that Raman intensities are proportional to the
electric field, there is a large increase in the measured signal (by up to 10¹¹). This effect was originally
observed by Martin Fleischmann but the prevailing explanation was proposed by Van Duyne in 1977.[37]
A comprehensive theory of the effect was given by Lombardi and Birke.[38]
Resonance Raman spectroscopy – The excitation wavelength is matched to an electronic transition of the
molecule or crystal, so that vibrational modes associated with the excited electronic state are greatly
enhanced. This is useful for studying large molecules such as polypeptides, which might show hundreds
of bands in "conventional" Raman spectra. It is also useful for associating normal modes with their
observed frequency shifts.[39]
Surface-enhanced resonance Raman spectroscopy (SERRS) – A combination of SERS and resonance
Raman spectroscopy that uses proximity to a surface to increase Raman intensity, and excitation
wavelength matched to the maximum absorbance of the molecule being analysed.
Angle-resolved Raman spectroscopy – Not only are standard Raman results recorded but also the angle
with respect to the incident laser. If the orientation of the sample is known then detailed information
about the phonon dispersion relation can also be gleaned from a single test.[40]
Hyper Raman – A non-linear effect in which the vibrational modes interact with the second harmonic of
the excitation beam. This requires very high power, but allows the observation of vibrational modes that
are normally "silent". It frequently relies on SERS-type enhancement to boost the sensitivity.[41]
Optical tweezers Raman spectroscopy (OTRS) – Used to study individual particles, and even biochemical
processes in single cells trapped by optical tweezers.
Stimulated Raman scattering (SRS) – A pump-probe technique, where a spatially coincident, two color
pulse (with polarization either parallel or perpendicular) transfers the population from ground to a
rovibrationally excited state. If the difference in energy corresponds to an allowed Raman transition,
scattered light will correspond to loss or gain in the pump beam.
Spatially offset Raman spectroscopy (SORS) – The Raman scattering beneath an obscuring surface is
retrieved from a scaled subtraction of two spectra taken at two spatially offset points
Coherent anti-Stokes Raman spectroscopy (CARS) – Two laser beams are used to generate a coherent
anti-Stokes frequency beam, which can be enhanced by resonance.
Raman optical activity (ROA) – Measures vibrational optical activity by means of a small difference in
the intensity of Raman scattering from chiral molecules in right- and left-circularly polarized incident
light or, equivalently, a small circularly polarized component in the scattered light.[42]
Transmission Raman – Allows probing of a significant bulk of a turbid material, such as powders,
capsules, living tissue, etc. It was largely ignored following investigations in the late 1960s (Schrader and
Bergmann, 1967)[43] but was rediscovered in 2006 as a means of rapid assay of pharmaceutical dosage
forms.[44] There are medical diagnostic applications particularly in the detection of cancer.[25][45][46]
Inverse Raman spectroscopy.
Tip-enhanced Raman spectroscopy (TERS) – Uses a metallic (usually silver-/gold-coated AFM or STM)
tip to enhance the Raman signals of molecules situated in its vicinity. The spatial resolution is
approximately the size of the tip apex (20–30 nm). TERS has been shown to have sensitivity down to the
single molecule level and holds some promise for bioanalysis applications.[47]
Surface plasmon polariton enhanced Raman scattering (SPPERS) – This approach exploits apertureless
metallic conical tips for near field excitation of molecules. This technique differs from the TERS
approach due to its inherent capability of suppressing the background field. In fact, when an appropriate
laser source impinges on the base of the cone, a TM0 mode[48] (polaritonic mode) can be locally created,
namely far away from the excitation spot (apex of the tip). The mode can propagate along the tip without
producing any radiation field up to the tip apex where it interacts with the molecule. In this way, the focal
plane is separated from the excitation plane by a distance given by the tip length, and no background
plays any role in the Raman excitation of the molecule.[49][50][51][52]
Micro-cavity substrates – A method that improves the detection limit of conventional Raman spectra
using micro-Raman in a micro-cavity coated with reflective Au or Ag. The micro-cavity has a radius of
several micrometers and enhances the entire Raman signal by providing multiple excitations of the
sample and couples the forward-scattered Raman photons toward the collection optics in the back-
scattered Raman geometry.[53]
Stand-off remote Raman. In standoff Raman, the sample is measured at a distance from the Raman
spectrometer, usually by using a telescope for light collection. Remote Raman spectroscopy was
proposed in the 1960s[54] and initially developed for the measurement of atmospheric gases.[55] The
technique was extended in 1992 by Angel et al. for standoff Raman detection of hazardous inorganic and
organic compounds.[56] Standoff Raman detection offers a fast-Raman mode of analyzing large areas
such as a football field in minutes. A pulsed laser source and gated detector allow Raman spectra
measurements in the daylight[57] and reduces the long-lived fluorescent background generated by
transition ions and rare earth ions. Another way to avoid fluorescence, first demonstrated by Sandy Asher
in 1984, is to use a UV laser probe beam. At wavelengths of 260 nm, there is effectively no fluorescence
interference and the UV signal is inherently strong.[25][58][59] A 10X beam expander mounted in front of
the laser allows focusing of the beam and a telescope is directly coupled through the camera lens for
signal collection. With the system's time-gating capability it is possible to measure remote Raman spectra of a distant target and of the atmosphere between the laser and target.[25]

See also
Raman microscope

References
1. Gardiner, D.J. (1989). Practical Raman spectroscopy. Springer-Verlag. ISBN 978-0-387-50254-0.
2. G., Hammes, Gordon (2005). Spectroscopy for the biological sciences (https://www.worldcat.org/oclc/850776164).
Wiley. ISBN 9780471733546. OCLC 850776164 (https://www.worldcat.org/oclc/850776164).
3. Smekal, A. (1923). "Zur Quantentheorie der Dispersion". Die Naturwissenschaften. 11 (43): 873–875.
doi:10.1007/BF01576902 (https://doi.org/10.1007%2FBF01576902).
4. Placzek G. (1934) "Rayleigh Streuung und Raman Effekt", In: Hdb. der Radiologie, Vol. VI., 2, p. 209
5. THE FINGERPRINT REGION OF AN INFRA-RED SPECTRUM (http://www.chemguide.co.uk/analysis/ir/fingerp
rint.html) Chemguide, Jim Clark 2000
6. Khanna, R.K. (1981). "Raman-spectroscopy of oligomeric SiO species isolated in solid methane". Journal of
Chemical Physics. 74 (4): 2108. Bibcode:1981JChPh..74.2108K (http://adsabs.harvard.edu/abs/1981JChPh..74.2108
K). doi:10.1063/1.441393 (https://doi.org/10.1063%2F1.441393).
7. "FDA approves Gilead cystic fibrosis drug Cayston" (http://www.businessweek.com/ap/financialnews/D9E237QG1.
htm). BusinessWeek. February 23, 2010. Retrieved 2010-03-05.
8. Chou, Kuo-Chen; Chen, Nian-Yi (1977). "The biological functions of low-frequency phonons". Scientia Sinica. 20
(3): 447–457.
9. Urabe, H.; Tominaga, Y.; Kubota, K. (1983). "Experimental evidence of collective vibrations in DNA double helix
Raman spectroscopy". Journal of Chemical Physics. 78 (10): 5937–5939. Bibcode:1983JChPh..78.5937U (http://ads
abs.harvard.edu/abs/1983JChPh..78.5937U). doi:10.1063/1.444600 (https://doi.org/10.1063%2F1.444600).
10. Chou, K.C. (1983). "Identification of low-frequency modes in protein molecules" (https://www.ncbi.nlm.nih.gov/pm
c/articles/PMC1152424). Biochemical Journal. 215 (3): 465–469. PMC 1152424 (https://www.ncbi.nlm.nih.gov/pm
c/articles/PMC1152424)  . PMID 6362659 (https://www.ncbi.nlm.nih.gov/pubmed/6362659).
doi:10.1042/bj2150465 (https://doi.org/10.1042%2Fbj2150465).
11. Chou, K.C. (1984). "Low-frequency vibration of DNA molecules" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC
1143999). Biochemical Journal. 221 (1): 27–31. PMC 1143999 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC114
3999)  . PMID 6466317 (https://www.ncbi.nlm.nih.gov/pubmed/6466317). doi:10.1042/bj2210027 (https://doi.org/1
0.1042%2Fbj2210027).
12. Urabe, H.; Sugawara, Y.; Ataka, M.; Rupprecht, A. (1998). "Low-frequency Raman spectra of lysozyme crystals and
oriented DNA films: dynamics of crystal water" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1299499). Biophys
J. 74 (3): 1533–1540. PMC 1299499 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1299499)  . PMID 9512049
(https://www.ncbi.nlm.nih.gov/pubmed/9512049). doi:10.1016/s0006-3495(98)77865-8 (https://doi.org/10.1016%2F
s0006-3495%2898%2977865-8).
13. Chou, Kuo-Chen (1988). "Review: Low-frequency collective motion in biomacromolecules and its biological
functions". Biophysical Chemistry. 30 (1): 3–48. PMID 3046672 (https://www.ncbi.nlm.nih.gov/pubmed/3046672).
doi:10.1016/0301-4622(88)85002-6 (https://doi.org/10.1016%2F0301-4622%2888%2985002-6).
14. Chou, K.C. (1989). "Low-frequency resonance and cooperativity of hemoglobin". Trends in Biochemical Sciences.
14 (6): 212–3. PMID 2763333 (https://www.ncbi.nlm.nih.gov/pubmed/2763333). doi:10.1016/0968-0004(89)90026-
1 (https://doi.org/10.1016%2F0968-0004%2889%2990026-1).
15. Schlücker, S.; et al. (2011). "Design and synthesis of Raman reporter molecules for tissue imaging by immuno-SERS
microscopy". Journal of Biophotonics. 4 (6): 453–463. PMID 21298811 (https://www.ncbi.nlm.nih.gov/pubmed/212
98811). doi:10.1002/jbio.201000116 (https://doi.org/10.1002%2Fjbio.201000116).
16. Jain, R.; et al. (2014). "Raman Spectroscopy Enables Noninvasive Biochemical Characterization and Identification
of the Stage of Healing of a Wound" (http://pubs.acs.org/doi/abs/10.1021/ac500513t). Analytical Chemistry. 86 (8):
3764–3772. PMC 4004186 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4004186)  . PMID 24559115 (https://w
ww.ncbi.nlm.nih.gov/pubmed/24559115). doi:10.1021/ac500513t (https://doi.org/10.1021%2Fac500513t).
17. "Fake drugs caught inside the pack" (http://news.bbc.co.uk/2/hi/health/6314287.stm). BBC News. 2007-01-31.
Retrieved 2008-12-08.
18. "Using Raman spectroscopy to characterize biological materials : Nature Protocols" (https://www.nature.com/article
s/nprot.2016.036.epdf?shared_access_token=DsvtMSL4q7YxnbZQz2RtkdRgN0jAjWel9jnR3ZoTv0MUh2KdPBCw
sx0aoA-6NCRlXjfflKn423qefTB3_FhgiM-nCtFVThvUKNFfKqqeRgwphVSQYwC3JBFZ5x7jpfmbj70KdeHKwutc
GAu_gHI_NTBDWZfPuAjmwriCUAHQ_higA8RHn6dVm3uPSFWonb8W). www.nature.com. Retrieved
2017-05-22.
19. Taylor, P.D.; Vinn, O.; Kudryavtsev, A.; Schopf, J.W. (2010). "Raman spectroscopic study of the mineral
composition of cirratulid tubes (Annelida, Polychaeta)" (https://www.researchgate.net/publication/44690889_Raman
_spectroscopic_study_of_the_mineral_composition_of_cirratulid_tubes_%28Annelida_Polychaeta%29). Journal of
Structural Biology. 171 (3): 402–405. PMID 20566380 (https://www.ncbi.nlm.nih.gov/pubmed/20566380).
doi:10.1016/j.jsb.2010.05.010 (https://doi.org/10.1016%2Fj.jsb.2010.05.010). Retrieved 2014-06-10.
20. Howell G. M. Edwards, John M. Chalmers, Raman Spectroscopy in Archaeology and Art History, Royal Society of
Chemistry, 2005
21. Raman Spectroscopy (http://colourlex.com/project/raman-spectroscopy/) at ColourLex
22. Quinn, Eamon (May 28, 2007) Irish classic is still a hit (in calfskin, not paperback) (https://www.nytimes.com/2007/
05/28/world/europe/28kells.html). New York Times
23. Ben Vogel (29 August 2008). "Raman spectroscopy portends well for standoff explosives detection" (https://web.arc
hive.org/web/20081203151346/http://www.janes.com/news/transport/business/jar/jar080829_1_n.shtml). Jane's.
Archived from the original (http://www.janes.com/news/transport/business/jar/jar080829_1_n.shtml) on 2008-12-03.
Retrieved 2008-08-29.
24. "Finding explosives with laser beams" (http://www.eurekalert.org/pub_releases/2012-02/vuot-few022712.php), a TU
Vienna press-release
25. Misra, Anupam K.; Sharma, Shiv K.; Acosta, Tayro E.; Porter, John N.; et al. (2012). "Single-Pulse Standoff Raman
Detection of Chemicals from 120 m Distance During Daytime". Applied Spectroscopy. 66 (11): 1279–85.
PMID 23146183 (https://www.ncbi.nlm.nih.gov/pubmed/23146183). doi:10.1366/12-06617 (https://doi.org/10.136
6%2F12-06617).
26. "Working Groups | raman4clinics.eu" (https://www.raman4clinics.eu/working-groups/). www.raman4clinics.eu.
Retrieved 2017-05-22.
27. Li, Xufan; Lin, Ming-Wei; Puretzky, Alexander A.; Idrobo, Juan C.; Ma, Cheng; Chi, Miaofang; Yoon, Mina;
Rouleau, Christopher M.; Kravchenko, Ivan I.; Geohegan, David B.; Xiao, Kai (2014). "Controlled Vapor Phase
Growth of Single Crystalline, Two-Dimensional Ga Se Crystals with High Photoresponse". Scientific Reports. 4.
doi:10.1038/srep05497 (https://doi.org/10.1038%2Fsrep05497).
28. Marcet, S.; Verhaegen, M.; Blais-Ouellette, S.; Martel, R. (2012). "Raman Spectroscopy hyperspectral imager based
on Bragg Tunable Filters". SPIE Photonics North. Photonics North 2012. 8412: 84121J. doi:10.1117/12.2000479 (htt
ps://doi.org/10.1117%2F12.2000479).
29. Robin W. Havener; et al. (December 2011). "High-Throughput Graphene Imaging on Arbitrary Substrates with
Widefield Raman Spectroscopy". ACS Nano. 6 (1): 373–80. PMID 22206260 (https://www.ncbi.nlm.nih.gov/pubme
d/22206260). doi:10.1021/nn2037169 (https://doi.org/10.1021%2Fnn2037169).
30. Gaufrès, E.; Tang, N. Y.-Wa; Lapointe, F.; Cabana, J.; Nadon, M.-A.; Cottenye, N.; Raymond, F.; Szkopek, T.;
Martel, R. (2014). "Giant Raman scattering from J-aggregated dyes inside carbon nanotubes for multispectral
imaging". Nature Photonics. 8: 72–78. doi:10.1038/nphoton.2013.309 (https://doi.org/10.1038%2Fnphoton.2013.30
9).
31. "Mitochondria on guard of human life" (https://phys.org/news/2015-11-mitochondria-human-life.html), Lomonosov
Moscow State University. Phys. November 18, 2015. Retrieved 10 feb 2017
32. Ellis DI; Goodacre R (August 2006). "Metabolic fingerprinting in disease diagnosis: biomedical applications of
infrared and Raman spectroscopy". Analyst. 131 (8): 875–85. Bibcode:2006Ana...131..875E (http://adsabs.harvard.e
du/abs/2006Ana...131..875E). PMID 17028718 (https://www.ncbi.nlm.nih.gov/pubmed/17028718).
doi:10.1039/b602376m (https://doi.org/10.1039%2Fb602376m).
33. Itoh, Yuki; Hasegawa, Takeshi (May 2, 2012). "Polarization Dependence of Raman Scattering from a Thin Film
Involving Optical Anisotropy Theorized for Molecular Orientation Analysis". The Journal of Physical Chemistry A.
116 (23): 5560–5570. PMID 22551093 (https://www.ncbi.nlm.nih.gov/pubmed/22551093). doi:10.1021/jp301070a
(https://doi.org/10.1021%2Fjp301070a).
34. Iliev, M. N.; Abrashev, M. V.; Laverdiere, J.; Jandi, S.; et al. (February 16, 2006). "Distortion-dependent Raman
spectra and mode mixing in RMnO3 perovskites (R=La,Pr,Nd,Sm,Eu,Gd,Tb,Dy,Ho,Y)". Physical Review B.
35. Khanna, R.K. (1957). Evidence of ion-pairing in the polarized Raman spectra of a Ba²⁺–CrO₄²⁻ doped KI single
crystal. John Wiley & Sons, Ltd. doi:10.1002/jrs.1250040104 (https://doi.org/10.1002%2Fjrs.1250040104).
36. Porto's notation (http://www.cryst.ehu.es/cgi-bin/cryst/programs/nph-doc-raman) Bilbao Crystallographic → Raman
scattering
37. Jeanmaire DL; van Duyne RP (1977). "Surface Raman Electrochemistry Part I. Heterocyclic, Aromatic and
Aliphatic Amines Adsorbed on the Anodized Silver Electrode". Journal of Electroanalytical Chemistry. Elsevier
Sequouia S.A. 84: 1–20. doi:10.1016/S0022-0728(77)80224-6 (https://doi.org/10.1016%2FS0022-0728%2877%298
0224-6).
38. Lombardi JR; Birke RL (2008). "A Unified Approach to Surface-Enhanced Raman Spectroscopy". [Journal of
Physical Chemistry C]. American Chemical Society. 112 (14): 5605–5617. doi:10.1021/jp800167v (https://doi.org/1
0.1021%2Fjp800167v).
39. Chao RS; Khanna RK; Lippincott ER (1974). "Theoretical and experimental resonance Raman intensities for the
manganate ion". J Raman Spectroscopy. 3 (2–3): 121–131. Bibcode:1975JRSp....3..121C (http://adsabs.harvard.edu/
abs/1975JRSp....3..121C). doi:10.1002/jrs.1250030203 (https://doi.org/10.1002%2Fjrs.1250030203).
40. Zachary J. Smith & Andrew J. Berger (2008). "Integrated Raman- and angular-scattering microscopy". Opt. Lett. 33
(7): 714–716. Bibcode:2008OptL...33..714S (http://adsabs.harvard.edu/abs/2008OptL...33..714S).
doi:10.1364/OL.33.000714 (https://doi.org/10.1364%2FOL.33.000714).

41. Kneipp K; et al. (1999). "Surface-Enhanced Non-Linear Raman Scattering at the Single Molecule Level". Chem.
Phys. 247: 155–162. Bibcode:1999CP....247..155K (http://adsabs.harvard.edu/abs/1999CP....247..155K).
doi:10.1016/S0301-0104(99)00165-2 (https://doi.org/10.1016%2FS0301-0104%2899%2900165-2).
42. Barron LD; Hecht L; McColl IH; Blanch EW (2004). "Raman optical activity comes of age". Molec. Phys. 102 (8):
731–744. Bibcode:2004MolPh.102..731B (http://adsabs.harvard.edu/abs/2004MolPh.102..731B).
doi:10.1080/00268970410001704399 (https://doi.org/10.1080%2F00268970410001704399).
43. Schrader, Bernhard; Bergmann, Gerhard (1967). "Die Intensität des Ramanspektrums polykristalliner Substanzen".
Fresenius' Zeitschrift für Analytische Chemie. 225 (2): 230–247. ISSN 0016-1152 (https://www.worldcat.org/issn/00
16-1152). doi:10.1007/BF00983673 (https://doi.org/10.1007%2FBF00983673).
44. Matousek, P.; Parker, A. W. (2006). "Bulk Raman Analysis of Pharmaceutical Tablets". Applied Spectroscopy. 60
(12): 1353–1357. Bibcode:2006ApSpe..60.1353M (http://adsabs.harvard.edu/abs/2006ApSpe..60.1353M).
PMID 17217583 (https://www.ncbi.nlm.nih.gov/pubmed/17217583). doi:10.1366/000370206779321463 (https://doi.
org/10.1366%2F000370206779321463).
45. Matousek, P.; Stone, N. (2007). "Prospects for the diagnosis of breast cancer by noninvasive probing of calcifications
using transmission Raman spectroscopy". Journal of Biomedical Optics. 12 (2): 024008.
Bibcode:2007JBO....12b4008M (http://adsabs.harvard.edu/abs/2007JBO....12b4008M). PMID 17477723 (https://ww
w.ncbi.nlm.nih.gov/pubmed/17477723). doi:10.1117/1.2718934 (https://doi.org/10.1117%2F1.2718934).
46. Kamemoto, Lori E.; Misra, Anupam K.; Sharma, Shiv K.; Goodman, Hugh Luk; et al. (December 4, 2009). "Near-
Infrared Micro-Raman Spectroscopy for in Vitro Detection of Cervical Cancer" (https://www.ncbi.nlm.nih.gov/pmc/
articles/PMC2880181). Applied Spectroscopy. 64 (3): 255–61. PMC 2880181 (https://www.ncbi.nlm.nih.gov/pmc/art
icles/PMC2880181)  . PMID 20223058 (https://www.ncbi.nlm.nih.gov/pubmed/20223058).
doi:10.1366/000370210790918364 (https://doi.org/10.1366%2F000370210790918364).
47. Hermann, P; Hermeling, A; Lausch, V; Holland, G; Möller, L; Bannert, N; Naumann, D (2011). "Evaluation of tip-
enhanced Raman spectroscopy for characterizing different virus strains". Analyst. 136 (2): 1148–1152.
doi:10.1039/C0AN00531B (https://doi.org/10.1039%2FC0AN00531B).
48. Novotny, L; Hafner, C (1994). "Light propagation in a cylindrical waveguide with a complex, metallic, dielectric
function". Physical Review E. 50 (5): 4094–4106. Bibcode:1994PhRvE..50.4094N (http://adsabs.harvard.edu/abs/19
94PhRvE..50.4094N). doi:10.1103/PhysRevE.50.4094 (https://doi.org/10.1103%2FPhysRevE.50.4094).
49. De Angelis, F; Das, G; Candeloro, P; Patrini, M; et al. (2010). "Nanoscale chemical mapping using three-
dimensional adiabatic compression of surface plasmon polaritons". Nature Nanotechnology. 5 (1): 67–72.
Bibcode:2010NatNa...5...67D (http://adsabs.harvard.edu/abs/2010NatNa...5...67D). PMID 19935647 (https://www.n
cbi.nlm.nih.gov/pubmed/19935647). doi:10.1038/nnano.2009.348 (https://doi.org/10.1038%2Fnnano.2009.348).
50. De Angelis, F; Proietti Zaccaria, R; Francardi, M; Liberale, C; et al. (2011). "Multi-scheme approach for efficient
surface plasmon polariton generation in metallic conical tips on AFM-based cantilevers". Optics Express. 19 (22):
22268. Bibcode:2011OExpr..1922268D (http://adsabs.harvard.edu/abs/2011OExpr..1922268D).
doi:10.1364/OE.19.022268 (https://doi.org/10.1364%2FOE.19.022268).
51. Proietti Zaccaria, R; Alabastri, A; De Angelis, F; Das, G; et al. (2012). "Fully analytical description of adiabatic
compression in dissipative polaritonic structures". Physical Review B. 86 (3): 035410.
Bibcode:2012PhRvB..86c5410P (http://adsabs.harvard.edu/abs/2012PhRvB..86c5410P).
doi:10.1103/PhysRevB.86.035410 (https://doi.org/10.1103%2FPhysRevB.86.035410).
52. Proietti Zaccaria, R; De Angelis, F; Toma, A; Razzari, L; et al. (2012). "Surface plasmon polariton compression
through radially and linearly polarized source". Optics Letters. 37 (4): 545. Bibcode:2012OptL...37..545Z (http://ads
abs.harvard.edu/abs/2012OptL...37..545Z). doi:10.1364/OL.37.000545 (https://doi.org/10.1364%2FOL.37.000545).
53. Misra, Anupam K.; Sharma, Shiv K.; Kamemoto, Lori; Zinin, Pavel V.; et al. (December 8, 2008). "Novel Micro-
Cavity Substrates for Improving the Raman Signal from Submicrometer Size Materials". Applied Spectroscopy. 63
(3): 373–7. PMID 19281655 (https://www.ncbi.nlm.nih.gov/pubmed/19281655). doi:10.1366/000370209787598988
(https://doi.org/10.1366%2F000370209787598988).
54. Cooney, J. (1965). Proceedings of the MRI Symposium on Electromagnetic Sensing of the Earth from Satellites, J.
Fox, Ed.
55. Leonard, Donald A. (1967). "Observation of Raman Scattering from the Atmosphere using a Pulsed Nitrogen
Ultraviolet Laser". Nature. 216 (5111): 142–143. doi:10.1038/216142a0 (https://doi.org/10.1038%2F216142a0).
56. Angel, S. M.; Kulp, Thomas J.; Vess, Thomas M. (1992). "Remote-Raman Spectroscopy at Intermediate Ranges
Using Low-Power cw Lasers". Applied Spectroscopy. 46 (7): 1085–1091. doi:10.1366/0003702924124132 (https://d
oi.org/10.1366%2F0003702924124132).
57. Carter, J. C.; Angel, S. M.; Lawrence-Snyder, M; Scaffidi, J; Whipple, R. E.; Reynolds, J. G. (2005). "Standoff
detection of high explosive materials at 50 meters in ambient light conditions using a small Raman instrument".
Applied Spectroscopy. 59 (6): 769–75. PMID 16053543 (https://www.ncbi.nlm.nih.gov/pubmed/16053543).
doi:10.1366/0003702054280612 (https://doi.org/10.1366%2F0003702054280612).
58. Gaft, M.; Panczer, M.G.; Reisfeld, R.; Uspensky, E. (2001). "Laser-induced timeresolved luminescence as a tool for
rare-earth element identification in minerals". Phys. Chem. Minerals. 28 (5): 347–363. doi:10.1007/s002690100163
(https://doi.org/10.1007%2Fs002690100163).
59. Waychunas, G.A. (1988). "Luminescence, x-ray emission and new spectroscopies" (http://rimg.geoscienceworld.org/
content/18/1/639.citation). Reviews in Mineralogy Mineralogical Society of America. 18.

External links
DoITPoMS Teaching and Learning Package – Raman Spectroscopy (http://www.doitpoms.ac.uk/tlplib/ra
man/index.php) – an introduction to Raman spectroscopy, aimed at undergraduate level.
Raman spectroscopy in analysis of paintings (http://colourlex.com/project/raman-spectroscopy/),
ColourLex


Nuclear magnetic resonance spectroscopy


From Wikipedia, the free encyclopedia

Nuclear magnetic resonance spectroscopy, most commonly


known as NMR spectroscopy, is a research technique that exploits
the magnetic properties of certain atomic nuclei. This type of
spectroscopy determines the physical and chemical properties of
atoms or the molecules in which they are contained. It relies on the
phenomenon of nuclear magnetic resonance and can provide
detailed information about the structure, dynamics, reaction state,
and chemical environment of molecules. The intramolecular
magnetic field around an atom in a molecule changes the resonance
frequency, thus giving access to details of the electronic structure
of a molecule and its individual functional groups.

Most frequently, NMR spectroscopy is used by chemists and


biochemists to investigate the properties of organic molecules,
although it is applicable to any kind of sample that contains nuclei
possessing spin. Suitable samples range from small compounds
analyzed with 1-dimensional proton or carbon-13 NMR
spectroscopy to large proteins or nucleic acids using 3- or 4-dimensional techniques. The impact of NMR spectroscopy on the sciences has been substantial because of the range of information and the diversity of samples, including solutions and solids.

A 900 MHz NMR instrument with a 21.1 T magnet at HWB-NMR, Birmingham, UK

NMR spectra are unique, well-resolved, analytically tractable and often highly predictable for small molecules.
Thus, in organic chemistry practice, NMR analysis is used to confirm the identity of a substance. Different
functional groups are obviously distinguishable, and identical functional groups with differing neighboring
substituents still give distinguishable signals. NMR has largely replaced traditional wet chemistry tests such as
color reagents or typical chromatography for identification. A disadvantage is that a relatively large amount, 2–
50 mg, of a purified substance is required, although it may be recovered through a workup. Preferably, the
sample should be dissolved in a solvent, because NMR analysis of solids requires a dedicated MAS machine
and may not give equally well-resolved spectra. The timescale of NMR is relatively long, and thus it is not
suitable for observing fast phenomena, producing only an averaged spectrum. Although large amounts of
impurities do show on an NMR spectrum, better methods exist for detecting impurities, as NMR is inherently
not very sensitive, though sensitivity improves at higher frequencies (i.e. higher magnetic fields).

NMR spectrometers are relatively expensive; universities usually have them, but they are less common in
private companies. Modern NMR spectrometers have a very strong, large and expensive liquid helium-cooled
superconducting magnet, because resolution directly depends on magnetic field strength. Less expensive
machines using permanent magnets and lower resolution are also available, which still give sufficient
performance for certain application such as reaction monitoring and quick checking of samples. There are even
benchtop nuclear magnetic resonance spectrometers.

NMR can be observed in magnetic fields less than a millitesla. "Low"-resolution NMR produces broader peaks which can easily overlap one another, causing issues in resolving complex structures. The use of higher-strength magnetic fields results in clearer resolution of the peaks and is the standard in industry.[1]

Contents
1 History
2 Basic NMR techniques

2.1 Resonant frequency


2.2 Sample handling
2.3 Deuterated solvents
2.4 Shim and lock
2.5 Acquisition of spectra
2.6 Chemical shift
2.7 J-coupling
2.7.1 Second-order (or strong) coupling
2.7.2 Magnetic inequivalence
3 Correlation spectroscopy
4 Solid-state nuclear magnetic resonance
5 Biomolecular NMR spectroscopy
5.1 Proteins
5.2 Nucleic acids
5.3 Carbohydrates
6 See also
7 References
8 Further reading
9 External links

History
The Purcell group at Harvard University and the Bloch group at Stanford University independently developed
NMR spectroscopy in the late 1940s and early 1950s. Edward Mills Purcell and Felix Bloch shared the 1952
Nobel Prize in Physics for their discoveries.[2]

Basic NMR techniques


Resonant frequency

When placed in a magnetic field, NMR active nuclei (such as 1H or 13C)


absorb electromagnetic radiation at a frequency characteristic of the
isotope.[3] The resonant frequency, energy of the absorption, and the
intensity of the signal are proportional to the strength of the magnetic field.
For example, in a 21 Tesla magnetic field, protons resonate at 900 MHz. It
is common to refer to a 21 T magnet as a 900 MHz magnet, although
different nuclei resonate at a different frequency at this field strength in
proportion to their nuclear magnetic moments.
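The proportionality between field strength and resonant frequency follows the Larmor relation ν = γB/(2π). The sketch below uses standard literature gyromagnetic ratios (they are not quoted in this article) to reproduce the figures above:

```python
import math

# Gyromagnetic ratios in rad s^-1 T^-1 (standard literature values).
GAMMA = {"1H": 267.522e6, "13C": 67.283e6}

def larmor_frequency_mhz(nucleus, field_tesla):
    """Resonant (Larmor) frequency nu = gamma * B / (2 * pi), returned in MHz."""
    return GAMMA[nucleus] * field_tesla / (2 * math.pi) / 1e6

# A 21.1 T magnet puts 1H at about 898 MHz (hence a "900 MHz" magnet),
# while 13C resonates at about 226 MHz in the same field.
print(round(larmor_frequency_mhz("1H", 21.1), 1))
print(round(larmor_frequency_mhz("13C", 21.1), 1))
```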

Sample handling

An NMR spectrometer typically consists of a spinning sample-holder


inside a very strong magnet, a radio-frequency emitter and a receiver with a probe (an antenna assembly) that goes inside the magnet to surround the sample, optionally gradient coils for diffusion measurements, and electronics to control the system. Spinning the sample is necessary to average out diffusional motion. However, measurements of diffusion constants (diffusion ordered spectroscopy, or DOSY)[4][5] are done with the sample held stationary and spinning turned off, and flow cells can be used for online analysis of process flows.

The NMR sample is prepared in a thin-walled glass tube - an NMR tube.

Deuterated solvents

The vast majority of nuclei in a solution belong to the solvent, and most regular solvents are hydrocarbons, which contain NMR-active protons. Thus, solvents in which hydrogen has been substituted by deuterium (hydrogen-2) to 99+% are used.
The most used deuterated solvent is deuterochloroform (CDCl3), although deuterium oxide (D2O) and
deuterated DMSO (DMSO-d6) are used for hydrophilic analytes and deuterated benzene is also common. The
chemical shifts are slightly different in different solvents, depending on electronic solvation effects. NMR
spectra are often calibrated against the known solvent residual proton peak instead of added tetramethylsilane.

Shim and lock

To detect the very small frequency shifts due to nuclear magnetic resonance, the applied magnetic field must be
constant throughout the sample volume. High resolution NMR spectrometers use shims to adjust the
homogeneity of the magnetic field to parts per billion (ppb) in a volume of a few cubic centimeters. In order to
detect and compensate for inhomogeneity and drift in the magnetic field, the spectrometer maintains a "lock"
on the solvent deuterium frequency with a separate lock unit. In modern NMR spectrometers shimming is
adjusted automatically, though in some cases the operator has to optimize the shim parameters manually to
obtain the best possible resolution.[6][7]

Acquisition of spectra

Upon excitation of the sample with a radio frequency (60–1000 MHz) pulse, a nuclear magnetic resonance
response - a free induction decay (FID) - is obtained. It is a very weak signal, and requires sensitive radio
receivers to pick up. A Fourier transform is carried out to extract the frequency-domain spectrum from the raw
time-domain FID. A spectrum from a single FID has a low signal-to-noise ratio, but fortunately it improves
readily with averaging of repeated acquisitions. Good 1H NMR spectra can be acquired with 16 repeats, which
takes only minutes. However, for elements heavier than hydrogen, the relaxation time is rather long, e.g. around
8 seconds for 13C. Thus, acquisition of quantitative heavy-element spectra can be time-consuming, taking tens
of minutes to hours.
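The benefit of signal averaging can be made concrete with the usual square-root scaling of signal-to-noise with the number of co-added scans; the sketch below illustrates that scaling only and is not tied to any particular spectrometer.

```python
import math

def scans_needed(snr_gain):
    """Number of co-added scans needed for a given signal-to-noise gain,
    assuming SNR grows with the square root of the number of scans."""
    return math.ceil(snr_gain ** 2)

# Doubling the SNR costs 4x the scans (and 4x the acquisition time);
# a tenfold improvement costs 100 scans.
print(scans_needed(2), scans_needed(10))
```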

If the second excitation pulse is sent prematurely before the relaxation is complete, the average magnetization
vector still points in a nonparallel direction, giving suboptimal absorption and emission of the pulse. In
practice, the peak areas are then not proportional to the stoichiometry; only the presence, but not the amount of
functional groups is possible to discern. An inversion recovery experiment can be done to determine the
relaxation time and thus the required delay between pulses. A 180° pulse, an adjustable delay, and a 90° pulse are transmitted. When the 90° pulse exactly cancels out the signal, the delay corresponds to the time needed for 90° of relaxation.[8] Inversion recovery is worthwhile for quantitative 13C, 2D and other time-consuming
experiments.
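Under the common exponential-recovery model Mz(t) = M0(1 − 2e^(−t/T1)), the null condition described above fixes T1 directly; the sketch below assumes that model, and the 5.5 s delay is a hypothetical example.

```python
import math

def t1_from_null_delay(null_delay_s):
    """Estimate T1 from the inversion-recovery null point.

    For Mz(t) = M0 * (1 - 2 * exp(-t / T1)), the signal passes through zero
    at t_null = T1 * ln(2), so T1 = t_null / ln(2).
    """
    return null_delay_s / math.log(2)

# A null at a 5.5 s delay implies T1 of about 7.9 s, the same order as the
# ~8 s 13C relaxation time mentioned above.
print(round(t1_from_null_delay(5.5), 1))
```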

Chemical shift

A spinning charge generates a magnetic field that results in a magnetic moment proportional to the spin. In the
presence of an external magnetic field, two spin states exist (for a spin 1/2 nucleus): one spin up and one spin
down, where one aligns with the magnetic field and the other opposes it. The difference in energy (ΔE) between
the two spin states increases as the strength of the field increases, but this difference is usually very small,
leading to the requirement for strong NMR magnets (1-20 T for modern NMR instruments). Irradiation of the
sample with energy corresponding to the exact spin state separation of a specific set of nuclei will cause
excitation of those set of nuclei in the lower energy state to the higher energy state.

For spin 1/2 nuclei, the energy difference between the two spin states at a given magnetic field strength is
proportional to their magnetic moment. However, even if all protons have the same magnetic moments, they do
not give resonant signals at the same frequency values. This difference arises from the differing electronic
environments of the nucleus of interest. Upon application of an external magnetic field, these electrons move in
response to the field and generate local magnetic fields that oppose the much stronger applied field. This local
field thus "shields" the proton from the applied magnetic field, which must therefore be increased in order to
achieve resonance (absorption of rf energy). Such increments are very small, usually in parts per million (ppm).
For instance, the proton peak from an aldehyde is shifted ca. 10 ppm compared to a hydrocarbon peak, since as
an electron-withdrawing group, the carbonyl deshields the proton by reducing the local electron density. The
difference between 2.3487 T and 2.3488 T is therefore about 42 ppm. However a frequency scale is commonly
used to designate the NMR signals, even though the spectrometer may operate by sweeping the magnetic field,
and thus the 42 ppm is 4200 Hz for a 100 MHz reference frequency (rf).

However given that the location of different NMR signals is dependent on the external magnetic field strength
and the reference frequency, the signals are usually reported relative to a reference signal, usually that of TMS
(tetramethylsilane). Additionally, since the distribution of NMR signals is field dependent, these frequencies are
divided by the spectrometer frequency. However, since we are dividing Hz by MHz, the resulting number
would be too small, and thus it is multiplied by a million. This operation therefore gives a locator number
called the "chemical shift" with units of parts per million.[9] In general, chemical shifts for protons are highly
predictable since the shifts are primarily determined by simpler shielding effects (electron density), but the
chemical shifts for many heavier nuclei are more strongly influenced by other factors including excited states
("paramagnetic" contribution to shielding tensor).
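The ppm bookkeeping described above amounts to a one-line calculation; the sketch below reuses the 4200 Hz on a 100 MHz spectrometer example quoted earlier in this section (the function name is illustrative).

```python
def chemical_shift_ppm(signal_hz, reference_hz, spectrometer_hz):
    """Chemical shift delta = (nu_signal - nu_reference) / nu_spectrometer * 1e6, in ppm."""
    return (signal_hz - reference_hz) * 1e6 / spectrometer_hz

# A peak 4200 Hz away from the TMS reference on a 100 MHz spectrometer
# sits at 42 ppm, matching the figure quoted above.
print(chemical_shift_ppm(4200.0, 0.0, 100e6))
```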

The chemical shift provides information about the structure of the


molecule. The conversion of the raw data to this information is called
assigning the spectrum. For example, for the 1H-NMR spectrum for
ethanol (CH3CH2OH), one would expect signals at each of three
specific chemical shifts: one for the CH3 group, one for the CH2 group
and one for the OH group. A typical CH3 group has a shift around 1
ppm, a CH2 attached to an OH has a shift of around 4 ppm and an OH
has a shift anywhere from 2–6 ppm depending on the solvent used and
the amount of hydrogen bonding. While the O atom does draw electron
density away from the attached H through their mutual sigma bond, the Example of the chemical shift: NMR
electron lone pairs on the O bathe the H in their shielding effect. spectrum of hexaborane B6H10
showing peaks shifted in frequency,
In Paramagnetic NMR spectroscopy, measurements are conducted on which give clues as to the molecular
paramagnetic samples. The paramagnetism gives rise to very diverse structure. (click to read interpretation
chemical shifts. In 1H NMR spectroscopy, the chemical shift range can details)
span 500 ppm.

Because of molecular motion at room temperature, the three methyl protons average out during the NMR
experiment (which typically requires a few ms). These protons become degenerate and form a peak at the same
chemical shift.

The shape and area of peaks are indicators of chemical structure too. In the example above—the proton spectrum of ethanol—the CH3 peak has three times the area of the OH peak. Similarly, the CH2 peak would be twice the area of the OH peak but only 2/3 the area of the CH3 peak.

Software allows analysis of the signal intensity of peaks, which, under conditions of optimal relaxation, correlates with the number of protons of that type. Analysis of signal intensity is done by integration—the mathematical process that calculates the area under a curve. The analyst must integrate the peak rather than measure its height, because the peaks also have width, and thus a peak's size depends on its area, not its height. However, it should be mentioned that the number of protons, or of any other observed nucleus, is only proportional to the intensity, or the integral, of the NMR signal in the very simplest one-dimensional NMR experiments. In more elaborate experiments, for instance experiments typically used to obtain carbon-13 NMR spectra, the integral of the signals depends on the relaxation rate of the nucleus and on its scalar and dipolar coupling constants. Very often these factors are poorly known; therefore, the integral of the NMR signal is very difficult to interpret in more complicated NMR experiments.
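
The following sketch illustrates the point with a hypothetical, noise-free ethanol-like spectrum built from three Lorentzian peaks whose areas are set to the ratio 3:2:1; the peak positions and linewidth are assumptions chosen only for the example:

    import numpy as np

    # Hypothetical 1D spectrum: three Lorentzian peaks mimicking ethanol (CH3, CH2, OH).
    ppm = np.linspace(0.0, 6.0, 6000)

    def lorentzian(x, center, area, width=0.02):
        return area * (width / np.pi) / ((x - center) ** 2 + width ** 2)

    spectrum = (lorentzian(ppm, 1.2, 3.0)      # CH3, area 3
                + lorentzian(ppm, 3.7, 2.0)    # CH2, area 2
                + lorentzian(ppm, 2.6, 1.0))   # OH, area 1

    # Integrate each peak over a window around its centre (trapezoidal rule);
    # the recovered areas, not the peak heights, reproduce the 3:2:1 proton ratio.
    for name, center in [("CH3", 1.2), ("CH2", 3.7), ("OH", 2.6)]:
        window = (ppm > center - 0.3) & (ppm < center + 0.3)
        print(name, round(np.trapz(spectrum[window], ppm[window]), 2))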

J-coupling

Some of the most useful information for structure determination in a one-dimensional NMR spectrum comes from J-coupling or scalar coupling (a special case of spin-spin coupling) between NMR active nuclei. This coupling arises from the interaction of different spin states through the chemical bonds of a molecule and results in the splitting of NMR signals. These splitting patterns can be complex or simple and, likewise, can be straightforwardly interpretable or deceptive. This coupling provides detailed insight into the connectivity of atoms in a molecule.

Multiplicity     Intensity ratio
Singlet (s)      1
Doublet (d)      1:1
Triplet (t)      1:2:1
Quartet (q)      1:3:3:1
Quintet          1:4:6:4:1
Sextet           1:5:10:10:5:1
Septet           1:6:15:20:15:6:1

Coupling to n equivalent (spin ½) nuclei splits the signal into an n+1 multiplet with intensity ratios following Pascal's triangle, as described in the table above.
Coupling to additional spins will lead to further splittings of each component of
the multiplet e.g. coupling to two
different spin ½ nuclei with
significantly different coupling
constants will lead to a doublet of
doublets (abbreviation: dd). Note
that coupling between nuclei that
are chemically equivalent (that is,
have the same chemical shift) has
no effect on the NMR spectra and
couplings between nuclei that are
distant (usually more than 3 bonds
apart for protons in flexible
molecules) are usually too small to
cause observable splittings. Long-
range couplings over more than
three bonds can often be observed
in cyclic and aromatic compounds,
leading to more complex splitting
patterns.

Example 1H NMR spectrum (1-dimensional) of ethanol plotted as signal intensity vs. chemical shift. There are three different types of H atoms in ethanol regarding NMR. The hydrogen (H) on the -OH group is not coupling with the other H atoms and appears as a singlet, but the CH3- and the -CH2- hydrogens are coupling with each other, resulting in a triplet and quartet respectively.

For example, in the proton spectrum for ethanol described above, the CH3 group is split into a triplet with an intensity ratio of 1:2:1 by the two neighboring CH2 protons. Similarly, the CH2 is split into a quartet with an intensity ratio of 1:3:3:1 by the three neighboring CH3 protons. In principle, the two CH2 protons would also be split again into a doublet to form a doublet of quartets by the hydroxyl proton, but intermolecular exchange of the acidic hydroxyl proton often results in a loss of coupling information.

Coupling to any spin ½ nuclei such as phosphorus-31 or fluorine-19 works in this fashion (although the
magnitudes of the coupling constants may be very different). But the splitting patterns differ from those
described above for nuclei with spin greater than ½ because the spin quantum number has more than two
possible values. For instance, coupling to deuterium (a spin 1 nucleus) splits the signal into a 1:1:1 triplet
because the spin 1 has three spin states. Similarly, a spin 3/2 nucleus splits a signal into a 1:1:1:1 quartet and so
on.
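
These first-order rules can be reproduced by repeated convolution, since each equivalent nucleus of spin I splits every existing line into 2I + 1 equal components; the short sketch below (function name and inputs are illustrative) regenerates the Pascal's-triangle intensities and the 1:1:1 deuterium triplet:

    def multiplet(n_nuclei, spin=0.5):
        """Relative intensities after coupling to n equivalent nuclei of the given spin."""
        lines = [1]
        states = int(2 * spin + 1)              # spin states per nucleus (2I + 1)
        for _ in range(n_nuclei):
            new = [0] * (len(lines) + states - 1)
            for i, weight in enumerate(lines):
                for k in range(states):         # each existing line splits into `states` lines
                    new[i + k] += weight
            lines = new
        return lines

    print(multiplet(2))             # [1, 2, 1]    CH3 of ethanol, split by two CH2 protons
    print(multiplet(3))             # [1, 3, 3, 1] CH2 of ethanol, split by three CH3 protons
    print(multiplet(1, spin=1.0))   # [1, 1, 1]    coupling to one deuterium (spin 1)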

Coupling combined with the chemical shift (and the integration for protons) tells us not only about the chemical
environment of the nuclei, but also the number of neighboring NMR active nuclei within the molecule. In more
complex spectra with multiple peaks at similar chemical shifts or in spectra of nuclei other than hydrogen,
coupling is often the only way to distinguish different nuclei.

Second-order (or strong) coupling

The above description assumes that the coupling constant is small in comparison with the difference in NMR frequencies between the inequivalent spins. If the shift separation decreases (or the coupling strength increases), the multiplet intensity patterns are first distorted, and then become more complex and less easily analyzed (especially if more than two spins are involved). Intensification of some peaks in a multiplet is achieved at the expense of the remainder, which sometimes almost disappear in the background noise, although the integrated area under the peaks remains constant. In most high-field NMR, however, the distortions are usually modest and the characteristic distortions (roofing) can in fact help to identify related peaks.

1H NMR spectrum of menthol with chemical shift in ppm on the horizontal axis. Each magnetically inequivalent proton has a characteristic shift, and couplings to other protons appear as splitting of the peaks into multiplets: e.g. peak a, because of the three magnetically equivalent protons in methyl group a, couples to one adjacent proton (e) and thus appears as a doublet.

Some of these patterns can be analyzed with the method published by John Pople,[10] though it has limited
scope.

Second-order effects decrease as the frequency difference between multiplets increases, so that high-field (i.e.
high-frequency) NMR spectra display less distortion than lower frequency spectra. Early spectra at 60 MHz
were more prone to distortion than spectra from later machines typically operating at frequencies at 200 MHz
or above.
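
For the simplest strongly coupled case, a two-spin AB system, this transition can be made quantitative with the standard textbook expressions (not given in this article): the four lines lie at ±(D ± J)/2 about the centre, where D = sqrt(Δν² + J²), and the inner pair carries relative intensity 1 + J/D versus 1 − J/D for the outer pair. A small sketch shows the roofing appearing as Δν shrinks toward J:

    import math

    def ab_quartet(delta_nu_hz, j_hz, center_hz=0.0):
        """Line positions (Hz) and relative intensities for a two-spin AB system."""
        d = math.hypot(delta_nu_hz, j_hz)            # D = sqrt(dv^2 + J^2)
        outer, inner = 1 - j_hz / d, 1 + j_hz / d    # roofing: inner lines gain intensity
        return [(center_hz - (d + j_hz) / 2, outer),
                (center_hz - (d - j_hz) / 2, inner),
                (center_hz + (d - j_hz) / 2, inner),
                (center_hz + (d + j_hz) / 2, outer)]

    print(ab_quartet(delta_nu_hz=100.0, j_hz=7.0))   # nearly first-order: ~1:1 doublets
    print(ab_quartet(delta_nu_hz=15.0, j_hz=7.0))    # strongly "roofed" AB quartet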

Magnetic inequivalence

More subtle effects can occur if chemically equivalent spins (i.e., nuclei related by symmetry and so having the
same NMR frequency) have different coupling relationships to external spins. Spins that are chemically
equivalent but are not indistinguishable (based on their coupling relationships) are termed magnetically
inequivalent. For example, the 4 H sites of 1,2-dichlorobenzene divide into two chemically equivalent pairs by
symmetry, but an individual member of one of the pairs has different couplings to the spins making up the other
pair. Magnetic inequivalence can lead to highly complex spectra which can only be analyzed by computational
modeling. Such effects are more common in NMR spectra of aromatic and other non-flexible systems, while
conformational averaging about C-C bonds in flexible molecules tends to equalize the couplings between
protons on adjacent carbons, reducing problems with magnetic inequivalence.

Correlation spectroscopy
Correlation spectroscopy is one of several types of two-dimensional nuclear magnetic resonance (NMR)
spectroscopy or 2D-NMR. This type of NMR experiment is best known by its acronym, COSY. Other types of
two-dimensional NMR include J-spectroscopy, exchange spectroscopy (EXSY), Nuclear Overhauser effect
spectroscopy (NOESY), total correlation spectroscopy (TOCSY) and heteronuclear correlation experiments,
such as HSQC, HMQC, and HMBC. In correlation spectroscopy, emission is centered on the peak of an
individual nucleus; if its magnetic field is correlated with another nucleus by through-bond (COSY, HSQC,
etc.) or through-space (NOE) coupling, a response can also be detected on the frequency of the correlated
nucleus. Two-dimensional NMR spectra provide more information about a molecule than one-dimensional
NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules
that are too complicated to work with using one-dimensional NMR. The first two-dimensional experiment,
COSY, was proposed by Jean Jeener, a professor at Université Libre de Bruxelles, in 1971.[11][12] This
experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published
their work in 1976.[13]

Solid-state nuclear magnetic resonance


A variety of physical circumstances prevent molecules from being studied in solution and, at the same time, from being probed by other spectroscopic techniques at an atomic level. In solid-phase media, such as crystals, microcrystalline powders, gels, anisotropic solutions, etc., it is in particular the dipolar coupling and chemical shift anisotropy that dominate the behaviour of the nuclear spin systems. In conventional solution-state NMR spectroscopy, these additional interactions would lead to a significant broadening of spectral lines. A variety of techniques allows high-resolution conditions to be established that can, at least for 13C spectra, be comparable to solution-state NMR spectra.

Two important concepts for high-resolution solid-state NMR spectroscopy are the limitation of possible
molecular orientation by sample orientation, and the reduction of anisotropic nuclear magnetic interactions by
sample spinning. Of the latter approach, fast spinning around the magic angle is a very prominent method,
when the system comprises spin 1/2 nuclei. Spinning rates of ca. 20 kHz are used, which demands special equipment. A number of intermediate techniques, with samples of partial alignment or reduced mobility, are currently being used in NMR spectroscopy.
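
As a small numerical aside, the magic angle used in magic-angle spinning is the angle at which the orientation-dependent factor 3cos²θ − 1 of the dipolar coupling vanishes:

    import math

    theta_magic = math.acos(1.0 / math.sqrt(3.0))    # the magic angle
    print(math.degrees(theta_magic))                 # ~54.74 degrees
    print(3 * math.cos(theta_magic) ** 2 - 1)        # ~0: the anisotropic term cancels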

Applications in which solid-state NMR effects occur are often related to structure investigations on membrane proteins, protein fibrils or all kinds of polymers, and chemical analysis in inorganic chemistry, but also include "exotic" applications like plant leaves and fuel cells. For example, Rahmani et al. studied the effect of pressure and temperature on the self-assembly of bicellar structures using deuterium NMR spectroscopy.[14]

Biomolecular NMR spectroscopy


Proteins

Much of the innovation within NMR spectroscopy has been within the field of protein NMR spectroscopy, an
important technique in structural biology. A common goal of these investigations is to obtain high resolution 3-
dimensional structures of the protein, similar to what can be achieved by X-ray crystallography. In contrast to
X-ray crystallography, NMR spectroscopy is usually limited to proteins smaller than 35 kDa, although larger
structures have been solved. NMR spectroscopy is often the only way to obtain high resolution information on
partially or wholly intrinsically unstructured proteins. It is now a common tool for the determination of
Conformation Activity Relationships where the structure before and after interaction with, for example, a drug
candidate is compared to its known biochemical activity. Proteins are orders of magnitude larger than the small organic molecules discussed earlier in this article, but the basic NMR techniques and some NMR theory also apply. Because of the much higher number of atoms present in a protein molecule in comparison with a small
organic compound, the basic 1D spectra become crowded with overlapping signals to an extent where direct
spectral analysis becomes untenable. Therefore, multidimensional (2, 3 or 4D) experiments have been devised
to deal with this problem. To facilitate these experiments, it is desirable to isotopically label the protein with
13C and 15N because the predominant naturally occurring isotope 12C is not NMR-active and the nuclear

quadrupole moment of the predominant naturally occurring 14N isotope prevents high resolution information
from being obtained from this nitrogen isotope. The most important method used for structure determination of
proteins utilizes NOE experiments to measure distances between atoms within the molecule. Subsequently, the

distances obtained are used to generate a 3D structure of the molecule by solving a distance geometry problem.
NMR can also be used to obtain information on the dynamics and conformational flexibility of different regions
of a protein.

Nucleic acids

"Nucleic acid NMR" is the use of NMR spectroscopy to obtain information about the structure and dynamics of
polynucleic acids, such as DNA or RNA. As of 2003, nearly half of all known RNA structures had been
determined by NMR spectroscopy.[15]

Nucleic acid and protein NMR spectroscopy are similar but differences exist. Nucleic acids have a smaller
percentage of hydrogen atoms, which are the atoms usually observed in NMR spectroscopy, and because
nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-
range" correlations.[16] The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR,
15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY) and total correlation spectroscopy (TOCSY) to detect through-bond nuclear
couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are
close to each other in space.[17]

Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to
determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation),
and sugar pucker conformations. For large-scale structure, these local parameters must be supplemented with
other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with
proteins, the double helix does not have a compact interior and does not fold back upon itself. NMR is also
useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick basepairing, and
coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which
tend to adopt complex conformations such as stem-loops and pseudoknots. NMR is also useful for probing the
binding of nucleic acid molecules to other molecules, such as proteins or drugs, by seeing which resonances are
shifted upon binding of the other molecule.[17]

Carbohydrates

Carbohydrate NMR spectroscopy addresses questions on the structure and conformation of carbohydrates.

See also
Distance geometry
Earth's field NMR
In vivo magnetic resonance spectroscopy
Functional magnetic resonance spectroscopy of the brain
Low field NMR
Magnetic Resonance Imaging
NMR crystallography
NMR spectra database
NMR tube - includes a section on sample preparation
NMR spectroscopy of stereoisomers
Nuclear magnetic resonance spectroscopy of nucleic acids
Nuclear magnetic resonance spectroscopy of proteins
Nuclear quadrupole resonance
Pulsed field magnet
Relaxation (NMR)
Triple-resonance nuclear magnetic resonance spectroscopy
References

1. Paudler, William (1974). Nuclear Magnetic Resonance. Boston: Allyn and Bacon Chemistry Series. pp. 9–11.
2. "Background and Theory Page of Nuclear Magnetic Resonance Facility" (http://www.nmr.unsw.edu.au/usercorner/nmrhistory.htm). Mark Wainwright Analytical Centre - University of New South Wales Sydney. 9 December 2011. Retrieved 9 February 2014.
3. Shah, N; Sattar, A; Benanti, M; Hollander, S; Cheuck, L (January 2006). "Magnetic resonance spectroscopy as an
imaging tool for cancer: a review of the literature." (http://www.jaoa.org/content/106/1/23.full). The Journal of the
American Osteopathic Association. 106 (1): 23–27. PMID 16428685 (https://www.ncbi.nlm.nih.gov/pubmed/164286
85).
4. Johnson Jr., C. S. (1999). "Diffusion ordered nuclear magnetic resonance spectroscopy: principles and applications".
Progress in Nuclear Magnetic Resonance Spectroscopy. 34: 203–256. doi:10.1016/S0079-6565(99)00003-5 (https://
doi.org/10.1016%2FS0079-6565%2899%2900003-5).
5. Neufeld, R.; Stalke, D. (2015). "Accurate Molecular Weight Determination of Small Molecules via DOSY-NMR by
Using External Calibration Curves with Normalized Diffusion Coefficients". Chem. Sci. 6: 3354–3364.
doi:10.1039/C5SC00670H (https://doi.org/10.1039%2FC5SC00670H).
6. "Center for NMR Spectroscopy: The Lock" (http://nmr.chem.wsu.edu/tutorials/basics/lock/). nmr.chem.wsu.edu.
7. "NMR Artifacts" (http://www2.chemistry.msu.edu/facilities/nmr/NMR_Artifacts.html). www2.chemistry.msu.edu.
8. Parella, Teodor. "INVERSION-RECOVERY EXPERIMENT" (http://triton.iqfr.csic.es/guide/eNMR/eNMR1D/invre
c.html). triton.iqfr.csic.es.
9. James Keeler. "Chapter 2: NMR and energy levels" (http://www-keeler.ch.cam.ac.uk/lectures/Irvine/chapter2.pdf)
(reprinted at University of Cambridge). Understanding NMR Spectroscopy. University of California, Irvine. Retrieved
2007-05-11.
10. Pople, J.A.; Bernstein, H. J.; Schneider, W. G. (1957). "The Analysis of Nuclear Magnetic Resonance Spectra". Can. J. Chem. 35: 65–81.
11. Aue, W. P. and Bartholdi, E. and Ernst, R. R., Two‐dimensional spectroscopy. Application to nuclear magnetic
resonance; The Journal of Chemical Physics, 64, 2229-2246 (1976) (http://dx.doi.org/10.1063/1.432450)
12. Jeener, J., Jeener, Jean: Reminiscences about the Early Days of 2D NMR; John Wiley & Sons, Ltd: Encyclopedia of
Magnetic Resonance (2007) (http://dx.doi.org/10.1002/9780470034590.emrhp0087)
13. Martin, G.E; Zekter, A.S., Two-Dimensional NMR Methods for Establishing Molecular Connectivity; VCH
Publishers, Inc: New York, 1988 (p.59)
14. A. Rahmani, C. Knight , and M. R. Morrow. Response to hydrostatic pressure of bicellar dispersions containing
anionic lipid: Pressure-induced interdigitation. 2013, 29 (44), pp 13481–13490, doi:10.1021/la4035694 (https://dx.do
i.org/10.1021%2Fla4035694)
15. Fürtig, Boris; Richter, Christian; Wöhnert, Jens; Schwalbe, Harald (2003). "NMR Spectroscopy of RNA".
ChemBioChem. 4 (10): 936–62. PMID 14523911 (https://www.ncbi.nlm.nih.gov/pubmed/14523911).
doi:10.1002/cbic.200300700 (https://doi.org/10.1002%2Fcbic.200300700).
16. Addess, Kenneth J.; Feigon, Juli (1996). "Introduction to 1H NMR Spectroscopy of DNA". In Hecht, Sidney M.
Bioorganic Chemistry: Nucleic Acids. New York: Oxford University Press. ISBN 0-19-508467-5.
17. Wemmer, David (2000). "Chapter 5: Structure and Dynamics by NMR". In Bloomfield, Victor A.; Crothers, Donald
M.; Tinoco, Ignacio. Nucleic acids: Structures, Properties, and Functions. Sausalito, California: University Science
Books. ISBN 0-935702-49-0.

Further reading
John D. Roberts (1959). Nuclear Magnetic Resonance : applications to organic chemistry (http://resolve
r.caltech.edu/CaltechBOOK:1959.001). McGraw-Hill Book Company. ISBN 9781258811662.
J.A.Pople; W.G.Schneider; H.J.Bernstein (1959). High-resolution Nuclear Magnetic Resonance.
McGraw-Hill Book Company.
A. Abragam (1961). The Principles of Nuclear Magnetism. Clarendon Press. ISBN 9780198520146.
Charles P. Slichter (1963). Principles of magnetic resonance: with examples from solid state physics.
Harper & Row. ISBN 9783540084761.
John Emsley; James Feeney; Leslie Howard Sutcliffe (1965). High Resolution Nuclear Magnetic
Resonance Spectroscopy. Pergamon. ISBN 9781483184081.

External links
James Keeler. "Understanding NMR Spectroscopy" (http://www-keeler.ch.cam.ac.uk/lectures/Irvine/) (reprinted at University of Cambridge). University of California, Irvine. Retrieved 2007-05-11.
The Basics of NMR (http://www.cis.rit.edu/htbooks/nmr/) - A
non-technical overview of NMR theory, equipment, and techniques by Dr. Joseph Hornak, Professor of
Chemistry at RIT
GAMMA and PyGAMMA Libraries (http://scion.duhs.duke.edu/vespa/gamma/) - GAMMA is an open
source C++ library written for the simulation of Nuclear Magnetic Resonance Spectroscopy experiments.
PyGAMMA is a Python wrapper around GAMMA.
relax (http://nmr-relax.com) Software for the analysis of NMR dynamics
Vespa (http://scion.duhs.duke.edu/vespa/project/wiki) - VeSPA (Versatile Simulation, Pulses and
Analysis) is a free software suite composed of three Python applications. These GUI based tools are for
magnetic resonance (MR) spectral simulation, RF pulse design, and spectral processing and analysis of
MR data.

Electron paramagnetic resonance


From Wikipedia, the free encyclopedia

Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) spectroscopy is a method for
studying materials with unpaired electrons. The basic concepts of EPR are analogous to those of nuclear
magnetic resonance (NMR), but it is electron spins that are excited instead of the spins of atomic nuclei. EPR
spectroscopy is particularly useful for studying metal complexes or organic radicals. EPR was first observed in
Kazan State University by Soviet physicist Yevgeny Zavoisky in 1944,[1][2] and was developed independently at
the same time by Brebis Bleaney at the University of Oxford.

Contents
1 Theory
1.1 Origin of an EPR signal
1.2 Field modulation
1.3 Maxwell–Boltzmann distribution
2 Hardware components
2.1 The microwave bridge
2.1.1 The need for the reference arm
2.2 The magnet
2.3 The microwave resonator (cavity)
3 Spectral parameters
3.1 The g factor
3.2 Hyperfine coupling
3.3 Resonance linewidth definition
4 Pulsed electron paramagnetic resonance
5 Applications
6 Miniature electron spin resonance spectroscopy
with Micro-ESR
7 High-field high-frequency measurements
8 See also
9 References
10 External links

Theory
Origin of an EPR signal

Every electron has a magnetic moment and spin quantum number s = 1/2, with magnetic components ms = +1/2 and ms = −1/2. In the presence of an external magnetic field with strength B0, the electron's magnetic moment aligns itself either parallel (ms = −1/2) or antiparallel (ms = +1/2) to the field, each alignment having a specific energy due to the Zeeman effect:

E = ms ge μB B0

where

ge is the electron's so-called g-factor (see also the Landé g-factor), ge ≈ 2.0023 for the free electron,[3]
μB is the Bohr magneton.

Therefore, the separation between the lower and the upper state is ΔE = ge μB B0 for unpaired free electrons. This equation implies that the splitting of the energy levels is directly proportional to the magnetic field's strength, as shown in the diagram below.

An unpaired electron can move between the two energy levels by either absorbing or emitting a photon of energy hν such that the resonance condition, hν = ΔE, is obeyed. This leads to the fundamental equation of EPR spectroscopy: hν = ge μB B0.

Experimentally, this equation permits a large combination of frequency and magnetic field values, but the great
majority of EPR measurements are made with microwaves in the 9000–10000 MHz (9–10 GHz) region, with
fields corresponding to about 3500 G (0.35 T). Furthermore, EPR spectra can be generated by either varying the
photon frequency incident on a sample while holding the magnetic field constant or doing the reverse. In
practice, it is usually the frequency that is kept fixed. A collection of paramagnetic centers, such as free radicals,
is exposed to microwaves at a fixed frequency. By increasing an external magnetic field, the gap between the ms = +1/2 and ms = −1/2 energy states is widened until it matches the energy of the microwaves, as represented by the double arrow in the diagram above. At this point the unpaired electrons can move between their two spin
states. Since there typically are more electrons in the lower state, due to the Maxwell–Boltzmann distribution
(see below), there is a net absorption of energy, and it is this absorption that is monitored and converted into a
spectrum. The upper spectrum below is the simulated absorption for a system of free electrons in a varying
magnetic field. The lower spectrum is the first derivative of the absorption spectrum. The latter is the most
common way to record and publish continuous wave EPR spectra.

For the microwave frequency of 9388.2 MHz, the predicted resonance occurs at a magnetic field of about B0 = hν / (ge μB) = 0.3350 tesla = 3350 gauss.
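
This number is easy to verify; a minimal sketch using CODATA values for the constants (assumed here, not quoted in the article) evaluates B0 = hν / (ge μB):

    H = 6.62607015e-34       # Planck constant, J s
    MU_B = 9.2740100783e-24  # Bohr magneton, J/T
    G_E = 2.00232            # free-electron g-factor

    def resonance_field(freq_hz, g=G_E):
        """Magnetic field (tesla) at which a spin with g-factor g resonates at freq_hz."""
        return H * freq_hz / (g * MU_B)

    print(resonance_field(9388.2e6))   # ~0.335 T, i.e. about 3350 gauss, as stated above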

Because of electron-nuclear mass differences, the magnetic moment of an electron is substantially larger than the
corresponding quantity for any nucleus, so that a much higher electromagnetic frequency is needed to bring
about a spin resonance with an electron than with a nucleus, at identical magnetic field strengths. For example,

for the field of 3350 G shown at the right, spin resonance occurs near 9388.2 MHz for an electron compared to
only about 14.3 MHz for 1H nuclei. (For NMR spectroscopy, the corresponding resonance equation is hν = gN μN B0, where gN and μN depend on the nucleus under study.)

Field modulation

As previously mentioned, an EPR spectrum is usually directly measured as the first derivative of the absorption. This is accomplished by using field modulation. A small additional oscillating magnetic field is applied to the external magnetic field at a typical frequency of 100 kHz.[4] By detecting the peak-to-peak amplitude, the first derivative of the absorption is measured. By using phase-sensitive detection, only signals with the same modulation (100 kHz) are detected. This results in higher signal-to-noise ratios. Note that field modulation is unique to continuous wave EPR measurements, and spectra resulting from pulsed experiments are presented as absorption profiles.

The field oscillates between B1 and B2 due to the superimposed modulation field at 100 kHz. This causes the absorption intensity to oscillate between I1 and I2. The larger the difference, the larger the intensity detected by the detector tuned to 100 kHz (note this can be negative or even 0). As the difference between the two intensities is detected, the first derivative of the absorption is obtained.

Maxwell–Boltzmann distribution

In practice, EPR samples consist of collections of many paramagnetic species, and not single isolated paramagnetic centers. If the population of radicals is in thermodynamic equilibrium, its statistical distribution is described by the Maxwell–Boltzmann equation:

n_upper / n_lower = exp(−ΔE / kT) = exp(−hν / kT)    (Eq. 1)

where n_upper is the number of paramagnetic centers occupying the upper energy state, k is the Boltzmann constant, and T is the thermodynamic temperature. At 298 K, X-band microwave frequencies (ν ≈ 9.75 GHz) give n_upper / n_lower ≈ 0.998, meaning that the upper energy level has a slightly smaller population than the lower one. Therefore, transitions from the lower to the higher level are more probable than the reverse, which is why there is a net absorption of energy.
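
Eq. 1 can be evaluated directly; the sketch below reproduces the ≈ 0.998 ratio quoted above and shows how cooling the sample increases the population difference (the liquid-helium temperature is just an illustrative second case):

    import math

    H = 6.62607015e-34    # Planck constant, J s
    K_B = 1.380649e-23    # Boltzmann constant, J/K

    def population_ratio(freq_hz, temperature_k):
        """n_upper / n_lower = exp(-h*nu / (k*T)) for a two-level spin system."""
        return math.exp(-H * freq_hz / (K_B * temperature_k))

    print(population_ratio(9.75e9, 298.0))   # ~0.998 at X band and room temperature
    print(population_ratio(9.75e9, 4.2))     # ~0.89: a much larger population difference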

The sensitivity of the EPR method (i.e., the minimal number of detectable spins Nmin) depends on the photon frequency ν according to

Nmin = k1 V / (Q0 kf ν² √P)    (Eq. 2)

where k1 is a constant, V is the sample's volume, Q0 is the unloaded quality factor of the microwave cavity (sample chamber), kf is the cavity filling coefficient, and P is the microwave power in the spectrometer cavity. With kf and P being constants, Nmin ~ (Q0 ν²)⁻¹, i.e., Nmin ~ ν^−α, where α ≈ 1.5. In practice, α can vary from 0.5 to 4.5 depending on spectrometer characteristics, resonance conditions, and sample size.

A great sensitivity is therefore obtained with a low detection limit Nmin and a large number of spins. Therefore, the required parameters are:

A high spectrometer frequency, to maximize Eq. 2. Common frequencies are discussed below.
A low temperature, to decrease the number of spins at the high energy level, as shown in Eq. 1. This condition explains why spectra are often recorded on samples at the boiling point of liquid nitrogen or
liquid helium.

Hardware components
The microwave bridge

The microwave bridge contains both the microwave source and the detector.[5] Older spectrometers used a vacuum tube called a klystron to generate microwaves, but modern spectrometers use a Gunn diode. Immediately after the microwave source there is an isolator which serves to attenuate any reflections back to the source which would result in fluctuations in the microwave frequency.[6] The microwave power from the source is then passed through a directional coupler which splits the microwave power into two paths, one directed towards the cavity and the other towards the reference arm. Along both paths a variable attenuator is present, a device that allows for the precise control of the flow of microwave power. This in turn allows for
accurate control over the intensity of the microwaves the sample experiences. On the reference arm after the
variable attenuator there is a phase shifter that sets a defined phase relationship between the reference and
reflected signal which allows for phase sensitive detection.

Most EPR machines are reflection spectrometers meaning the detector should only see microwave radiation
coming back from the cavity. This is achieved by use of a device known as the circulator which directs the
microwave radiation (from the branch that is heading towards the cavity) into the cavity. Reflected microwave
radiation (after absorption by the sample) is then passed through the circulator towards the detector ensuring it
does not go back to the microwave source. The reference signal and reflected signal are combined and passed to
the detector diode which converts the microwave power into an electrical current.

The need for the reference arm

At low energies (less than 1 μW) the diode current is proportional to the microwave power and the detector is
referred to as a square law detector. At higher power levels (greater than 1 mW) the diode current is proportional
to the square root of the microwave power and the detector is called a linear detector. In order to obtain optimal
sensitivity as well as quantitative information the diode should be operating within the linear region. To ensure
the detector is operating at that level the reference arm serves to provide a "bias".

The magnet

In an EPR machine the magnet assembly includes the magnet with a dedicated power supply as well as a field sensor or regulator such as a Hall probe. EPR machines use one of two types of magnet, determined by the operating microwave frequency (which determines the range of magnetic field strengths required). The first is an electromagnet, generally capable of generating field strengths of up to 1.5 T, making it suitable for measurements using the Q-band frequency. In order to generate field strengths
appropriate for W-band and higher frequency operation superconducting magnets are employed. The magnetic
field is homogeneous across the sample volume and has a high stability at static field.

The microwave resonator (cavity)

The microwave resonator is designed to enhance the microwave magnetic field at the sample in order to induce
EPR transitions. It is a metal box with a rectangular or cylindrical shape that resonates with microwaves (like an
organ pipe with sound waves). At the resonance frequency of the cavity, microwaves remain inside the cavity and are not reflected back. Resonance means that the cavity stores microwave energy; its ability to do this is given by the quality factor (Q), defined by the following equation:

Q = 2π × (energy stored) / (energy dissipated per cycle)

The higher the value of Q the higher the sensitivity of the spectrometer. The energy dissipated is the energy lost
in one microwave period. Energy may be lost to the side walls of the cavity as microwaves may generate
currents which in turn generate heat. A consequence of resonance is the creation of a standing wave inside the
cavity. Electromagnetic standing waves have their electric and magnetic field components exactly out of phase.
This provides an advantage as the electric field provides nonresonant absorption of the microwaves, which in
turn increases the dissipated energy and reduces Q. To achieve the largest signals and hence sensitivity the
sample is positioned such that it lies within the magnetic field maximum and the electric field minimum. When
the magnetic field strength is such that an absorption event occurs, the value of Q will be reduced due to the
extra energy loss. This results in a change of impedance which serves to stop the cavity from being critically
coupled. This means microwaves will now be reflected back to the detector (in the microwave bridge) where an
EPR signal is detected.[7]

Spectral parameters
In real systems, electrons are normally not solitary, but are associated with one or more atoms. There are several
important consequences of this:

1. An unpaired electron can gain or lose angular momentum, which can change the value of its g-factor,
causing it to differ from ge. This is especially significant for chemical systems with transition-metal ions.
2. The magnetic moment of a nucleus with a non-zero nuclear spin will affect any unpaired electrons
associated with that atom. This leads to the phenomenon of hyperfine coupling, analogous to J-coupling in
NMR, splitting the EPR resonance signal into doublets, triplets and so forth.
3. Interactions of an unpaired electron with its environment influence the shape of an EPR spectral line. Line
shapes can yield information about, for example, rates of chemical reactions.[8]
4. The g-factor and hyperfine coupling in an atom or molecule may not be the same for all orientations of an
unpaired electron in an external magnetic field. This anisotropy depends upon the electronic structure of
the atom or molecule (e.g., free radical) in question, and so can provide information about the atomic or
molecular orbital containing the unpaired electron.

The g factor

Knowledge of the g-factor can give information about a paramagnetic center's electronic structure. An unpaired
electron responds not only to a spectrometer's applied magnetic field but also to any local magnetic fields of
atoms or molecules. The effective field Beff experienced by an electron is thus written

Beff = B0(1 − σ)

where σ includes the effects of local fields (σ can be positive or negative). Therefore, the resonance condition (above) is rewritten as follows:

hν = ge μB Beff = ge μB B0(1 − σ)

The quantity ge(1 − σ) is denoted g and called simply the g-factor, so that the final resonance equation becomes

hν = g μB B0

This last equation is used to determine g in an EPR experiment by measuring the field and the frequency at which resonance occurs. If g does not equal ge, the implication is that the ratio of the unpaired electron's spin
magnetic moment to its angular momentum differs from the free-electron value. Since an electron's spin
magnetic moment is constant (approximately the Bohr magneton), then the electron must have gained or lost
angular momentum through spin–orbit coupling. Because the mechanisms of spin–orbit coupling are well
understood, the magnitude of the change gives information about the nature of the atomic or molecular orbital
containing the unpaired electron.
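
Rearranging the final resonance equation gives g = hν / (μB B0), so a measured field/frequency pair yields g directly; applying this to the free-electron example earlier in the article (constants assumed from CODATA) recovers ge:

    H = 6.62607015e-34       # Planck constant, J s
    MU_B = 9.2740100783e-24  # Bohr magneton, J/T

    def g_factor(freq_hz, field_t):
        """Effective g-factor from the resonance condition h*nu = g * mu_B * B0."""
        return H * freq_hz / (MU_B * field_t)

    print(round(g_factor(9388.2e6, 0.3350), 4))   # ~2.0023, the free-electron value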

In general, the g factor is not a number but a second-rank tensor represented by 9 numbers arranged in a 3×3
matrix. The principal axes of this tensor are determined by the local fields, for example, by the local atomic
arrangement around the unpaired spin in a solid or in a molecule. Choosing an appropriate coordinate system
(say, x,y,z) allows one to "diagonalize" this tensor, thereby reducing the maximal number of its components from
9 to 3: gxx, gyy and gzz. For a single spin experiencing only Zeeman interaction with an external magnetic field,
the position of the EPR resonance is given by the expression gxxBx + gyyBy + gzzBz. Here Bx, By and Bz are the
components of the magnetic field vector in the coordinate system (x,y,z); their magnitudes change as the field is rotated, and so does the frequency of the resonance. For a large ensemble of randomly oriented spins, the EPR spectrum consists of three peaks of characteristic shape at frequencies gxxB0, gyyB0 and gzzB0: the low-frequency peak is positive in first-derivative spectra, the high-frequency peak is negative, and the central peak is bipolar. Such a situation is commonly observed in powders, and the spectra are therefore called "powder-pattern spectra".
In crystals, the number of EPR lines is determined by the number of crystallographically equivalent orientations
of the EPR spin (called "EPR center").

Hyperfine coupling

Since the source of an EPR spectrum is a change in an electron's spin state, it might be thought that all EPR
spectra for a single electron spin would consist of one line. However, the interaction of an unpaired electron, by
way of its magnetic moment, with nearby nuclear spins, results in additional allowed energy states and, in turn,
multi-lined spectra. In such cases, the spacing between the EPR spectral lines indicates the degree of interaction
between the unpaired electron and the perturbing nuclei. The hyperfine coupling constant of a nucleus is directly
related to the spectral line spacing and, in the simplest cases, is essentially the spacing itself.

Two common mechanisms by which electrons and nuclei interact are the Fermi contact interaction and by
dipolar interaction. The former applies largely to the case of isotropic interactions (independent of sample
orientation in a magnetic field) and the latter to the case of anisotropic interactions (spectra dependent on sample
orientation in a magnetic field). Spin polarization is a third mechanism for interactions between an unpaired
electron and a nuclear spin, being especially important for π-electron organic radicals, such as the benzene
radical anion. The symbols "a" or "A" are used for isotropic hyperfine coupling constants, while "B" is usually
employed for anisotropic hyperfine coupling constants.[9]

In many cases, the isotropic hyperfine splitting pattern for a radical freely tumbling in a solution (isotropic
system) can be predicted.

For a radical having M equivalent nuclei, each with a spin of I, the number of EPR lines expected is
2MI + 1. As an example, the methyl radical, CH3, has three 1H nuclei, each with I = 1/2, and so the
number of lines expected is 2MI + 1 = 2(3)(1/2) + 1 = 4, which is as observed.
For a radical having M1 equivalent nuclei, each with a spin of I1, and a group of M2 equivalent nuclei,
each with a spin of I2, the number of lines expected is (2M1I1 + 1) (2M2I2 + 1). As an example, the
methoxymethyl radical, H2C(OCH3), has two equivalent 1H nuclei, each with I = 1/2 and three equivalent
1H nuclei each with I = 1/2, and so the number of lines expected is (2M1I1 + 1) (2M2I2 + 1) = [2(2)(1/2) +
1][2(3)(1/2) + 1] = 3×4 = 12, again as observed.

The above can be extended to predict the number of lines for any number of nuclei.
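
The counting rule is a product of (2MI + 1) factors, one per group of equivalent nuclei; the sketch below (function name and inputs are illustrative) reproduces the 4 and 12 lines worked out above:

    def epr_line_count(groups):
        """groups: list of (M, I) pairs, one per set of equivalent nuclei."""
        total = 1
        for m, spin in groups:
            total *= int(2 * m * spin + 1)    # each group multiplies the line count
        return total

    print(epr_line_count([(3, 0.5)]))             # methyl radical CH3: 4 lines
    print(epr_line_count([(2, 0.5), (3, 0.5)]))   # methoxymethyl H2C(OCH3): 12 lines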

While it is easy to predict the number of lines a radical's EPR spectrum should show, the reverse problem,
unraveling a complex multi-line EPR spectrum and assigning the various spacings to specific nuclei, is more
difficult.

In the often encountered case of I = 1/2 nuclei (e.g., 1H, 19F, 31P), the line intensities produced by a population
of radicals, each possessing M equivalent nuclei, will follow Pascal's triangle. For example, the spectrum at the
right shows that the three 1H nuclei of the CH3 radical give rise to 2MI + 1 = 2(3)(1/2) + 1 = 4 lines with a

1:3:3:1 ratio. The line spacing gives a hyperfine coupling


constant of aH = 23 G for each of the three 1H nuclei. Note
again that the lines in this spectrum are first derivatives of
absorptions.

As a second example, consider the methoxymethyl radical,


H2C(OCH3). The two equivalent methyl hydrogens will give
an overall 1:2:1 EPR pattern, each component of which is
further split by the three methoxy hydrogens into a 1:3:3:1 Simulated EPR spectrum of the CH3 radical
pattern to give a total of 3×4 = 12 lines, a triplet of quartets.
A simulation of the observed EPR spectrum is shown at the
right and agrees with the 12-line prediction and the expected
line intensities. Note that the smaller coupling constant
(smaller line spacing) is due to the three methoxy hydrogens,
while the larger coupling constant (line spacing) is from the
two hydrogens bonded directly to the carbon atom bearing
the unpaired electron. It is often the case that coupling
constants decrease in size with distance from a radical's
unpaired electron, but there are some notable exceptions,
such as the ethyl radical (CH2CH3).
Simulated EPR spectrum of the H2C(OCH3) radical

Resonance linewidth definition

Resonance linewidths are defined in terms of the magnetic induction B and its corresponding units, and are measured along the x axis of an EPR spectrum, from a line's center to a chosen reference point of the line. These defined widths are called halfwidths and possess some advantages: for asymmetric lines, values of left and right halfwidth can be given. The halfwidth is the distance measured from the line's center to the point at which the absorption value is half of the maximal absorption value at the center of the resonance line. The first inclination width is the distance from the center of the line to the point of maximal inclination of the absorption curve. In practice, a full definition of linewidth is used: for symmetric lines, the full halfwidth is twice the halfwidth, and the full inclination width is twice the first inclination width.

Pulsed electron paramagnetic resonance


The dynamics of electron spins are best studied with pulsed measurements.[10] Microwave pulses typically 10–
100 ns long are used to control the spins in the Bloch sphere. The spin–lattice relaxation time T1 can be measured
with an inversion recovery experiment.

As with pulsed NMR, the Hahn echo is central to many pulsed EPR experiments. A Hahn echo decay
experiment can be used to measure the dephasing time, as shown in the animation below. The size of the echo is
recorded for different spacings of the two pulses. This reveals the decoherence, which is not refocused by the
π pulse. In simple cases, an exponential decay is measured, which is described by the T2 time.

Pulsed electron paramagnetic resonance could be advanced into electron nuclear double resonance spectroscopy
(ENDOR), which utilizes waves in the radio frequencies. Since different nuclei with unpaired electrons respond
to different wavelengths, radio frequencies are required at times. Since the results of ENDOR give the coupling resonance between the nuclei and the unpaired electron, the relationship between them can be determined.

Applications
EPR/ESR spectroscopy is used in various branches of science, such as biology, chemistry and physics, for the
detection and identification of free radicals and paramagnetic centers such as F-centers. EPR is a sensitive,
specific method for studying both radicals formed in chemical reactions and the reactions themselves. For
example, when ice (solid H2O) is decomposed by exposure to high-energy radiation, radicals such as H, OH, and
HO2 are produced. Such radicals can be identified and studied by EPR. Organic and inorganic radicals can be
detected in electrochemical systems and in materials exposed to UV light. In many cases, the reactions to make
the radicals and the subsequent reactions of the radicals are of interest, while in other cases EPR is used to
provide information on a radical's geometry and the orbital of the unpaired electron. EPR/ESR spectroscopy is
also used in geology and archaeology as a dating tool. It can be applied to a wide range of materials such as
carbonates, sulfates, phosphates, silica or other silicates.[11]

Medical and biological applications of EPR also exist. Although radicals are very reactive, and so do not
normally occur in high concentrations in biology, special reagents have been developed to spin-label molecules
of interest. These reagents are particularly useful in biological systems. Specially-designed nonreactive radical
molecules can attach to specific sites in a biological cell, and EPR spectra can then give information on the
environment of these so-called spin labels or spin probes. Spin-labeled fatty acids have been extensively used to
study dynamic organisation of lipids in biological membranes,[12] lipid-protein interactions[13] and temperature
of transition of gel to liquid crystalline phases.[14]

A type of dosimetry system has been designed for reference standards and routine use in medicine, based on
EPR signals of radicals from irradiated polycrystalline α-alanine (the alanine deamination radical, the hydrogen
abstraction radical, and the (CO−(OH))=C(CH3)NH2+ radical) . This method is suitable for measuring gamma
and x-rays, electrons, protons, and high-linear energy transfer (LET) radiation of doses in the 1 Gy to 100 kGy
range.[15]

EPR/ESR spectroscopy can be applied only to systems in which the balance between radical decay and radical
formation keeps the free radicals concentration above the detection limit of the spectrometer used. This can be a
particularly severe problem in studying reactions in liquids. An alternative approach is to slow down reactions
by studying samples held at cryogenic temperatures, such as 77 K (liquid nitrogen) or 4.2 K (liquid helium). An
example of this work is the study of radical reactions in single crystals of amino acids exposed to x-rays, work
that sometimes leads to activation energies and rate constants for radical reactions.

The study of radiation-induced free radicals in biological substances (for cancer research) poses the additional
problem that tissue contains water, and water (due to its electric dipole moment) has a strong absorption band in
the microwave region used in EPR spectrometers.

EPR/ESR also has been used by archaeologists for the dating of teeth. Radiation damage over long periods of
time creates free radicals in tooth enamel, which can then be examined by EPR and, after proper calibration,
dated. Alternatively, material extracted from the teeth of people during dental procedures can be used to quantify
their cumulative exposure to ionizing radiation. People exposed to radiation from the Chernobyl disaster have
been examined by this method.[16][17]

Radiation-sterilized foods have been examined with EPR spectroscopy, the aim being to develop methods to
determine whether a particular food sample has been irradiated and to what dose.

Because of its high sensitivity, EPR was used recently to measure the quantity of energy used locally during a
mechanochemical milling process.[18]

EPR/ESR spectroscopy has been used to measure properties of crude oil, in particular asphaltene and vanadium
content. EPR measurement of asphaltene content is a function of spin density and solvent polarity. Prior work
dating to the 1960s has demonstrated the ability to measure vanadium content to sub-ppm levels.

In the field of quantum computing, pulsed EPR is used to control the state of electron spin qubits in materials
such as diamond, silicon and gallium arsenide.

Miniature electron spin resonance spectroscopy with Micro-ESR


Miniaturisation of military radar technologies allowed the development of miniature microwave electronics as a
spin-off by the California Institute of Technology. Since 2007 these sensors have been employed in miniaturized
electron spin resonance spectrometers called Micro-ESR. Applications include real-time monitoring of free-radical-containing asphaltenes in (crude) oils, biomedical R&D to measure oxidative stress, and evaluation of the shelf life of food products.

High-field high-frequency measurements


High-field high-frequency EPR measurements are sometimes needed to detect subtle spectroscopic details.
However, for many years the use of electromagnets to produce the needed fields above 1.5 T was impossible,
due principally to limitations of traditional magnet materials. The first multifunctional millimeter EPR
spectrometer with a superconducting solenoid was described in the early 1970s by Prof. Y. S. Lebedev's group
(Russian Institute of Chemical Physics, Moscow) in collaboration with L. G. Oranski's group (Ukrainian Physics
and Technics Institute, Donetsk), which began working in the Institute of Problems of Chemical Physics,

Chernogolovka around 1975.[19] Two decades later, a W-band EPR spectrometer was produced as a small
commercial line by the German Bruker Company, initiating the expansion of W-band EPR techniques into
medium-sized academic laboratories.

Waveband:  L     S     C     X     P     K     Q     U     V     E     W     F     D     —     J     —
λ (mm):    300   100   75    30    20    12.5  8.5   6     4.6   4     3.2   2.7   2.1   1.6   1.1   0.83
ν (GHz):   1     3     4     10    15    24    35    50    65    75    95    111   140   190   285   360
B0 (T):    0.03  0.11  0.14  0.33  0.54  0.86  1.25  1.8   2.3   2.7   3.5   3.9   4.9   6.8   10.2  12.8

The EPR waveband is stipulated by the frequency or wavelength of a spectrometer's microwave source (see
Table).

EPR experiments often are conducted at X and, less commonly, Q bands, mainly due to the ready availability of
the necessary microwave components (which originally were developed for radar applications). A second reason
for widespread X and Q band measurements is that electromagnets can reliably generate fields up to about 1
tesla. However, the low spectral resolution over g-factor at these wavebands limits the study of paramagnetic
centers with comparatively low anisotropic magnetic parameters. Measurements at > 40 GHz, in the millimeter
wavelength region, offer the following advantages:

1. EPR spectra are simplified due to the reduction of second-order effects at high fields.
2. Increase in orientation selectivity and sensitivity in the investigation of disordered systems.
3. The informativity and precision of pulse methods, e.g., ENDOR, also increase at high magnetic fields.
4. Accessibility of spin systems with larger zero-field splitting due to the larger microwave quantum energy hν.
5. The higher spectral resolution over g-factor, which increases with irradiation frequency and external magnetic field B0. This is used to investigate the structure, polarity, and dynamics of radical microenvironments in spin-modified organic and biological systems through the spin label and probe method. The figure shows how spectral resolution improves with increasing frequency.
6. Saturation of paramagnetic centers occurs at a comparatively low microwave polarizing field B1, due to the exponential dependence of the number of excited spins on the radiation frequency ν. This effect can be successfully used to study the relaxation and dynamics of paramagnetic centers as well as of superslow motion in the systems under study.
7. The cross-relaxation of paramagnetic centers decreases dramatically at high magnetic fields, making it easier to obtain more-precise and more-complete information about the system under study.[19]

EPR spectra of TEMPO, a nitroxide radical, as a function of frequency. Note the improvement in resolution from left to right.[19]

This was demonstrated experimentally in the study of various biological, polymeric and model systems at D-
band EPR.[20]

See also
Electric dipole spin resonance
Ferromagnetic resonance
Dynamic nuclear polarisation
Spin labels
Site-directed spin labeling
Spin trapping
EDMR
References
1. Zavoisky, E. (1945). "Spin-magnetic resonance in paramagnetics". Fizicheskiĭ Zhurnal. 9: 211–245.
2. Zavoisky, E. (1944). "Paramagnetic Absorption in Perpendicular and Parallel Fields for Salts, Solutions and Metals".
PhD thesis. Kazan State University.
3. Odom, B.; Hanneke, D.; D'Urso, B.; and Gabrielse, G. (2006). "New Measurement of the Electron Magnetic Moment
Using a One-Electron Quantum Cyclotron". Physical Review Letters. 97 (3): 030801. Bibcode:2006PhRvL..97c0801O
(http://adsabs.harvard.edu/abs/2006PhRvL..97c0801O). PMID 16907490 (https://www.ncbi.nlm.nih.gov/pubmed/169
07490). doi:10.1103/PhysRevLett.97.030801 (https://doi.org/10.1103%2FPhysRevLett.97.030801).
4. Chechik, Victor; Carter, Emma; Murphy, Damien (2016-07-14). Electron Paramagnetic Resonance (https://www.ama
zon.co.uk/d/Books/Electron-Paramagnetic-Resonance-Chemistry-Primers/0198727607/ref=sr_1_1?ie=UTF8&qid=14
92443665&sr=8-1&keywords=electron+paramagnetic+resonance). New York, NY: OUP Oxford.
ISBN 9780198727606.
5. Eaton, Gareth R.; Eaton, Sandra S.; Barr, David P.; Weber, Ralph T. (2010-04-10). Quantitative EPR (https://books.go
ogle.co.uk/books?id=sayWdlbWGfwC&pg=PA6&lpg=PA6&dq=microwave+bridge+epr&source=bl&ots=h4BxyYKn
pK&sig=4DYdXmzf2X49oXAx3QbniG18Wyo&hl=en&sa=X&ved=0ahUKEwjgtIr7wtfUAhUIAcAKHV7xB6kQ6A
EITDAG#v=onepage&q=microwave%20bridge%20epr&f=false). Springer Science & Business Media.
ISBN 9783211929483.
6. Chechik, Victor; Carter, Emma; Murphy, Damien (2016-07-14). Electron Paramagnetic Resonance (https://www.ama
zon.co.uk/d/Books/Electron-Paramagnetic-Resonance-Chemistry-Primers/0198727607/ref=sr_1_1?ie=UTF8&qid=14
99418567&sr=8-1&keywords=electron+paramagnetic+resonance) (UK ed. edition ed.). OUP Oxford.
ISBN 9780198727606.
7. Eaton, Dr Gareth R.; Eaton, Dr Sandra S.; Barr, Dr David P.; Weber, Dr Ralph T. (2010). "Basics of Continuous Wave
EPR". Quantitative EPR. Springer Vienna: 1–14. doi:10.1007/978-3-211-92948-3_1 (https://doi.org/10.1007%2F978-
3-211-92948-3_1).
8. Ira N. Levine (1975). Molecular Spectroscopy. Wiley & Sons, Inc. p. 380. ISBN 0-471-53128-6.
9. Strictly speaking, "a" refers to the hyperfine splitting constant, a line spacing measured in magnetic field units, while
A and B refer to hyperfine coupling constants measured in frequency units. Splitting and coupling constants are
proportional, but not identical. The book by Wertz and Bolton has more information (pp. 46 and 442). Wertz, J. E., &
Bolton, J. R. (1972). Electron spin resonance: Elementary theory and practical applications. New York: McGraw-Hill.
10. Arthur Schweiger; Gunnar Jeschke (2001). Principles of Pulse Electron Paramagnetic Resonance. Oxford University
Press. ISBN 978-0-19-850634-8.
11. Ikeya, Motoji. New Applications of Electron Spin Resonance (http://www.worldscientific.com/worldscibooks/10.1142/
1854). doi:10.1142/1854 (https://doi.org/10.1142%2F1854).
12. Yashroy, R. C. (1990). "Magnetic resonance studies of dynamic organisation of lipids in chloroplast membranes".
Journal of Biosciences. 15 (4): 281. doi:10.1007/BF02702669 (https://doi.org/10.1007%2FBF02702669).
13. Yashroy, R. C. (1991). "Protein heat denaturation and study of membrane lipid-protein interactions by spin label
ESR". Journal of Biochemical and Biophysical Methods. 22 (1): 55–9. PMID 1848569 (https://www.ncbi.nlm.nih.gov/
pubmed/1848569). doi:10.1016/0165-022X(91)90081-7 (https://doi.org/10.1016%2F0165-022X%2891%2990081-7).
14. Yashroy, R. C. (1990). "Determination of membrane lipid phase transition temperature from 13C-NMR intensities".
Journal of Biochemical and Biophysical Methods. 20 (4): 353–6. PMID 2365951 (https://www.ncbi.nlm.nih.gov/pub
med/2365951). doi:10.1016/0165-022X(90)90097-V (https://doi.org/10.1016%2F0165-022X%2890%2990097-V).
15. "Dosimetry Systems". Journal of the ICRU. 8 (5): 29. 2008. doi:10.1093/jicru/ndn027 (https://doi.org/10.1093%2Fjicr
u%2Fndn027).
16. Gualtieri, G.; Colacicchia, S; Sgattonic, R.; Giannonic, M. (2001). "The Chernobyl Accident: EPR Dosimetry on
Dental Enamel of Children". Applied Radiation and Isotopes. 55 (1): 71–79. PMID 11339534 (https://www.ncbi.nlm.n
ih.gov/pubmed/11339534). doi:10.1016/S0969-8043(00)00351-1 (https://doi.org/10.1016%2FS0969-8043%2800%29
00351-1).
17. Chumak, V.; Sholom, S.; Pasalskaya, L. (1999). "Application of High Precision EPR Dosimetry with Teeth for
Reconstruction of Doses to Chernobyl Populations" (http://rpd.oxfordjournals.org/cgi/content/abstract/84/1-4/515).
Radiation Protection Dosimetry. 84: 515–520. doi:10.1093/oxfordjournals.rpd.a032790 (https://doi.org/10.1093%2Fo
xfordjournals.rpd.a032790).
18. Baron, M.; Chamayou, A.; Marchioro, L.; Raffi, J. (2005). "Radicalar probes to measure the action of energy on
granular materials". Adv. Powder Technol. 16 (3): 199–212. doi:10.1163/1568552053750242 (https://doi.org/10.116
3%2F1568552053750242).
19. EPR of low-dimensional systems (http://hf-epr.awardspace.us/index.htm)
20. Krinichnyi, V.I. (1995) 2-mm Wave Band EPR Spectroscopy of Condensed Systems, CRC Press, Boca Raton, Fl.

External links


Electron Magnetic Resonance Program (http://www.magnet.fsu.edu/usershub/scientificdivisions/emr/overview.html), National High Magnetic Field Laboratory
Electron Paramagnetic Resonance (Specialist Periodical Reports) (http://www.rsc.org/shop/books/series.asp?seriesid=49), published by the Royal Society of Chemistry
Using ESR to measure free radicals in used engine oil (http://www.campoly.com/files/2014/2073/0484/031_ESR_of_engine_oil_ADMIN-0256_v1.1.pdf)


Absorption spectroscopy
From Wikipedia, the free encyclopedia

Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. Absorption spectroscopy is performed across the electromagnetic spectrum.

An overview of electromagnetic radiation absorption. This example discusses the general principle using visible light as a specific example. A white beam source – emitting light of multiple wavelengths – is focused on a sample (the complementary color pairs are indicated by the yellow dotted lines). Upon striking the sample, photons that match the energy gap of the molecules present (green light in this example) are absorbed in order to excite the molecule. Other photons transmit unaffected and, if the radiation is in the visible region (400-700 nm), the sample color is the complementary color of the absorbed light. By comparing the attenuation of the transmitted light with the incident light, an absorption spectrum can be obtained.

Absorption spectroscopy is employed as an analytical chemistry tool to determine the presence of a particular substance in a sample and, in many cases, to quantify the amount of the substance present. Infrared and ultraviolet-visible spectroscopy are particularly common in analytical applications. Absorption spectroscopy is also employed in studies of molecular and atomic physics, astronomical spectroscopy and remote sensing.

There is a wide range of experimental approaches for measuring absorption spectra. The most common arrangement is to direct a generated beam of radiation at a sample and detect the intensity of the radiation that passes through it. The transmitted energy can be used to calculate the absorption. The source, sample arrangement and detection technique vary significantly depending on the frequency range and the purpose of the experiment.

First direct detection and chemical analysis of the atmosphere of a planet outside our solar system, in 2001: sodium filters the starlight of HD 209458 as the Jupiter-sized planet passes in front.

Contents
1 Absorption spectrum
1.1 Theory
1.2 Relation to transmission spectrum
1.3 Relation to emission spectrum
1.4 Relation to scattering and reflection spectra
2 Applications
2.1 Remote sensing
2.2 Astronomy
2.3 Atomic and molecular physics
3 Experimental methods
3.1 Basic approach
3.2 Specific approaches

4 See also
5 References
6 External links

Absorption spectrum
A material's absorption spectrum is the fraction of
incident radiation absorbed by the material over a range
of frequencies. The absorption spectrum is primarily
determined[1][2][3] by the atomic and molecular
composition of the material. Radiation is more likely to
be absorbed at frequencies that match the energy
difference between two quantum mechanical states of the molecules. The absorption that occurs due to a
transition between two states is referred to as an absorption line and a spectrum is typically composed of many lines.

Solar spectrum with Fraunhofer lines as it appears visually.

The frequencies where absorption lines occur, as well as their relative intensities, primarily depend on the
electronic and molecular structure of the sample. The frequencies will also depend on the interactions between
molecules in the sample, the crystal structure in solids, and on several environmental factors (e.g., temperature,
pressure, electromagnetic field). The lines will also have a width and shape that are primarily determined by the
spectral density or the density of states of the system.

Theory

Absorption lines are typically classified by the nature of the quantum mechanical change induced in the
molecule or atom. Rotational lines, for instance, occur when the rotational state of a molecule is changed.
Rotational lines are typically found in the microwave spectral region. Vibrational lines correspond to changes
in the vibrational state of the molecule and are typically found in the infrared region. Electronic lines
correspond to a change in the electronic state of an atom or molecule and are typically found in the visible and
ultraviolet region. X-ray absorptions are associated with the excitation of inner shell electrons in atoms. These
changes can also be combined (e.g. rotation-vibration transitions), leading to new absorption lines at the
combined energy of the two changes.

The energy associated with the quantum mechanical change primarily determines the frequency of the
absorption line but the frequency can be shifted by several types of interactions. Electric and magnetic fields
can cause a shift. Interactions with neighboring molecules can cause shifts. For instance, absorption lines of the
gas phase molecule can shift significantly when that molecule is in a liquid or solid phase and interacting more
strongly with neighboring molecules.

The width and shape of absorption lines are determined by the instrument used for the observation, the material
absorbing the radiation and the physical environment of that material. It is common for lines to have the shape
of a Gaussian or Lorentzian distribution. It is also common for a line to be described solely by its intensity and
width instead of the entire shape being characterized.
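The two profiles mentioned above can be written down compactly. The following minimal sketch (Python with NumPy, added here for illustration; the function names and the area-normalization convention are our own choices, not part of the original text) parameterizes each line by its centre, its full width at half maximum and its integrated area:

    import numpy as np

    def gaussian_line(nu, nu0, fwhm, area=1.0):
        """Gaussian line shape centred at nu0, normalized to the given integrated area."""
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to standard deviation
        return area * np.exp(-((nu - nu0) ** 2) / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

    def lorentzian_line(nu, nu0, fwhm, area=1.0):
        """Lorentzian line shape centred at nu0, normalized to the given integrated area."""
        gamma = fwhm / 2.0  # half width at half maximum
        return area * gamma / (np.pi * ((nu - nu0) ** 2 + gamma ** 2))

Because both profiles integrate to the same area regardless of their width, reporting a line by its intensity and width alone discards only the details of its shape.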

The integrated intensity—obtained by integrating the area under the absorption line—is proportional to the
amount of the absorbing substance present. The intensity is also related to the temperature of the substance and
the quantum mechanical interaction between the radiation and the absorber. This interaction is quantified by the
transition moment and depends on the particular lower state the transition starts from and the upper state it is
connected to.


The width of absorption lines may be determined by the spectrometer used to record it. A spectrometer has an
inherent limit on how narrow a line it can resolve and so the observed width may be at this limit. If the width is
larger than the resolution limit, then it is primarily determined by the environment of the absorber. A liquid or
solid absorber, in which neighboring molecules strongly interact with one another, tends to have broader
absorption lines than a gas. Increasing the temperature or pressure of the absorbing material will also tend to
increase the line width. It is also common for several neighboring transitions to be close enough to one another
that their lines overlap and the resulting overall line is therefore broader yet.

Relation to transmission spectrum

Absorption and transmission spectra represent equivalent information and one can be calculated from the other
through a mathematical transformation. A transmission spectrum will have its maximum intensities at
wavelengths where the absorption is weakest because more light is transmitted through the sample. An
absorption spectrum will have its maximum intensities at wavelengths where the absorption is strongest.

Relation to emission spectrum

Emission is a process by which a substance releases energy in the form of electromagnetic radiation. Emission can occur at any frequency at which absorption can occur, and this allows the absorption lines to be determined from an emission spectrum. The emission spectrum will typically have a quite different intensity pattern from the absorption spectrum, though, so the two are not equivalent. The absorption spectrum can be calculated from the emission spectrum using appropriate theoretical models and additional information about the quantum mechanical states of the substance.

Emission spectrum of iron

Relation to scattering and reflection spectra

The scattering and reflection spectra of a material are influenced by both its index of refraction and its
absorption spectrum. In an optical context, the absorption spectrum is typically quantified by the extinction
coefficient, and the extinction and index coefficients are quantitatively related through the Kramers-Kronig
relation. Therefore, the absorption spectrum can be derived from a scattering or reflection spectrum. This
typically requires simplifying assumptions or models, and so the derived absorption spectrum is an
approximation.

Applications
Absorption spectroscopy is useful in chemical analysis[4] because of its specificity and its quantitative nature.
The specificity of absorption spectra allows compounds to be distinguished from one another in a mixture,
making absorption spectroscopy useful in a wide variety of applications. For instance, infrared gas analyzers can
be used to identify the presence of pollutants in the air, distinguishing the pollutant from nitrogen, oxygen,
water and other expected constituents.[5]

The specificity also allows unknown samples to be identified by comparing a measured spectrum with a library
of reference spectra. In many cases, it is possible to determine qualitative information about a sample even if it
is not in a library. Infrared spectra, for instance, have characteristic absorption bands that indicate whether
carbon-hydrogen or carbon-oxygen bonds are present.

An absorption spectrum can be quantitatively related to the amount of material present using the Beer-Lambert
law. Determining the absolute concentration of a compound requires knowledge of the compound's absorption
coefficient. The absorption coefficient for some compounds is available from reference sources, and it can also
be determined by measuring the spectrum of a calibration standard with a known concentration of the target.
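As a minimal numerical illustration of the Beer-Lambert relation just described (a Python sketch added for this text; the function name and the example numbers are hypothetical):

    def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm):
        """Beer-Lambert law A = epsilon * c * l, solved for the concentration c."""
        return absorbance / (molar_absorptivity * path_length_cm)

    # Hypothetical example: an absorbance of 0.42 in a 1 cm cuvette, with a known
    # absorption coefficient of 21000 L mol^-1 cm^-1, gives c = 2.0e-5 mol/L.
    c = concentration_from_absorbance(0.42, 21000.0, 1.0)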

Remote sensing

One of the unique advantages of spectroscopy as an


analytical technique is that measurements can be made
without bringing the instrument and sample into contact.
Radiation that travels between a sample and an instrument
will contain the spectral information, so the measurement
can be made remotely. Remote spectral sensing is valuable
in many situations. For example, measurements can be
made in toxic or hazardous environments without placing
an operator or instrument at risk. Also, sample material
does not have to be brought into contact with the
instrument—preventing possible cross contamination.

Remote spectral measurements present several challenges compared to laboratory measurements. The space in between the sample of interest and the instrument may also have spectral absorptions. These absorptions can mask or confound the absorption spectrum of the sample. These background interferences may also vary over time. The source of radiation in remote measurements is often an environmental source, such as sunlight or the thermal radiation from a warm object, and this makes it necessary to distinguish spectral absorption from changes in the source spectrum.

The infrared absorption spectrum of NASA laboratory sulfur dioxide ice is compared with the infrared absorption spectra of ices on Jupiter's moon Io. Credit: NASA, Bernard Schmitt, and UKIRT.

To address these challenges, differential optical absorption spectroscopy has gained some popularity, as it
focuses on differential absorption features and omits broad-band absorption such as aerosol extinction and
extinction due to Rayleigh scattering. This method is applied to ground-based, airborne and satellite-based
measurements. Some ground-based methods also make it possible to retrieve tropospheric and stratospheric
trace-gas profiles.

Astronomy

Astronomical spectroscopy is a particularly


significant type of remote spectral sensing.
In this case, the objects and samples of
interest are so distant from earth that
electromagnetic radiation is the only means
available to measure them. Astronomical
spectra contain both absorption and
emission spectral information. Absorption
spectroscopy has been particularly
important for understanding interstellar
clouds and determining that some of them
contain molecules. Absorption spectroscopy
is also employed in the study of extrasolar
planets. Detection of extrasolar planets by
the transit method also measures their
absorption spectrum and allows for the
determination of the planet's atmospheric composition,[6] temperature, pressure, and scale height, and hence also allows for the determination of the planet's mass.[7]

Absorption spectrum observed by the Hubble Space Telescope

Atomic and molecular physics



Theoretical models, principally quantum mechanical models, allow for the absorption spectra of atoms and
molecules to be related to other physical properties such as electronic structure, atomic or molecular mass, and
molecular geometry. Therefore, measurements of the absorption spectrum are used to determine these other
properties. Microwave spectroscopy, for example, allows for the determination of bond lengths and angles with
high precision.

In addition, spectral measurements can be used to determine the accuracy of theoretical predictions. For
example, the Lamb shift measured in the hydrogen atomic absorption spectrum was not expected to exist at the
time it was measured. Its discovery spurred and guided the development of quantum electrodynamics, and
measurements of the Lamb shift are now used to determine the fine-structure constant.

Experimental methods
Basic approach

The most straightforward approach to absorption spectroscopy is to generate radiation with a source, measure a
reference spectrum of that radiation with a detector and then re-measure the sample spectrum after placing the
material of interest in between the source and detector. The two measured spectra can then be combined to
determine the material's absorption spectrum. The sample spectrum alone is not sufficient to determine the
absorption spectrum because it will be affected by the experimental conditions—the spectrum of the source, the
absorption spectra of other materials in between the source and detector and the wavelength dependent
characteristics of the detector. The reference spectrum will be affected in the same way, though, by these
experimental conditions and therefore the combination yields the absorption spectrum of the material alone.
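In the simplest (decadic-absorbance) picture, the combination described above is a point-by-point ratio of the two measured spectra. The sketch below (illustrative Python; the function name and the clipping guard are our own choices) assumes that the sample and reference spectra were recorded on the same wavelength grid:

    import numpy as np

    def absorbance_spectrum(sample_counts, reference_counts):
        """Combine a sample spectrum and a reference spectrum into an absorbance spectrum.

        Dividing the two detector readings cancels the source spectrum, the absorption
        of intervening materials and the detector response, leaving only the sample's
        own attenuation, from which the decadic absorbance is computed.
        """
        sample = np.asarray(sample_counts, dtype=float)
        reference = np.asarray(reference_counts, dtype=float)
        transmittance = np.clip(sample / reference, 1e-12, None)  # guard against zero counts
        return -np.log10(transmittance)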

A wide variety of radiation sources are employed in order to cover the electromagnetic spectrum. For
spectroscopy, it is generally desirable for a source to cover a broad swath of wavelengths in order to measure a
broad region of the absorption spectrum. Some sources inherently emit a broad spectrum. Examples of these
include globars or other black body sources in the infrared, mercury lamps in the visible and ultraviolet and x-
ray tubes. One recently developed, novel source of broad spectrum radiation is synchrotron radiation which
covers all of these spectral regions. Other radiation sources generate a narrow spectrum but the emission
wavelength can be tuned to cover a spectral range. Examples of these include klystrons in the microwave
region and lasers across the infrared, visible and ultraviolet region (though not all lasers have tunable
wavelengths).

The detector employed to measure the radiation power will also depend on the wavelength range of interest.
Most detectors are sensitive to a fairly broad spectral range and the sensor selected will often depend more on
the sensitivity and noise requirements of a given measurement. Examples of detectors common in spectroscopy
include heterodyne receivers in the microwave, bolometers in the millimeter-wave and infrared, mercury
cadmium telluride and other cooled semiconductor detectors in the infrared, and photodiodes and
photomultiplier tubes in the visible and ultraviolet.

If both the source and the detector cover a broad spectral region, then it is also necessary to introduce a means
of resolving the wavelength of the radiation in order to determine the spectrum. Often a spectrograph is used to
spatially separate the wavelengths of radiation so that the power at each wavelength can be measured
independently. It is also common to employ interferometry to determine the spectrum—Fourier transform
infrared spectroscopy is a widely used implementation of this technique.

Two other issues that must be considered in setting up an absorption spectroscopy experiment include the optics
used to direct the radiation and the means of holding or containing the sample material (called a cuvette or
cell). For most UV, visible, and NIR measurements, the use of precision quartz cuvettes is necessary. In both
cases, it is important to select materials that have relatively little absorption of their own in the wavelength
range of interest. The absorption of other materials could interfere with or mask the absorption from the
sample. For instance, in several wavelength ranges it is necessary to measure the sample under vacuum or in a
rare gas environment because gases in the atmosphere have interfering absorption features.


Specific approaches

Astronomical spectroscopy
Cavity ring down spectroscopy (CRDS)
Laser absorption spectrometry (LAS)
Mössbauer spectroscopy
Photoemission spectroscopy
Photothermal optical microscopy
Photothermal spectroscopy
Reflectance spectroscopy
Tunable diode laser absorption spectroscopy (TDLAS)
X-ray absorption fine structure (XAFS)
X-ray absorption near edge structure (XANES)
Total absorption spectroscopy (TAS)

See also
Absorption (optics)
Densitometry
HITRAN
Infrared gas analyzer
Infrared spectroscopy of metal carbonyls
Lyman-alpha forest
Optical density
Photoemission spectroscopy
Transparent materials
Water absorption
White cell (spectroscopy)
X-ray absorption spectroscopy
References
1. Modern Spectroscopy (Paperback) by J. Michael Hollas ISBN 978-0-470-84416-8
2. Symmetry and Spectroscopy: An Introduction to Vibrational and Electronic Spectroscopy (Paperback) by Daniel C.
Harris, Michael D. Bertolucci ISBN 978-0-486-66144-5
3. Spectra of Atoms and Molecules by Peter F. Bernath ISBN 978-0-19-517759-6
4. James D. Ingle, Jr. and Stanley R. Crouch, Spectrochemical Analysis, Prentice Hall, 1988, ISBN 0-13-826876-2
5. "Gaseous Pollutants – Fourier Transform Infrared Spectroscopy" (https://web.archive.org/web/20121023233506/htt
p://www.epa.gov/apti/course422/ce4b4.html). Archived from the original (http://www.epa.gov/apti/course422/ce4b4.
html) on 2012-10-23. Retrieved 2009-09-30.
6. Khalafinejad, S.; Essen, C. von; Hoeijmakers, H. J.; Zhou, G.; Klocová, T.; Schmitt, J. H. M. M.; Dreizler, S.;
Lopez-Morales, M.; Husser, T.-O. (2017-02-01). "Exoplanetary atmospheric sodium revealed by orbital motion" (htt
p://www.aanda.org/10.1051/0004-6361/201629473). Astronomy & Astrophysics. 598. ISSN 0004-6361 (https://www.
worldcat.org/issn/0004-6361). doi:10.1051/0004-6361/201629473 (https://doi.org/10.1051%2F0004-6361%2F20162
9473).
7. de Wit, Julien; Seager, S. (19 December 2013). "Constraining Exoplanet Mass from Transmission Spectroscopy".
Science. 342 (6165): 1473–1477. Bibcode:2013Sci...342.1473D
(http://adsabs.harvard.edu/abs/2013Sci...342.1473D). PMID 24357312 (https://www.ncbi.nlm.nih.gov/pubmed/2435
7312). arXiv:1401.6181 (https://arxiv.org/abs/1401.6181)  . doi:10.1126/science.1245450 (https://doi.org/10.1126%
2Fscience.1245450).

External links
Solar absorption spectrum (http://csep10.phys.utk.edu/astr162/lect/sun/spectrum.html)
Visible Absorption Spectrum Simulation (http://learningobjects.wesleyan.edu/projects/project.php?loid=11)
Plot Absorption Intensity for many molecules in HITRAN database (http://www.spectralcalc.com/spectral_browser/db_intensity.php)


Photoluminescence
From Wikipedia, the free encyclopedia

Photoluminescence (abbreviated as PL) is light emission from any


form of matter after the absorption of photons (electromagnetic
radiation). It is one of many forms of luminescence (light emission) and
is initiated by photoexcitation (excitation by photons), hence the prefix
photo-.[1] Following excitation various relaxation processes typically
occur in which other photons are re-radiated. Time periods between
absorption and emission may vary: ranging from short femtosecond-
regime for emission involving free-carrier plasma in inorganic
semiconductors[2] up to milliseconds for phosphorescent processes in
molecular systems; and under special circumstances the delay of emission may even extend to minutes or hours.

Fluorescent solutions under UV-light. Absorbed photons are rapidly re-emitted at longer electromagnetic wavelengths.

Observation of photoluminescence at a certain energy can be viewed as an indication that excitation populated an excited state associated with this transition energy.

While this is generally true in atoms and similar systems, correlations and other more complex phenomena also
act as sources for photoluminescence in many-body systems such as semiconductors. A theoretical approach to
handle this is given by the semiconductor luminescence equations.

Contents
1 Forms
2 Photoluminescence properties of direct-gap semiconductors
2.1 Ideal quantum-well structures
2.1.1 Photoexcitation
2.1.2 Relaxation
2.1.3 Radiative recombination
2.2 Effects of disorder
3 Photoluminescent material for temperature detection
4 Experimental Methods
5 See also
6 References
7 Further reading

Forms
Photoluminescence processes can be classified by various parameters such as the energy of the exciting photon
with respect to the emission. Resonant excitation describes a situation in which photons of a particular
wavelength are absorbed and equivalent photons are very rapidly re-emitted. This is often referred to as
resonance fluorescence. For materials in solution or in the gas phase, this process involves electrons but no
significant internal energy transitions involving molecular features of the chemical substance between
absorption and emission. In crystalline inorganic semiconductors where an electronic band structure is formed,
secondary emission can be more complicated as events may contain both coherent contributions such as
resonant Rayleigh scattering where a fixed phase relation with the driving light field is maintained (i.e.
energetically elastic processes where no losses are involved), and incoherent contributions (or inelastic modes
where some energy channels into an auxiliary loss mode).[3]


The latter originate, e.g., from the radiative recombination of excitons, Coulomb-bound electron-hole pair
states in solids. Resonance fluorescence may also show significant quantum optical correlations.[3][4][5]

More processes may occur when a substance undergoes internal energy transitions before re-emitting the
energy from the absorption event. Electrons change energy states by either resonantly gaining energy from
absorption of a photon or losing energy by emitting photons. In chemistry-related disciplines, one often
distinguishes between fluorescence and phosphorescence. The former is typically a fast process, yet some
amount of the original energy is dissipated so that re-emitted light photons will have lower energy than did the
absorbed excitation photons. The re-emitted photon in this case is said to be red shifted, referring to the reduced
energy it carries following this loss (as the Jablonski diagram shows). For phosphorescence, the excited electron
undergoes intersystem crossing into a state with altered spin multiplicity (see term symbol), usually a triplet
state. Once the excitation is trapped in this triplet state, the transition back to the lower singlet energy states is
quantum mechanically forbidden, meaning that it happens much more slowly than other transitions. The result
is a slow process of radiative transition back to the singlet
state, sometimes lasting minutes or hours. This is the basis for "glow in the dark" substances.

Photoluminescence is an important technique for measuring the purity and crystalline quality of
semiconductors such as GaN and InP and for quantification of the amount of disorder present in a system.[6]
Several variations of photoluminescence exist, including photoluminescence excitation (PLE) spectroscopy.

Time-resolved photoluminescence (TRPL) is a method where the sample is excited with a light pulse and then
the decay in photoluminescence with respect to time is measured. This technique is useful for measuring the
minority carrier lifetime of III-V semiconductors like gallium arsenide (GaAs).
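As a rough illustration of how such a transient is reduced to a lifetime, the sketch below fits a single-exponential model to a measured decay using SciPy (the function names and starting guesses are illustrative, and a single exponential is only an approximation for many real samples):

    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exponential(t, i0, tau, background):
        """Model transient I(t) = I0 * exp(-t / tau) + background."""
        return i0 * np.exp(-t / tau) + background

    def fit_pl_lifetime(time, intensity):
        """Return the decay time tau fitted to a time-resolved PL trace."""
        time = np.asarray(time, dtype=float)
        intensity = np.asarray(intensity, dtype=float)
        p0 = (intensity.max(), (time[-1] - time[0]) / 5.0, intensity.min())  # crude initial guess
        popt, _ = curve_fit(mono_exponential, time, intensity, p0=p0)
        return popt[1]  # tau, in the same units as `time`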

Photoluminescence properties of direct-gap semiconductors


In a typical PL experiment, a semiconductor is excited with a light-source that provides photons with an energy
larger than the bandgap energy. The incoming light excites a polarization that can be described with the
semiconductor Bloch equations.[7][8] Once the photons are absorbed, electrons and holes are formed with finite
momenta in the conduction and valence bands, respectively. The excitations then undergo energy and
momentum relaxation towards the band gap minimum. Typical mechanisms are Coulomb scattering and the
interaction with phonons. Finally, the electrons recombine with holes under emission of photons.
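The requirement that the photon energy exceed the band gap fixes a maximum excitation wavelength through lambda = hc / E. A small sketch (illustrative Python added for this text; GaAs is used as the example material):

    HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

    def max_excitation_wavelength_nm(bandgap_ev):
        """Longest excitation wavelength whose photons still exceed the band gap."""
        return HC_EV_NM / bandgap_ev

    # Example: GaAs has a band gap of about 1.42 eV at room temperature, so the
    # excitation source must be below roughly 873 nm for above-band-gap PL.
    print(max_excitation_wavelength_nm(1.42))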

Ideal, defect-free semiconductors are many-body systems where the interactions of charge-carriers and lattice
vibrations have to be considered in addition to the light-matter coupling. In general, the PL properties are also
extremely sensitive to internal electric fields and to the dielectric environment (such as in photonic crystals)
which impose further degrees of complexity. A precise microscopic description is provided by the
semiconductor luminescence equations.[7]

Ideal quantum-well structures

An ideal, defect-free semiconductor quantum well structure is a useful model system to illustrate the
fundamental processes in typical PL experiments. The discussion is based on results published in Klingshirn
(2012)[9] and Balkan (1998).[10]

The fictive model structure for this discussion has two confined quantized electronic and two hole subbands, e1,
e2 and h1, h2, respectively. The linear absorption spectrum of such a structure shows the exciton resonances of
the first (e1h1) and the second quantum well subbands (e2, h2), as well as the absorption from the
corresponding continuum states and from the barrier.

Photoexcitation


In general, three different excitation conditions are distinguished: resonant, quasi-resonant, and non-resonant.
For the resonant excitation, the central energy of the laser corresponds to the lowest exciton resonance of the
quantum well. No or only a negligible amount of the excess energy is injected to the carrier system. For these
conditions, coherent processes contribute significantly to the spontaneous emission.[3][11] The decay of
polarization creates excitons directly. The detection of PL is challenging for resonant excitation as it is difficult
to discriminate contributions from the excitation, i.e., stray-light and diffuse scattering from surface roughness.
Thus, speckle and resonant Rayleigh scattering are always superimposed on the incoherent emission.

In case of the non-resonant excitation, the structure is excited with some excess energy. This is the typical
situation used in most PL experiments as the excitation energy can be discriminated using a spectrometer or an
optical filter. One has to distinguish between quasi-resonant excitation and barrier excitation.

For quasi-resonant conditions, the energy of the excitation is tuned above the ground state but still below the
barrier absorption edge, for example, into the continuum of the first subband. The polarization decay for these
conditions is much faster than for resonant excitation and coherent contributions to the quantum well emission
are negligible. The initial temperature of the carrier system is significantly higher than the lattice temperature
due to the surplus energy of the injected carriers. Initially, mainly an electron-hole plasma is created; the
formation of excitons then follows.[12][13]

In case of barrier excitation, the initial carrier distribution in the quantum well strongly depends on the carrier
scattering between barrier and the well.

Relaxation

Initially, the laser light induces coherent polarization in the sample, i.e., the transitions between electron and
hole states oscillate with the laser frequency and a fixed phase. The polarization dephases typically on a sub-
100 fs time-scale in case of nonresonant excitation due to ultra-fast Coulomb- and phonon-scattering.[14]

The dephasing of the polarization leads to creation of populations of electrons and holes in the conduction and
the valence bands, respectively. The lifetime of the carrier populations is rather long, limited by radiative and
non-radiative recombination such as Auger recombination. During this lifetime a fraction of electrons and holes
may form excitons; this topic is still debated in the literature. The formation rate depends on
the experimental conditions such as lattice temperature, excitation density, as well as on the general material
parameters, e.g., the strength of the Coulomb-interaction or the exciton binding energy.

The characteristic time-scales are in the range of hundreds of picoseconds in GaAs;[12] they appear to be much
shorter in wide-gap semiconductors.[15]

Directly after the excitation with short (femtosecond) pulses and the quasi-instantaneous decay of the
polarization, the carrier distribution is mainly determined by the spectral width of the excitation, e.g., a laser
pulse. The distribution is thus highly non-thermal and resembles a Gaussian distribution, centered at a finite
momentum. In the first hundreds of femtoseconds, the carriers are scattered by phonons, or at elevated carrier
densities via Coulomb-interaction. The carrier system successively relaxes to the Fermi–Dirac distribution
typically within the first picosecond. Finally, the carrier system cools down under the emission of phonons.
This can take up to several nanoseconds, depending on the material system, the lattice temperature, and the
excitation conditions such as the surplus energy.

Initially, the carrier temperature decreases fast via emission of optical phonons. This is quite efficient due to the
comparatively large energy associated with optical phonons (36 meV or 420 K in GaAs) and their rather flat
dispersion, allowing for a wide range of scattering processes under conservation of energy and momentum.
Once the carrier temperature decreases below the value corresponding to the optical phonon energy, acoustic
phonons dominate the relaxation. Here, cooling is less efficient due to their dispersion and small energies, and the
temperature decreases much slower beyond the first tens of picoseconds.[16][17] At elevated excitation densities,
the carrier cooling is further inhibited by the so-called hot-phonon effect.[18] The relaxation of a large number

of hot carriers leads to a high generation rate of optical phonons which exceeds the decay rate into acoustic
phonons. This creates a non-equilibrium "over-population" of optical phonons and thus causes their increased
reabsorption by the charge carriers, significantly suppressing any cooling. The higher the carrier density, the
slower the system cools.

Radiative recombination

The emission directly after the excitation is spectrally very broad, yet still centered in the vicinity of the
strongest exciton resonance. As the carrier distribution relaxes and cools, the width of the PL peak decreases
and the emission energy shifts to match the ground state of the exciton for ideal samples without disorder. The
PL spectrum approaches its quasi-steady-state shape defined by the distribution of electrons and holes.
Increasing the excitation density will change the emission spectra. They are dominated by the excitonic ground
state for low densities. Additional peaks from higher subband transitions appear as the carrier density or lattice
temperature are increased as these states get more and more populated. Also, the width of the main PL peak
increases significantly with rising excitation due to excitation-induced dephasing[19] and the emission peak
experiences a small shift in energy due to the Coulomb-renormalization and phase-filling.[8]

In general, both exciton populations and plasma, uncorrelated electrons and holes, can act as sources for
photoluminescence as described in the semiconductor-luminescence equations. Both yield very similar spectral
features which are difficult to distinguish; their emission dynamics, however, vary significantly. The decay of
excitons yields a single-exponential decay function since the probability of their radiative recombination does
not depend on the carrier density. The probability of spontaneous emission for uncorrelated electrons and holes,
is approximately proportional to the product of electron and hole populations eventually leading to a non-
single-exponential decay described by a hyperbolic function.
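A minimal sketch of the two decay laws contrasted here (illustrative Python added for this text; these are the standard monomolecular and bimolecular rate-equation solutions, with function names chosen by us):

    import numpy as np

    def excitonic_decay(t, n0, tau):
        """Monomolecular (excitonic) recombination, dn/dt = -n/tau: a single exponential."""
        return n0 * np.exp(-t / tau)

    def plasma_decay(t, n0, b):
        """Bimolecular recombination of uncorrelated electrons and holes (n_e = n_h = n),
        dn/dt = -b * n**2, whose solution n(t) = n0 / (1 + b * n0 * t) is hyperbolic."""
        return n0 / (1.0 + b * n0 * t)

Plotted on a logarithmic intensity axis, the excitonic curve is a straight line while the plasma curve continuously flattens, which is one practical way to tell the two emission dynamics apart.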

Effects of disorder

Real material systems always incorporate disorder. Examples are structural defects[20] in the lattice or disorder
due to variations of the chemical composition. Their treatment is extremely challenging for microscopic
theories due to the lack of detailed knowledge about perturbations of the ideal structure. Thus, the influence of
the extrinsic effects on the PL is usually addressed phenomenologically.[21] In experiments, disorder can lead to
localization of carriers and hence drastically increase the photoluminescence life times as localized carriers
cannot as easily find nonradiative recombination centers as can free ones.

Researchers from the King Abdullah University of Science and Technology (KAUST) have studied the
photoinduced entropy (i.e. thermodynamic disorder) of InGaN/GaN p-i-n double-heterostructure nanowires
using temperature-dependent photoluminescence.[6] They defined the photoinduced entropy as a
thermodynamic quantity that represents the unavailability of a system's energy for conversion into useful work
due to carrier recombination and photon emission. They have also related the change in entropy generation to
the change in photocarrier dynamics in the InGaN active regions using results from time-resolved
photoluminescence study. They hypothesized that the amount of generated disorder in the InGaN layers
eventually increases as the temperature approaches room temperature because of the thermal activation of
surface states. To study the photoinduced entropy, the scientists have developed a mathematical model that
considers the net energy exchange resulting from photoexcitation and photoluminescence.

Photoluminescent material for temperature detection


In phosphor thermometry, the temperature dependence of the photoluminescence process is exploited to
measure temperature.

Experimental Methods


Photoluminescence spectroscopy is a widely used technique for characterisation of the optical and electronic
properties of semiconductors and molecules. In chemistry, it is more often referred to as fluorescence
spectroscopy, but the instrumentation is the same. The relaxation processes can be studied using Time-resolved
fluorescence spectroscopy to find the decay lifetime of the photoluminescence. These techniques can be
combined with microscopy, to map the intensity (Confocal microscopy) or the lifetime (Fluorescence-lifetime
imaging microscopy) of the photoluminescence across a sample (e.g. a semiconducting wafer, or a biological
sample that has been marked with fluorescent molecules).

See also
Emission
Cathodoluminescence
Secondary emission
Rayleigh scattering
Absorption
Red shift
Charge carrier
Semiconductor Bloch equations
Elliott formula
Semiconductor laser theory
List of light sources
Photo-reflectance
Stokes shift
Photonic molecule

References
1. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version:
(2006–) "photochemistry (http://goldbook.iupac.org/P04588.html)".
2. Hayes, G.R.; Deveaud, B. (2002). "Is Luminescence from Quantum Wells Due to Excitons?". physica status solidi
(a) 190 (3): 637–640. doi:10.1002/1521-396X(200204)190:3<637::AID-PSSA637>3.0.CO;2-7 (http://dx.doi.org/10.
1002%2F1521-396X%28200204%29190%3A3%3C637%3A%3AAID-PSSA637%3E3.0.CO%3B2-7)
3. Kira, M.; Jahnke, F.; Koch, S. W. (1999). "Quantum Theory of Secondary Emission in Optically Excited
Semiconductor Quantum Wells". Physical Review Letters 82 (17): 3544–3547. doi:10.1103/PhysRevLett.82.3544 (ht
tp://dx.doi.org/10.1103%2FPhysRevLett.82.3544)
4. Kimble, H. J.; Dagenais, M.; Mandel, L. (1977). "Photon Antibunching in Resonance Fluorescence". Physical
Review Letters 39 (11): 691–695. doi:10.1103/PhysRevLett.39.691 (http://dx.doi.org/10.1103%2FPhysRevLett.39.69
1)
5. Carmichael, H. J.; Walls, D. F. (1976). "Proposal for the measurement of the resonant Stark effect by photon
correlation techniques". Journal of Physics B: Atomic and Molecular Physics 9 (4): L43. doi:10.1088/0022-
3700/9/4/001 (http://dx.doi.org/10.1088%2F0022-3700%2F9%2F4%2F001)
6. Alfaraj, N.; Mitra, S.; Wu, F. ; Ajia, A. A.; Janjua, B.; Prabaswara, A.; Aljefri, R. A.; Sun, H.; Ng, T. K.; Ooi, B. S.;
Roqan, I. S.; Li, X. (2017). "Photoinduced entropy of InGaN/GaN p-i-n double-heterostructure nanowires". Applied
Physics Letters 110 (16): 161110. [1] (http://dx.doi.org/10.1063/1.4981252.)
7. Kira, M.; Koch, S. W. (2011). Semiconductor Quantum Optics. Cambridge University Press. ISBN 978-0521875097.
8. Haug, H.; Koch, S. W. (2009). Quantum Theory of the Optical and Electronic Properties of Semiconductors (5th
ed.). World Scientific. p. 216. ISBN 9812838848.
9. Klingshirn, Claus F. (2012). Semiconductor Optics. Springer. ISBN 978-3-642-28361-1.
10. Balkan, Naci (1998). Hot Electrons in Semiconductors: Physics and Devices. Oxford University Press. ISBN
0198500580.
11. Kira, M.; Jahnke, F.; Hoyer, W.; Koch, S. W. (1999). "Quantum theory of spontaneous emission and coherent effects
in semiconductor microstructures". Progress in Quantum Electronics 23 (6): 189–279. doi:10.1016/S0079-
6727(99)00008-7. (http://dx.doi.org/10.1016%2FS0079-6727%2899%2900008-7)
12. Kaindl, R. A.; Carnahan, M. A.; Hägele, D.; Lövenich, R.; Chemla, D. S. (2003). "Ultrafast terahertz probes of
transient conducting and insulating phases in an electron–hole gas". Nature 423 (6941): 734–738.
doi:10.1038/nature01676. (http://dx.doi.org/10.1038%2Fnature01676)
13. Chatterjee, S.; Ell, C.; Mosor, S.; Khitrova, G.; Gibbs, H.; Hoyer, W.; Kira, M.; Koch, S. W.; Prineas, J.; Stolz, H.
(2004). "Excitonic Photoluminescence in Semiconductor Quantum Wells: Plasma versus Excitons". Physical Review

Letters 92 (6). doi:10.1103/PhysRevLett.92.067402. (http://dx.doi.org/10.1103%2FPhysRevLett.92.067402)


14. Arlt, S.; Siegner, U.; Kunde, J.; Morier-Genoud, F.; Keller, U. (1999). "Ultrafast dephasing of continuum transitions
in bulk semiconductors". Physical Review B 59 (23): 14860–14863. doi:10.1103/PhysRevB.59.14860. (http://dx.doi.
org/10.1103%2FPhysRevB.59.14860)
15. Umlauff, M.; Hoffmann, J.; Kalt, H.; Langbein, W.; Hvam, J.; Scholl, M.; Söllner, J.; Heuken, M.; Jobst, B.;
Hommel, D. (1998). "Direct observation of free-exciton thermalization in quantum-well structures". Physical Review
B 57 (3): 1390–1393. doi:10.1103/PhysRevB.57.1390 (http://dx.doi.org/10.1103%2FPhysRevB.57.1390).
16. Kash, Kathleen; Shah, Jagdeep (1984). "Carrier energy relaxation in In0.53Ga0.47As determined from picosecond
luminescence studies". Applied Physics Letters 45 (4): 401. doi:10.1063/1.95235. (http://dx.doi.org/10.1063%2F1.95
235)
17. Polland, H.; Rühle, W.; Kuhl, J.; Ploog, K.; Fujiwara, K.; Nakayama, T. (1987). "Nonequilibrium cooling of
thermalized electrons and holes in GaAs/Al_{x}Ga_{1-x}As quantum wells". Physical Review B 35 (15): 8273–
8276. doi:10.1103/PhysRevB.35.8273. (http://dx.doi.org/10.1103%2FPhysRevB.35.8273)
18. Shah, Jagdeep; Leite, R.C.C.; Scott, J.F. (1970). "Photoexcited hot LO phonons in GaAs". Solid State
Communications 8 (14): 1089–1093. doi:10.1016/0038-1098(70)90002-5. (http://dx.doi.org/10.1016%2F0038-109
8%2870%2990002-5)
19. Wang, Hailin; Ferrio, Kyle; Steel, Duncan; Hu, Y.; Binder, R.; Koch, S. W. (1993). "Transient nonlinear optical
response from excitation induced dephasing in GaAs". Physical Review Letters 71 (8): 1261–1264.
doi:10.1103/PhysRevLett.71.1261. (http://dx.doi.org/10.1103%2FPhysRevLett.71.1261)
20. Lähnemann, J.; Jahn, U.; Brandt, O.; Flissikowski, T.; Dogan, P.; Grahn, H.T. (2014). "Luminescence associated with
stacking faults in GaN". J. Phys. D: Appl. Phys. 47: 423001. Bibcode:2014JPhD...47P3001L (http://adsabs.harvard.e
du/abs/2014JPhD...47P3001L). arXiv:1405.1261 (https://arxiv.org/abs/1405.1261)  . doi:10.1088/0022-
3727/47/42/423001 (https://doi.org/10.1088%2F0022-3727%2F47%2F42%2F423001).
21. Baranovskii, S.; Eichmann, R.; Thomas, P. (1998). "Temperature-dependent exciton luminescence in quantum wells
by computer simulation". Physical Review B 58 (19): 13081–13087. doi:10.1103/PhysRevB.58.13081. (http://dx.doi.
org/10.1103%2FPhysRevB.58.13081)

Further reading
Klingshirn, C. F. (2006). Semiconductor Optics. Springer. ISBN 978-3540383451.
Kalt, H.; Hetterich, M. (2004). Optics of Semiconductors and Their Nanostructures. Springer. ISBN 978-3540383451.
Donald A. McQuarrie; John D. Simon (1997), Physical Chemistry, a molecular approach, University
Science Books
Kira, M.; Koch, S. W. (2011). Semiconductor Quantum Optics. Cambridge University Press. ISBN 978-
0521875097.
Peygambarian, N.; Koch, S. W.; Mysyrowicz, André (1993). Introduction to Semiconductor Optics.
Prentice Hall. ISBN 978-0-13-638990-3.


Microscopy
From Wikipedia, the free encyclopedia

Microscopy is the technical field of using microscopes to view


objects and areas of objects that cannot be seen with the naked eye
(objects that are not within the resolution range of the normal eye).
There are three well-known branches of microscopy: optical,
electron, and scanning probe microscopy.

Optical & electron microscopy involve the diffraction, reflection,


or refraction of electromagnetic radiation/electron beams
interacting with the specimen, and the collection of the scattered
radiation or another signal in order to create an image. This process
may be carried out by wide-field irradiation of the sample (for
example standard light microscopy and transmission electron
microscopy) or by scanning of a fine beam over the sample (for
example confocal laser scanning microscopy and scanning electron
microscopy). Scanning probe microscopy involves the interaction
of a scanning probe with the surface of the object of interest. The
development of microscopy revolutionized biology, gave rise to the
field of histology and so remains an essential technique in the life and physical sciences.

Often considered to be the first acknowledged microscopist and microbiologist, Antonie van Leeuwenhoek is best known for his pioneering work in the field of microscopy and for his contributions toward the establishment of microbiology as a scientific discipline.

Contents
1 History
2 Optical microscopy
2.1 Limitations
2.2 Techniques
2.2.1 Bright field
2.2.2 Oblique illumination
2.2.3 Dark field
2.2.4 Dispersion staining
2.2.5 Phase contrast
2.2.6 Differential interference contrast
2.2.7 Interference reflection
2.2.8 Fluorescence
2.2.9 Confocal
2.2.10 Single plane illumination microscopy and light sheet fluorescence microscopy

Scanning electron microscope image of pollen
2.2.11 Wide-field multiphoton microscopy
2.2.12 Deconvolution
2.3 Sub-diffraction techniques
2.4 Serial time-encoded amplified microscopy
2.5 Extensions
2.6 Other enhancements
2.7 X-ray
3 Electron microscopy
4 Scanning probe microscopy
4.1 Ultrasonic force
5 Ultraviolet microscopy
6 Infrared microscopy
7 Digital holographic microscopy

8 Digital pathology (virtual microscopy)


9 Laser microscopy
10 Amateur microscopy
11 Application in forensic science
12 See also
13 References
14 Further reading
15 External links
15.1 General
15.2 Techniques

Microscopic examination in a biochemical laboratory

History

The field of microscopy (optical microscopy) has its roots in the 17th-century Dutch Republic, with some
notable representatives such as Zacharias Janssen, Hans Lippershey, Jan Swammerdam, and Antonie van
Leeuwenhoek. Van Leeuwenhoek is often considered to be the first acknowledged microscopist and
microbiologist.[1]

Optical microscopy
Optical or light microscopy involves passing visible light
transmitted through or reflected from the sample through a single
or multiple lenses to allow a magnified view of the sample.[2] The
resulting image can be detected directly by the eye, imaged on a
photographic plate or captured digitally. The single lens with its
attachments, or the system of lenses and imaging equipment, along
with the appropriate lighting equipment, sample stage and support,
makes up the basic light microscope. The most recent development
is the digital microscope, which uses a CCD camera to focus on the
exhibit of interest. The image is shown on a computer screen, so
eye-pieces are unnecessary.

Limitations

Limitations of standard optical microscopy (bright field microscopy) lie in three areas:

This technique can only image dark or strongly refracting objects effectively.
Diffraction limits resolution to approximately 0.2 micrometres (see: microscope). This limits the practical magnification to ~1500x; a worked estimate of this limit follows this list.
Out of focus light from points outside the focal plane reduces image clarity.

Stereo microscope
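As referenced in the diffraction item above, the quoted ~0.2 micrometre figure follows from the Abbe limit d = lambda / (2 NA). A minimal worked sketch (illustrative Python with assumed example values, added for this text):

    def abbe_limit_nm(wavelength_nm, numerical_aperture):
        """Abbe diffraction limit d = lambda / (2 * NA), returned in nanometres."""
        return wavelength_nm / (2.0 * numerical_aperture)

    # Example: green light (550 nm) and a high-NA oil-immersion objective (NA = 1.4)
    # give d = 550 / 2.8, i.e. about 196 nm, roughly the 0.2 micrometres quoted above.
    print(abbe_limit_nm(550.0, 1.4))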

Live cells in particular generally lack sufficient contrast to be studied successfully, since the internal structures
of the cell are colourless and transparent. The most common way to increase contrast is to stain the different
structures with selective dyes, but this often involves killing and fixing the sample. Staining may also introduce
artifacts, apparent structural details that are caused by the processing of the specimen and are thus not
legitimate features of the specimen. In general, these techniques make use of differences in the refractive index
of cell structures. It is comparable to looking through a glass window: you (bright field microscopy) don't see
the glass but merely the dirt on the glass. There is a difference, as glass is a denser material, and this creates a


difference in phase of the light passing through. The human eye is not sensitive to this difference in phase, but
clever optical solutions have been thought out to change this difference in phase into a difference in amplitude
(light intensity).

Techniques

In order to improve specimen contrast or highlight certain structures in a sample, special techniques must be
used. A huge selection of microscopy techniques is available to increase contrast or label a sample.

Four examples of transillumination techniques used to generate contrast in a sample of tissue paper.
1.559 μm/pixel.

Bright field illumination, sample contrast comes from absorbance of light in the sample.
Cross-polarized light illumination, sample contrast comes from rotation of polarized light through the sample.
Dark field illumination, sample contrast comes from light scattered by the sample.
Phase contrast illumination, sample contrast comes from interference of different path lengths of light through the sample.

Bright field

Bright field microscopy is the simplest of all the light microscopy techniques. Sample illumination is via
transmitted white light, i.e. illuminated from below and observed from above. Limitations include low contrast
of most biological samples and low apparent resolution due to the blur of out of focus material. The simplicity
of the technique and the minimal sample preparation required are significant advantages.

Oblique illumination

The use of oblique (from the side) illumination gives the image a 3-dimensional appearance and can highlight
otherwise invisible features. A more recent technique based on this method is Hoffmann's modulation contrast,
a system found on inverted microscopes for use in cell culture. Oblique illumination suffers from the same
limitations as bright field microscopy (low contrast of many biological samples; low apparent resolution due to
out of focus objects).

Dark field

Dark field microscopy is a technique for improving the contrast of unstained, transparent specimens.[3] Dark
field illumination uses a carefully aligned light source to minimize the quantity of directly transmitted
(unscattered) light entering the image plane, collecting only the light scattered by the sample. Dark field can
dramatically improve image contrast – especially of transparent objects – while requiring little equipment setup
or sample preparation. However, the technique suffers from low light intensity in the final image of many
biological samples, and continues to be affected by low apparent resolution.


Rheinberg illumination is a special variant of dark field


illumination in which transparent, colored filters are inserted just
before the condenser so that light rays at high aperture are
differently colored than those at low aperture (i.e. the background
to the specimen may be blue while the object appears self-luminous
red). Other color combinations are possible but their effectiveness
is quite variable.[4]

A diatom under Rheinberg illumination

Dispersion staining
Dispersion staining is an optical technique that results in a colored
image of a colorless object. This is an optical staining technique
and requires no stains or dyes to produce a color effect. There are five different microscope configurations used
in the broader technique of dispersion staining. They include brightfield Becke line, oblique, darkfield, phase
contrast, and objective stop dispersion staining.

Phase contrast

In electron microscopy: Phase-contrast imaging

More sophisticated techniques will show proportional differences in optical density. Phase contrast is a widely used technique that shows differences in refractive index as difference in contrast. It was developed by the Dutch physicist Frits Zernike in the 1930s (for which he was awarded the Nobel Prize in 1953). The nucleus in a cell, for example, will show up darkly against the surrounding cytoplasm. Contrast is excellent; however, it is not for use with thick objects. Frequently, a halo is formed even around small objects, which obscures detail. The system consists of a circular annulus in the condenser, which produces a cone of light. This cone is superimposed on a similar sized ring within the phase-objective. Every objective has a different size ring, so for every objective another condenser setting has to be chosen. The ring in the objective has special optical properties: it, first of all, reduces the direct light in intensity, but more importantly, it creates an artificial phase difference of about a quarter wavelength. As the physical properties of this direct light have changed, interference with the diffracted light occurs, resulting in the phase contrast image. One disadvantage of phase-contrast microscopy is halo formation (halo-light ring).

[Figure: Phase-contrast light micrograph of hyaline cartilage showing chondrocytes and organelles, lacunae and extracellular matrix.]
Differential interference contrast

Superior and much more expensive is the use of interference contrast. Differences in optical density will show
up as differences in relief. A nucleus within a cell will actually show up as a globule in the most often used
differential interference contrast system according to Georges Nomarski. However, it has to be kept in mind
that this is an optical effect, and the relief does not necessarily resemble the true shape. Contrast is very good
and the condenser aperture can be used fully open, thereby reducing the depth of field and maximizing
resolution.

The system consists of a special prism (Nomarski prism, Wollaston prism) in the condenser that splits light in
an ordinary and an extraordinary beam. The spatial difference between the two beams is minimal (less than the
maximum resolution of the objective). After passage through the specimen, the beams are reunited by a similar
prism in the objective.
In a homogeneous specimen, there is no difference between the two beams, and no contrast is being generated.
However, near a refractive boundary (say a nucleus within the cytoplasm), the difference between the ordinary
and the extraordinary beam will generate a relief in the image. Differential interference contrast requires a
polarized light source to function; two polarizing filters have to be fitted in the light path, one below the
condenser (the polarizer), and the other above the objective (the analyzer).

Note: In cases where the optical design of a microscope produces an appreciable lateral separation of the two
beams we have the case of classical interference microscopy, which does not result in relief images, but can
nevertheless be used for the quantitative determination of mass-thicknesses of microscopic objects.

Interference reflection

An additional technique using interference is interference reflection microscopy (also known as reflected
interference contrast, or RIC). It relies on cell adhesion to the slide to produce an interference signal. If there is
no cell attached to the glass, there will be no interference.

Interference reflection microscopy can be obtained by using the same elements used by DIC, but without the
prisms. Also, the light that is being detected is reflected and not transmitted as it is when DIC is employed.

Fluorescence

When certain compounds are illuminated with high energy light, they emit light of a lower frequency. This effect is known as fluorescence. Often specimens show their characteristic autofluorescence image, based on their chemical makeup.

This method is of critical importance in the modern life sciences, as it can be extremely sensitive, allowing the detection of single molecules. Many different fluorescent dyes can be used to stain different structures or chemical compounds. One particularly powerful method is the combination of antibodies coupled to a fluorophore as in immunostaining. Examples of commonly used fluorophores are fluorescein or rhodamine.

The antibodies can be tailor-made for a chemical compound. For example, one strategy often in use is the artificial production of proteins, based on the genetic code (DNA). These proteins can then be used to immunize rabbits, forming antibodies which bind to the protein. The antibodies are then coupled chemically to a fluorophore and used to trace the proteins in the cells under study.

Highly efficient fluorescent proteins such as the green fluorescent protein (GFP) have been developed using the molecular biology technique of gene fusion, a process that links the expression of the fluorescent compound to that of the target protein. This combined fluorescent protein is, in general, non-toxic to the organism and rarely interferes with the function of the protein under study. Genetically modified cells or organisms directly express the fluorescently tagged proteins, which enables the study of the function of the original protein in vivo.

[Figure: Images may also contain artifacts. This is a confocal laser scanning fluorescence micrograph of thale cress anther (part of stamen). The picture shows, among other things, a nice red flowing collar-like structure just below the anther. However, an intact thale cress stamen does not have such a collar; this is a fixation artifact: the stamen has been cut below the picture frame, and the epidermis (upper layer of cells) of the stamen stalk has peeled off, forming a non-characteristic structure. Photo: Heiti Paves from Tallinn University of Technology.]

Growth of protein crystals results in both protein and salt crystals. Both are colorless and microscopic. Recovery of the protein crystals requires imaging, which can be done using the intrinsic fluorescence of the protein or by transmission microscopy. Both methods require an ultraviolet microscope, as protein absorbs light at 280 nm. Protein will also fluoresce at approximately 353 nm when excited with 280 nm light.[5]
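As a quick numerical illustration of these figures (using only the 280 nm and 353 nm values quoted above and the relation E = hc/λ), the excitation and emission photon energies can be compared directly; the short script below is illustrative only.

    # Photon energies for the protein excitation/emission wavelengths quoted above,
    # using E = h*c/lambda with hc ~ 1239.84 eV*nm; purely illustrative numbers.
    HC_EV_NM = 1239.84            # h*c in eV*nm

    excitation_nm = 280.0         # tryptophan absorption
    emission_nm = 353.0           # protein fluorescence emission

    e_excitation = HC_EV_NM / excitation_nm   # ~4.43 eV
    e_emission = HC_EV_NM / emission_nm       # ~3.51 eV

    print(f"excitation: {e_excitation:.2f} eV, emission: {e_emission:.2f} eV, "
          f"Stokes shift: {e_excitation - e_emission:.2f} eV")

The emitted photons carry roughly 0.9 eV less energy than the absorbed ones, which is why the fluorescence can be separated from the excitation light with suitable filters.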

Since fluorescence emission differs in wavelength (color) from the excitation light, an ideal fluorescent image
shows only the structure of interest that was labeled with the fluorescent dye. This high specificity led to the
widespread use of fluorescence light microscopy in biomedical research. Different fluorescent dyes can be used
to stain different biological structures, which can then be detected simultaneously, while still being specific due
to the individual color of the dye.

To block the excitation light from reaching the observer or the detector, filter sets of high quality are needed.
These typically consist of an excitation filter selecting the range of excitation wavelengths, a dichroic mirror,
and an emission filter blocking the excitation light. Most fluorescence microscopes are operated in the Epi-
illumination mode (illumination and detection from one side of the sample) to further decrease the amount of
excitation light entering the detector.

An example of fluorescence microscopy today is two-photon or multi-photon imaging. Two photon imaging
allows imaging of living tissues up to a very high depth by enabling greater excitation light penetration and
reduced background emission signal.

See also: total internal reflection fluorescence microscope; Neuroscience

Confocal

Confocal microscopy uses a scanning point of light and a pinhole to prevent out of focus light from reaching
the detector. Compared to full sample illumination, confocal microscopy gives slightly higher resolution, and
significantly improves optical sectioning. Confocal microscopy is, therefore, commonly used where 3D
structure is important.

Single plane illumination microscopy and light sheet fluorescence microscopy

Using a plane of light formed by focusing light through a cylindrical lens at a narrow angle or by scanning a
line of light in a plane perpendicular to the axis of the objective, high-resolution optical sections can be
taken.[6][7][8] Single plane illumination, or light sheet illumination, is also accomplished using beam shaping
techniques incorporating multiple-prism beam expanders.[9][10] The images are captured by CCDs. These
variants allow very fast and high signal to noise ratio image capture.

Wide-field multiphoton microscopy

Wide-field multiphoton microscopy[11][12][13][14] refers to an optical non-linear imaging technique tailored for
ultrafast imaging in which a large area of the object is illuminated and imaged without the need for scanning.
High intensities are required to induce non-linear optical processes such as two-photon fluorescence or second
harmonic generation. In scanning multiphoton microscopes the high intensities are achieved by tightly focusing
the light, and the image is obtained by stage- or beam-scanning the sample. In wide-field multiphoton
microscopy the high intensities are best achieved using an optically amplified pulsed laser source to attain a
large field of view (~100 µm).[11][12][13] The image in this case is obtained as a single frame with a CCD
without the need of scanning, making the technique particularly useful to visualize dynamic processes
simultaneously across the object of interest. With wide-field multiphoton microscopy the frame rate can be
increased up to 1000-fold compared to multiphoton scanning microscopy.[12]

Deconvolution

Fluorescence microscopy is a powerful technique to show specifically labeled structures within a complex
environment and to provide three-dimensional information of biological structures. However, this information
is blurred by the fact that, upon illumination, all fluorescently labeled structures emit light, irrespective of whether they are in focus or not. So an image of a certain structure is always blurred by the contribution of light
from structures that are out of focus. This phenomenon results in a loss of contrast especially when using
objectives with a high resolving power, typically oil immersion objectives with a high numerical aperture.

However, blurring is not caused by random processes, such as light scattering, but can be well defined by the optical properties of the image formation in the microscope imaging system. If one considers a small fluorescent light source (essentially a bright spot), light coming from this spot spreads out further from our perspective as the spot becomes more out of focus. Under ideal conditions, this produces an "hourglass" shape of this point source in the third (axial) dimension. This shape is called the point spread function (PSF) of the microscope imaging system. Since any fluorescence image is made up of a large number of such small fluorescent light sources, the image is said to be "convolved by the point spread function". The mathematically modeled PSF of a terahertz laser pulsed imaging system is shown in the figure below.

[Figure: Mathematically modeled Point Spread Function of a pulsed THz laser imaging system.[15]]

The output of an imaging system can therefore be described as a convolution of the true object distribution with the point spread function, plus noise:

g(x, y) = (f ∗ PSF)(x, y) + n(x, y),

where f is the unblurred object distribution, ∗ denotes convolution, and n is the additive noise.[16] Knowing this point spread function[17] means that it is possible to reverse this
process to a certain extent by computer-based methods commonly known as deconvolution microscopy.[18]
There are various algorithms available for 2D or 3D deconvolution. They can be roughly classified in
nonrestorative and restorative methods. While the nonrestorative methods can improve contrast by removing
out-of-focus light from focal planes, only the restorative methods can actually reassign light to its proper place
of origin. Processing fluorescent images in this manner can be an advantage over directly acquiring images
without out-of-focus light, such as images from confocal microscopy, because light signals otherwise
eliminated become useful information. For 3D deconvolution, one typically provides a series of images taken
from different focal planes (called a Z-stack) plus the knowledge of the PSF, which can be derived either
experimentally or theoretically from knowing all contributing parameters of the microscope.
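As a concrete illustration of the restorative idea described above, the following minimal sketch implements the classic Richardson-Lucy iteration in Python for a single 2-D plane. The Gaussian PSF and the helper name richardson_lucy are illustrative assumptions, not taken from any particular deconvolution package; a real 3-D deconvolution would operate on a Z-stack with a measured or theoretical 3-D PSF.

    # Minimal Richardson-Lucy deconvolution sketch for one 2-D image plane.
    # The Gaussian PSF below is a stand-in for a measured or modeled PSF.
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=30):
        """Iteratively reassign blurred light toward its place of origin."""
        psf = psf / psf.sum()                 # normalized PSF
        psf_mirror = psf[::-1, ::-1]          # flipped PSF for the correction step
        estimate = np.full_like(image, image.mean(), dtype=float)
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)   # avoid division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Synthetic example: blur a random "object" with the PSF, then restore it.
    y, x = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    blurred = fftconvolve(np.random.rand(128, 128), psf / psf.sum(), mode="same")
    restored = richardson_lucy(blurred, psf, iterations=25)

In practice the synthetic arrays would be replaced by an acquired Z-stack and the experimentally or theoretically derived PSF mentioned above.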

Sub-diffraction techniques

A multitude of super-resolution microscopy techniques have been developed in recent times which circumvent the diffraction barrier. This is mostly achieved by imaging a sufficiently static sample multiple times and either modifying the excitation light or observing stochastic changes in the image.

[Figure: Example of super-resolution microscopy. Image of Her3 and Her2, target of the breast cancer drug Trastuzumab, within a cancer cell.]

Knowledge of and chemical control over fluorophore photophysics is at the core of these techniques, by which
resolutions of ~20 nanometers are regularly obtained.[19][20]
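For the single-molecule localization branch of these methods, a commonly used first-order estimate is that the localization precision scales as the PSF width divided by the square root of the number of detected photons. The numbers below are assumed, illustrative values; the ~20 nm image resolution quoted above is additionally limited by factors such as labeling density, background and drift, so it is coarser than the single-emitter precision estimated here.

    # First-order localization-precision estimate for a single fluorophore,
    # assuming a Gaussian-approximated PSF (sigma ~ 0.21*lambda/NA) and ignoring
    # pixelation and background; all numbers are illustrative assumptions.
    import math

    wavelength_nm = 520.0        # emission wavelength
    numerical_aperture = 1.4     # oil-immersion objective
    photons = 1000               # photons detected from one molecule

    psf_sigma = 0.21 * wavelength_nm / numerical_aperture   # ~78 nm
    precision = psf_sigma / math.sqrt(photons)              # ~2.5 nm

    print(f"PSF sigma ~ {psf_sigma:.0f} nm, localization precision ~ {precision:.1f} nm")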

Serial time-encoded amplified microscopy

Serial time encoded amplified microscopy (STEAM) is an imaging method that provides ultrafast shutter speed
and frame rate, by using optical image amplification to circumvent the fundamental trade-off between
sensitivity and speed, and a single-pixel photodetector to eliminate the need for a detector array and readout
time limitations.[21] The method is at least 1000 times faster than the state-of-the-art CCD and CMOS cameras.
Consequently, it is potentially useful for a broad range of scientific, industrial, and biomedical applications that
require high image acquisition rates, including real-time diagnosis and evaluation of shockwaves,
microfluidics, MEMS, and laser surgery.

Extensions

Most modern instruments provide simple solutions for micro-photography and image recording electronically.
However such capabilities are not always present and the more experienced microscopist will, in many cases,
still prefer a hand drawn image to a photograph. This is because a microscopist with knowledge of the subject
can accurately convert a three-dimensional image into a precise two-dimensional drawing. In a photograph or
other image capture system however, only one thin plane is ever in good focus.

The creation of careful and accurate micrographs requires a microscopical technique using a monocular
eyepiece. It is essential that both eyes are open and that the eye that is not observing down the microscope is
instead concentrated on a sheet of paper on the bench beside the microscope. With practice, and without
moving the head or eyes, it is possible to accurately record the observed details by tracing round the observed
shapes by simultaneously "seeing" the pencil point in the microscopical image.

Practicing this technique also establishes good general microscopical technique. It is always less tiring to
observe with the microscope focused so that the image is seen at infinity and with both eyes open at all times.

Other enhancements

Microspectroscopy: spectroscopy with a microscope

X-ray

Resolution depends on the wavelength of the light used. Electron microscopy, developed since the 1930s, uses electron beams instead of light; because of the much smaller wavelength of the electron beam, the resolution is far higher.

Though less common, X-ray microscopy has also been developed since the late 1940s. The resolution of X-ray
microscopy lies between that of light microscopy and electron microscopy.

Electron microscopy
Until the invention of sub-diffraction microscopy, the wavelength of the light limited the resolution of
traditional microscopy to around 0.2 micrometers. In order to gain higher resolution, an electron beam with a far smaller wavelength is used in electron microscopes.
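The 0.2 micrometer figure quoted above follows directly from the Abbe criterion d = λ/(2 NA); a quick, illustrative check with assumed values for the wavelength and numerical aperture is sketched here.

    # Abbe diffraction limit d = lambda / (2*NA) for a good visible-light objective.
    wavelength_nm = 550.0       # green light (assumed)
    numerical_aperture = 1.4    # oil-immersion objective (assumed)

    d_nm = wavelength_nm / (2 * numerical_aperture)
    print(f"diffraction-limited resolution ~ {d_nm:.0f} nm")   # ~196 nm, i.e. about 0.2 micrometers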

Transmission electron microscopy (TEM) is quite similar to the compound light microscope, by sending
an electron beam through a very thin slice of the specimen. The resolution limit in 2005 was around 0.05
nanometer and has not increased appreciably since that time.
Scanning electron microscopy (SEM) visualizes details on the surfaces of specimens and gives a very
nice 3D view. It gives results much like those of the stereo light microscope. The best resolution for SEM
in 2011 was 0.4 nanometer.

Electron microscopes equipped for X-ray spectroscopy can provide qualitative and quantitative elemental
analysis.

Scanning probe microscopy


This is a sub-diffraction technique. Examples of scanning probe microscopes are the atomic force microscope
(AFM), the Scanning tunneling microscope, the photonic force microscope and the recurrence tracking
microscope. All such methods use the physical contact of a solid probe tip to scan the surface of an object,
which is supposed to be almost flat.

Ultrasonic force

Ultrasonic Force Microscopy (UFM) has been developed in order to improve the details and image contrast on
"flat" areas of interest where AFM images are limited in contrast. The combination of AFM-UFM allows a near
field acoustic microscopic image to be generated. The AFM tip is used to detect the ultrasonic waves and
overcomes the limitation of wavelength that occurs in acoustic microscopy. By using the elastic changes under
the AFM tip, an image of much greater detail than the AFM topography can be generated.

Ultrasonic force microscopy allows the local mapping of elasticity in atomic force microscopy by the
application of ultrasonic vibration to the cantilever or sample. In an attempt to analyze the results of ultrasonic
force microscopy in a quantitative fashion, a force-distance curve measurement is done with ultrasonic
vibration applied to the cantilever base, and the results are compared with a model of the cantilever dynamics
and tip-sample interaction based on the finite-difference technique.

Ultraviolet microscopy
Ultraviolet microscopes have two main purposes. The first is to utilize the shorter wavelength of ultraviolet
electromagnetic energy to improve the image resolution beyond that of the diffraction limit of standard optical
microscopes. This technique is used for non-destructive inspection of devices with very small features such as
those found in modern semiconductors. The second application for UV microscopes is contrast enhancement
where the response of individual samples is enhanced, relative to their surrounding, due to the interaction of
light with the molecules within the sample itself. One example is in the growth of protein crystals. Protein
crystals are formed in salt solutions. As salt and protein crystals are both formed in the growth process, and
both are commonly transparent to the human eye, they cannot be differentiated with a standard optical
microscope. As the tryptophan of protein absorbs light at 280 nm, imaging with a UV microscope with 280 nm
bandpass filters makes it simple to differentiate between the two types of crystals. The protein crystals appear
dark while the salt crystals are transparent.

Infrared microscopy
The term infrared microscopy refers to microscopy performed at infrared wavelengths. In the typical instrument
configuration a Fourier Transform Infrared Spectrometer (FTIR) is combined with an optical microscope and
an infrared detector. The infrared detector can be a single point detector, a linear array or a 2D focal plane
array. The FTIR provides the ability to perform chemical analysis via infrared spectroscopy and the microscope
and point or array detector enable this chemical analysis to be spatially resolved, i.e. performed at different
regions of the sample. As such, the technique is also called infrared microspectroscopy [22][23] (an alternative
architecture involves the combination of a tuneable infrared light source and single point detector on a flying
objective). This technique is frequently used for infrared chemical imaging, where the image contrast is
determined by the response of individual sample regions to particular IR wavelengths selected by the user,
usually specific IR absorption bands and associated molecular resonances. A key limitation of conventional
infrared microspectroscopy is that the spatial resolution is diffraction-limited. Specifically the spatial resolution
is limited to a figure related to the wavelength of the light. For practical IR microscopes, the spatial resolution
is limited to 1-3X the wavelength, depending on the specific technique and instrument used. For mid-IR
wavelengths, this sets a practical spatial resolution limit of ~3-30 μm.
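The ~3-30 μm figure is simply this 1-3x factor applied across an assumed mid-IR working range of roughly 3-10 μm, as the short check below illustrates.

    # Quick check of the practical spatial-resolution range quoted above,
    # assuming a mid-IR working range of roughly 3-10 micrometers.
    mid_ir_um = (3.0, 10.0)    # assumed wavelength range
    factors = (1.0, 3.0)       # resolution is 1-3x the wavelength

    best = factors[0] * mid_ir_um[0]     # ~3 um at short wavelengths
    worst = factors[1] * mid_ir_um[1]    # ~30 um at long wavelengths
    print(f"practical spatial resolution: ~{best:.0f} to ~{worst:.0f} micrometers")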

IR versions of sub-diffraction microscopy (see above) also exist.[22][23] These include IR NSOM,[24]
photothermal microspectroscopy, and atomic force microscope based infrared spectroscopy (AFM-IR).

Digital holographic microscopy


In digital holographic microscopy (DHM), interfering wave fronts from a coherent (monochromatic) light-source are recorded on a sensor. The image is digitally reconstructed by a computer from the recorded hologram. Besides the ordinary bright field image, a phase shift image is created.

DHM can operate both in reflection and transmission mode. In reflection mode, the phase shift image provides a relative distance measurement and thus represents a topography map of the reflecting surface. In transmission mode, the phase shift image provides a label-free quantitative measurement of the optical thickness of the specimen. Phase shift images of biological cells are very similar to images of stained cells and have successfully been analyzed by high content analysis software.

A unique feature of DHM is the ability to adjust focus after the image is recorded, since all focus planes are recorded simultaneously by the hologram. This feature makes it possible to image moving particles in a volume or to rapidly scan a surface. Another attractive feature is DHM's ability to use low cost optics by correcting optical aberrations by software.

[Figure: Human cells imaged by DHM phase shift (left) and phase contrast microscopy (right).]
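As a rough illustration of how a transmission phase-shift image maps to optical thickness (Δφ = 2π·OPD/λ, so OPD = Δφ·λ/2π), the sketch below converts a single measured phase value into a thickness estimate; the laser wavelength and refractive indices are assumed, illustrative values.

    # Converting a DHM transmission phase shift into an optical-thickness estimate.
    # Wavelength and refractive indices are assumed, illustrative values.
    import math

    wavelength_nm = 633.0             # He-Ne laser line
    n_cell, n_medium = 1.38, 1.335    # typical cell and culture-medium indices

    phase_shift_rad = 2.0                                        # measured at one pixel
    opd_nm = phase_shift_rad * wavelength_nm / (2 * math.pi)     # optical path difference
    thickness_um = opd_nm / (n_cell - n_medium) / 1000.0         # physical thickness

    print(f"OPD ~ {opd_nm:.0f} nm, estimated cell thickness ~ {thickness_um:.1f} um")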

Digital pathology (virtual microscopy)


Digital pathology is an image-based information environment enabled by computer technology that allows for
the management of information generated from a digital slide. Digital pathology is enabled in part by virtual
microscopy, which is the practice of converting glass slides into digital slides that can be viewed, managed, and
analyzed.

Laser microscopy
Laser microscopy is a rapidly growing field that uses laser illumination sources in various forms of
microscopy.[25] For instance, laser microscopy focused on biological applications uses ultrashort pulse lasers, in
a number of techniques labeled as nonlinear microscopy, saturation microscopy, and two-photon excitation
microscopy.[26]

High-intensity, short-pulse laboratory x-ray lasers have been under development for several years. When this
technology comes to fruition, it will be possible to obtain magnified three-dimensional images of elementary
biological structures in the living state at a precisely defined instant. For optimum contrast between water and
protein and for best sensitivity and resolution, the laser should be tuned near the nitrogen line at about 0.3
nanometers. Resolution will be limited mainly by the hydrodynamic expansion that occurs while the necessary
number of photons is being registered.[27] Thus, while the specimen is destroyed by the exposure, its
configuration can be captured before it explodes.[28][29][30][31][32][33]

Scientists have been working on practical designs and prototypes for x-ray holographic microscopes, despite
the prolonged development of the appropriate laser.[34][35][36][37][38][39][40][41]

Amateur microscopy
Amateur Microscopy is the investigation and observation of biological and non-biological specimens for
recreational purposes. Collectors of minerals, insects, seashells, and plants may use microscopes as tools to
uncover features that help them classify their collected items. Other amateurs may be interested in observing
the life found in pond water and of other samples. Microscopes may also prove useful for the water quality
assessment for people that keep a home aquarium. Photographic documentation and drawing of the microscopic
images are additional tasks that augment the spectrum of tasks of the amateur. There are even competitions for
photomicrograph art. Participants of this pastime may either use commercially prepared microscopic slides or
engage in the task of specimen preparation.

While microscopy is a central tool in the documentation of biological specimens, it is, in general, insufficient to
justify the description of a new species based on microscopic investigations alone. Often genetic and
biochemical tests are necessary to confirm the discovery of a new species. A laboratory and access to academic
literature is a necessity, which is specialized and, in general, not available to amateurs. There is, however, one
huge advantage that amateurs have over professionals: time to explore their surroundings. Often, advanced
amateurs team up with professionals to validate their findings and (possibly) describe new species.

In the late 1800s, amateur microscopy became a popular hobby in the United States and Europe. Several
'professional amateurs' were being paid for their sampling trips and microscopic explorations by
philanthropists, to keep them amused on the Sunday afternoon (e.g., the diatom specialist A. Grunow, being
paid by (among others) a Belgian industrialist). Professor John Phin published "Practical Hints on the Selection
and Use of the Microscope (Second Edition, 1878)," and was also the editor of the "American Journal of
Microscopy."

Examples of amateur microscopy images:

"house bee" Mouth Rice Stem cs 400X Rabbit Testis 100X Fern Prothallium 400X
100X

Application in forensic science


Microscopy has many applications in the forensic sciences; it provides precision, quality, accuracy, and reproducibility of results.[42] These applications are almost limitless, owing to the ability of the microscope to detect, resolve and image the smallest items of evidence, often without any alteration or destruction. The microscope is used to identify and compare fibers, hairs, soils, dust, etc.[43]

The aim of any microscope is to magnify images of a small object and to reveal fine details. In forensic work, the type of specimen, the information one wishes to obtain from it, and the type of microscope chosen for the task determine whether sample preparation is required. For some evidence, such as ink lines, blood stains or bullets, no treatment is required and the evidence can be examined directly with the appropriate microscope without any form of sample preparation; for traces of particulate matter, however, sample preparation must be done before microscopical examination occurs.[43]

A variety of microscopes are used in forensic science laboratories. Light microscopes are the most used in forensic work; these microscopes use photons to form images. The types most applicable for examining forensic specimens, as mentioned before, are as follows:[44]

1. The compound microscope

2. The comparison microscope

3. The stereoscopic microscope

4. The polarizing microscope

5. The micro spectrophotometer

A completely different approach to microscopy is the electron microscope (EM), in which a beam of electrons is focused onto the desired specimen, with a design capable of a magnifying power of up to 100,000 times. Electron microscopes are also used, but far less often than light microscopes. The transmission electron microscope (TEM) is more difficult to use and requires more sample preparation than the scanning electron microscope (SEM), and has thus found very few applications in forensic science.[43]

This diversity of the types of microscopes in forensic applications comes mainly from their magnification ranges, which are roughly 1-1,200X, 50-30,000X and 500-250,000X for optical microscopy, SEM and TEM respectively.[44]

See also
Acronyms in microscopy
Digital microscope
Digital pathology
Imaging cycler microscopy
Infinity correction
Interferometric microscopy
Köhler illumination
List of microscopists
Microdensitometer
Timeline of microscope technology
Two-photon excitation microscopy
USB microscope
References
1. Lane, Nick (6 March 2015). "The Unseen World: Reflections on Leeuwenhoek (1677) 'Concerning Little Animal'."
Philos Trans R Soc Lond B Biol Sci. 2015 Apr; 370 (1666): 20140344. [doi:10.1098/rstb.2014.0344]
2. Abramowitz M, Davidson MW (2007). "Introduction to Microscopy" (http://micro.magnet.fsu.edu/primer/anatomy/i
ntroduction.html). Molecular Expressions. Retrieved 2007-08-22.
3. Abramowitz M, Davidson MW (2003-08-01). "Darkfield Illumination" (http://micro.magnet.fsu.edu/primer/techniqu
es/darkfield.html). Retrieved 2008-10-21.
4. Abramowitz M, Davidson MW (2003-08-01). "Rheinberg Illumination" (http://micro.magnet.fsu.edu/primer/techniq
ues/rheinberg.html). Retrieved 2008-10-21.
5. Gill, Harindarpal (January 2010). "Evaluating the efficacy of tryptophan fluorescence and absorbance as a selection
tool for identifying protein crystals" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2833058). Acta
Crystallographica. F66: 364–372. PMC 2833058 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2833058)  .
PMID 20208182 (https://www.ncbi.nlm.nih.gov/pubmed/20208182). doi:10.1107/S1744309110002022 (https://doi.o
rg/10.1107%2FS1744309110002022).

6. Voie, A.H. (1993). "Imaging the intact guinea pig tympanic bulla by orthogonal-plane fluorescence optical sectioning
microscopy" (http://www.sciencedirect.com/science/article/pii/S0378595502004938). Hearing Research. 71 (1–2):
119–128. ISSN 0378-5955 (https://www.worldcat.org/issn/0378-5955). doi:10.1016/S0378-5955(02)00493-8 (http
s://doi.org/10.1016%2FS0378-5955%2802%2900493-8).
7. Greger, K.; J. Swoger; E. H. K. Stelzer (2007). "Basic building units and properties of a fluorescence single plane
illumination microscope" (http://link.aip.org/link/RSINAK/v78/i2/p023705/s1&Agg=doi). Review of Scientific
Instruments. 78 (2): 023705. Bibcode:2007RScI...78b3705G (http://adsabs.harvard.edu/abs/2007RScI...78b3705G).
ISSN 0034-6748 (https://www.worldcat.org/issn/0034-6748). PMID 17578115 (https://www.ncbi.nlm.nih.gov/pubm
ed/17578115). doi:10.1063/1.2428277 (https://doi.org/10.1063%2F1.2428277). Retrieved 2011-10-16.
8. Buytaert, J.A.N.; E. Descamps; D. Adriaens; J.J.J. Dirckx (2012). "The OPFOS Microscopy Family: High-
Resolution Optical Sectioning of Biomedical Specimens" (http://www.hindawi.com/journals/ari/2012/206238/).
Anatomy Research International. 2012: 206238. ISSN 2090-2743 (https://www.worldcat.org/issn/2090-2743).
arXiv:1106.3162 (https://arxiv.org/abs/1106.3162)  . doi:10.1155/2012/206238 (https://doi.org/10.1155%2F2012%2
F206238).
9. F. J. Duarte, in High Power Dye Lasers (Springer-Verlag, Berlin,1991) Chapter 2.
10. Duarte FJ (1993), Electro-optical interferometric microdensitometer system, US Patent 5255069 (http://www.patentg
enius.com/patent/5255069.html).
11. Peterson, Mark D.; Hayes, Patrick L.; Martinez, Imee Su; Cass, Laura C.; Achtyl, Jennifer L.; Weiss, Emily A.;
Geiger, Franz M. (2011-05-01). "Second harmonic generation imaging with a kHz amplifier [Invited]" (https://www.
osapublishing.org/ome/abstract.cfm?uri=ome-1-1-57). Optical Materials Express. 1 (1): 57.
doi:10.1364/ome.1.000057 (https://doi.org/10.1364%2Fome.1.000057).
12. Macias-Romero, Carlos; Didier, Marie E. P.; Jourdain, Pascal; Marquet, Pierre; Magistretti, Pierre; Tarun, Orly B.;
Zubkovs, Vitalijs; Radenovic, Aleksandra; Roke, Sylvie (2014-12-15). "High throughput second harmonic imaging
for label-free biological applications" (https://www.osapublishing.org/oe/abstract.cfm?uri=oe-22-25-31102). Optics
Express. 22 (25): 31102. doi:10.1364/oe.22.031102 (https://doi.org/10.1364%2Foe.22.031102).
13. Cheng, Li-Chung; Chang, Chia-Yuan; Lin, Chun-Yu; Cho, Keng-Chi; Yen, Wei-Chung; Chang, Nan-Shan; Xu,
Chris; Dong, Chen Yuan; Chen, Shean-Jen (2012-04-09). "Spatiotemporal focusing-based widefield multiphoton
microscopy for fast optical sectioning" (https://www.osapublishing.org/abstract.cfm?URI=oe-20-8-8939). Optics
Express. 20 (8): 8939. doi:10.1364/oe.20.008939 (https://doi.org/10.1364%2Foe.20.008939).
14. Oron, Dan; Tal, Eran; Silberberg, Yaron (2005-03-07). "Scanningless depth-resolved microscopy" (https://www.osap
ublishing.org/abstract.cfm?URI=oe-13-5-1468). Optics Express. 13 (5): 1468. doi:10.1364/opex.13.001468 (https://d
oi.org/10.1364%2Fopex.13.001468).
15. Ahi, Kiarash; Anwar, Mehdi (2016-05-26). "Modeling of terahertz images based on x-ray images: a novel approach
for verification of terahertz images and identification of objects with fine details beyond terahertz resolution" (http
s://www.researchgate.net/publication/303563365_Modeling_of_terahertz_images_based_on_x-ray_images_a_novel
_approach_for_verification_of_terahertz_images_and_identification_of_objects_with_fine_details_beyond_terahertz
_resolution). Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry.
doi:10.1117/12.2228685 (https://doi.org/10.1117%2F12.2228685).
16. Solomon, Chris (2010). Fundamentals of Digital Image Processing. John Wiley & Sons, Ltd. ISBN 978 0 470 84473
1.
17. Nasse M. J.; Woehl J. C. (2010). "Realistic modeling of the illumination point spread function in confocal scanning
optical microscopy". J. Opt. Soc. Am. A. 27 (2): 295–302. PMID 20126241 (https://www.ncbi.nlm.nih.gov/pubmed/2
0126241). doi:10.1364/JOSAA.27.000295 (https://doi.org/10.1364%2FJOSAA.27.000295).
18. Wallace W, Schaefer LH, Swedlow JR (2001). "A workingperson's guide to deconvolution in light microscopy".
BioTechniques. 31 (5): 1076–8, 1080, 1082 passim. PMID 11730015 (https://www.ncbi.nlm.nih.gov/pubmed/117300
15).
19. Kaufmann Rainer; Müller Patrick; Hildenbrand Georg; Hausmann Michael; Cremer Christoph (2010). "Analysis of
Her2/neu membrane protein clusters in different types of breast cancer cells using localization microscopy". Journal
of Microscopy. 242 (1): 46–54. PMID 21118230 (https://www.ncbi.nlm.nih.gov/pubmed/21118230).
doi:10.1111/j.1365-2818.2010.03436.x (https://doi.org/10.1111%2Fj.1365-2818.2010.03436.x).
20. van de Linde S.; Wolter S.; Sauer S. (2011). "Single-molecule Photoswitching and Localization". Aust. J. Chem. 64:
503–511. doi:10.1071/CH10284 (https://doi.org/10.1071%2FCH10284).
21. K. Goda; K. K. Tsia; B. Jalali (2009). "Serial time-encoded amplified imaging for real-time observation of fast
dynamic phenomena". Nature. 458 (7242): 1145–9. Bibcode:2009Natur.458.1145G (http://adsabs.harvard.edu/abs/20
09Natur.458.1145G). PMID 19407796 (https://www.ncbi.nlm.nih.gov/pubmed/19407796). doi:10.1038/nature07980
(https://doi.org/10.1038%2Fnature07980).
22. H M Pollock and S G Kazarian, Microspectroscopy in the Mid-Infrared, in Encyclopedia of Analytical Chemistry
(Robert A. Meyers, Ed, 1-26 (2014), John Wiley & Sons Ltd,
23. http://onlinelibrary.wiley.com/doi/10.1002/9780470027318.a5609.pub2/abstract


24. H M Pollock and D A Smith, The use of near-field probes for vibrational spectroscopy and photothermal imaging, in
Handbook of vibrational spectroscopy, J.M. Chalmers and P.R. Griffiths (eds), John Wiley & Sons Ltd, Vol. 2, pp.
1472 - 1492 (2002)
25. Duarte FJ (2016). "Tunable laser microscopy". In Duarte FJ (Ed.). Tunable Laser Applications (3rd ed.). Boca Raton:
CRC Press. pp. 315–328. ISBN 9781482261066.
26. Thomas JL, Rudolph W (2008). "Biological Microscopy with Ultrashort Laser Pulses". In Duarte FJ. Tunable Laser
Applications (2nd ed.). Boca Raton: CRC Press. pp. 245–80 (https://books.google.com/books?id=FCDPZ7e0PEgC&
pg=PA245#v=onepage&q&f=false). ISBN 1-4200-6009-0.
27. Solem, J. C. (1983). "X-ray imaging on biological specimens" (http://www.osti.gov/scitech/biblio/5998195).
Proceedings of International Conference on Lasers '83: 635–640.
28. Solem, J. C. (1982). "High-intensity x-ray holography: An approach to high-resolution snapshot imaging of
biological specimens" (http://www.osti.gov/scitech/biblio/7056325-nr5URM/). Los Alamos National Laboratory
Technical Report LA-9508-MS.
29. Solem, J. C.; Baldwin, G. C. (1982). "Microholography of living organisms" (http://www.sciencemag.org/content/21
8/4569/229.abstract). Science. 218 (4569): 229–235. PMID 17838608 (https://www.ncbi.nlm.nih.gov/pubmed/17838
608). doi:10.1126/science.218.4569.229 (https://doi.org/10.1126%2Fscience.218.4569.229).
30. Solem, J. C.; Chapline, G. F. (1984). "X-ray biomicroholography" (http://opticalengineering.spiedigitallibrary.org/art
icle.aspx?articleid=1244496). Optical Engineering. 23 (2): 193.
31. Solem, J. C. (1985). "Microholography". McGraw-Hill Encyclopedia of Science and Technology.
32. Solem, J. C. (1984). "X-ray holography of biological specimens" (http://www.osti.gov/scitech/biblio/6314558).
Proceedings of Ninth International Congress on Photobiology, Philadelphia, PA, USA, 3 July 1984 (LA-UR-84-
3340; CONF-840783-3): 19.
33. Solem, J. C. (1986). "Imaging biological specimens with high-intensity soft X-rays" (https://www.osapublishing.org/
josab/abstract.cfm?uri=josab-3-11-1551). Journal of the Optical Society of America B. 3 (11): 1551–1565.
doi:10.1364/josab.3.001551 (https://doi.org/10.1364%2Fjosab.3.001551).
34. Haddad, W. S.; Solem, J. C.; Cullen, D.; Boyer, K.; Rhodes, C. K. (1987). "A description of the theory and apparatus
for digital reconstruction of Fourier transform holograms", Proceedings of Electronics Imaging '87, Nov. 2-5, 1987,
Boston; Journal of Electronic Imaging, (Institute for Graphic Communication, Inc., Boston), p. 693.
35. Haddad, W. S.; Cullen, D.; Boyer, K.; Rhodes, C. K.; Solem, J. C.; Weinstein, R. S. (1988). "Design for a Fourier-
transform holographic microscope" (http://link.springer.com/chapter/10.1007/978-3-540-39246-0_49). Proceedings
of International Symposium on X-Ray Microscopy II, Springer Series in Optical Sciences. 56: 284–287.
doi:10.1007/978-3-540-39246-0_49 (https://doi.org/10.1007%2F978-3-540-39246-0_49).
36. Haddad, W. S.; Solem, J. C.; Cullen, D.; Boyer, K.; Rhodes, C. K. (1988). "Design for a Fourier-transform
holographic microscope incorporating digital image reconstruction" (https://www.osapublishing.org/abstract.cfm?uri
=CLEO-1988-WS4). Proceedings of CLEO '88 Conference on Lasers and Electro-Optics, Anaheim, CA, April, 1988;
Optical Society of America, OSA Technical Digest. 7: WS4.
37. Haddad, W. S.; Cullen, D.; Solem, J. C.; Boyer, K.; Rhodes, C. K. (1988). "X-ray Fourier-transform holographic
microscope" (https://inis.iaea.org/search/search.aspx?orig_q=RN:21002870). Proceedings of the OSA Topical
Meeting on Short Wavelength Coherent Radiation: Generation and Applications, September 26–29, 1988, Cape Cod,
MA., Falcone, R.; Kirz, J.; eds. (Optical Society of America, Washington, DC): 284–289.
38. Solem, J. C.; Boyer, K.; Haddad, W. S.; Rhodes, C. K. (1990). Prosnitz, D.; ed. SPIE. "Prospects for X-ray
holography with free-electron lasers" (http://spie.org/Publications/Proceedings/Paper/10.1117/12.18609). Free
Electron Lasers and Applications. 1227: 105–116.
39. Haddad, W. S.; Cullen, D.; Solem, J. C.; Longworth, J. W.; McPherson, L. A.; Boyer, K.; Rhodes, C. K. (1991).
"Fourier-transform holographic microscope" (http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=960
724). Proceedings of SPIE Electronic Imaging '91, San Jose, CA; International Society for Optics and Photonics.
1448: 81–88.
40. Haddad, W. S.; Cullen, D.; Solem, J. C.; Longworth, J.; McPherson, A.; Boyer, K.; Rhodes, C. K. (1992). "Fourier-
transform holographic microscope" (https://www.osapublishing.org/ao/abstract.cfm?id=39783). Applied Optics. 31:
4973–4978. doi:10.1364/ao.31.004973 (https://doi.org/10.1364%2Fao.31.004973).
41. Boyer, K.; Solem, J. C.; Longworth, J.; Borisov, A.; Rhodes, C. K. (1996). "Biomedical three-dimensional
holographic microimaging at visible, ultraviolet and X-ray wavelengths" (http://www.nature.com/nm/journal/v2/n8/a
bs/nm0896-939.html). Nature Medicine. 2 (8): 939–941. doi:10.1038/nm0896-939 (https://doi.org/10.1038%2Fnm08
96-939).
42. Kotrlý, M., New Possibilities of Using Microscopic Techniques in Forensic Field, Microscopy Society of America, 2015, 21: 1365-1366.
43. (PDF) web.alfredstate.edu/benslewd/.../Microscopy%20Handout.pdf


44. Basu, S.; Millette, J. R. Electron Microscopy in Forensic Occupational and Environmental Health Sciences [Online]; Plenum Press: New York, 1986.

Further reading

Pluta, Maksymilian (1988). Advanced Light Microscopy vol. 1 Principles and Basic Properties. Elsevier. ISBN 978-0-444-98939-0.
Pluta, Maksymilian (1989). Advanced Light Microscopy vol. 2 Specialised Methods. Elsevier. ISBN 978-0-444-98918-5.
Bradbury, S.; Bracegirdle, B. (1998). Introduction to Light Microscopy. BIOS Scientific Publishers. ISBN 978-0-387-91515-9.
Inoue, Shinya (1986). Video Microscopy. Plenum Press. ISBN 978-0-306-42120-4.
Cremer, C; Cremer, T (1978). "Considerations on a laser-scanning-microscope with high resolution and depth of field" (http://www.kip.uni-heidelberg.de/AG_Cremer/pdf-files/Cremer_Micros_Acta_1978.pdf) (PDF). Microscopica acta. 81 (1): 31–44. PMID 713859 (https://www.ncbi.nlm.nih.gov/pubmed/713859). Theoretical basis of super resolution 4Pi microscopy and design of a confocal laser scanning fluorescence microscope.
Willis, Randall C (2007). "Portraits of life, one molecule at a time" (http://pubs.acs.org/subscribe/journals/ancham/79/i05/pdf/0307feature_willis.pdf) (PDF). Analytical Chemistry. 79 (5): 1785–8. PMID 17375393 (https://www.ncbi.nlm.nih.gov/pubmed/17375393). doi:10.1021/ac0718795. A feature article on sub-diffraction microscopy from the March 1, 2007 issue of Analytical Chemistry.

External links
General

Microscopy glossary (http://www.3dham.com/microgallery/glossary.htm), Common terms used in amateur light microscopy.
Nikon MicroscopyU (http://www.microscopyu.com/) Extensive information on light microscopy
Olympus Microscopy (http://www.olympusmicro.com/index.html) Microscopy Resource center
Andor Microscopy Techniques (http://www.andor.com/microscopy_systems/techniques/) - Various techniques used in microscopy.
Carl Zeiss "Microscopy from the very beginning" (http://www.zeiss.de/C1256B5E0047FF3F?Open), a step by step tutorial into the basics of microscopy.
Microscopy in Detail (https://web.archive.org/web/20011214072652/http://www.biologie.uni-hamburg.de/b-online/e03/03.htm) - A resource with many illustrations elaborating the most common microscopy techniques
WITec SNOM System (http://witec.de/products/snom/) - NSOM/SNOM and Hybrid Microscopy techniques in combination with AFM, RAMAN, Confocal, Dark-field, DIC & Fluorescence Microscopy techniques.
Manawatu Microscopy (http://confocal-manawatu.pbworks.com/) - first known collaboration environment for Microscopy and Image Analysis.
Audio microscope glossary (http://www.histology-world.com/microscope/audiomicroscope/audiomicroscope.htm)

Techniques

Ratio-metric Imaging Applications For Microscopes (https://web.archive.org/web/20081121044712/http://www.pti-nj.com/EasyRatio/EasyRatioPro-Applications.html) Examples of Ratiometric Imaging Work on a Microscope
Interactive Fluorescence Dye and Filter Database (https://www.micro-shop.zeiss.com/?s=2525647761b33&l=en&p=us&f=f) Carl Zeiss Interactive Fluorescence Dye and Filter Database.
Analysis tools for lifetime and spectral imaging (http://www.spechron.com/) Analysis tools for lifetime and spectral imaging.
New approaches to microscopy (http://spie.org/x112606.xml) Eric Betzig: Beyond the Nobel Prize - New approaches to microscopy.

Electron microscope
From Wikipedia, the free encyclopedia

An electron microscope is a microscope that uses a beam of accelerated electrons as a source of illumination. As the wavelength of an electron can be up to 100,000 times shorter than that of visible light photons, electron microscopes have a higher resolving power than light microscopes and can reveal the structure of smaller objects. A scanning transmission electron microscope has achieved better than 50 pm resolution in annular dark-field imaging mode[1] and magnifications of up to about 10,000,000x whereas most light microscopes are limited by diffraction to about 200 nm resolution and useful magnifications below 2000x.
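The resolving-power claim can be made concrete with the relativistic de Broglie wavelength of an electron accelerated through a typical TEM voltage; the 100 kV value below is an assumed, representative figure.

    # Relativistic de Broglie wavelength of an electron accelerated through 100 kV.
    import math

    h = 6.626e-34     # Planck constant (J s)
    m0 = 9.109e-31    # electron rest mass (kg)
    e = 1.602e-19     # elementary charge (C)
    c = 2.998e8       # speed of light (m/s)

    V = 100e3         # accelerating voltage (V), a typical TEM value (assumed)
    E = e * V         # kinetic energy gained (J)

    wavelength = h / math.sqrt(2 * m0 * E * (1 + E / (2 * m0 * c**2)))
    print(f"electron wavelength at 100 kV ~ {wavelength*1e12:.1f} pm")   # ~3.7 pm

    # Compared with ~550 nm visible light this is shorter by a factor of ~1.5e5,
    # consistent with the order of magnitude quoted above.
    print(f"ratio to 550 nm light ~ {550e-9 / wavelength:,.0f}")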

Electron microscopes have electron optical lens systems that are analogous to the glass lenses of an optical light microscope.

Electron microscopes are used to investigate the ultrastructure of a wide range of biological and inorganic specimens including microorganisms, cells, large molecules, biopsy samples, metals, and crystals. Industrially, electron microscopes are often used for quality control and failure analysis. Modern electron microscopes produce electron micrographs using specialized digital cameras and frame grabbers to capture the image.

[Figure: A modern transmission electron microscope]

Contents
1 History
2 Types
2.1 Transmission electron microscope (TEM)
2.2 Scanning electron microscope (SEM)
2.3 Color
2.4 Reflection electron microscope (REM)
2.5 Scanning transmission electron microscope (STEM)
3 Sample preparation
4 Disadvantages
5 Applications
6 See also
7 References
8 External links
8.1 General
8.2 History
8.3 Other

[Figure: Diagram of a transmission electron microscope]

History
The first electromagnetic lens was developed in 1926 by Hans Busch.[2]

According to Dennis Gabor, the physicist Leó Szilárd tried in 1928 to convince Busch to build an electron
microscope, for which he had filed a patent.[3]

The physicist Ernst Ruska and the electrical engineer Max Knoll
constructed the prototype electron microscope in 1931, capable of four-
hundred-power magnification; the apparatus was the first demonstration
of the principles of electron microscopy.[4] Two years later, in 1933,
Ruska built an electron microscope that exceeded the resolution
attainable with an optical (light) microscope.[4] Moreover, Reinhold
Rudenberg, the scientific director of Siemens-Schuckertwerke, obtained
the patent for the electron microscope in May 1931.

In 1932, Ernst Lubcke of Siemens & Halske built and obtained images
from a prototype electron microscope, applying concepts described in
the Rudenberg patent applications.[5] Five years later (1937), the firm
financed the work of Ernst Ruska and Bodo von Borries, and employed
Helmut Ruska (Ernst’s brother) to develop applications for the
microscope, especially with biological specimens.[4][6] Also in 1937,
Manfred von Ardenne pioneered the scanning electron microscope.[7]
The first commercial electron microscope was produced in 1938 by
Siemens.[8] The first North American electron microscope was
constructed in 1938, at the University of Toronto, by Eli Franklin
Burton and students Cecil Hall, James Hillier, and Albert Prebus; and
Siemens produced a transmission electron microscope (TEM) in 1939.[9] Although contemporary transmission electron microscopes are capable of two million-power magnification, as scientific instruments, they remain based upon Ruska's prototype.

[Figure: Electron microscope constructed by Ernst Ruska in 1933]

Types
Transmission electron microscope (TEM)

The original form of electron microscope, the transmission electron microscope (TEM) uses a high voltage electron beam to illuminate the specimen and create an image. The electron beam is produced by an electron gun, commonly fitted with a tungsten filament cathode as the electron source. The electron beam is accelerated by an anode typically at +100 keV (40 to 400 keV) with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the specimen that is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by the objective lens system of the microscope. The spatial variation in this information (the "image") may be viewed by projecting the magnified electron image onto a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. Alternatively, the image can be photographically recorded by exposing a photographic film or plate directly to the electron beam, or a high-resolution phosphor may be coupled by means of a lens optical system or a fibre optic light-guide to the sensor of a digital camera. The image detected by the digital camera may be displayed on a monitor or computer.

[Figure: Diagram illustrating the phenomena resulting from the interaction of highly energetic electrons with matter.]

The resolution of TEMs is limited primarily by spherical aberration, but a new generation of aberration
correctors have been able to partially overcome spherical aberration to increase resolution. Hardware correction
of spherical aberration for the high-resolution transmission electron microscopy (HRTEM) has allowed the
production of images with resolution below 0.5 angstrom (50 picometres)[1] and magnifications above 50
million times.[10] The ability to determine the positions of atoms within materials has made the HRTEM an important tool for nano-technologies research and development.[11]

Transmission electron microscopes are often used in electron diffraction mode. The advantages of electron diffraction over X-ray crystallography are that the specimen need not be a single crystal or even a polycrystalline powder, and also that the Fourier transform reconstruction of the object's magnified structure occurs physically and thus avoids the need for solving the phase problem faced by the X-ray crystallographers after obtaining their X-ray diffraction patterns of a single crystal or polycrystalline powder.

[Figure: Operating principle of a Transmission Electron Microscope]

One major disadvantage of the transmission electron microscope is the need for extremely thin sections of the
specimens, typically about 100 nanometers. Creating these thin sections for biological and materials specimens
is technically very challenging. Semiconductor thin sections can be made using a focused ion beam. Biological
tissue specimens are chemically fixed, dehydrated and embedded in a polymer resin to stabilize them
sufficiently to allow ultrathin sectioning. Sections of biological specimens, organic polymers and similar
materials may require staining with heavy atom labels in order to achieve the required image contrast.

Scanning electron microscope (SEM)

The SEM produces images by probing the specimen with a focused electron beam that is scanned across a rectangular area of the specimen (raster scanning). When the electron beam interacts with the specimen, it loses energy by a variety of mechanisms. The lost energy is converted into alternative forms such as heat, emission of low-energy secondary electrons and high-energy backscattered electrons, light emission (cathodoluminescence) or X-ray emission, all of which provide signals carrying information about the properties of the specimen surface, such as its topography and composition. The image displayed by an SEM maps the varying intensity of any of these signals into the image in a position corresponding to the position of the beam on the specimen when the signal was generated. In the SEM image of an ant shown below, the image was constructed from signals produced by a secondary electron detector, the normal or conventional imaging mode in most SEMs.

[Figure: Operating principle of a Scanning Electron Microscope]

Generally, the image resolution of an SEM is lower than that of a TEM. However, because the SEM images the
surface of a sample rather than its interior, the electrons do not have to travel through the sample. This reduces
the need for extensive sample preparation to thin the specimen to electron transparency. The SEM is able to
image bulk samples that can fit on its stage and still be maneuvered, including a height less than the working
distance being used, often 4 millimeters for high resolution images. The SEM also has a great depth of field,
and so can produce images that are good representations of the three-dimensional surface shape of the sample.
Another advantage of SEMs comes with environmental scanning electron microscopes (ESEM) that can
produce images of good quality and resolution with hydrated samples or in low, rather than high, vacuum or
under chamber gases. This facilitates imaging unfixed biological samples that are unstable in the high vacuum
of conventional electron microscopes.

Color

In their most common configurations, electron microscopes produce images with a single brightness value per pixel, with the results usually rendered in grayscale.[12] However, often these images are then colorized through the use of feature-detection software, or simply by hand-editing using a graphics editor. This may be done to clarify structure or for aesthetic effect and generally does not add new information about the specimen.[13]

In some configurations information about several specimen properties is gathered per pixel, usually by the use of multiple detectors.[14] In SEM, the attributes of topography and material contrast can be obtained by a pair of backscattered electron detectors and such attributes can be superimposed in a single color image by assigning a different primary color to each attribute.[15] Similarly, a combination of backscattered and secondary electron signals can be assigned to different colors and superimposed on a single color micrograph displaying simultaneously the properties of the specimen.[16]

Some types of detectors used in SEM have analytical capabilities, and can provide several items of data at each pixel. Examples are the Energy-dispersive X-ray spectroscopy (EDS) detectors used in elemental analysis and Cathodoluminescence microscope (CL) systems that analyse the intensity and spectrum of electron-induced luminescence in (for example) geological specimens. In SEM systems using these detectors it is common to color code the signals and superimpose them in a single color image, so that differences in the distribution of the various components of the specimen can be seen clearly and compared. Optionally, the standard secondary electron image can be merged with the one or more compositional channels, so that the specimen's structure and composition can be compared. Such images can be made while maintaining the full integrity of the original signal, which is not modified in any way.

[Figure: Image of bacillus subtilis taken with a 1960s electron microscope.]

[Figure: An image of an ant in a scanning electron microscope]
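A minimal sketch of the color-coding idea just described: two detector signals are normalized and assigned to different primary-color channels of a single micrograph. The arrays below are random placeholders standing in for real secondary-electron and backscattered-electron data.

    # Assigning two SEM detector signals to different primary colors of one image.
    import numpy as np

    def normalize(signal):
        """Scale a detector signal into the 0-1 range."""
        signal = signal.astype(float)
        span = signal.max() - signal.min()
        return (signal - signal.min()) / (span if span > 0 else 1.0)

    secondary = np.random.rand(512, 512)       # stand-in for the SE (topography) signal
    backscattered = np.random.rand(512, 512)   # stand-in for the BSE (composition) signal

    rgb = np.zeros((512, 512, 3))
    rgb[..., 0] = normalize(backscattered)     # red channel: material contrast
    rgb[..., 1] = normalize(secondary)         # green channel: topography
    # A third signal (e.g. cathodoluminescence or an EDS map) could fill the blue channel.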

Reflection electron microscope (REM)

In the reflection electron microscope (REM) as in the TEM, an electron beam is incident on a surface but
instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam of elastically
scattered electrons is detected. This technique is typically coupled with reflection high energy electron
diffraction (RHEED) and reflection high-energy loss spectroscopy (RHELS). Another variation is spin-
polarized low-energy electron microscopy (SPLEEM), which is used for looking at the microstructure of
magnetic domains.[17]

Scanning transmission electron microscope (STEM)

The STEM rasters a focused incident probe across a specimen that (as with the TEM) has been thinned to
facilitate detection of electrons scattered through the specimen. The high resolution of the TEM is thus possible
in STEM. The focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but
afterward in the TEM. The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging and other analytical techniques, but also means that image data are acquired serially rather than in parallel. Often a TEM can be equipped with a scanning option, allowing it to function both as a TEM and a STEM.

Sample preparation

Materials to be viewed under an electron microscope may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required:

Chemical fixation – for biological specimens, aims to stabilize the specimen's mobile macromolecular structure by chemical crosslinking of proteins with aldehydes such as formaldehyde and glutaraldehyde, and lipids with osmium tetroxide.
Negative stain – suspensions containing nanoparticles or fine biological material (such as viruses and bacteria) are briefly mixed with a dilute solution of an electron-opaque compound such as ammonium molybdate, uranyl acetate (or formate), or phosphotungstic acid. This mixture is applied to a suitably coated EM grid, blotted, then allowed to dry. Viewing of this preparation in the TEM should be carried out without delay for best results. The method is important in microbiology for fast but crude morphological identification, but can also be used as the basis for high-resolution 3D reconstruction using EM tomography methodology when carbon films are used for support. Negative staining is also used for observation of nanoparticles.

An insect coated in gold for viewing with a scanning electron microscope.
Cryofixation – freezing a specimen so rapidly, typically in liquid ethane, that the water forms vitreous (non-crystalline) ice; the specimen is then maintained at liquid-nitrogen or even liquid-helium temperature. This preserves the specimen in a snapshot of its solution state. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS), it is now possible to observe samples from virtually any biological specimen close to its native state.
Dehydration – or replacement of water with organic solvents such as ethanol or acetone, followed by
critical point drying or infiltration with embedding resins. Also freeze drying.
Embedding, biological specimens – after dehydration, tissue for observation in the transmission electron
microscope is embedded so it can be sectioned ready for viewing. To do this the tissue is passed through
a 'transition solvent' such as propylene oxide (epoxypropane) or acetone and then infiltrated with an
epoxy resin such as Araldite, Epon, or Durcupan;[18] tissues may also be embedded directly in water-
miscible acrylic resin. After the resin has been polymerized (hardened) the sample is thin sectioned
(ultrathin sections) and stained – it is then ready for viewing.
Embedding, materials – after embedding in resin, the specimen is usually ground and polished to a
mirror-like finish using ultra-fine abrasives. The polishing process must be performed carefully to
minimize scratches and other polishing artifacts that reduce image quality.
Metal shadowing – Metal (e.g. platinum) is evaporated from an overhead electrode and applied to the
surface of a biological sample at an angle.[19] The surface topography results in variations in the
thickness of the metal that are seen as variations in brightness and contrast in the electron microscope
image.
Replication – A surface shadowed with metal (e.g. platinum, or a mixture of carbon and platinum) at an
angle is coated with pure carbon evaporated from carbon electrodes at right angles to the surface. This is
followed by removal of the specimen material (e.g. in an acid bath, using enzymes or by mechanical
separation[20]) to produce a surface replica that records the surface ultrastructure and can be examined
using transmission electron microscopy.
Sectioning – produces thin slices of specimen, semitransparent to electrons. These can be cut on an
ultramicrotome with a diamond knife to produce ultra-thin sections about 60–90 nm thick. Disposable
glass knives are also used because they can be made in the lab and are much cheaper.
Staining – uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give
contrast between different structures, since many (especially biological) materials are nearly
"transparent" to electrons (weak phase objects). In biology, specimens can be stained "en bloc" before
embedding and also later after sectioning. Typically thin sections are stained for several minutes with an
aqueous or alcoholic solution of uranyl acetate followed by aqueous lead citrate.[21]
Freeze-fracture or freeze-etch – a preparation method[22][23] particularly useful for examining lipid
membranes and their incorporated proteins in "face on" view.[24] The fresh tissue or cell suspension is
frozen rapidly (cryofixation), then fractured by breaking[25] or by using a microtome while maintained at


liquid nitrogen temperature. The cold fractured surface (sometimes "etched" by increasing the
temperature to about −100 °C for several minutes to let some ice sublime) is then shadowed with
evaporated platinum or gold at an average angle of 45° in a high vacuum evaporator. A second coat of
carbon, evaporated perpendicular to the average surface plane is often performed to improve stability of
the replica coating. The specimen is returned to room temperature and pressure, then the extremely
fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying biological
material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The still-
floating replica is thoroughly washed free from residual chemicals, carefully fished up on fine grids,
dried then viewed in the TEM.
Freeze-fracture replica immunogold labelling (FRIL) – the freeze-fracture method has been modified to
allow the identification of the components of the fracture face by immunogold labeling. Instead of
removing all the underlying tissue of the thawed replica as the final step before viewing in the
microscope the tissue thickness is minimized during or after the fracture process. The thin layer of tissue
remains bound to the metal replica so it can be immunogold labeled with antibodies to the structures of
choice. The thin layer of the original specimen on the replica with gold attached allows the identification
of structures in the fracture plane.[26] There are also related methods which label the surface of etched
cells[27] and other replica labeling variations.[28]
Ion beam milling – thins samples until they are transparent to electrons by firing ions (typically argon) at
the surface from an angle and sputtering material from the surface. A subclass of this is focused ion beam
milling, where gallium ions are used to produce an electron transparent membrane in a specific region of
the sample, for example through a device within a microprocessor. Ion beam milling may also be used for
cross-section polishing prior to SEM analysis of materials that are difficult to prepare using mechanical
polishing.
Conductive coating – an ultrathin coating of electrically conducting material, deposited either by high
vacuum evaporation or by low vacuum sputter coating of the sample. This is done to prevent the
accumulation of static electric fields at the specimen due to the electron irradiation required during
imaging. The coating materials include gold, gold/palladium, platinum, tungsten, graphite, etc.
Earthing – to avoid electrical charge accumulation on a conductive coated sample, it is usually
electrically connected to the metal sample holder. Often an electrically conductive adhesive is used for
this purpose.

Disadvantages
Electron microscopes are expensive to build and maintain, but the capital and running costs of confocal light microscope systems now overlap with those of basic electron microscopes. Microscopes designed to achieve
high resolutions must be housed in stable buildings (sometimes underground) with special services such as
magnetic field cancelling systems.

The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the
electrons. An exception is the environmental scanning electron microscope, which allows hydrated samples to
be viewed in a low-pressure (up to 20 Torr or 2.7 kPa) and/or wet environment.

Scanning electron microscopes operating in conventional high-vacuum mode usually image conductive
specimens; therefore non-conductive materials require conductive coating (gold/palladium alloy, carbon,
osmium, etc.). The low-voltage mode of modern microscopes makes possible the observation of non-
conductive specimens without coating. Non-conductive materials can be imaged also by a variable pressure (or
environmental) scanning electron microscope.

Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres,
for example) require no special treatment before being examined in the electron microscope. Samples of
hydrated materials, including almost all biological specimens have to be prepared in various ways to stabilize
them, reduce their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These
processes may result in artifacts, but these can usually be identified by comparing the results obtained by using
radically different specimen preparation methods. Since the 1980s, analysis of cryofixed, vitrified specimens
has also become increasingly used by scientists, further confirming the validity of this technique.[29][30][31]


Applications

Semiconductor and data storage
Circuit edit[32]
Defect analysis[33]
Failure analysis[34]

Biology and life sciences
Cryobiology[35]
Cryo-electron microscopy[36]
Diagnostic electron microscopy[37]
Drug research (e.g. antibiotics)[38]
Electron tomography[39]
Particle analysis[40]
Particle detection[41]
Protein localization[42]
Structural biology[36]
Tissue imaging[43]
Toxicology[44]
Virology (e.g. viral load monitoring)[45]

Materials research
Device testing and characterization[46]
Dynamic materials experiments[47]
Electron beam-induced deposition[48]
Materials qualification[49]
Medical research[38]
Nanometrology[50]
Nanoprototyping[51]

Industry
Chemical/Petrochemical[52]
Direct beam-writing fabrication[53]
Food science[54]
Forensics[55]
Fractography[56]
Micro-characterization[57]
Mining (mineral liberation analysis)[58]
Pharmaceutical QC[59]

See also
Acronyms in microscopy
Electron diffraction
Electron energy loss spectroscopy (EELS)
Electron microscope images
Energy filtered transmission electron microscopy (EFTEM)
Environmental scanning electron microscope (ESEM)
Field emission microscope
HiRISE
In situ electron microscopy
Microscope image processing
Microscopy
Nanoscience
Nanotechnology
Neutron microscope
Scanning confocal electron microscopy
Scanning electron microscope (SEM)
Scanning tunneling microscope
Surface science
Transmission Electron Aberration-Corrected Microscope
X-ray diffraction
X-ray microscope
References
1. Erni, Rolf; Rossell, MD; Kisielowski, C; Dahmen, U (2009). "Atomic-Resolution Imaging with a Sub-50-pm
Electron Probe". Physical Review Letters. 102 (9): 096101. Bibcode:2009PhRvL.102i6101E (http://adsabs.harvard.e
du/abs/2009PhRvL.102i6101E). PMID 19392535 (https://www.ncbi.nlm.nih.gov/pubmed/19392535).
doi:10.1103/PhysRevLett.102.096101 (https://doi.org/10.1103%2FPhysRevLett.102.096101).
2. Mathys, Daniel, Zentrum für Mikroskopie, University of Basel: Die Entwicklung der Elektronenmikroskopie vom
Bild über die Analyse zum Nanolabor, p. 8
3. Dannen, Gene (1998) Leo Szilard the Inventor: A Slideshow (1998, Budapest, conference talk) (http://www.dannen.c
om/budatalk.html). dannen.com
4. Ruska, Ernst (1986). "Ernst Ruska Autobiography" (http://nobelprize.org/nobel_prizes/physics/laureates/1986/ruska-
autobio.html). Nobel Foundation. Retrieved 2010-01-31.


5. Rudenberg, H Gunther; Rudenberg, Paul G (2010). "Chapter 6 – Origin and Background of the Invention of the
Electron Microscope: Commentary and Expanded Notes on Memoir of Reinhold Rüdenberg". Advances in Imaging
and Electron Physics. 160. Elsevier. ISBN 978-0-12-381017-5. doi:10.1016/S1076-5670(10)60006-7 (https://doi.org/
10.1016%2FS1076-5670%2810%2960006-7).
6. Kruger DH; Schneck P; Gelderblom HR (May 2000). "Helmut Ruska the visualisation of viruses". Lancet. 355
(9216): 1713–7. PMID 10905259 (https://www.ncbi.nlm.nih.gov/pubmed/10905259). doi:10.1016/S0140-
6736(00)02250-9 (https://doi.org/10.1016%2FS0140-6736%2800%2902250-9).
7. von Ardenne, M; Beischer, D (1940). "Untersuchung von metalloxyd-rauchen mit dem universal-
elektronenmikroskop" (http://onlinelibrary.wiley.com/doi/10.1002/bbpc.19400460406/abstract). Zeitschrift
Electrochemie (in German). 46: 270–277. doi:10.1002/bbpc.19400460406 (inactive 2017-04-29).
8. History of electron microscopy, 1931–2000 (http://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/materials/publi
c/ElectronMicroscope/EM_HistOverview.htm). Authors.library.caltech.edu (2002-12-10). Retrieved on 2017-04-29.
9. "James Hillier" (http://web.mit.edu/Invent/iow/hillier.html). Inventor of the Week: Archive. 2003-05-01. Retrieved
2010-01-31.
10. "The Scale of Things" (https://web.archive.org/web/20100201175106/http://www.sc.doe.gov/bes/scale_of_things.ht
ml). Office of Basic Energy Sciences, U.S. Department of Energy. 2006-05-26. Archived from the original (http://w
ww.sc.doe.gov/bes/scale_of_things.html) on 2010-02-01. Retrieved 2010-01-31.
11. O'Keefe MA; Allard LF (2004-01-18). "Sub-Ångstrom Electron Microscopy for Sub-Ångstrom Nano-Metrology" (ht
tp://www.osti.gov/bridge/servlets/purl/821768-E3YVgN/native/821768.pdf) (PDF). Information Bridge: DOE
Scientific and Technical Information – Sponsored by OSTI.
12. Burgess, Jeremy (1987). Under the Microscope: A Hidden World Revealed (https://books.google.com/books?id=30A
5AAAAIAAJ&pg=PA11). CUP Archive. p. 11. ISBN 0-521-39940-8.
13. "Introduction to Electron Microscopy" (http://www.fei.com/uploadedfiles/documents/content/introduction_to_em_bo
oklet_july_10.pdf) (PDF). FEI Company. p. 15. Retrieved 12 December 2012.
14. Antonovsky, A. (1984). "The application of colour to sem imaging for increased definition". Micron and
Microscopica Acta. 15 (2): 77–84. doi:10.1016/0739-6260(84)90005-4 (https://doi.org/10.1016%2F0739-6260%288
4%2990005-4).
15. Danilatos, G.D. (1986). "Colour micrographs for backscattered electron signals in the SEM". Scanning. 9 (3): 8–18.
doi:10.1111/j.1365-2818.1986.tb04287.x (https://doi.org/10.1111%2Fj.1365-2818.1986.tb04287.x).
16. Danilatos, G.D. (1986). "Environmental scanning electron microscopy in colour". J. Microscopy. 142: 317–325.
doi:10.1002/sca.4950080104 (https://doi.org/10.1002%2Fsca.4950080104).
17. "SPLEEM" (https://web.archive.org/web/20100529185331/http://ncem.lbl.gov/frames/spleem.html). National Center
for Electron Microscopy (NCEM). Archived from the original (http://ncem.lbl.gov/frames/spleem.html) on 2010-05-
29. Retrieved 2010-01-31.
18. Luft, J.H. (1961). "Improvements in epoxy resin embedding methods". The Journal of biophysical and biochemical
cytology. 9 (2). p. 409. PMC 2224998 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2224998)  .
PMID 13764136 (https://www.ncbi.nlm.nih.gov/pubmed/13764136).
19. Williams, R. C.; Wyckoff, R. W. (1945-06-08). "ELECTRON SHADOW MICROGRAPHY OF THE TOBACCO
MOSAIC VIRUS PROTEIN". Science. 101 (2632): 594–596. Bibcode:1945Sci...101..594W (http://adsabs.harvard.e
du/abs/1945Sci...101..594W). PMID 17739444 (https://www.ncbi.nlm.nih.gov/pubmed/17739444).
doi:10.1126/science.101.2632.594 (https://doi.org/10.1126%2Fscience.101.2632.594).
20. Juniper, B.E.; Bradley, D.E. (1958). "The carbon replica technique in the study of the ultrastructure of leaf surfaces".
Journal of ultrastructure research. 2 (1): 16–27. doi:10.1016/s0022-5320(58)90045-5 (https://doi.org/10.1016%2Fs0
022-5320%2858%2990045-5).
21. Reynolds, E. S. (1963). "The use of lead citrate at high pH as an electron-opaque stain in electron microscopy" (http
s://www.ncbi.nlm.nih.gov/pmc/articles/PMC2106263). Journal of Cell Biology. 17: 208–212. PMC 2106263 (https://
www.ncbi.nlm.nih.gov/pmc/articles/PMC2106263)  . PMID 13986422 (https://www.ncbi.nlm.nih.gov/pubmed/1398
6422). doi:10.1083/jcb.17.1.208 (https://doi.org/10.1083%2Fjcb.17.1.208).
22. Meryman H.T. and Kafig E. (1955). The study of frozen specimens, ice crystals and ices crystal growth by electron
microscopy. Naval Med. Res. Ints. Rept NM 000 018.01.09 Vol. 13 pp 529–544
23. Steere, Russell L. (1957-01-25). "ELECTRON MICROSCOPY OF STRUCTURAL DETAIL IN FROZEN
BIOLOGICAL SPECIMENS" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2224015). The Journal of
Biophysical and Biochemical Cytology. 3 (1): 45–60. PMC 2224015 (https://www.ncbi.nlm.nih.gov/pmc/articles/PM
C2224015)  . PMID 13416310 (https://www.ncbi.nlm.nih.gov/pubmed/13416310).
24. Moor H, Mühlethaler K. FINE STRUCTURE IN FROZEN-ETCHED YEAST CELLS. The Journal of Cell Biology.
1963;17(3):609–628.
25. Bullivant, Stanley; Ames, Adelbert (1966-06-01). "A SIMPLE FREEZE-FRACTURE REPLICATION METHOD
FOR ELECTRON MICROSCOPY" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2106967). The Journal of Cell
Biology. 29 (3): 435–447. PMC 2106967 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2106967)  .
PMID 5962938 (https://www.ncbi.nlm.nih.gov/pubmed/5962938).


26. Gruijters, W. T.; Kistler, J; Bullivant, S; Goodenough, D. A. (1987-03-01). "Immunolocalization of MP70 in lens
fiber 16-17-nm intercellular junctions" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2114558). The Journal of
Cell Biology. 104 (3): 565–572. PMC 2114558 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2114558)  .
PMID 3818793 (https://www.ncbi.nlm.nih.gov/pubmed/3818793).
27. da Silva, Pedro Pinto; Branton, Daniel (1970-06-01). "MEMBRANE SPLITTING IN FREEZE-ETCHING" (https://
www.ncbi.nlm.nih.gov/pmc/articles/PMC2107921). The Journal of Cell Biology. 45 (3): 598–605. PMC 2107921 (ht
tps://www.ncbi.nlm.nih.gov/pmc/articles/PMC2107921)  . PMID 4918216 (https://www.ncbi.nlm.nih.gov/pubmed/4
918216).
28. Rash, J. E.; Johnson, T. J.; Hudson, C. S.; Giddings, F. D.; Graham, W. F.; Eldefrawi, M. E. (1982-11-01). "Labelled-
replica techniques: post-shadow labelling of intramembrane particles in freeze-fracture replicas". Journal of
Microscopy. 128 (Pt 2): 121–138. PMID 6184475 (https://www.ncbi.nlm.nih.gov/pubmed/6184475).
29. Adrian, Marc; Dubochet, Jacques; Lepault, Jean; McDowall, Alasdair W. (1984). "Cryo-electron microscopy of
viruses". Nature. 308 (5954): 32–36. Bibcode:1984Natur.308...32A (http://adsabs.harvard.edu/abs/1984Natur.308...3
2A). PMID 6322001 (https://www.ncbi.nlm.nih.gov/pubmed/6322001). doi:10.1038/308032a0 (https://doi.org/10.10
38%2F308032a0).
30. Sabanay, I.; Arad, T.; Weiner, S.; Geiger, B. (1991). "Study of vitrified, unstained frozen tissue sections by
cryoimmunoelectron microscopy". Journal of Cell Science. 100 (1): 227–236. PMID 1795028 (https://www.ncbi.nl
m.nih.gov/pubmed/1795028).
31. Kasas, S.; Dumas, G.; Dietler, G.; Catsicas, S.; Adrian, M. (2003). "Vitrification of cryoelectron microscopy
specimens revealed by high-speed photographic imaging". Journal of Microscopy. 211 (1): 48–53.
doi:10.1046/j.1365-2818.2003.01193.x (https://doi.org/10.1046%2Fj.1365-2818.2003.01193.x).
32. Boehme, L.; Bresin, M.; Botman, A.; Ranney, J.; Hastings, J.T. (2015). "Focused electron beam induced etching of
copper in sulfuric acid solutions". Nanotechnology. 26 (49): 495301. Bibcode:2015Nanot..26W5301B (http://adsabs.
harvard.edu/abs/2015Nanot..26W5301B). PMID 26567988 (https://www.ncbi.nlm.nih.gov/pubmed/26567988).
doi:10.1088/0957-4484/26/49/495301 (https://doi.org/10.1088%2F0957-4484%2F26%2F49%2F495301).
33. Kacher, J.; Cui, B.; Robertson, I.M. (2015). "In situ and tomographic characterization of damage and dislocation
processes in irradiated metallic alloys by transmission electron microscopy". Journal of Materials Research. 30 (9):
1202–1213. Bibcode:2015JMatR..30.1202K (http://adsabs.harvard.edu/abs/2015JMatR..30.1202K).
doi:10.1557/jmr.2015.14 (https://doi.org/10.1557%2Fjmr.2015.14).
34. Rai, R.S.; Subramanian, S. (2009). "Role of transmission electron microscopy in the semiconductor industry for
process development and failure analysis". Progress in crystal growth and characterization of materials. 55 (3–4):
63–97. doi:10.1016/j.pcrysgrow.2009.09.002 (https://doi.org/10.1016%2Fj.pcrysgrow.2009.09.002).
35. Morris, G.J.; Goodrich, M.; Acton, E.; Fonseca, F. (2006). "The high viscosity encountered during freezing in
glycerol solutions: Effects on cryopreservation". Cryobiology. 52 (3): 323–334. PMID 16499898 (https://www.ncbi.n
lm.nih.gov/pubmed/16499898). doi:10.1016/j.cryobiol.2006.01.003 (https://doi.org/10.1016%2Fj.cryobiol.2006.01.0
03).
36. von Appen, A.; Beck, M. (2016). "Structure determination of the nuclear pore complex with three-dimensional cryo
electron microscopy". Journal of Molecular Biology. 428 (10A): 2001–2010. PMID 26791760 (https://www.ncbi.nl
m.nih.gov/pubmed/26791760). doi:10.1016/j.jmb.2016.01.004 (https://doi.org/10.1016%2Fj.jmb.2016.01.004).
37. Florian, P.E.; Rouillé, Y.; Ruta, S.; Nichita, N.; Roseanu, A. (2016). "Recent advances in human viruses imaging
studies". Journal of Basic Microbiology. 56 (6): 591–607. PMID 27059598 (https://www.ncbi.nlm.nih.gov/pubmed/2
7059598). doi:10.1002/jobm.201500575 (https://doi.org/10.1002%2Fjobm.201500575).
38. Cushnie, T.P.; O’Driscoll, N.H.; Lamb, A.J. (2016). "Morphological and ultrastructural changes in bacterial cells as
an indicator of antibacterial mechanism of action". Cellular and Molecular Life Sciences. 73 (23): 4471–4492.
PMID 27392605 (https://www.ncbi.nlm.nih.gov/pubmed/27392605). doi:10.1007/s00018-016-2302-2 (https://doi.or
g/10.1007%2Fs00018-016-2302-2).
39. Li, M.-H.; Yang, Y.-Q.; Huang, B.; Luo, X.; Zhang, W.; Han, M.; Ru, J.-G. (2014). "Development of advanced
electron tomography in materials science based on TEM and STEM". Transactions of Nonferrous Metals Society of
China. 24 (10): 3031–3050. doi:10.1016/S1003-6326(14)63441-5 (https://doi.org/10.1016%2FS1003-6326%2814%
2963441-5).
40. Li, W.J.; Shao, L.Y.; Zhang, D.Z.; Ro, C.U.; Hu, M.; Bi, X.H.; Geng, H.; Matsuki, A.; Niu, H.Y.; Chen, J.M. (2016).
"A review of single aerosol particle studies in the atmosphere of East Asia: morphology, mixing state, source, and
heterogeneous reactions". Journal of Cleaner Production. 112 (2): 1330–1349. doi:10.1016/j.jclepro2015.04.050
(inactive 2017-04-29).
41. Sousa, R.G.; Esteves, T.; Rocha, S.; Figueiredo, F.; Quelhas, P.; Silva, L.M. (2015). "Automatic detection of
immunogold particles from electron microscopy images". Image Analysis and Recognition. Lecture Notes in
Computer Science. 9164: 377–384. ISBN 978-3-319-20800-8. doi:10.1007/978-3-319-20801-5_41 (https://doi.org/1
0.1007%2F978-3-319-20801-5_41).


42. Perkins, G.A. (2014). "The use of miniSOG in the localization of mitochondrial proteins". Methods in Enzymology.
Methods in Enzymology. 547: 165–179. ISBN 9780128014158. PMID 25416358 (https://www.ncbi.nlm.nih.gov/pub
med/25416358). doi:10.1016/B978-0-12-801415-8.00010-2 (https://doi.org/10.1016%2FB978-0-12-801415-8.00010
-2).
43. Chen, X.D.; Ren, L.Q.; Zheng, B.; Liu, H. (2013). "Physics and engineering aspects of cell and tissue imaging
systems: microscopic devices and computer assisted diagnosis". Biophotonics in Pathology: Pathology at the
Crossroads. 185: 1–22. PMID 23542929 (https://www.ncbi.nlm.nih.gov/pubmed/23542929). doi:10.3233/978-1-
61499-234-9-1 (inactive 2017-04-29).
44. Fagerland, J.A.; Wall, H.G.; Pandher, K.; LeRoy, B.E.; Gagne, G.D. (2012). "Ultrastructural analysis in preclinical
safety evaluation". Toxicologic Pathology. 40 (2): 391–402. PMID 22215513 (https://www.ncbi.nlm.nih.gov/pubme
d/22215513). doi:10.1177/0192623311430239 (https://doi.org/10.1177%2F0192623311430239).
45. Heider, S.; Metzner, C. (2014). "Quantitative real-time single particle analysis of virions" (https://www.ncbi.nlm.nih.
gov/pmc/articles/PMC4139191). Virology. 462–463: 199–206. PMC 4139191 (https://www.ncbi.nlm.nih.gov/pmc/ar
ticles/PMC4139191)  . PMID 24999044 (https://www.ncbi.nlm.nih.gov/pubmed/24999044).
doi:10.1016/j.virol.2014.06.005 (https://doi.org/10.1016%2Fj.virol.2014.06.005).
46. Tsekouras, G.; Mozer, A.J.; Wallace, G.G. (2008). "Enhanced performance of dye sensitized solar cells utilizing
platinum electrodeposit counter electrodes". Journal of the Electrochemical Society. 155 (7): K124–K128.
doi:10.1149/1.2919107 (https://doi.org/10.1149%2F1.2919107).
47. Besenius, P.; Portale, G.; Bomans, P.H.H.; Janssen, H.M.; Palmans, A.R.A.; Meijer, E.W. (2010). "Controlling the
growth and shape of chiral supramolecular polymers in water". Proceedings of the National Academy of Sciences of
the United States of America. 107 (42): 17888–17893. Bibcode:2010PNAS..10717888B (http://adsabs.harvard.edu/a
bs/2010PNAS..10717888B). doi:10.1073/pnas.1009592107 (https://doi.org/10.1073%2Fpnas.1009592107).
48. Furuya, K. (2008). "Nanofabrication by advanced electron microscopy using intense and focused beam". Science and
Technology of Advanced Materials. 9 (1): Article 014110. Bibcode:2008STAdM...9a4110F (http://adsabs.harvard.ed
u/abs/2008STAdM...9a4110F). doi:10.1088/1468-6996/9/1/014110 (https://doi.org/10.1088%2F1468-6996%2F9%2
F1%2F014110).
49. Maloy, S.A.; Sommer, W.F.; James, M.R.; Romero, T.J.; Lopez, M.R.; Zimmermann, E.; Ledbetter, J.M. (2000).
"The accelerator production of tritium materials test program". Nuclear Technology. 132 (1): 103–114. ISSN 0029-
5450 (https://www.worldcat.org/issn/0029-5450).
50. Ukraintsev, V.; Banke, B. (2012). "Review of reference metrology for nanotechnology: significance, challenges, and
solutions". Journal of Micro/Nanolithography, MEMS, and MOEMS. 11 (1): 011010.
doi:10.1117/1.JMM.11.1.011010 (https://doi.org/10.1117%2F1.JMM.11.1.011010).
51. Wilhelmi, O.; Roussel, L.; Faber, P.; Reyntjens, S.; Daniel, G. (2010). "Focussed ion beam fabrication of large and
complex nanopatterns". Journal of Experimental Nanoscience. 5 (3): 244–250. Bibcode:2010JENan...5..244W (htt
p://adsabs.harvard.edu/abs/2010JENan...5..244W). doi:10.1080/17458080903487448 (https://doi.org/10.1080%2F17
458080903487448).
52. Vogt, E.T.C.; Whiting, G.T.; Chowdhury, A.D.; Weckhuysen, B.M. (2015). "Zeolites and zeotypes for oil and gas
conversion". Advances in Catalysis. Advances in Catalysis. 58: 143–314. ISBN 9780128021262.
doi:10.1016/bs.acat.2015.10.001 (https://doi.org/10.1016%2Fbs.acat.2015.10.001).
53. Lai, S.E.; Hong, Y.J.; Chen, Y.T.; Kang, Y.T.; Chang, P.; Yew, T.R. (2015). "Direct-writing of Cu nano-patterns with
an electron beam". Microscopy and Microanalysis. 21 (6): 1639–1643. Bibcode:2015MiMic..21.1639L (http://adsab
s.harvard.edu/abs/2015MiMic..21.1639L). PMID 26381450 (https://www.ncbi.nlm.nih.gov/pubmed/26381450).
doi:10.1017/S1431927615015111 (https://doi.org/10.1017%2FS1431927615015111).
54. Sicignano, A.; Di Monaco, R.; Masi, P.; Cavella, S. (2015). "From raw material to dish: pasta quality step by step".
Journal of the Science of Food and Agriculture. 95 (13): 2579–2587. PMID 25783568 (https://www.ncbi.nlm.nih.go
v/pubmed/25783568). doi:10.1002/jsfa.7176 (https://doi.org/10.1002%2Fjsfa.7176).
55. Brożek-Mucha, Z. (2014). "Scanning electron microscopy and X-ray microanalysis for chemical and morphological
characterisation of the inorganic component of gunshot residue: selected problems" (https://www.ncbi.nlm.nih.gov/p
mc/articles/PMC4082842). Biomed Research International. 2014: 428038. PMC 4082842 (https://www.ncbi.nlm.ni
h.gov/pmc/articles/PMC4082842)  . PMID 25025050 (https://www.ncbi.nlm.nih.gov/pubmed/25025050).
doi:10.1155/2014/428038 (https://doi.org/10.1155%2F2014%2F428038).
56. Carbonell-Verdu, A.; Garcia-Sanoguera, D.; Jorda-Vilaplana, A.; Sanchez-Nacher, L.; Balart, R. (2016). "A new
biobased plasticizer for poly(vinyl chloride) based on epoxidized cottonseed oil". Journal of Applied Polymer
Science. 33 (27): 43642. doi:10.1002/app.43642 (https://doi.org/10.1002%2Fapp.43642).
57. Ding, J.; Zhang, Z.M.; Wang, J.Q.; Han, E.H.; Tang, W.B.; Zhang, M.L.; Sun, Z.Y. (2015). "Micro-characterization
of dissimilar metal weld joint for connecting the pipe-nozzle to the safe-end in generation III nuclear power plant" (h
ttps://www.researchgate.net/publication/281980540_Micro-characterization_of_dissimilar_metalweld_joint_for_con
necting_pipenozzle_to_safe-end_in_generation_iii_nuclear_power_plant). Acta Metallurgica Sinica. 51 (4): 425–
439. doi:10.11900/0412.1961.2014.00299 (inactive 2017-04-29).


58. Tsikouras, B.; Pe-Piper, G.; Piper, D.J.W.; Schaffer, M. (2011). "Varietal heavy mineral analysis of sediment
provenance, Lower Cretaceous Scotian Basin, eastern Canada". Sedimentary Geology. 237 (3–4): 150–165.
Bibcode:2011SedG..237..150T (http://adsabs.harvard.edu/abs/2011SedG..237..150T).
doi:10.1016/j.sedgeo.2011.02.011 (https://doi.org/10.1016%2Fj.sedgeo.2011.02.011).
59. Li, X.; Jiang, C.; Pan, L.L.; Zhang, H.Y.; Hu, L.; Li, T.X.; Yang, X.H. (2015). "Effects of preparing techniques and
aging on dissolution behavior of the solid dispersions of NF/Soluplus/Kollidon SR: identification and classification
by a combined analysis by FT-IR spectroscopy and computational approaches". Drug Development and Industrial
Pharmacy. 41 (1): 2–1. PMID 25026247 (https://www.ncbi.nlm.nih.gov/pubmed/25026247).
doi:10.3109/03639045.2014.938080 (https://doi.org/10.3109%2F03639045.2014.938080).

External links
An Introduction to Electron Microscopy (http://www.fei.com/resources/student-learning/introduction-to-electron-microscopy/resources.aspx): resources for teachers and students
Cell Centered Database – Electron microscopy data (http://ccdb.ucsd.edu/sand/main?typeid=4&event=showMPByType&start=1)
Science Aid: Electron Microscopy (http://scienceaid.co.uk/biology/cell/analysingcells.html): high school
(GCSE, A Level) resource

General
Animations and explanations of various types of microscopy including electron microscopy (http://toutes
tquantique.fr/en/microscopy/) (Université Paris Sud)
Environmental Scanning Electron Microscopy (ESEM) (http://www.danilatos.com/)
ETH Zurich website (http://www.microscopy.ethz.ch/): graphics and images illustrating various
procedures
Eva Nogales's seminar: "Introduction to Electron Microscopy" (http://www.ibiology.org/ibioseminars/tec
hniques/eva-nogales-part-1.html)
FEI Image Contest (http://www.fei.com/2013-image-contest/): FEI has had a microscopy image contest
every year since 2008
Introduction to electron microscopy (http://www.gizmag.com/lenless-electron-microscope/21751/) by
David Szondy
Nanohedron.com image gallery (http://www.nanohedron.com/): images generated by electron
microscopy
X-ray element analysis in electron microscopy (http://www.microanalyst.net/index_e.phtml): information
portal with X-ray microanalysis and EDX contents

History

John H.L. Watson's recollections at the University of Toronto when he worked with Hillier and Prebus (ht
tp://www.physics.utoronto.ca/physics-at-uoft/history/the-electron-microscope/the-electron-microscope-a-
personal-recollection)
Rubin Borasky Electron Microscopy Collection, 1930–1988 (http://americanhistory.si.edu/archives/d845
2.htm) (Archives Center, National Museum of American History, Smithsonian Institution)

Other

The Royal Microscopical Society, Electron Microscopy Section (UK) (http://www.rms.org.uk/em.shtml)


Albert Lleal. Natural history subjects at Scanning Electron Microscope SEM (http://www.albertlleal.com/
portfolio/scanning-electron-microscopy/)
EM Images (http://histology-world.com/photoalbum/thumbnails.php?album=55)


Atomic-force microscopy
From Wikipedia, the free encyclopedia

Atomic-force microscopy (AFM) or scanning-force microscopy (SFM) is a very-high-resolution type of scanning probe microscopy (SPM), with demonstrated resolution on the order of fractions of a nanometer, more
than 1000 times better than the optical diffraction limit.

Contents
1 Overview
1.1 Abilities
1.2 Other microscopy technologies
1.3 Configuration
1.3.1 Detector
1.3.2 Image formation
1.4 History
1.5 Applications
2 Principles
2.1 Imaging modes
2.1.1 Contact mode
2.1.2 Tapping mode
2.1.3 Non-contact mode
3 Topographic image
3.1 What is the topographic image of atomic-force
microscope?
3.2 Topographic image of FM-AFM
4 Force spectroscopy
4.1 Biological applications and other
5 Identification of individual surface atoms
6 Probe
7 AFM cantilever-deflection measurement
7.1 Beam-deflection measurement
7.2 Other deflection-measurement methods
8 Piezoelectric scanners
9 Advantages and disadvantages
9.1 Advantages
9.2 Disadvantages
10 Other applications in various fields of study
11 See also
12 References
13 Further reading
14 External links

Overview
Atomic-force microscopy (AFM) or scanning-force microscopy (SFM) is a type of scanning probe
microscopy (SPM), with demonstrated resolution on the order of fractions of a nanometer, more than 1000
times better than the optical diffraction limit. The information is gathered by "feeling" or "touching" the surface
with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on
(electronic) command enable very precise scanning.

Abilities

The AFM has three major abilities: force measurement, imaging, and
manipulation.

In force measurement, AFMs can be used to measure the forces between the probe and the sample as a function of their mutual separation. This can be applied to perform force spectroscopy.

For imaging, the reaction of the probe to the forces that the sample
imposes on it can be used to form an image of the three-dimensional
shape (topography) of a sample surface at a high resolution. This is
achieved by raster scanning the position of the sample with respect
to the tip and recording the height of the probe that corresponds to a
constant probe-sample interaction (see section topographic imaging in AFM for more details). The surface topography is commonly displayed as a pseudocolor plot.

Block diagram of atomic-force microscope using beam deflection detection. As the cantilever is displaced via its interaction with the surface, so too will the reflection of the laser beam be displaced on the surface of the photodiode.

In manipulation, the forces between tip and sample can also be used to change the properties of the sample in a controlled way. Examples of this include atomic manipulation, scanning probe lithography and local stimulation of cells.
Simultaneous with the acquisition of topographical images, other
properties of the sample can be measured locally and displayed as an image, often with similarly high
resolution. Examples of such properties are mechanical properties like stiffness or adhesion strength and
electrical properties such as conductivity or surface potential. In fact, the majority of SPM techniques are
extensions of AFM that use this modality.

Other microscopy technologies

The major difference between atomic-force microscopy and competing technologies such as optical microscopy
and electron microscopy is that AFM does not use lenses or beam irradiation. Therefore, it does not suffer from
a limitation in spatial resolution due to diffraction and aberration, and preparing a space for guiding the beam
(by creating a vacuum) and staining the sample are not necessary.

There are several types of scanning microscopy including scanning probe microscopy (which includes AFM,
scanning tunneling microscopy (STM) and near-field scanning optical microscope (SNOM/NSOM), STED
microscopy (STED), and scanning electron microscopy). Although SNOM and STED use visible, infrared or
even terahertz light to illuminate the sample, their resolution is not constrained by the diffraction limit.

Configuration

Fig. 3 shows an AFM typically consisting of the following features:[1]

The small spring-like cantilever (1) is carried by the support (2). Optionally, a piezoelectric element (3)
oscillates the cantilever (1). The sharp tip (4) is fixed to the free end of the cantilever (1). The detector (5)
records the deflection and motion of the cantilever (1). The sample (6) is mounted on the sample stage (8). An
xyz drive (7) permits displacement of the sample (6) and the sample stage (8) in x, y, and z directions with respect to
the tip apex (4). Although Fig. 3 shows the drive attached to the sample, the drive can also be attached to the
tip, or independent drives can be attached to both, since it is the relative displacement of the sample and tip that
needs to be controlled. Controllers and plotter are not shown in Fig. 3. Numbers in parentheses correspond to
numbered features in Fig. 3. Coordinate directions are defined by the coordinate system (0).

According to the configuration described above, the interaction between tip and sample, which can be an
atomic-scale phenomenon, is transduced into changes of the motion of the cantilever, which is a macro-scale
phenomenon. Several different aspects of the cantilever motion can be used to quantify the interaction between

the tip and sample, most commonly the value of the deflection, the
amplitude of an imposed oscillation of the cantilever, or the shift in
resonance frequency of the cantilever (see section Imaging Modes).

Detector

The detector (5) of AFM measures the deflection (displacement with respect to the equilibrium position) of the cantilever and converts it into
an electrical signal. The intensity of this signal will be proportional to
the displacement of the cantilever.
Various methods of detection can be used, e.g. interferometry, optical levers, the piezoresistive method, the piezoelectric method, and STM-based detectors (see section "AFM cantilever-deflection measurement").

Fig. 3: Typical configuration of an AFM. (1): Cantilever, (2): Support for cantilever, (3): Piezoelectric element (to oscillate the cantilever at its eigenfrequency), (4): Tip (fixed to the open end of the cantilever; acts as the probe), (5): Detector of deflection and motion of the cantilever, (6): Sample to be measured by AFM, (7): xyz drive (moves the sample (6) and stage (8) in x, y, and z directions with respect to the tip apex (4)), and (8): Stage.

Image formation

Note: The following paragraphs assume that 'contact mode' is used (see section Imaging Modes). For other imaging modes, the process is similar, except that 'deflection' should be replaced by the appropriate feedback variable.
its output controls the distance along the z axis between the probe support (2 in fig. 3) and the sample support
(8 in fig 3). As long as the tip remains in contact with the sample, and the sample is scanned in the x-y plane,
height variations in the sample will change the deflection of the cantilever. The feedback then adjusts the height
of the probe support so that the deflection is restored to a user-defined value (the setpoint). A properly adjusted
feedback loop adjusts the support-sample separation continuously during the scanning motion, such that the
deflection remains approximately constant. In this situation, the feedback output equals the sample surface
topography to within a small error.

Historically, a different operation method has been used, in which the sample-probe support distance is kept
constant and not controlled by a feedback (servo mechanism). In this mode, usually referred to as 'constant
height mode', the deflection of the cantilever is recorded as a function of the sample x-y position. As long as the
tip is in contact with the sample, the deflection then corresponds to surface topography. The main reason this
method is not very popular anymore, is that the forces between tip and sample are not controlled, which can
lead to forces high enough to damage the tip or the sample. It is however common practice to record the
deflection even when scanning in 'constant force mode', with feedback. This reveals the small tracking error of
the feedback, and can sometimes reveal features that the feedback was not able to adjust for.

The AFM signals, such as sample height or cantilever deflection, are recorded on a computer during the x-y
scan. They are plotted in a pseudocolor image, in which each pixel represents an x-y position on the sample,
and the color represents the recorded signal.

History

AFM was invented by IBM scientists in 1982. The precursor to the AFM, the scanning tunneling microscope (STM), was developed by Gerd Binnig and Heinrich Rohrer in the early 1980s at IBM Research - Zurich, a development that earned them the Nobel Prize for Physics in 1986. Binnig invented[1] the atomic-force microscope, and the first experimental implementation was made by Binnig, Quate and Gerber in 1986.[2]


The first commercially available atomic-force microscope was


introduced in 1989. The AFM is one of the foremost tools for imaging,
measuring, and manipulating matter at the nanoscale.

Applications

The AFM has been applied to problems in a wide range of disciplines of


the natural sciences, including solid-state physics, semiconductor
science and technology, molecular engineering, polymer chemistry and
physics, surface chemistry, molecular biology, cell biology and medicine.

Fig. 5: Topographic image forming by AFM. (1): Tip apex, (2): Sample surface, (3): Z-orbit of tip apex, (4): Cantilever.

Applications in the field of solid-state physics include (a) the identification of atoms at a surface, (b) the evaluation of interactions
between a specific atom and its neighboring atoms, and (c) the study of
changes in physical properties arising from changes in an atomic
arrangement through atomic manipulation.

In cellular biology, AFM can be used (a) to attempt to distinguish cancer cells and normal cells based on the hardness of the cells, and (b) to evaluate interactions between a specific cell and its neighboring cells in a
competitive culture system.

In some variations, electric potentials can also be scanned using conducting cantilevers. In more advanced
versions, currents can be passed through the tip to probe the electrical conductivity or transport of the
underlying surface, but this is a challenging task with few research groups reporting consistent data (as of
2004).[3]

Principles
The AFM consists of a cantilever with a sharp tip (probe) at its end that is
used to scan the specimen surface. The cantilever is typically silicon or
silicon nitride with a tip radius of curvature on the order of nanometers.
When the tip is brought into proximity of a sample surface, forces between
the tip and the sample lead to a deflection of the cantilever according to
Hooke's law.[4] Depending on the situation, forces that are measured in
AFM include mechanical contact force, van der Waals forces, capillary
forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic
force microscope, MFM), Casimir forces, solvation forces, etc. Along with
force, additional quantities may simultaneously be measured through the use of specialized types of probes (see scanning thermal microscopy, scanning joule expansion microscopy, photothermal microspectroscopy, etc.).

Electron micrograph of a used AFM cantilever. Image width ~100 micrometers.
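A short illustration of the Hooke's-law relation just mentioned, with made-up but typical numbers (the spring constant and deflection values are illustrative assumptions, not taken from the text):

```python
# Hooke's law for the cantilever: F = k * x, with k the spring constant
# and x the measured deflection.
def deflection_to_force(deflection_nm, spring_constant_N_per_m):
    """Return the tip-sample force in newtons for a deflection given in nm."""
    return spring_constant_N_per_m * deflection_nm * 1e-9

# A soft contact-mode cantilever (k = 0.1 N/m) deflected by 2 nm
# corresponds to a force of about 0.2 nN.
force_N = deflection_to_force(2.0, 0.1)
print(f"Force: {force_N * 1e9:.2f} nN")
```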

The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes are divided into static (also
called contact) modes and a variety of dynamic (non-contact or "tapping")
modes where the cantilever is vibrated or oscillated at a given frequency.[5]

Imaging modes

AFM operation is usually described as one of three modes, according to the nature of the tip motion: contact mode, also called static mode (as opposed to the other two modes, which are called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating mode, or, after the detection mechanism, amplitude modulation AFM; and non-contact mode, or, again after the detection mechanism, frequency modulation AFM.

Electron micrograph of a used AFM cantilever. Image width ~30 micrometers.

Despite the nomenclature, repulsive contact can occur or be avoided both in amplitude modulation AFM and frequency modulation AFM, depending on the settings.

Contact mode
Atomic-force microscope
In contact mode, the tip is "dragged" across the surface of the sample and the contours of the surface are measured either using the deflection of the cantilever directly or, more commonly, using the feedback signal required to keep the cantilever at a constant position. Because the measurement of a static signal is prone to noise and drift, low-stiffness cantilevers (i.e. cantilevers with a low spring constant, k) are used to achieve a large enough deflection signal while keeping the interaction force low. Close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. Thus, contact mode AFM is almost always done at a depth where the overall force is repulsive, that is, in firm "contact" with the solid surface.

Atomic-force microscope topographical scan of a glass surface. The micro- and nano-scale features of the glass can be observed, portraying the roughness of the material. The image space is (x,y,z) = (20 µm × 20 µm × 420 nm).

Tapping mode

In ambient conditions, most samples develop a liquid meniscus layer.


Because of this, keeping the probe tip close enough to the sample for short-
range forces to become detectable while preventing the tip from sticking to
the surface presents a major problem for contact mode in ambient
conditions. Dynamic contact mode (also called intermittent contact, AC
mode or tapping mode) was developed to bypass this problem.[7] Nowadays,
tapping mode is the most frequently used AFM mode when operating in
ambient conditions or in liquids.

In tapping mode, the cantilever is driven to oscillate up and down at or near its resonance frequency. This oscillation is commonly achieved with a small
piezo element in the cantilever holder, but other possibilities include an AC
magnetic field (with magnetic cantilevers), piezoelectric cantilevers, or periodic heating with a modulated laser beam. The amplitude of this oscillation usually varies from several nm to 200 nm. In tapping mode, the frequency and amplitude of the driving signal are kept constant, leading to a constant amplitude of the cantilever oscillation as long as there is no drift or interaction with the surface. The interaction of forces acting on the cantilever when the tip comes close to the surface (Van der Waals forces, dipole-dipole interactions, electrostatic forces, etc.) causes the amplitude of the cantilever's oscillation to change (usually decrease) as the tip gets closer to the sample. This amplitude is used as the parameter that goes into the electronic servo that controls the height of the cantilever above the sample. The servo adjusts the height to maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface.[8]

Single polymer chains (0.4 nm thick) recorded in tapping mode under aqueous media with different pH.[6]
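The amplitude-feedback servo described above can be sketched as a toy simulation. All numbers and the linear amplitude-versus-gap model below are illustrative assumptions, not a physical model from the text; the point is only how an integral z-servo holds the oscillation amplitude at its setpoint so that the recorded z signal tracks the topography.

```python
import numpy as np

# Toy 1-D surface profile (nm) and a crude model in which the oscillation
# amplitude falls linearly to zero as the mean tip-surface gap closes.
surface = 5.0 * np.sin(np.linspace(0, 4 * np.pi, 200))
free_amplitude = 50.0   # nm, amplitude far from the surface
setpoint = 40.0         # nm, target amplitude while scanning
gain = 0.4              # integral gain of the z-servo

def amplitude(gap_nm):
    """Oscillation amplitude as a function of mean tip-surface gap."""
    return np.clip(gap_nm, 0.0, free_amplitude)

z = 60.0                # initial height of the cantilever base (nm)
topography = []
for h in surface:
    amp = amplitude(z - h)
    # Integral feedback: raise z if the amplitude is too small (tip too
    # close), lower z if it is too large (tip too far).
    z += gain * (setpoint - amp)
    topography.append(z)

# At equilibrium the servo keeps z = h + setpoint, so the recorded z
# values reproduce the surface shape offset by the setpoint gap.
print(np.round(topography[:5], 2))
```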

Although the peak forces applied during the contacting part of the oscillation can be much higher than typically
used in contact mode, tapping mode generally lessens the damage done to the surface and the tip compared to
the amount done in contact mode. This can be explained by the short duration of the applied force, and because
the lateral forces between tip and sample are significantly lower in tapping mode than in contact mode. Tapping
mode imaging is gentle enough even for the visualization of supported lipid bilayers or adsorbed single


polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) under liquid medium. With
proper scanning parameters, the conformation of single molecules can remain unchanged for hours,[6] and even
single molecular motors can be imaged while moving.

When operating in tapping mode, the phase of the cantilever's oscillation with respect to the driving signal can
be recorded as well. This signal channel contains information about the energy dissipated by the cantilever in
each oscillation cycle. Samples that contain regions of varying stiffness or with different adhesion properties
can give a contrast in this channel that is not visible in the topographic image. Extracting the sample's material
properties in a quantitative manner from phase images, however, is often not feasible.

Non-contact mode

In non-contact atomic force microscopy mode, the tip of the cantilever does not contact the sample
surface. The cantilever is instead oscillated at either
its resonant frequency (frequency modulation) or
just above (amplitude modulation) where the
amplitude of oscillation is typically a few
nanometers (<10 nm) down to a few picometers.[9]
The van der Waals forces, which are strongest from
1 nm to 10 nm above the surface, or any other long-
range force that extends above the surface acts to
decrease the resonance frequency of the cantilever.
This decrease in resonant frequency combined with
the feedback loop system maintains a constant
oscillation amplitude or frequency by adjusting the
average tip-to-sample distance. Measuring the tip-
to-sample distance at each (x,y) data point allows
the scanning software to construct a topographic image of the sample surface.

AFM – non-contact mode.

Non-contact mode AFM does not suffer from tip or sample degradation effects that are sometimes observed
after taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for
measuring soft samples, e.g. biological samples and organic thin film. In the case of rigid samples, contact and
non-contact images may look the same. However, if a few monolayers of adsorbed fluid are lying on the
surface of a rigid sample, the images may look quite different. An AFM operating in contact mode will
penetrate the liquid layer to image the underlying surface, whereas in non-contact mode an AFM will oscillate
above the adsorbed fluid layer to image both the liquid and surface.

Schemes for dynamic mode operation include frequency modulation where a phase-locked loop is used to track
the cantilever's resonance frequency and the more common amplitude modulation with a servo loop in place to
keep the cantilever excitation to a defined amplitude. In frequency modulation, changes in the oscillation
frequency provide information about tip-sample interactions. Frequency can be measured with very high
sensitivity and thus the frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers
provide stability very close to the surface and, as a result, this technique was the first AFM technique to provide
true atomic resolution in ultra-high vacuum conditions.[10]

In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for
imaging. In amplitude modulation, changes in the phase of oscillation can be used to discriminate between
different types of materials on the surface. Amplitude modulation can be operated either in the non-contact or
in the intermittent contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation
distance between the cantilever tip and the sample surface is modulated.

Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using
very stiff cantilevers and small amplitudes in an ultra-high vacuum environment.

Topographic image
Image formation is a plotting method that produces a color mapping by changing the x-y position of the tip while scanning and recording the measured variable, i.e. the intensity of the control signal, for each x-y coordinate. The color mapping shows the measured value corresponding to each coordinate. The image expresses the intensity of a value as a hue. Usually, the correspondence between the intensity of a value and a hue is shown as a color scale in the explanatory notes accompanying the image.
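A minimal sketch of such a value-to-hue color mapping with an explicit color scale, assuming NumPy and Matplotlib and using synthetic data in place of a real AFM signal:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measured variable on a 128 x 128 raster of x-y positions
# (e.g. the z-feedback output); real data would come from the instrument.
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
height = np.sin(6 * np.pi * x) * np.cos(4 * np.pi * y)

# Map the measured value at each x-y coordinate to a hue and attach the
# color scale that states the value-to-hue correspondence.
img = plt.imshow(height, cmap="viridis", origin="lower", extent=(0, 1, 0, 1))
plt.colorbar(img, label="measured value (arb. units)")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```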

What is the topographic image of atomic-force microscope?

Image-forming operation modes of the AFM are generally classified into two groups according to whether a z-feedback loop (not shown) is used to maintain the tip-sample distance so as to keep the signal intensity reported by the detector constant. The first group (using the z-feedback loop) is said to operate in "constant XX mode", where XX is the quantity kept constant by the z-feedback loop.

In topographic image formation, based on the aforementioned "constant XX mode", the z-feedback loop controls the relative distance between the probe and the sample by outputting control signals that keep constant one of the quantities (frequency, vibration amplitude or phase) that characterize the motion of the cantilever; for instance, a voltage is applied to the z-piezoelectric element, which moves the sample up and down along the z direction.

The case of "constant df mode" (FM-AFM) is explained in detail as an example in the next section.

Topographic image of FM-AFM

When the distance between the probe and the sample is brought to the range where atomic force may be
detected, while a cantilever is excited in its natural eigen frequency (f0), a phenomenon occurs that the
resonance frequency (f) of the cantilever shifts from its original resonance frequency (natural eigen frequency).
In other words, in the range where atomic force may be detected, the frequency shift (df=f-f0) will be observed.
So, when the distance between the probe and the sample is in the non-contact region, the frequency shift
increases in negative direction as the distance between the probe and the sample gets smaller.

When the sample has concavities and convexities, the distance between the tip apex and the sample varies with this relief as the sample is scanned along the x-y direction (without height regulation in the z direction), and the frequency shift varies accordingly. The image in which the frequency-shift values obtained by a raster scan along the x-y direction of the sample surface are plotted against the x-y coordinates of each measurement point is called a constant-height image.

Alternatively, df may be kept constant by moving the probe up and down (see (3) of FIG. 5) in the z direction using negative feedback (the z-feedback loop) while the sample surface is raster-scanned along the x-y direction. The image in which the feedback output (the distance the probe moves up and down in the z direction) is plotted against the x-y coordinates of each measurement point is a topographic image. In other words, the topographic image is the trace of the probe tip regulated so that df is constant, and it can also be regarded as a plot of a surface of constant df.

Therefore, the topographic image of the AFM is not the exact surface morphology itself, but an image influenced by the bond order between the probe and the sample; nevertheless, it is considered to reflect the geometric shape of the surface more faithfully than the topographic image of a scanning tunneling microscope.
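
To make the constant-df idea concrete, the following Python sketch simulates a single scan line with a toy frequency-shift model and an integral z-feedback loop; the model df(gap), the gain and all numerical values are illustrative assumptions, not parameters of any real instrument:

import numpy as np

def df_model(gap_nm):
    # Toy non-contact frequency shift (Hz): more negative as the gap shrinks.
    return -50.0 / gap_nm**2

def surface(x_nm):
    # Assumed sample topography (nm): a gentle corrugation.
    return 1.0 * np.sin(2 * np.pi * x_nm / 50.0)

setpoint = -2.0   # Hz; the constant df to be maintained (reached at a 5 nm gap in this model)
gain = 0.05       # integral gain of the z-feedback loop, nm per Hz of error
z = 6.0           # initial tip height above the mean surface plane, nm

topography = []
for x in np.linspace(0.0, 200.0, 400):      # raster scan along one line in x
    for _ in range(200):                    # let the feedback settle at this pixel
        gap = z - surface(x)
        error = setpoint - df_model(gap)    # positive when the tip is too close
        z += gain * error                   # retract (or approach) the tip
    topography.append(z)                    # record the feedback output

# 'topography' is the constant-df (topographic) image along this line: it follows
# surface(x), offset by the constant gap at which df equals the setpoint.

Plotting df_model(z − surface(x)) at fixed z, without the feedback loop, would instead give the constant-height image described above.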

Force spectroscopy


Another major application of AFM (besides imaging) is force spectroscopy, the direct measurement of tip-
sample interaction forces as a function of the gap between the tip and sample (the result of this measurement is
called a force-distance curve). For this method, the AFM tip is extended towards and retracted from the surface
as the deflection of the cantilever is monitored as a function of piezoelectric displacement. These measurements
have been used to measure nanoscale contacts, atomic bonding, Van der Waals forces, and Casimir forces,
dissolution forces in liquids and single molecule stretching and rupture forces.[11] Furthermore, AFM was used
to measure, in an aqueous environment, the dispersion force due to polymer adsorbed on the substrate.[12]
Forces of the order of a few piconewtons can now be routinely measured with a vertical distance resolution of
better than 0.1 nanometers. Force spectroscopy can be performed with either static or dynamic modes. In
dynamic modes, information about the cantilever vibration is monitored in addition to the static deflection.[13]
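
As a minimal sketch of how such a force-distance curve is assembled from the raw measurement, the Python fragment below applies Hooke's law with an assumed spring constant and deflection sensitivity to synthetic approach data (all values are hypothetical):

import numpy as np

k = 0.1            # cantilever spring constant, N/m (assumed calibration value)
invols = 50e-9     # deflection sensitivity, m of deflection per volt of detector signal (assumed)

# Synthetic raw approach data: piezo extension (m) and detector voltage,
# with the tip touching a hard surface once the piezo has extended 60 nm.
piezo = np.linspace(0.0, 100e-9, 500)
detector_volts = np.where(piezo > 60e-9, (piezo - 60e-9) / invols, 0.0)

deflection = detector_volts * invols     # detector signal -> cantilever deflection (m)
force = k * deflection                   # Hooke's law: F = k * deflection (N)
separation = piezo - deflection          # piezo travel corrected for cantilever bending
                                         # (relative scale; the contact point fixes the zero)

# Plotting 'force' against 'separation' gives the force-distance curve.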

Problems with the technique include no direct measurement of the tip-sample separation and the common need
for low-stiffness cantilevers, which tend to 'snap' to the surface. These problems are not insurmountable. An
AFM that directly measures the tip-sample separation has been developed.[14] The snap-in can be reduced by
measuring in liquids or by using stiffer cantilevers, but in the latter case a more sensitive deflection sensor is
needed. By applying a small dither to the tip, the stiffness (force gradient) of the bond can be measured as
well.[15]

Biological applications and other

Force spectroscopy is used in biophysics to measure the mechanical properties[16][17] of living material (such as tissue or cells).[18][19][20] Another application has been to measure the interaction forces between, on the one hand, a material stuck on the tip of the cantilever and, on the other hand, the surface of particles that are either free or covered by the same material. From the adhesion-force distribution curve, a mean value of the forces has been derived, which allowed the surface of the particles, covered or not by the material, to be mapped.[21]

Identification of individual surface atoms


The AFM can be used to image and manipulate atoms and structures on a variety of surfaces. The atom at the
apex of the tip "senses" individual atoms on the underlying surface when it forms incipient chemical bonds
with each atom. Because these chemical interactions subtly alter the tip's vibration frequency, they can be
detected and mapped. This principle was used to distinguish between atoms of silicon, tin and lead on an alloy
surface, by comparing these 'atomic fingerprints' to values obtained from large-scale density functional theory
(DFT) simulations.[22]

The trick is to first measure these forces precisely for each type of atom expected in the sample, and then to
compare with forces given by DFT simulations. The team found that the tip interacted most strongly with
silicon atoms, and interacted 24% and 41% less strongly with tin and lead atoms, respectively. Thus, each
different type of atom can be identified in the matrix as the tip is moved across the surface.
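
The comparison step can be pictured with a short Python sketch; the reference ratios follow the 24% and 41% reductions quoted above, while the measured forces and the calibration value over a known silicon atom are hypothetical numbers:

# Maximum attractive force relative to silicon, from the ratios quoted above.
reference_ratios = {"Si": 1.00, "Sn": 0.76, "Pb": 0.59}

def identify(force, si_force):
    # Assign the species whose reference ratio is closest to the measured
    # force normalized by the force measured over a known Si atom.
    ratio = force / si_force
    return min(reference_ratios, key=lambda atom: abs(reference_ratios[atom] - ratio))

si_force = -2.1e-9                            # force over a known Si site, N (hypothetical)
for f in (-2.05e-9, -1.6e-9, -1.3e-9):        # forces over unknown sites, N (hypothetical)
    print("%.2e N -> %s" % (f, identify(f, si_force)))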

Probe
An AFM probe has a sharp tip on the free-swinging end of a cantilever that is protruding from a holder.[23] The
dimensions of the cantilever are in the scale of micrometers. The radius of the tip is usually on the scale of a
few nanometers to a few tens of nanometers. (Specialized probes exist with much larger end radii, for example
probes for indentation of soft materials.) The cantilever holder, also called holder chip – often 1.6 mm by
3.4 mm in size – allows the operator to hold the AFM cantilever/probe assembly with tweezers and fit it into
the corresponding holder clips on the scanning head of the atomic-force microscope.

This device is most commonly called an "AFM probe", but other names include "AFM tip" and "cantilever"
(employing the name of a single part as the name of the whole device). An AFM probe is a particular type of
SPM (scanning probe microscopy) probe.

AFM probes are manufactured with MEMS technology. Most AFM probes used are made from silicon (Si), but
borosilicate glass and silicon nitride are also in use. AFM probes are considered consumables as they are often
replaced when the tip apex becomes dull or contaminated or when the cantilever is broken. They can cost from
a couple of tens of dollars up to hundreds of dollars per cantilever for the most specialized cantilever/probe
combinations.

When the tip is brought very close to the surface of the object under investigation, the cantilever is deflected by the interaction between the tip and the surface, which is what the AFM is designed to measure. A spatial map of the interaction can be made by measuring the deflection at many points on a 2D surface.

Several types of interaction can be detected. Depending on the interaction under investigation, the surface of
the tip of the AFM probe needs to be modified with a coating. Among the coatings used are gold – for covalent
bonding of biological molecules and the detection of their interaction with a surface,[24] diamond for increased
wear resistance[25] and magnetic coatings for detecting the magnetic properties of the investigated surface.[26]
Another way to achieve high-resolution magnetic imaging is to equip the probe with a microSQUID; such AFM tips are fabricated by silicon micromachining, and the precise positioning of the microSQUID loop is done by electron-beam lithography.[27]

The surface of the cantilevers can also be modified. These coatings are mostly applied in order to increase the
reflectance of the cantilever and to improve the deflection signal.

AFM cantilever-deflection measurement


Beam-deflection measurement

The most common method for cantilever-deflection measurement is the beam-deflection method. In this method, laser light from a solid-state diode is reflected off the back of the cantilever and collected by a position-sensitive detector (PSD) consisting of two closely spaced photodiodes, whose output signal is collected by a differential amplifier. Angular displacement of the cantilever results in one photodiode collecting more light than the other photodiode, producing an output signal (the difference between the photodiode signals normalized by their sum) which is proportional to the deflection of the cantilever. The sensitivity of the beam-deflection method is very high; a noise floor on the order of 10 fm/√Hz can be obtained routinely in a well-designed system. Although this method is sometimes called the 'optical lever' method, the signal is not amplified if the beam path is made longer. A longer beam path increases the motion of the reflected spot on the photodiodes, but also widens the spot by the same amount due to diffraction, so that the same amount of optical power is moved from one photodiode to the other. The 'optical leverage' (output signal of the detector divided by deflection of the cantilever) is inversely proportional to the numerical aperture of the beam-focusing optics, as long as the focused laser spot is small enough to fall completely on the cantilever. It is also inversely proportional to the length of the cantilever.

[Figure: AFM beam-deflection detection]
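
The normalized difference signal described above can be expressed in a few lines of Python; the photodiode currents and the deflection-sensitivity calibration used to convert the signal to nanometers are hypothetical values:

def psd_signal(top, bottom):
    # Normalized PSD output: (A - B) / (A + B), proportional to the
    # cantilever deflection for small angular displacements.
    return (top - bottom) / (top + bottom)

# Calibration ('deflection sensitivity'), typically obtained by pressing the
# tip against a hard surface; the value below is purely illustrative.
nm_per_signal_unit = 120.0

signal = psd_signal(1.02e-3, 0.98e-3)     # photodiode currents in amperes
print("normalized signal: %.3f" % signal)
print("deflection: %.2f nm" % (signal * nm_per_signal_unit))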

The relative popularity of the beam-deflection method can be explained by its high sensitivity and simple
operation, and by the fact that cantilevers do not require electrical contacts or other special treatments, and can
therefore be fabricated relatively cheaply with sharp integrated tips.

Other deflection-measurement methods



Many other methods for beam-deflection measurements exist.

Piezoelectric detection – Cantilevers made from quartz[28] (such as the qPlus configuration), or other
piezoelectric materials can directly detect deflection as an electrical signal. Cantilever oscillations down
to 10 pm have been detected with this method.
Laser Doppler vibrometry – A laser Doppler vibrometer can be used to produce very accurate deflection
measurements for an oscillating cantilever[29] (thus is only used in non-contact mode). This method is
expensive and is only used by relatively few groups.
Scanning tunneling microscope (STM) — The first atomic-force microscope used an STM, complete with its
own feedback mechanism, to measure deflection.[5] This method is very difficult to implement, and is
slow to react to deflection changes compared to modern methods.
Optical interferometry – Optical interferometry can be used to measure cantilever deflection.[30] Due to
the nanometre scale deflections measured in AFM, the interferometer is running in the sub-fringe regime,
thus, any drift in laser power or wavelength has strong effects on the measurement. For these reasons
optical interferometer measurements must be done with great care (for example using index matching
fluids between optical fibre junctions), with very stable lasers. For these reasons optical interferometry is
rarely used.
Capacitive detection – Metal coated cantilevers can form a capacitor with another contact located behind
the cantilever.[31] Deflection changes the distance between the contacts and can be measured as a change
in capacitance.
Piezoresistive detection – Cantilevers can be fabricated with piezoresistive elements that act as a strain
gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to deflection can be measured.[32]
This is not commonly used in vacuum applications, as the piezoresistive detection dissipates energy from
the system affecting Q of the resonance.

Piezoelectric scanners
AFM scanners are made from piezoelectric material, which expands and contracts proportionally to an applied
voltage. Whether they elongate or contract depends upon the polarity of the voltage applied. Traditionally the
tip or sample is mounted on a 'tripod' of three piezo crystals, with each responsible for scanning in the x,y and z
directions.[5] In 1986, the same year as the AFM was invented, a new piezoelectric scanner, the tube scanner,
was developed for use in STM.[33] Later tube scanners were incorporated into AFMs. The tube scanner can
move the sample in the x, y, and z directions using a single tube piezo with a single interior contact and four
external contacts. An advantage of the tube scanner compared to the original tripod design, is better vibrational
isolation, resulting from the higher resonant frequency of the single element construction, in combination with
a low resonant frequency isolation stage. A disadvantage is that the x-y motion can cause unwanted z motion
resulting in distortion. Another popular design for AFM scanners is the flexure stage, which uses separate
piezos for each axis, and couples them through a flexure mechanism.

Scanners are characterized by their sensitivity, which is the ratio of piezo movement to piezo voltage, i.e., by
how much the piezo material extends or contracts per applied volt. Because of differences in material or size,
the sensitivity varies from scanner to scanner. Sensitivity varies non-linearly with respect to scan size. Piezo
scanners exhibit more sensitivity at the end than at the beginning of a scan. This causes the forward and reverse
scans to behave differently and display hysteresis between the two scan directions.[34] This can be corrected by
applying a non-linear voltage to the piezo electrodes to cause linear scanner movement and calibrating the
scanner accordingly.[34] One disadvantage of this approach is that it requires re-calibration because the precise
non-linear voltage needed to correct non-linear movement will change as the piezo ages (see below). This
problem can be circumvented by adding a linear sensor to the sample stage or piezo stage to detect the true
movement of the piezo. Deviations from ideal movement can be detected by the sensor and corrections applied
to the piezo drive signal to correct for non-linear piezo movement. This design is known as a 'closed loop'
AFM. Non-sensored piezo AFMs are referred to as 'open loop' AFMs.
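
A minimal Python sketch of the open-loop correction idea described above; the nominal sensitivity and the polynomial coefficients are hypothetical and would have to be re-measured as the piezo ages, whereas a closed-loop scanner would instead read the true position from its sensor:

import numpy as np

nominal_sensitivity = 20.0    # nm of travel per volt (assumed scanner specification)

def linear_position(voltage):
    # Ideal open-loop estimate: displacement proportional to drive voltage.
    return nominal_sensitivity * voltage

def calibrated_position(voltage, coeffs=(0.0, 20.0, 0.15)):
    # Simple polynomial calibration absorbing part of the piezo non-linearity;
    # the coefficients would come from scanning a known calibration grating.
    c0, c1, c2 = coeffs
    return c0 + c1 * voltage + c2 * voltage**2

for v in np.linspace(0.0, 10.0, 5):
    print("%4.1f V  linear: %6.1f nm   calibrated: %6.1f nm"
          % (v, linear_position(v), calibrated_position(v)))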


The sensitivity of piezoelectric materials decreases exponentially with time. This causes most of the change in
sensitivity to occur in the initial stages of the scanner's life. Piezoelectric scanners are run for approximately 48
hours before they are shipped from the factory so that they are past the point where they may have large
changes in sensitivity. As the scanner ages, the sensitivity will change less with time and the scanner would
seldom require recalibration,[35][36] though various manufacturer manuals recommend monthly to semi-
monthly calibration of open loop AFMs.

Advantages and disadvantages


Just like any other tool, an AFM's usefulness has limitations. When
determining whether or not analyzing a sample with an AFM is
appropriate, there are various advantages and disadvantages that must
be considered.

Advantages

AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope, which provides a two-dimensional projection or a two-dimensional image of a sample, the AFM provides a three-dimensional surface profile. In addition, samples viewed by AFM do not require any special treatments (such as metal/carbon coatings) that would irreversibly change or damage the sample, and AFM does not typically suffer from charging artifacts in the final image. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes can work perfectly well in ambient air or even a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM. It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more recently, in liquid environments. High-resolution AFM is comparable in resolution to scanning tunneling microscopy and transmission electron microscopy. AFM can also be combined with a variety of optical microscopy and spectroscopy techniques such as fluorescence microscopy or infrared spectroscopy, giving rise to scanning near-field optical microscopy and nano-FTIR and further expanding its applicability. Combined AFM-optical instruments have been applied primarily in the biological sciences but have recently attracted strong interest in photovoltaics[8] and energy-storage research,[37] polymer sciences,[38] nanotechnology[39][40] and even medical research.[41]

[Figure: The first atomic-force microscope]

Disadvantages

A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single scan image size.
In one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of
millimeters, whereas the AFM can only image a maximum scanning area of about 150×150 micrometers and a
maximum height on the order of 10-20 micrometers. One method of improving the scanned area size for AFM
is by using parallel probes in a fashion similar to that of millipede data storage.

The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as an
SEM, requiring several minutes for a typical scan, while an SEM is capable of scanning at near real-time,
although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to
thermal drift in the image[42][43][44] making the AFM less suited for measuring accurate distances between
topographical features on the image. However, several fast-acting designs[45][46] were suggested to increase
microscope scanning productivity including what is being termed videoAFM (reasonable quality images are
being obtained with videoAFM at video rate: faster than the average SEM). To eliminate image distortions
induced by thermal drift, several methods have been introduced.[42][43][44]


AFM images can also be affected by nonlinearity, hysteresis,[34] and creep of the piezoelectric material and
cross-talk between the x, y, z axes that may require software enhancement and filtering. Such filtering could
"flatten" out real topographical features. However, newer AFMs utilize real-time correction software (for
example, feature-oriented scanning[35][42]) or closed-loop scanners, which practically eliminate these problems.
Some AFMs also use separated orthogonal scanners (as opposed to a single tube), which also serve to eliminate
part of the cross-talk problems.

As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself, as depicted on the right. These image artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods. Artifacts resulting from a too-coarse tip can be caused for example by inappropriate handling or de facto collisions with the sample by either scanning too fast or having an unreasonably rough surface, causing actual wearing of the tip.

[Figure: An AFM artifact arising from a tip with a high radius of curvature with respect to the feature that is to be visualized]

Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and additional artifacts.

[Figure: AFM artifact on steep sample topography]

Other applications in various fields of study
The latest efforts in integrating nanotechnology and biological research have been successful and show much promise for the future. Since nanoparticles are a potential vehicle of drug delivery, the biological responses of cells to these nanoparticles are continuously being explored to optimize their efficacy and how their design could be improved.[47] Pyrgiotakis et al. were able to study the interaction between CeO2 and Fe2O3 engineered nanoparticles and cells by attaching the engineered nanoparticles to the AFM tip.[48] Beyond the interactions with external synthetic materials, cells have been imaged with X-ray crystallography and there has been much curiosity about their behavior in vivo. Studies have taken advantage of AFM to obtain further information on the behavior of live cells in biological media. Real-time atomic force spectroscopy (or nanoscopy) and dynamic atomic force spectroscopy have been used to study live cells and membrane proteins and their dynamic behavior at high resolution, on the nanoscale. Imaging and obtaining information on the topography and the properties of the cells has also given insight into chemical processes and mechanisms that occur through cell-cell interaction and interactions with other signaling molecules (e.g. ligands). Evans and Calderwood used single-cell force microscopy to study cell adhesion forces, bond kinetics/dynamic bond strength and its role in chemical processes such as cell signaling.[49] Scheuring, Lévy, and Rigaud reviewed studies in which AFM was used to explore the crystal structure of membrane proteins of photosynthetic bacteria.[50] Alsteens et al. have used AFM-based nanoscopy to perform a real-time analysis of the interaction between live mycobacteria and antimycobacterial drugs (specifically isoniazid, ethionamide, ethambutol, and streptomycin),[51] which serves as an example of the more in-depth analysis of pathogen-drug interactions that can be done through AFM.

[Figure: AFM image of part of a Golgi apparatus isolated from HeLa cells]


See also

AFM-based infrared spectroscopy (AFM-IR)
Frictional force mapping
Photoconductive atomic force microscopy
Scanning voltage microscopy
SPM-based nanoscale spectroscopy (nano-FTIR)
Surface force apparatus
References
1. Patent US4724318 - Atomic-force microscope and method for imaging surfaces with atomic resolution (https://www.
google.com/patents/US4724318)
2. Binnig, G.; Quate, C. F.; Gerber, C. (1986). "Atomic-Force Microscope". Physical Review Letters. 56: 930–933.
Bibcode:1986PhRvL..56..930B (http://adsabs.harvard.edu/abs/1986PhRvL..56..930B). PMID 10033323 (https://ww
w.ncbi.nlm.nih.gov/pubmed/10033323). doi:10.1103/physrevlett.56.930 (https://doi.org/10.1103%2Fphysrevlett.56.9
30).
3. Lang, K.M.; D. A. Hite; R. W. Simmonds; R. McDermott; D. P. Pappas; John M. Martinis (2004). "Conducting
atomic-force microscopy for nanoscale tunnel barrier characterization" (http://rsi.aip.org/resource/1/rsinak/v75/i8/p2
726_s1). Review of Scientific Instruments. 75 (8): 2726–2731. Bibcode:2004RScI...75.2726L (http://adsabs.harvard.e
du/abs/2004RScI...75.2726L). doi:10.1063/1.1777388 (https://doi.org/10.1063%2F1.1777388).
4. Cappella, B; Dietler, G (1999). "Force-distance curves by atomic-force microscopy" (https://web.archive.org/web/20
121203031934/http://www.see.ed.ac.uk/~vkoutsos/Force-distance%20curves%20by%20atomic%20force%20micros
copy.pdf) (PDF). Surface Science Reports. 34 (1–3): 1–104. Bibcode:1999SurSR..34....1C (http://adsabs.harvard.edu/
abs/1999SurSR..34....1C). doi:10.1016/S0167-5729(99)00003-5 (https://doi.org/10.1016%2FS0167-5729%2899%29
00003-5). Archived from the original (http://www.see.ed.ac.uk/~vkoutsos/Force-distance%20curves%20by%20atom
ic%20force%20microscopy.pdf) (PDF) on 2012-12-03.
5. Binnig, G.; Quate, C. F.; Gerber, Ch. (1986). "Atomic-Force Microscope". Physical Review Letters. 56 (9): 930–933.
Bibcode:1986PhRvL..56..930B (http://adsabs.harvard.edu/abs/1986PhRvL..56..930B). ISSN 0031-9007 (https://ww
w.worldcat.org/issn/0031-9007). PMID 10033323 (https://www.ncbi.nlm.nih.gov/pubmed/10033323).
doi:10.1103/PhysRevLett.56.930 (https://doi.org/10.1103%2FPhysRevLett.56.930).
6. Roiter, Y; Minko, S (Nov 2005). "AFM single molecule experiments at the solid-liquid interface: in situ
conformation of adsorbed flexible polyelectrolyte chains". Journal of the American Chemical Society. 127 (45):
15688–9. ISSN 0002-7863 (https://www.worldcat.org/issn/0002-7863). PMID 16277495 (https://www.ncbi.nlm.nih.
gov/pubmed/16277495). doi:10.1021/ja0558239 (https://doi.org/10.1021%2Fja0558239).
7. Zhong, Q; Inniss, D; Kjoller, K; Elings, V (1993). "Fractured polymer/silica fiber surface studied by tapping mode
atomic-force microscopy". Surface Science Letters. 290: L688. Bibcode:1993SurSL.290L.688Z (http://adsabs.harvar
d.edu/abs/1993SurSL.290L.688Z). doi:10.1016/0167-2584(93)90906-Y (https://doi.org/10.1016%2F0167-2584%28
93%2990906-Y).
8. Geisse, Nicholas A. (July–August 2009). "AFM and Combined Optical Techniques" (http://www.sciencedirect.com/s
cience/article/pii/S1369702109702019). Materials Today. 12 (7–8): 40–45. doi:10.1016/S1369-7021(09)70201-9 (htt
ps://doi.org/10.1016%2FS1369-7021%2809%2970201-9). Retrieved 4 November 2011.
9. Gross, L.; Mohn, F.; Moll, N.; Liljeroth, P.; Meyer, G. (27 August 2009). "The Chemical Structure of a Molecule
Resolved by Atomic-Force Microscopy". Science. 325 (5944): 1110–1114. Bibcode:2009Sci...325.1110G (http://adsa
bs.harvard.edu/abs/2009Sci...325.1110G). PMID 19713523 (https://www.ncbi.nlm.nih.gov/pubmed/19713523).
doi:10.1126/science.1176210 (https://doi.org/10.1126%2Fscience.1176210).
10. Giessibl, Franz J. (2003). "Advances in atomic-force microscopy". Reviews of Modern Physics. 75 (3): 949–983.
Bibcode:2003RvMP...75..949G (http://adsabs.harvard.edu/abs/2003RvMP...75..949G). arXiv:cond-mat/0305119 (htt
ps://arxiv.org/abs/cond-mat/0305119)  . doi:10.1103/RevModPhys.75.949 (https://doi.org/10.1103%2FRevModPhy
s.75.949).
11. Hinterdorfer, P; Dufrêne, Yf (May 2006). "Detection and localization of single molecular recognition events using
atomic-force microscopy". Nature Methods. 3 (5): 347–55. ISSN 1548-7091 (https://www.worldcat.org/issn/1548-70
91). PMID 16628204 (https://www.ncbi.nlm.nih.gov/pubmed/16628204). doi:10.1038/nmeth871 (https://doi.org/10.
1038%2Fnmeth871).
12. Ferrari, L.; Kaufmann, J.; Winnefeld, F.; Plank, J. (Jul 2010). "Interaction of cement model systems with
superplasticizers investigated by atomic-force microscopy, zeta potential, and adsorption measurements". J Colloid
Interface Sci. 347 (1): 15–24. PMID 20356605 (https://www.ncbi.nlm.nih.gov/pubmed/20356605).
doi:10.1016/j.jcis.2010.03.005 (https://doi.org/10.1016%2Fj.jcis.2010.03.005).


13. Butt, H; Cappella, B; Kappl, M (2005). "Force measurements with the atomic-force microscope: Technique,
interpretation and applications". Surface Science Reports. 59: 1–152. Bibcode:2005SurSR..59....1B (http://adsabs.har
vard.edu/abs/2005SurSR..59....1B). doi:10.1016/j.surfrep.2005.08.003 (https://doi.org/10.1016%2Fj.surfrep.2005.08.
003).
14. Gavin M. King; Ashley R. Carter; Allison B. Churnside; Louisa S. Eberle & Thomas T. Perkins (2009). "Ultrastable
Atomic-Force Microscopy: Atomic-Scale Stability and Registration in Ambient Conditions" (https://www.ncbi.nlm.n
ih.gov/pmc/articles/PMC2953871). Nano Letters. 9 (4): 1451–1456. Bibcode:2009NanoL...9.1451K (http://adsabs.ha
rvard.edu/abs/2009NanoL...9.1451K). PMC 2953871 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2953871)  .
PMID 19351191 (https://www.ncbi.nlm.nih.gov/pubmed/19351191). doi:10.1021/nl803298q (https://doi.org/10.102
1%2Fnl803298q).
15. M. Hoffmann, Ahmet Oral; Ralph A. G, Peter (2001). "Direct measurement of interatomic-force gradients using an
ultra-low-amplitude atomic-force microscope". Proceedings of the Royal Society A. 457 (2009): 1161–1174.
Bibcode:2001RSPSA.457.1161M (http://adsabs.harvard.edu/abs/2001RSPSA.457.1161M).
doi:10.1098/rspa.2000.0713 (https://doi.org/10.1098%2Frspa.2000.0713).
16. D. Murugesapillai et al, DNA bridging and looping by HMO1 provides a mechanism for stabilizing nucleosome-free
chromatin (https://doi.org/10.1093/nar/gku635), Nucl Acids Res (2014) 42 (14): 8996-9004
17. D. Murugesapillai et al, Single-molecule studies of high-mobility group B architectural DNA bending proteins (htt
p://link.springer.com/article/10.1007/s12551-016-0236-4), Biophys Rev (2016) doi:10.1007/s12551-016-0236-4 (htt
ps://dx.doi.org/10.1007%2Fs12551-016-0236-4)
18. Takenaka, Musashi; Miyachi, Yusuke; Ishii, Jun; Ogino, Chiaki; Kondo, Akihiko (2015-03-04). "The mapping of
yeast's G-protein coupled receptor with an atomic force microscope" (http://xlink.rsc.org/?DOI=C4NR05940A).
Nanoscale. 7 (11): 4956–4963. ISSN 2040-3372 (https://www.worldcat.org/issn/2040-3372).
doi:10.1039/c4nr05940a (https://doi.org/10.1039%2Fc4nr05940a).
19. Radmacher, M. (1997). "Measuring the elastic properties of biological samples with the AFM". IEEE Eng Med Biol
Mag. 16 (2): 47–57. PMID 9086372 (https://www.ncbi.nlm.nih.gov/pubmed/9086372). doi:10.1109/51.582176 (http
s://doi.org/10.1109%2F51.582176).
20. Perkins, Thomas. "Atomic force microscopy measures properties of proteins and protein folding" (http://spie.org/ne
wsroom/technical-articles/videos/perkins-video-x116732). SPIE Newsroom. Retrieved 4 March 2016.
21. Thomas, G.; Y. Ouabbas; P. Grosseau; M. Baron; A. Chamayou; L. Galet (2009). "Modeling the mean interaction
forces between power particles. Application to silice gel-magnesium stearate mixtures". Applied Surface Science.
255: 7500–7507. Bibcode:2009ApSS..255.7500T (http://adsabs.harvard.edu/abs/2009ApSS..255.7500T).
doi:10.1016/j.apsusc.2009.03.099 (https://doi.org/10.1016%2Fj.apsusc.2009.03.099).
22. Sugimoto, Y; Pou, P; Abe, M; Jelinek, P; Pérez, R; Morita, S; Custance, O (Mar 2007). "Chemical identification of
individual surface atoms by atomic-force microscopy". Nature. 446 (7131): 64–7. Bibcode:2007Natur.446...64S (htt
p://adsabs.harvard.edu/abs/2007Natur.446...64S). ISSN 0028-0836 (https://www.worldcat.org/issn/0028-0836).
PMID 17330040 (https://www.ncbi.nlm.nih.gov/pubmed/17330040). doi:10.1038/nature05530 (https://doi.org/10.10
38%2Fnature05530).
23. Bryant, P. J.; Miller, R. G.; Yang, R.; "Scanning tunneling and atomic-force microscopy combined". Applied Physics
Letters, Jun 1988, Vol: 52 Issue:26, p. 2233–2235, ISSN 0003-6951 (https://www.worldcat.org/search?fq=x0:jrnl&q
=n2:0003-6951).
24. Oscar H. Willemsen, Margot M.E. Snel, Alessandra Cambi, Jan Greve, Bart G. De Grooth and Carl G. Figdor
"Biomolecular Interactions Measured by Atomic Force Microscopy" Biophysical Journal, Volume 79, Issue 6,
December 2000, Pages 3267-3281.
25. Koo-Hyun Chung and Dae-Eun Kim, "Wear characteristics of diamond-coated atomic force microscope probe".
Ultramicroscopy, Volume 108, Issue 1, December 2007, Pages 1-10
26. Xu, Xin; Raman, Arvind (2007). "Comparative dynamics of magnetically, acoustically, and Brownian motion driven
microcantilevers in liquids". J. Appl. Phys. 102: 034303. Bibcode:2007JAP...102a4303Y (http://adsabs.harvard.edu/a
bs/2007JAP...102a4303Y). doi:10.1063/1.2751415 (https://doi.org/10.1063%2F1.2751415).
27. Hasselbach, K.; Ladam, C. (2008). "High resolution magnetic imaging : MicroSQUID Force Microscopy". Journal
of Physics: Conference Series. 97: 012330. Bibcode:2008JPhCS..97a2330H (http://adsabs.harvard.edu/abs/2008JPh
CS..97a2330H). doi:10.1088/1742-6596/97/1/012330 (https://doi.org/10.1088%2F1742-6596%2F97%2F1%2F0123
30).
28. Giessibl, Franz J. (1 January 1998). "High-speed force sensor for force microscopy and profilometry utilizing a
quartz tuning fork". Applied Physics Letters. 73 (26): 3956. Bibcode:1998ApPhL..73.3956G (http://adsabs.harvard.e
du/abs/1998ApPhL..73.3956G). doi:10.1063/1.122948 (https://doi.org/10.1063%2F1.122948).
29. Nishida, Shuhei; Kobayashi, Dai; Sakurada, Takeo; Nakazawa, Tomonori; Hoshi, Yasuo; Kawakatsu, Hideki (1
January 2008). "Photothermal excitation and laser Doppler velocimetry of higher cantilever vibration modes for
dynamic atomic-force microscopy in liquid". Review of Scientific Instruments. 79 (12): 123703.
Bibcode:2008RScI...79l3703N (http://adsabs.harvard.edu/abs/2008RScI...79l3703N). PMID 19123565 (https://www.
ncbi.nlm.nih.gov/pubmed/19123565). doi:10.1063/1.3040500 (https://doi.org/10.1063%2F1.3040500).


30. Rugar, D.; Mamin, H. J.; Guethner, P. (1 January 1989). "Improved fiber-optic interferometer for atomic-force
microscopy". Applied Physics Letters. 55 (25): 2588. Bibcode:1989ApPhL..55.2588R (http://adsabs.harvard.edu/abs/
1989ApPhL..55.2588R). doi:10.1063/1.101987 (https://doi.org/10.1063%2F1.101987).
31. Göddenhenrich, T. "Force microscope with capacitive displacement detection". Journal of Vacuum Science and
Technology A. 8 (1): 383. doi:10.1116/1.576401 (https://doi.org/10.1116%2F1.576401).
32. Giessibl, F. J.; Trafas, B. M. (1 January 1994). "Piezoresistive cantilevers utilized for scanning tunneling and
scanning force microscope in ultrahigh vacuum". Review of Scientific Instruments. 65 (6): 1923.
Bibcode:1994RScI...65.1923G (http://adsabs.harvard.edu/abs/1994RScI...65.1923G). doi:10.1063/1.1145232 (http
s://doi.org/10.1063%2F1.1145232).
33. Binnig, G.; Smith, D. P. E. (1986). "Single-tube three-dimensional scanner for scanning tunneling microscopy".
Review of Scientific Instruments. 57 (8): 1688. Bibcode:1986RScI...57.1688B (http://adsabs.harvard.edu/abs/1986RS
cI...57.1688B). ISSN 0034-6748 (https://www.worldcat.org/issn/0034-6748). doi:10.1063/1.1139196 (https://doi.org/
10.1063%2F1.1139196).
34. R. V. Lapshin (1995). "Analytical model for the approximation of hysteresis loop and its application to the scanning
tunneling microscope" (http://www.lapshin.fast-page.org/publications.htm#analytical1995) (PDF). Review of
Scientific Instruments. USA: AIP. 66 (9): 4718–4730. Bibcode:1995RScI...66.4718L (http://adsabs.harvard.edu/abs/1
995RScI...66.4718L). ISSN 0034-6748 (https://www.worldcat.org/issn/0034-6748). doi:10.1063/1.1145314 (https://d
oi.org/10.1063%2F1.1145314). (Russian translation (http://www.lapshin.fast-page.org/publications.htm#analytical19
95) is available).
35. R. V. Lapshin (2011). "Feature-oriented scanning probe microscopy". In H. S. Nalwa. Encyclopedia of Nanoscience
and Nanotechnology (http://www.lapshin.fast-page.org/publications.htm#fospm2011) (PDF). 14. USA: American
Scientific Publishers. pp. 105–115. ISBN 1-58883-163-9.
36. R. V. Lapshin (1998). "Automatic lateral calibration of tunneling microscope scanners" (http://www.lapshin.fast-pag
e.org/publications.htm#automatic1998) (PDF). Review of Scientific Instruments. USA: AIP. 69 (9): 3268–3276.
Bibcode:1998RScI...69.3268L (http://adsabs.harvard.edu/abs/1998RScI...69.3268L). ISSN 0034-6748 (https://www.
worldcat.org/issn/0034-6748). doi:10.1063/1.1149091 (https://doi.org/10.1063%2F1.1149091).
37. Ayache, Maurice; Lux, Simon Franz; Kostecki, Robert (2015-04-02). "IR Near-Field Study of the Solid Electrolyte
Interphase on a Tin Electrode" (http://dx.doi.org/10.1021/acs.jpclett.5b00263). The Journal of Physical Chemistry
Letters. 6 (7): 1126–1129. ISSN 1948-7185 (https://www.worldcat.org/issn/1948-7185).
doi:10.1021/acs.jpclett.5b00263 (https://doi.org/10.1021%2Facs.jpclett.5b00263).
38. Pollard, Benjamin; Raschke, Markus B. (2016-04-22). "Correlative infrared nanospectroscopic and nanomechanical
imaging of block copolymer microdomains" (http://www.beilstein-journals.org/bjnano/articles/7/53). Beilstein
Journal of Nanotechnology. 7 (1): 605–612. ISSN 2190-4286 (https://www.worldcat.org/issn/2190-4286).
PMC 4901903 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4901903)  . PMID 27335750 (https://www.ncbi.nl
m.nih.gov/pubmed/27335750). doi:10.3762/bjnano.7.53 (https://doi.org/10.3762%2Fbjnano.7.53).
39. Huth, F.; Schnell, M.; Wittborn, J.; Ocelic, N.; Hillenbrand, R. "Infrared-spectroscopic nanoimaging with a thermal
source" (http://www.nature.com/doifinder/10.1038/nmat3006). Nature Materials. 10 (5): 352–356.
doi:10.1038/nmat3006 (https://doi.org/10.1038%2Fnmat3006).
40. Bechtel, Hans A.; Muller, Eric A.; Olmon, Robert L.; Martin, Michael C.; Raschke, Markus B. (2014-05-20).
"Ultrabroadband infrared nanospectroscopic imaging" (http://www.pnas.org/content/111/20/7191). Proceedings of
the National Academy of Sciences. 111 (20): 7191–7196. ISSN 0027-8424 (https://www.worldcat.org/issn/0027-842
4). PMC 4034206 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4034206)  . PMID 24803431 (https://www.ncbi.
nlm.nih.gov/pubmed/24803431). doi:10.1073/pnas.1400502111 (https://doi.org/10.1073%2Fpnas.1400502111).
41. Paluszkiewicz, C.; Piergies, N.; Chaniecki, P.; Rękas, M.; Miszczyk, J.; Kwiatek, W. M. (2017-05-30).
"Differentiation of protein secondary structure in clear and opaque human lenses: AFM – IR studies" (http://www.sci
encedirect.com/science/article/pii/S0731708516307907). Journal of Pharmaceutical and Biomedical Analysis. 139:
125–132. doi:10.1016/j.jpba.2017.03.001 (https://doi.org/10.1016%2Fj.jpba.2017.03.001).
42. R. V. Lapshin (2004). "Feature-oriented scanning methodology for probe microscopy and nanotechnology" (http://w
ww.lapshin.fast-page.org/publications.htm#feature2004) (PDF). Nanotechnology. UK: IOP. 15 (9): 1135–1151.
Bibcode:2004Nanot..15.1135L (http://adsabs.harvard.edu/abs/2004Nanot..15.1135L). ISSN 0957-4484 (https://www.
worldcat.org/issn/0957-4484). doi:10.1088/0957-4484/15/9/006 (https://doi.org/10.1088%2F0957-4484%2F15%2F
9%2F006).
43. R. V. Lapshin (2007). "Automatic drift elimination in probe microscope images based on techniques of counter-
scanning and topography feature recognition" (http://www.lapshin.fast-page.org/publications.htm#automatic2007)
(PDF). Measurement Science and Technology. UK: IOP. 18 (3): 907–927. Bibcode:2007MeScT..18..907L (http://adsa
bs.harvard.edu/abs/2007MeScT..18..907L). ISSN 0957-0233 (https://www.worldcat.org/issn/0957-0233).
doi:10.1088/0957-0233/18/3/046 (https://doi.org/10.1088%2F0957-0233%2F18%2F3%2F046).


44. V. Y. Yurov; A. N. Klimov (1994). "Scanning tunneling microscope calibration and reconstruction of real image:
Drift and slope elimination" (http://rsi.aip.org/resource/1/rsinak/v65/i5/p1551_s1) (PDF). Review of Scientific
Instruments. USA: AIP. 65 (5): 1551–1557. Bibcode:1994RScI...65.1551Y (http://adsabs.harvard.edu/abs/1994RSc
I...65.1551Y). ISSN 0034-6748 (https://www.worldcat.org/issn/0034-6748). doi:10.1063/1.1144890 (https://doi.org/1
0.1063%2F1.1144890).
45. G. Schitter; M. J. Rost (2008). "Scanning probe microscopy at video-rate" (https://web.archive.org/web/2009090903
5928/http://www.materialstoday.com/view/2194/scanning-probe-microscopy-at-videorate/). Materials Today. UK:
Elsevier. 11 (special issue): 40–48. ISSN 1369-7021 (https://www.worldcat.org/issn/1369-7021). doi:10.1016/S1369-
7021(09)70006-9 (https://doi.org/10.1016%2FS1369-7021%2809%2970006-9). Archived from the original (http://w
ww.materialstoday.com/view/2194/scanning-probe-microscopy-at-videorate/) (PDF) on September 9, 2009.
46. R. V. Lapshin; O. V. Obyedkov (1993). "Fast-acting piezoactuator and digital feedback loop for scanning tunneling
microscopes" (http://www.lapshin.fast-page.org/publications.htm#fast1993) (PDF). Review of Scientific Instruments.
USA: AIP. 64 (10): 2883–2887. Bibcode:1993RScI...64.2883L (http://adsabs.harvard.edu/abs/1993RScI...64.2883L).
ISSN 0034-6748 (https://www.worldcat.org/issn/0034-6748). doi:10.1063/1.1144377 (https://doi.org/10.1063%2F1.1
144377).
47. Jong, Wim H De; Borm, Paul JA (June 2008). "Drug Delivery and Nanoparticles: Applications and Hazards".
Dovepress International Journal of Nanomedicine: 133–149.
48. Pyrgiotakis, Georgios; Blattmann, Christoph O.; Demokritou, Philip (10 June 2014). "Real-Time Nanoparticle-Cell
Interactions in Physiological Media by Atomic Force Microscopy". ACS Sustainable Chemistry & Engineering
(Sustainable Nanotechnology 2013): 1681–1690.
49. Evans, Evan A.; Calderwood, David A. (25 May 2007). "Forces and Bond Dynamics in Cell Adhesion". Science.
316 (5828): 1148–1153. Bibcode:2007Sci...316.1148E (http://adsabs.harvard.edu/abs/2007Sci...316.1148E).
doi:10.1126/science.1137592 (https://doi.org/10.1126%2Fscience.1137592).
50. Scheuring, Simon; Lévy, Daniel; Rigaud, Jean-Louis (1 July 2005). "Watching the Components". Elsevier. 1712 (2):
109–127. doi:10.1016/j.bbamem.2005.04.005 (https://doi.org/10.1016%2Fj.bbamem.2005.04.005).
51. Alsteens, David; Verbelen, Claire; Dague, Etienne; Raze, Dominique; Baulard, Alain R.; Dufrêne, Yves F. (April
2008). "Organization of the Mycobacterial Cell Wall: A Nanoscale View". Pflügers Archive European Journal of
Physiology. 456 (1): 117–125. doi:10.1007/s00424-007-0386-0 (https://doi.org/10.1007%2Fs00424-007-0386-0).

Further reading
Carpick, Robert W.; Salmeron, Miquel (1997). "Scratching the Surface: Fundamental Investigations of
Tribology with Atomic Force Microscopy". Chemical Reviews. 97 (4): 1163–1194. ISSN 0009-2665 (htt
ps://www.worldcat.org/issn/0009-2665). doi:10.1021/cr960068q (https://doi.org/10.1021%2Fcr960068q).
Giessibl, Franz J. (2003). "Advances in atomic force microscopy". Reviews of Modern Physics. 75 (3):
949–983. Bibcode:2003RvMP...75..949G (http://adsabs.harvard.edu/abs/2003RvMP...75..949G).
ISSN 0034-6861 (https://www.worldcat.org/issn/0034-6861). arXiv:cond-mat/0305119 (https://arxiv.org/
abs/cond-mat/0305119)  . doi:10.1103/RevModPhys.75.949 (https://doi.org/10.1103%2FRevModPhys.7
5.949).
Ricardo, Garcia; Knoll, Armin; Riedo, Elisa (2014). "Advanced Scanning Probe Lithography". Nature
Nanotechnology. 9: 577. PMID 25091447 (https://www.ncbi.nlm.nih.gov/pubmed/25091447).
doi:10.1038/NNANO.2014.157 (https://doi.org/10.1038%2FNNANO.2014.157).

External links
SPM gallery: surface scans, collages, artworks, desktop wallpapers (http://www.lapshin.fast-page.org/gallery.htm)
AFM Scan Image Gallery, organized by application area (https://www.nanosurf.com/en/applications)


Transmission electron microscopy



Transmission electron microscopy (TEM, also sometimes


conventional transmission electron microscopy or CTEM) is a
microscopy technique in which a beam of electrons is transmitted
through a specimen to form an image. The specimen is most often an
ultrathin section less than 100 nm thick or a suspension on a grid. An
image is formed from the interaction of the electrons with the sample as
the beam is transmitted through the specimen. The image is then
magnified and focused onto an imaging device, such as a fluorescent
screen, a layer of photographic film, or a sensor such as a charge-
coupled device.

Transmission electron microscopes are capable of imaging at a significantly higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons. This enables the instrument to capture fine detail—even as small as a single column of atoms, which is thousands of times smaller than a resolvable object seen in a light microscope. Transmission electron microscopy is a major analytical method in the physical, chemical and biological sciences. TEMs find application in cancer research, virology, and materials science as well as pollution, nanotechnology and semiconductor research.

[Figure: A TEM image of a cluster of poliovirus. The polio virus is 30 nm in diameter.[1]]

At lower magnifications TEM image contrast is due to


differential absorption of electrons by the material due
to differences in composition or thickness of the
material. At higher magnifications complex wave
interactions modulate the intensity of the image,
requiring expert analysis of observed images.
Alternate modes of use allow for the TEM to observe
modulations in chemical identity, crystal orientation,
electronic structure and sample induced electron phase
shift as well as the regular absorption based imaging.

The first TEM was demonstrated by Max Knoll and Ernst Ruska in 1931, with this group developing the first TEM with resolution greater than that of light in 1933 and the first commercial TEM in 1939. In 1986, Ruska was awarded the Nobel Prize in physics for the development of transmission electron microscopy.[2]

[Figure: Operating principle of a transmission electron microscope]

Contents
1 History
1.1 Initial development
1.2 Improving resolution
1.3 Further research
2 Background
2.1 Electrons
2.2 Source formation
2.3 Optics
2.4 Display
3 Components

3.1 Vacuum system


3.2 Specimen stage
3.3 Electron gun
3.4 Electron lens
3.5 Apertures
4 Imaging methods
4.1 Contrast formation
4.2 Diffraction
4.3 Three-dimensional imaging
5 Sample preparation
5.1 Tissue sectioning
5.2 Sample staining
5.3 Mechanical milling
5.4 Chemical etching
5.5 Ion etching
5.6 Ion milling
5.7 Replication
6 Modifications
6.1 Scanning TEM
6.2 Low-voltage electron microscope
6.3 Cryo-TEM
6.4 Environmental/In-situ TEM
6.5 Aberration Corrected TEM
7 Limitations
7.1 Resolution limits
8 See also
9 References
10 External links

History
Initial development

In 1873, Ernst Abbe proposed that the ability to resolve detail in an object was limited approximately by the
wavelength of the light used in imaging or a few hundred nanometers for visible light microscopes.
Developments in ultraviolet (UV) microscopes, led by Köhler and Rohr, increased resolving power by a factor
of two.[3] However this required expensive quartz optics, due to the absorption of UV by glass. It was believed
that obtaining an image with sub-micrometer information was not possible due to this wavelength constraint.[4]

In 1858 Plücker observed the deflection of "cathode rays" (electrons) with the use of magnetic fields.[5] This
effect was utilized by Ferdinand Braun in 1897 to build simple cathode-ray oscilloscope (CRO) measuring
devices.[6] In 1891 Riecke noticed that the cathode rays could be focused by magnetic fields, allowing for
simple electromagnetic lens designs. In 1926 Hans Busch published work extending this theory and showed
that the lens maker's equation could, with appropriate assumptions, be applied to electrons.[2]

In 1928, at the Technical University of Berlin, Adolf Matthias, Professor of High voltage Technology and
Electrical Installations, appointed Max Knoll to lead a team of researchers to advance the CRO design. The
team consisted of several PhD students including Ernst Ruska and Bodo von Borries. The research team
worked on lens design and CRO column placement, to optimize parameters to construct better CROs, and make
electron optical components to generate low magnification (nearly 1:1) images. In 1931 the group successfully
generated magnified images of mesh grids placed over the anode aperture. The device used two magnetic


lenses to achieve higher magnifications, arguably creating the first


electron microscope. In that same year, Reinhold Rudenberg, the
scientific director of the Siemens company, patented an electrostatic
lens electron microscope.[4][7]

Improving resolution

At the time, electrons were understood to be charged particles of matter;


the wave nature of electrons was not fully realized until the publication
of the De Broglie hypothesis in 1927.[8] The research group was
unaware of this publication until 1932, when they quickly realized that
the De Broglie wavelength of electrons was many orders of magnitude
smaller than that for light, theoretically allowing for imaging at atomic
scales. In April 1932, Ruska suggested the construction of a new
electron microscope for direct imaging of specimens inserted into the
microscope, rather than simple mesh grids or images of apertures. With
this device successful diffraction and normal imaging of an aluminium
sheet was achieved. However the magnification achievable was lower
than with light microscopy. Magnifications higher than those available
with a light microscope were achieved in September 1933 with images of cotton fibers quickly acquired before being damaged by the electron beam.[4]

[Figure: The first practical TEM, originally installed at IG Farben-Werke and now on display at the Deutsches Museum in Munich, Germany]

At this time, interest in the electron microscope had increased, with other groups, such as Paul Anderson and Kenneth Fitzsimmons of Washington State University,[9] and Albert Prebus and James Hillier at the University of Toronto, who constructed the first TEMs in North America in 1935 and 1938, respectively,[10] continually advancing TEM design.

Research continued on the electron microscope at Siemens in 1936, where the aim of the research was the
development and improvement of TEM imaging properties, particularly with regard to biological specimens. At
this time electron microscopes were being fabricated for specific groups, such as the "EM1" device used at the
UK National Physical Laboratory.[11] In 1939 the first commercial electron microscope, pictured, was installed
in the Physics department of IG Farben-Werke. Further work on the electron microscope was hampered by the
destruction of a new laboratory constructed at Siemens by an air-raid, as well as the death of two of the
researchers, Heinz Müller and Friedrick Krause during World War II.[12]

Further research

After World War II, Ruska resumed work at Siemens, where he continued to develop the electron microscope,
producing the first microscope with 100k magnification.[12] The fundamental structure of this microscope
design, with multi-stage beam preparation optics, is still used in modern microscopes. The worldwide electron
microscopy community advanced with electron microscopes being manufactured in Manchester UK, the USA
(RCA), Germany (Siemens) and Japan (JEOL). The first international conference in electron microscopy was
in Delft in 1949, with more than one hundred attendees.[11] Later conferences included the "First" international
conference in Paris, 1950 and then in London in 1954.

With the development of TEM, the associated technique of scanning transmission electron microscopy (STEM)
was re-investigated but remained undeveloped until the 1970s, when Albert Crewe at the University of
Chicago developed the field emission gun[13] and added a high-quality objective lens to create the modern
STEM. Using this design, Crewe demonstrated the ability to image atoms using annular dark-field imaging.
Crewe and coworkers at the University of Chicago developed the cold field electron emission source and built a


STEM able to visualize single heavy atoms on thin carbon substrates.[14] In 2008, Jannick Meyer et al.
described the direct visualization of light atoms such as carbon and even hydrogen using TEM and a clean
single-layer graphene substrate.[15]

Background
Electrons

Theoretically, the maximum resolution, d, that one can obtain with a light microscope is limited by the wavelength of the photons that are being used to probe the sample, λ, and the numerical aperture of the system, NA:[16]

    d = λ / (2 n sin α) = λ / (2 NA)

where n is the index of refraction of the medium in which the lens is working and α is the maximum half-angle
of the cone of light that can enter the lens (see numerical aperture).[17] Early twentieth century scientists
theorized ways of getting around the limitations of the relatively large wavelength of visible light (wavelengths
of 400–700 nanometers) by using electrons. Like all matter, electrons have both wave and particle properties
(as theorized by Louis-Victor de Broglie), and their wave-like properties mean that a beam of electrons can be
made to behave like a beam of electromagnetic radiation. The wavelength of electrons is related to their kinetic
energy via the de Broglie equation. An additional correction must be made to account for relativistic effects, as
in a TEM an electron's velocity approaches the speed of light, c.[18]

where, h is Planck's constant, m0 is the rest mass of an electron and E is the energy of the accelerated electron.
Electrons are usually generated in an electron microscope by a process known as thermionic emission from a
filament, usually tungsten, in the same manner as a light bulb, or alternatively by field electron emission.[19]
The electrons are then accelerated by an electric potential (measured in volts) and focused by electrostatic and
electromagnetic lenses onto the sample. The transmitted beam contains information about electron density,
phase and periodicity; this beam is used to form an image.
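
A short Python calculation of this relativistically corrected wavelength for a few typical accelerating voltages (the voltages chosen are just examples) illustrates why electron microscopes can resolve far finer detail than light microscopes:

import math

h = 6.62607015e-34      # Planck constant, J s
m0 = 9.1093837015e-31   # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # elementary charge, C

def electron_wavelength(voltage):
    # Relativistic de Broglie wavelength (m) of an electron accelerated
    # through the given potential difference (V).
    E = e * voltage                      # kinetic energy in joules
    return h / math.sqrt(2 * m0 * E * (1 + E / (2 * m0 * c**2)))

for kv in (100e3, 200e3, 300e3):
    print("%3.0f kV -> %.2f pm" % (kv / 1e3, electron_wavelength(kv) * 1e12))

The resulting wavelengths are a few picometers, several orders of magnitude shorter than the 400–700 nm wavelengths of visible light mentioned above.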

Source formation

From the top down, the TEM consists of an emission source, which may be a tungsten filament or needle, or a
lanthanum hexaboride (LaB6) single crystal source.[20] The gun is connected to a high voltage source (typically
~100–300 kV) and, given sufficient current, the gun will begin to emit electrons either by thermionic or field
electron emission into the vacuum. The electron source is typically mounted in a Wehnelt cylinder to provide
preliminary focus of the emitted electrons into a beam. The upper lenses of the TEM then further focus the
electron beam to the desired size and location.[21]

Manipulation of the electron beam is performed using two physical effects. The interaction of electrons with a
magnetic field will cause electrons to move according to the left hand rule, thus allowing for electromagnets to
manipulate the electron beam. The use of magnetic fields allows for the formation of a magnetic lens of
variable focusing power, the lens shape originating due to the distribution of magnetic flux. Additionally,
electrostatic fields can cause the electrons to be deflected through a constant angle. Coupling of two deflections
in opposing directions with a small intermediate gap allows for the formation of a shift in the beam path, this
being used in TEM for beam shifting, subsequently this is extremely important to STEM. From these two
effects, as well as the use of an electron imaging system, sufficient control over the beam path is possible for

TEM operation. The optical


configuration of a TEM can be
rapidly changed, unlike that for an
optical microscope, as lenses in the
beam path can be enabled, have
their strength changed, or be
disabled entirely simply via rapid
electrical switching, the speed of
which is limited by effects such as
the magnetic hysteresis of the
lenses.

[Figure: Hairpin-style tungsten filament]
[Figure: Single-crystal LaB6 filament]

Optics

The lenses of a TEM allow for beam convergence, with the angle of convergence as a variable parameter, giving the TEM the ability to change magnification simply by modifying the amount of current that flows through the coil, quadrupole or hexapole lenses. The quadrupole lens is an arrangement of electromagnetic coils at the vertices of a square, enabling the generation of lensing magnetic fields; the hexapole configuration simply enhances the lens symmetry by using six, rather than four, coils.

[Figure: Layout of optical components in a basic TEM]

Typically a TEM consists of three stages of lensing. The stages are the
condenser lenses, the objective lenses, and the projector lenses. The condenser lenses are responsible for
primary beam formation, while the objective lenses focus the beam that comes through the sample itself (in
STEM scanning mode, there are also objective lenses above the sample to make the incident electron beam
convergent). The projector lenses are used to expand the beam onto the phosphor screen or other imaging
device, such as film. The magnification of the TEM is due to the ratio of the distances between the specimen
and the objective lens' image plane.[22] Additional stigmators allow for the correction of asymmetrical beam
distortions, known as astigmatism. It is noted that TEM optical configurations differ significantly with
implementation, with manufacturers using custom lens configurations, such as in spherical aberration corrected
instruments,[21] or TEMs utilizing energy filtering to correct electron chromatic aberration.
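
As a rough illustration of the statement above that magnification follows from ratios of distances, the following minimal sketch (with purely illustrative focal lengths and spacings, not values taken from this article) chains three thin-lens stages and multiplies their magnifications:

```python
# Minimal sketch (illustrative numbers only): the total magnification of a
# three-stage lens column as the product of per-stage magnifications, each
# taken from the thin-lens relation 1/f = 1/u + 1/v with M = v/u.

def stage_magnification(focal_length_mm, object_distance_mm):
    """Magnification of one lens stage in the thin-lens approximation."""
    image_distance = 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)
    return image_distance / object_distance_mm

# Hypothetical (focal length, object distance) pairs in mm for the
# objective, intermediate and projector stages.
stages = [(2.0, 2.1), (20.0, 22.0), (30.0, 33.0)]

total = 1.0
for f, u in stages:
    total *= stage_magnification(f, u)

print(f"approximate total magnification: {total:,.0f}x")  # ~2,000x for these values
```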

Display

Imaging systems in a TEM consist of a phosphor screen, which may be made of fine (10–100 μm) particulate
zinc sulfide, for direct observation by the operator, and, optionally, an image recording system such as
photographic film or CCDs coupled to a doped YAG screen.[23] Typically these devices can be removed from or
inserted into the beam path by the operator as required.

Components
A TEM is composed of several components, which include a vacuum system in which the electrons travel, an
electron emission source for generation of the electron stream, a series of electromagnetic lenses, as well as
electrostatic plates. The latter two allow the operator to guide and manipulate the beam as required. Also
required is a device to allow the insertion into, motion within, and removal of specimens from the beam path.
Imaging devices are subsequently used to create an image from the electrons that exit the system.

Vacuum system

To increase the mean free path for electron–gas interactions, a standard TEM is evacuated to low pressures,
typically on the order of 10⁻⁴ Pa.[24] The need for this is twofold: first, to allow the voltage difference between
the cathode and the ground without generating an arc, and second, to reduce the collision frequency of
electrons with gas atoms to negligible levels—this effect is characterized by the mean free path. TEM
components such as specimen holders and film cartridges must be routinely inserted or replaced, requiring a
system with the
ability to re-evacuate on a regular basis. As such, TEMs are equipped
with multiple pumping systems and airlocks and are not permanently
vacuum sealed.
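
To make the mean-free-path argument concrete, here is a minimal kinetic-theory sketch (the collision cross-section is an assumed order-of-magnitude value, not a figure from this article) showing that at typical TEM pressures an electron travels far further between collisions with residual gas than the length of the column:

```python
# Minimal sketch (assumed cross-section): kinetic-theory estimate of the
# electron mean free path in residual gas, lambda = k_B * T / (p * sigma).
K_B = 1.380649e-23      # Boltzmann constant, J/K
SIGMA = 1e-19           # assumed effective collision cross-section, m^2 (order of magnitude)
T = 293.0               # room temperature, K

def mean_free_path(pressure_pa):
    """Approximate mean free path (m) at the given residual-gas pressure (Pa)."""
    return K_B * T / (pressure_pa * SIGMA)

for p in (1e-4, 1e-7, 1e-9):   # typical column, gun, and UHV gun pressures
    print(f"p = {p:.0e} Pa  ->  mean free path ~ {mean_free_path(p):.1e} m")
```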

The vacuum system for evacuating a TEM to an operating pressure level consists of several stages. Initially, a
low or roughing vacuum is achieved with either a rotary vane pump or diaphragm pumps, setting a sufficiently
low pressure to allow the operation of a turbo-molecular or diffusion pump, which establishes the high vacuum
level necessary for operations. To allow the low vacuum pump to not require continuous operation, while
continually operating the turbo-molecular pumps, the vacuum side of a low-pressure pump may be connected
to chambers which accommodate the exhaust gases from the turbo-molecular pump.[25] Sections of the TEM
may be isolated by the use of pressure-limiting apertures to allow for different vacuum levels in specific areas,
such as a higher vacuum of 10⁻⁴ to 10⁻⁷ Pa or higher in the electron gun in high-resolution or field-emission TEMs.

[Figure: the electron source of the TEM is at the top, where the lensing system (4, 7 and 8) focuses the beam on the specimen and then projects it onto the viewing screen (10); the beam control is on the right (13 and 14)]

High-voltage TEMs require ultra-high vacuums in the range of 10⁻⁷ to 10⁻⁹ Pa to prevent the generation of an
electrical arc, particularly at the TEM cathode.[26] As such, for higher voltage TEMs a third vacuum system
may operate, with the gun isolated from the main chamber either by gate valves or a differential pumping
aperture – a small hole that prevents the diffusion of gas molecules into the higher vacuum gun area faster than
they can be pumped out. For these very low pressures, either an ion pump or a getter material is used.

Poor vacuum in a TEM can cause several problems ranging from the deposition of gas inside the TEM onto the
specimen while viewed, in a process known as electron beam induced deposition, to more severe cathode
damage caused by electrical discharge.[26] The use of a cold trap to adsorb sublimated gases in the vicinity of
the specimen largely eliminates vacuum problems that are caused by specimen sublimation.[25]

Specimen stage

TEM specimen stage designs include airlocks to allow for insertion of the specimen holder into the vacuum
with minimal loss of vacuum in other areas of the microscope. The specimen holders hold a standard size of
sample grid or self-supporting specimen. Standard TEM grid sizes are 3.05 mm diameter, with a thickness and
mesh size ranging from a few to 100 μm. The sample is placed onto the meshed area having a diameter of
approximately 2.5 mm. Usual grid materials are copper, molybdenum, gold or platinum. This grid is placed into
the sample holder, which is paired with the specimen stage. A wide variety of designs of stages and holders
exist, depending upon the type of experiment being performed. In addition to 3.05 mm grids, 2.3 mm grids are
sometimes, if rarely, used. These grids were particularly used in the mineral sciences where a large degree of
tilt can be required and where specimen material may be extremely rare.
Electron transparent specimens have a thickness usually less than 100 nm, but
this value depends on the accelerating voltage.

[Figure: TEM sample support mesh "grid", with ultramicrotomy sections]

Once inserted into a TEM, the sample has to be manipulated to locate the region of interest to the beam, such
as in single grain diffraction, in a specific orientation. To accommodate this, the TEM stage allows movement
of the sample in the XY plane, Z height adjustment, and commonly a single tilt direction parallel to the axis of
side entry holders. Sample rotation may be available on specialized diffraction holders and stages. Some
modern TEMs provide the ability for two orthogonal tilt angles of movement with specialized holder designs
called double-tilt sample holders. Some stage designs, such as top-entry or vertical insertion stages once
common for high resolution TEM studies, may only have X–Y translation available. The design criteria of
TEM stages are complex, owing to the simultaneous requirements of mechanical and electron-optical
constraints, and specialized models are available for different methods.

A TEM stage is required to have the ability to hold a specimen and be manipulated to bring the region of
interest into the path of the electron beam. As the TEM can operate over a wide range of magnifications, the
stage must simultaneously be highly resistant to mechanical drift, with drift requirements as low as a few
nm/minute while being able to move several μm/minute, with repositioning accuracy on the order of
nanometers.[27] Earlier designs of TEM accomplished this with a complex set of mechanical downgearing
devices, allowing the operator to finely control the motion of the stage by several rotating rods. Modern devices
may use electrical stage designs, using screw gearing in concert with stepper motors, providing the operator
with a computer-based stage input, such as a joystick or trackball.

Two main designs for stages in a TEM exist, the side-entry and top entry version.[23] Each design must
accommodate the matching holder to allow for specimen insertion without either damaging delicate TEM
optics or allowing gas into TEM systems under vacuum.

[Figure: a diagram of a single axis tilt sample holder for insertion into a TEM goniometer; tilting of the holder is achieved by rotation of the entire goniometer]

The most common is the side entry holder, where the specimen is placed near the tip of a long metal (brass or
stainless steel) rod, with the specimen placed flat in a small bore. Along the rod are several polymer vacuum
rings to allow for the formation of a vacuum seal of sufficient quality when inserted into the stage. The stage is
thus designed to accommodate the rod, placing the sample either in between or near the objective lens,
dependent upon the objective design. When inserted into the stage, the side entry holder has its tip contained
within the TEM vacuum, and the base is presented to atmosphere, the airlock formed by the vacuum rings.

Insertion procedures for side-entry TEM holders typically involve the rotation of the sample to trigger micro
switches that initiate evacuation of the airlock before the sample is inserted into the TEM column.

The second design is the top-entry holder, which consists of a cartridge that is several cm long with a bore drilled
down the cartridge axis. The specimen is loaded into the bore, possibly utilizing a small screw ring to hold the
sample in place. This cartridge is inserted into an airlock with the bore perpendicular to the TEM optic axis.
When sealed, the airlock is manipulated to push the cartridge such that the cartridge falls into place, where the
bore hole becomes aligned with the beam axis, such that the beam travels down the cartridge bore and into the
specimen. Such designs are typically unable to be tilted without blocking the beam path or interfering with the
objective lens.[23]

Electron gun

[Figure: cross-sectional diagram of an electron gun assembly, illustrating electron extraction]

The electron gun is formed from several components: the filament, a biasing circuit, a Wehnelt cap, and an
extraction anode. By connecting the filament to the negative component power supply, electrons can be
"pumped" from the electron gun to the anode plate, and TEM column, thus completing the circuit. The gun is
designed to create a beam of electrons exiting from the assembly at some given angle, known as the gun
divergence semi-angle, α. By constructing the Wehnelt cylinder such that it has a higher negative charge than
the filament itself, electrons that exit the filament in a diverging manner are, under proper operation, forced
into a converging pattern, the minimum size of which is the gun crossover diameter.

The thermionic emission current density, J, can be related to the work function of the emitting material via
Richardson's law

    J = A T² exp(−Φ / kT),

where A is Richardson's constant, Φ is the work function, k is Boltzmann's constant and T is the temperature of the material.[23]

This equation shows that in order to achieve sufficient current density it is necessary to heat the emitter, taking
care not to cause damage by application of excessive heat; for this reason materials with either a high melting
point, such as tungsten, or a low work function (LaB6) are required for the gun filament.[28]
Furthermore, both lanthanum hexaboride and tungsten thermionic sources must be heated in order to achieve
thermionic emission; this can be achieved by the use of a small resistive strip. To prevent thermal shock, there
is often a delay enforced in the application of current to the tip, to prevent thermal gradients from damaging the
filament; the delay is usually a few seconds for LaB6, and significantly lower for tungsten.
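
A minimal numerical sketch of Richardson's law follows; the work functions, operating temperatures and the use of the ideal Richardson constant are illustrative assumptions, not values quoted in this article:

```python
# Minimal sketch (assumed material values): Richardson's law,
# J = A * T^2 * exp(-phi / (k_B * T)), comparing a heated tungsten filament
# with a LaB6 crystal at plausible operating temperatures.
import math

K_B = 8.617333e-5          # Boltzmann constant, eV/K
A = 1.2e6                  # ideal Richardson constant, A m^-2 K^-2 (real materials differ)

def richardson_current_density(phi_ev, temperature_k):
    """Thermionic emission current density (A/m^2) for work function phi (eV)."""
    return A * temperature_k**2 * math.exp(-phi_ev / (K_B * temperature_k))

# Approximate work functions (eV) and operating temperatures (K) -- assumed.
emitters = {"tungsten": (4.5, 2700.0), "LaB6": (2.7, 1700.0)}

for name, (phi, t) in emitters.items():
    print(f"{name:8s}: J ~ {richardson_current_density(phi, t):.2e} A/m^2")
```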

Electron lens

[Figure: diagram of a TEM split pole piece lens design]

Electron lenses are designed to act in a manner emulating that of an optical lens, by focusing parallel electrons
at some constant focal distance. Electron lenses may operate electrostatically or magnetically. The majority of
electron lenses for TEM use electromagnetic coils to generate a convex lens. The field produced for the lens
must be radially symmetrical, as deviation from the radial symmetry of the magnetic lens causes aberrations
such as astigmatism and worsens spherical and chromatic aberration. Electron lenses are manufactured from
iron, iron–cobalt or nickel–cobalt alloys,[29] such as permalloy. These are selected for their magnetic
properties, such as magnetic saturation, hysteresis and permeability.

The components include the yoke, the magnetic coil, the poles, the polepiece, and the external control
circuitry. The pole piece must be manufactured in a very symmetrical manner, as this provides the boundary
conditions for the magnetic field that forms the lens. Imperfections in the manufacture of the pole piece can
induce severe distortions in the magnetic field symmetry, which induce distortions that will ultimately limit the
lenses' ability to reproduce the object plane. The exact dimensions of the gap, the pole piece internal diameter
and taper, and the overall design of the lens are often determined by finite element analysis of the magnetic
field, whilst considering the thermal and electrical constraints of the design.[29]


The coils which produce the magnetic field are located within the lens yoke. The coils can contain a variable
current, but typically utilize high voltages, and therefore require significant insulation in order to prevent short-
circuiting the lens components. Thermal distributors are placed to ensure the extraction of the heat generated by
the energy lost to resistance of the coil windings. The windings may be water-cooled, using a chilled water
supply in order to facilitate the removal of the high thermal duty.

Apertures

Apertures are annular metallic plates that exclude electrons that are further than a fixed distance from the
optic axis. These consist of a small metallic disc that is sufficiently thick to prevent electrons from passing
through the disc, whilst permitting axial electrons. Passing only the central electrons in a TEM causes two
effects simultaneously: firstly, apertures decrease the beam intensity as electrons are filtered from the beam,
which may be desired in the case of beam-sensitive samples. Secondly, this filtering removes electrons that are
scattered to high angles, which may be due to unwanted processes such as spherical or chromatic aberration,
or due to diffraction from interaction within the sample.[30]

Apertures are either a fixed aperture within the column, such as at the condenser lens, or are a movable
aperture, which can be inserted or withdrawn from the beam path, or moved in the plane perpendicular to the
beam path. Aperture assemblies are mechanical devices which allow for the selection of different aperture
sizes, which may be used by the operator to trade off intensity and the filtering effect of the aperture. Aperture
assemblies are often equipped with micrometers to move the aperture, required during optical calibration.

Imaging methods
Imaging methods in TEM utilize the information contained in the electron waves exiting from the sample to
form an image. The projector lenses allow for the correct positioning of this electron wave distribution onto the
viewing system. The observed intensity, I, of the image, assuming a sufficiently high quality imaging device,
can be approximated as proportional to the time-averaged squared modulus of the electron wavefunction,
I ∝ ⟨ΨΨ*⟩, where Ψ denotes the wave that forms the exit beam.[31]

Different imaging methods therefore attempt to modify the electron waves exiting the sample in a way that
provides information about the sample, or the beam itself. From the previous equation, it can be deduced that
the observed image depends not only on the amplitude of beam, but also on the phase of the electrons, although
phase effects may often be ignored at lower magnifications. Higher resolution imaging requires thinner samples
and higher energies of incident electrons, which means that the sample can no longer be considered to be
absorbing electrons (i.e., via a Beer's law effect). Instead, the sample can be modeled as an object that does not
change the amplitude of the incoming electron wave function, but instead modifies the phase of the incoming
wave; in this model, the sample is known as a pure phase object. For sufficiently thin specimens, phase effects
dominate the image, complicating analysis of the observed intensities.[31] To improve the contrast in the image,
the TEM may be operated at a slight defocus to enhance contrast, owing to convolution by the contrast transfer
function of the TEM,[32] which would normally decrease contrast if the sample was not a weak phase object.

Contrast formation

Contrast formation in the TEM depends upon the mode of operation. These different modes may be selected to
discern different types of information about the specimen.

Bright field


The most common mode of operation for a TEM is the bright field imaging mode. In this mode the contrast
formation comes from the sample having varying thickness or density. Thicker areas of the sample and denser
areas or regions with a higher atomic number will block more electrons and appear dark in an image, while
thinner, lower density, lower atomic number regions and areas with no sample in the beam path will appear
bright. The term "bright field" refers to the bright background field where there is no sample and most of the
beam electrons reach the image. The image is assumed to be a simple two dimensional projection of the
sample's thickness and density down the optic axis, and to a first approximation may be modelled via Beer's
law,[16] more complex analyses require the modelling of the sample image to include phase information.[31]
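
As a toy illustration of the Beer's-law approximation mentioned above, the following sketch (with an assumed effective mean free path, not a value from this article) maps local specimen thickness to transmitted bright-field intensity:

```python
# Minimal sketch (assumed parameters): first-order bright-field contrast as a
# Beer's-law attenuation, I = I0 * exp(-t / lambda_mfp), where t is the local
# thickness and lambda_mfp an assumed effective electron mean free path.
import numpy as np

def bright_field_intensity(thickness_nm, mfp_nm=80.0, i0=1.0):
    """Transmitted intensity for an array of local thicknesses (nm)."""
    return i0 * np.exp(-np.asarray(thickness_nm, dtype=float) / mfp_nm)

# A toy "specimen": a wedge from 0 nm (hole) to 150 nm thick.
thickness = np.linspace(0.0, 150.0, 6)
for t, i in zip(thickness, bright_field_intensity(thickness)):
    print(f"t = {t:5.0f} nm  ->  relative intensity {i:.2f}")
```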

Diffraction contrast

Samples can exhibit diffraction contrast, whereby the electron beam undergoes Bragg scattering, which, in the
case of a crystalline sample, disperses electrons into discrete locations in the back focal plane. By the
placement of apertures in the back focal plane, i.e. the objective aperture, the desired Bragg reflections can be
selected (or excluded), thus only parts of the sample that are causing the electrons to scatter to the selected
reflections will end up projected onto the imaging apparatus.

[Figure: transmission electron micrograph of dislocations in steel, which are faults in the structure of the crystal lattice at the atomic scale]

If the reflections that are selected do not include the unscattered beam (which will appear at the focal point of
the lens), then the image will appear dark wherever no sample scattering to the selected peak is present; such a
region without a specimen will appear dark. This is known as a dark-field image.

Modern TEMs are often equipped with specimen holders that allow
the user to tilt the specimen to a range of angles in order to obtain specific diffraction conditions, and apertures
placed above the specimen allow the user to select electrons that would otherwise be diffracted in a particular
direction from entering the specimen.

Applications for this method include the identification of lattice defects in crystals. By carefully selecting the
orientation of the sample, it is possible not just to determine the position of defects but also to determine the
type of defect present. If the sample is oriented so that one particular plane is only slightly tilted away from the
strongest diffracting angle (known as the Bragg Angle), any distortion of the crystal plane that locally tilts the
plane to the Bragg angle will produce particularly strong contrast variations. However, defects that produce
only displacement of atoms that do not tilt the crystal to the Bragg angle (i. e. displacements parallel to the
crystal plane) will not produce strong contrast.[33]

Electron energy loss

Utilizing the advanced technique of EELS, for TEMs appropriately equipped, electrons can be rejected based
upon their voltage (which, due to their constant charge, corresponds to their energy), using magnetic sector based devices known
as EELS spectrometers. These devices allow for the selection of particular energy values, which can be
associated with the way the electron has interacted with the sample. For example, different elements in a
sample result in different electron energies in the beam after the sample. This normally results in chromatic
aberration – however this effect can, for example, be used to generate an image which provides information on
elemental composition, based upon the atomic transition during electron-electron interaction.[34]

EELS spectrometers can often be operated in both spectroscopic and imaging modes, allowing for isolation or
rejection of elastically scattered beams. Since, for many images, inelastic scattering includes information that
is not of interest to the investigator and reduces the observable signal of interest, EELS imaging can be used
to enhance contrast in observed images, including both bright field and diffraction, by rejecting the unwanted
components.


Phase contrast

Crystal structure can also be investigated by high-resolution transmission electron microscopy (HRTEM), also
known as phase contrast. When utilizing a Field emission source and a specimen of uniform thickness, the
images are formed due to differences in phase of electron waves, which is caused by specimen interaction.[32]
Image formation is given by the complex modulus of the incoming electron beams. As such, the image depends
not only on the number of electrons hitting the screen but also on their relative phases, making direct
interpretation of phase contrast images more complex. However, this effect can be used to an advantage, as it can be manipulated to provide
more information about the sample, such as in complex phase retrieval techniques.

Diffraction

As previously stated, by adjusting the magnetic lenses such that the back focal
plane of the lens rather than the imaging plane is placed on the imaging
apparatus a diffraction pattern can be generated. For thin crystalline samples,
this produces an image that consists of a pattern of dots in the case of a single
crystal, or a series of rings in the case of a polycrystalline or amorphous solid
material. For the single crystal case the diffraction pattern is dependent upon
the orientation of the specimen and the structure of the sample illuminated by
the electron beam. This image provides the investigator with information about
the space group symmetries in the crystal and the crystal's orientation to the
beam path. This is typically done without utilizing any information but the position at which the diffraction
spots appear and the observed image symmetries.

[Figure: crystalline diffraction pattern from a twinned grain of FCC austenitic steel]
Diffraction patterns can have a large dynamic range, and for crystalline
samples, may have intensities greater than those recordable by CCD. As such,
TEMs may still be equipped with film cartridges for the purpose of obtaining these images, as the film is a
single use detector.
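
For a sense of how spot positions encode plane spacings, here is a minimal sketch (the lattice parameter, camera length and wavelength below are assumed illustrative values, not figures from this article) predicting spot or ring radii for an FCC crystal such as austenite, using Bragg's law in the small-angle limit:

```python
# Minimal sketch (assumed values): predicted spot/ring radii for an FCC
# crystal, from d_hkl = a / sqrt(h^2 + k^2 + l^2) and the small-angle
# camera equation r * d ~ lambda * L.
import math

WAVELENGTH_PM = 2.51          # electron wavelength at 200 kV, pm (approx.)
CAMERA_LENGTH_MM = 1000.0     # assumed effective camera length
A_FCC_PM = 359.0              # assumed lattice parameter for austenite, pm

# Reflections allowed for FCC: h, k, l all even or all odd.
reflections = [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]

for h, k, l in reflections:
    d_pm = A_FCC_PM / math.sqrt(h * h + k * k + l * l)      # plane spacing
    r_mm = WAVELENGTH_PM * CAMERA_LENGTH_MM / d_pm          # radius on the detector
    print(f"({h}{k}{l}): d = {d_pm:6.1f} pm, ring radius ~ {r_mm:5.2f} mm")
```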

Analysis of diffraction patterns beyond point-position can be complex, as the image is sensitive to a number of
factors such as specimen thickness and orientation, objective lens defocus, spherical and chromatic aberration.
Although quantitative interpretation of the contrast shown in lattice images is possible, it is inherently
complicated and can require extensive computer simulation and analysis, such as electron multislice analysis.[35]

[Figure: convergent-beam Kikuchi lines from silicon, near the [100] zone axis]

More complex behaviour in the diffraction plane is also possible, with phenomena such as Kikuchi lines
arising from multiple diffraction within the crystalline lattice. In convergent beam electron diffraction (CBED),
where a non-parallel, i.e. converging, electron wavefront is produced by concentrating the electron beam into a
fine probe at the sample surface, the interaction of the convergent beam can provide information beyond
structural data, such as sample thickness.

Three-dimensional imaging

As TEM specimen holders typically allow for the rotation of a sample by a desired angle, multiple views of the
same specimen can be obtained by rotating the angle of the sample along an axis perpendicular to the beam. By
taking multiple images of a single TEM sample at differing angles, typically in 1° increments, a set of images
known as a "tilt series" can be collected. This methodology was proposed in the 1970s by Walter Hoppe. Under
purely absorption contrast conditions, this set of images can be used to construct a three-dimensional
representation of the sample.[37]


[Figure: a three-dimensional TEM image of a parapoxvirus[36]]

The reconstruction is accomplished by a two-step process: first, images are aligned to account for errors in the
positioning of a sample; such errors can occur due to vibration or mechanical drift.[38] Alignment methods use
image registration algorithms, such as autocorrelation methods, to correct these errors. Secondly, using a
reconstruction algorithm, such as filtered back projection, the aligned image slices can be transformed from a
set of two-dimensional images, Ij(x, y), to a single three-dimensional image, I′j(x, y, z). This three-dimensional
image is of particular interest when morphological information is required; further study can be undertaken
using computer algorithms, such as isosurfaces and data slicing, to analyse the data.
As TEM samples cannot typically be viewed at a full 180° rotation, the
observed images typically suffer from a "missing wedge" of data,
which when using Fourier-based back projection methods decreases the range of resolvable frequencies in the
three-dimensional reconstruction.[37] Mechanical refinements, such as multi-axis tilting (two tilt series of the
same specimen made at orthogonal directions) and conical tomography (where the specimen is first tilted to a
given fixed angle and then imaged at equal angular rotational increments through one complete rotation in the
plane of the specimen grid) can be used to limit the impact of the missing data on the observed specimen
morphology. Using focused ion beam milling, a new technique has been proposed[39] which uses pillar-shaped
specimens and a dedicated on-axis tomography holder to perform 180° rotation of the sample inside the pole
piece of the objective lens in TEM. Using such arrangements, quantitative electron tomography without the
missing wedge is possible.[40] In addition, numerical techniques exist which can improve the collected data.
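
The following minimal sketch illustrates the tilt-series idea and the effect of a missing wedge, using a generic Radon-transform and filtered-back-projection pair from scikit-image as a stand-in for real TEM projections; it is not the workflow of any particular tomography package:

```python
# Minimal sketch (assumed setup): simulate a tilt series of a 2D phantom and
# reconstruct it by filtered back projection, with and without a missing wedge.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

phantom = resize(shepp_logan_phantom(), (128, 128))   # toy 2D "specimen" slice

# A +/-60 degree tilt series in 1 degree steps leaves a 60 degree missing wedge,
# compared with a hypothetical full +/-90 degree series.
full_angles = np.arange(-90, 90, 1.0)
wedge_angles = np.arange(-60, 61, 1.0)

full_sinogram = radon(phantom, theta=full_angles, circle=True)
wedge_sinogram = radon(phantom, theta=wedge_angles, circle=True)

full_recon = iradon(full_sinogram, theta=full_angles, circle=True)
wedge_recon = iradon(wedge_sinogram, theta=wedge_angles, circle=True)

for name, recon in (("full series", full_recon), ("missing wedge", wedge_recon)):
    rmse = np.sqrt(np.mean((recon - phantom) ** 2))
    print(f"{name:14s}: RMSE vs. phantom = {rmse:.4f}")
```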

All the above-mentioned methods involve recording tilt series of a given specimen field. This inevitably results
in the summation of a high dose of reactive electrons through the sample and the accompanying destruction of
fine detail during recording. The technique of low-dose (minimal-dose) imaging is therefore regularly applied
to mitigate this effect. Low-dose imaging is performed by deflecting illumination and imaging regions
simultaneously away from the optical axis to image an adjacent region to the area to be recorded (the high-dose
region). This area is maintained centered during tilting and refocused before recording. During recording the
deflections are removed so that the area of interest is exposed to the electron beam only for the duration
required for imaging. An improvement of this technique (for objects resting on a sloping substrate film) is to
have two symmetrical off-axis regions for focusing followed by setting focus to the average of the two high-
dose focus values before recording the low-dose area of interest.

Non-tomographic variants on this method, referred to as single particle analysis, use images of multiple
(hopefully) identical objects at different orientations to produce the image data required for three-dimensional
reconstruction. If the objects do not have significant preferred orientations, this method does not suffer from the
missing data wedge (or cone) which accompanies tomographic methods, nor does it incur excessive radiation
dosage; however, it assumes that the different objects imaged can be treated as if the 3D data generated from
them arose from a single stable object.

Sample preparation
Sample preparation in TEM can be a complex procedure.[41] TEM specimens should be less than 100
nanometers thick for a conventional TEM. Unlike neutron or X-ray radiation, the electrons in the beam interact
readily with the sample, an effect that increases roughly with atomic number squared (Z²).[16] High quality
samples will have a thickness that is comparable to the mean free path of the electrons that travel through the
samples, which may be only a few tens of nanometers. Preparation of TEM specimens is specific to the
material under analysis and the type of information to be obtained from the specimen.


[Figure: a sample of cells (black) stained with osmium tetroxide and uranyl acetate, embedded in epoxy resin (amber), ready for sectioning]

Materials that have dimensions small enough to be electron transparent, such as powdered substances, small
organisms, viruses, or nanotubes, can be quickly prepared by the deposition of a dilute sample containing the
specimen onto films on support grids. Biological specimens may be embedded in resin to withstand the high
vacuum in the sample chamber and to enable cutting tissue into electron transparent thin sections. The
biological sample can be stained using either a negative staining material such as uranyl acetate for bacteria
and viruses, or, in the case of embedded sections, the specimen may be stained with heavy metals, including
osmium tetroxide. Alternatively, samples may be held at liquid nitrogen temperatures after embedding in
vitreous ice.[42] In material science and metallurgy the specimens can usually withstand the high vacuum, but
still must be prepared as a thin foil, or etched so some portion of the specimen is thin enough for the beam to
penetrate. The thickness of material that can be studied is limited by the scattering cross-section of the atoms
from which the material is composed.

Tissue sectioning

[Figure: a diamond knife blade used for cutting ultrathin sections (typically 70 to 350 nm) for transmission electron microscopy]

Biological tissue is often embedded in a resin block then thinned to less than 100 nm on an ultramicrotome.
The resin block is fractured as it passes over a glass or diamond knife edge.[43] This method is used to obtain
thin, minimally deformed samples that allow for the observation of tissue ultrastructure. Inorganic samples,
such as aluminium, may also be embedded in resins and ultrathin sectioned in this way, using either coated
glass, sapphire or larger angle diamond knives.[44] To prevent charge build-up at the sample surface when
viewing in the TEM, tissue samples need to be coated with a thin layer of conducting material, such as carbon.

Sample staining
TEM samples of biological tissues need high atomic number stains to enhance contrast. The stain absorbs the
beam electrons or scatters part of the electron beam, which otherwise is projected onto the imaging system.
Compounds of heavy metals such as osmium, lead, uranium or gold (in immunogold labelling) may be used
prior to TEM observation to selectively deposit electron dense atoms in or on the sample in desired cellular or
protein regions. This process requires an understanding of how heavy metals bind to specific biological tissues
and cellular structures.[45]

[Figure: a section of a cell of Bacillus subtilis, taken with a Tecnai T-12 TEM; the scale bar is 200 nm]

Mechanical milling

Mechanical polishing is also used to prepare samples for imaging on the TEM. Polishing needs to be done to a
high quality, to ensure constant sample thickness across the region of interest. A diamond, or cubic boron
nitride polishing compound may be used in the final stages of polishing to remove any scratches that may
cause contrast fluctuations due to varying sample thickness. Even after careful mechanical milling, additional
fine methods such as ion etching may be required to perform final stage thinning.

Chemical etching

Certain samples may be prepared by chemical etching, particularly metallic specimens. These samples are
thinned using a chemical etchant, such as an acid, to prepare the sample for TEM observation. Devices to
control the thinning process may allow the operator to control either the voltage or current passing through the
specimen, and may include systems to detect when the sample has been thinned to a sufficient level of optical
transparency.

Ion etching

Ion etching is a sputtering process that can remove very fine quantities
of material. This is used to perform a finishing polish of specimens
polished by other means. Ion etching uses an inert gas passed through
an electric field to generate a plasma stream that is directed to the
sample surface. Acceleration energies for gases such as argon are
typically a few kilovolts. The sample may be rotated to promote even
polishing of the sample surface. The sputtering rate of such methods is
on the order of tens of micrometers per hour, limiting the method to
only extremely fine polishing.

[Figure: SEM image of a thin TEM sample milled by FIB; the thin membrane shown here is suitable for TEM examination, but at ~300-nm thickness it would not be suitable for high-resolution TEM without further milling]

Ion etching by argon gas has recently been shown to be able to file down MTJ (magnetic tunnel junction) stack
structures to a specific layer which has then been atomically resolved. The TEM images, taken in plan view
rather than cross-section, reveal that the MgO layer within MTJs contains a large number of grain boundaries
that may be diminishing the properties of devices.[46]
Ion milling

More recently focused ion beam methods have been used to prepare samples. FIB is a relatively new technique
to prepare thin samples for TEM examination from larger specimens. Because FIB can be used to micro-
machine samples very precisely, it is possible to mill very thin membranes from a specific area of interest in a
sample, such as a semiconductor or metal. Unlike inert gas ion sputtering, FIB makes use of significantly more
energetic gallium ions and may alter the composition or structure of the material through gallium
implantation.[47]

Replication

Samples may also be replicated using cellulose acetate film, the film subsequently coated with a heavy metal
such as platinum, the original film dissolved away, and the replica imaged on the TEM. Variations of the replica
technique are used for both materials and biological samples. In materials science a common use is for
examining the fresh fracture surface of metal alloys.

Modifications
The capabilities of the TEM can be further extended by additional stages and detectors, sometimes incorporated
on the same microscope.

Scanning TEM

A TEM can be modified into a scanning transmission electron microscope (STEM) by the addition of a system
that rasters the beam across the sample to form the image, combined with suitable detectors. Scanning coils are
used to deflect the beam, such as by an electrostatic shift of the beam, where the beam is then collected using a
current detector such as a Faraday cup, which acts as a direct electron counter. By correlating the electron count
to the position of the scanning beam (known as the "probe"), the transmitted component of the beam may be
measured. The non-transmitted components may be obtained either by beam tilting or by the use of annular
dark field detectors.

Low-voltage electron microscope

[Figure: Staphylococcus aureus platinum replica image shot on a TEM at 50,000x magnification]

A low-voltage electron microscope (LVEM) is operated at a relatively low electron accelerating voltage,
between 5 and 25 kV. Some of these can be a combination of SEM, TEM and STEM in a single compact
instrument. Low voltage increases image contrast, which is especially important for biological specimens. This
increase in contrast significantly reduces, or even eliminates, the need to stain. Resolutions of a few nm are
possible in TEM, SEM and STEM modes. The low energy of the electron beam means that permanent magnets
can be used as lenses and thus a miniature column that does not require cooling can be used.[48][49]
Cryo-TEM

Main article: Cryo-electron microscopy

Cryogenic transmission electron microscopy (Cryo-TEM) uses a TEM with a specimen holder capable of
maintaining the specimen at liquid nitrogen or liquid helium temperatures. This allows imaging specimens
prepared in vitreous ice, the preferred preparation technique for imaging individual molecules or
macromolecular assemblies,[50] imaging of vitrified solid-electrolyte interfaces,[51] and imaging of materials
that are volatile in high vacuum at room temperature, such as sulfur.[52]

Environmental/In-situ TEM

In-situ experiments may also be conducted in TEM using differentially pumped sample chambers, or
specialized holders.[53] Types of in-situ experiments include studying chemical reactions in liquid cells,[54] and
material deformation testing.[55]

Aberration Corrected TEM

Modern research TEMs may include aberration correctors,[21] to reduce the amount of distortion in the image.
Incident beam monochromators may also be used which reduce the energy spread of the incident electron beam
to less than 0.15 eV.[21] Major aberration corrected TEM manufacturers include JEOL, Hitachi High-
technologies, FEI Company, and NION.

Limitations
There are a number of drawbacks to the TEM technique. Many materials require extensive sample preparation
to produce a sample thin enough to be electron transparent, which makes TEM analysis a relatively time
consuming process with a low throughput of samples. The structure of the sample may also be changed during
the preparation process. Also the field of view is relatively small, raising the possibility that the region analyzed
may not be characteristic of the whole sample. There is potential that the sample may be damaged by the
electron beam, particularly in the case of biological materials.

Resolution limits


The limit of resolution obtainable in a TEM may be described in several ways, and is typically referred to as
the information limit of the microscope. One commonly used value is a cut-off value of the contrast transfer
function, a function that is usually quoted in the frequency domain to define the reproduction of spatial
frequencies of objects in the object plane by the microscope optics. A cut-off frequency, qmax, for the transfer
function may be approximated with the following equation, where Cs is the spherical aberration coefficient and
λ is the electron wavelength:[30]

    qmax = (6 / (Cs λ³))^(1/4)

[Figure: evolution of spatial resolution achieved with optical, transmission (TEM) and aberration-corrected electron microscopes (ACTEM)[56]]

For a 200 kV microscope, with partly corrected spherical aberrations ("to the third order") and a Cs value of
1 µm,[57] a theoretical cut-off value might be 1/qmax = 42 pm.[30] The same microscope without a corrector
would have Cs = 0.5 mm and thus a 200-pm cut-off.[57] The spherical aberrations are suppressed to the third
or fifth order in the "aberration-corrected" microscopes. Their resolution is however limited by electron source
geometry and brightness and chromatic aberrations in the objective lens system.[21][58]
objective lens system.[21][58]

The frequency domain representation of the contrast transfer function may often have an oscillatory nature,[59]
which can be tuned by adjusting the focal value of the objective lens. This oscillatory nature implies that some
spatial frequencies are faithfully imaged by the microscope, whilst others are suppressed. By combining
multiple images with different spatial frequencies, the use of techniques such as focal series reconstruction can
be used to improve the resolution of the TEM in a limited manner.[30] The contrast transfer function can, to
some extent, be experimentally approximated through techniques such as Fourier transforming images of
amorphous material, such as amorphous carbon.

More recently, advances in aberration corrector design have been able to reduce spherical aberrations[60] and to
achieve resolution below 0.5 Ångströms (50 pm)[58] at magnifications above 50 million times.[61] Improved
resolution allows for the imaging of lighter atoms that scatter electrons less efficiently, such as lithium atoms in
lithium battery materials.[62] The ability to determine the position of atoms within materials has made the
HRTEM an indispensable tool for nanotechnology research and development in many fields, including
heterogeneous catalysis and the development of semiconductor devices for electronics and photonics.[63]

See also
Applications for electron microscopy
Cryo-electron microscopy
Electron beam induced deposition
Electron diffraction
Electron energy loss spectroscopy (EELS)
Electron microscope
Energy filtered transmission electron microscopy (EFTEM)
High-resolution transmission electron microscopy (HRTEM)
Low-voltage electron microscopy (LVEM)
Precession Electron Diffraction
Scanning confocal electron microscopy
Scanning electron microscope (SEM)
Scanning transmission electron microscope (STEM)
Transmission Electron Aberration-corrected Microscope
References

1. "Viruses" (http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/V/Viruses.html). users.rcn.com.


2. "The Nobel Prize in Physics 1986, Perspectives – Life through a Lens" (http://nobelprize.org/nobel_prizes/physics/la
ureates/1986/perspectives.html). nobelprize.org.
3. ultraviolet microscope. (2010). In Encyclopædia Britannica. Retrieved November 20, 2010, from Encyclopædia
Britannica Online (http://www.britannica.com/EBchecked/topic/613520/ultraviolet-microscope)
4. Ernst Ruska; translation by T Mulvey. The Early Development of Electron Lenses and Electron Microscopy. ISBN 3-
7776-0364-3.
5. Plücker, J. (1858). "Über die Einwirkung des Magneten auf die elektrischen Entladungen in verdünnten Gasen" (http
s://books.google.com/books?id=j2UEAAAAYAAJ&pg=PA88) [On the effect of a magnet on the electric discharge in
rarified gases]. Poggendorffs Annalen der Physik und Chemie. 103: 88–106. Bibcode:1858AnP...179...88P (http://ads
abs.harvard.edu/abs/1858AnP...179...88P). doi:10.1002/andp.18581790106 (https://doi.org/10.1002%2Fandp.185817
90106).
6. "Ferdinand Braun, The Nobel Prize in Physics 1909, Biography" (http://nobelprize.org/nobel_prizes/physics/laureate
s/1909/braun-bio.html). nobelprize.org.
7. Rudenberg, Reinhold (May 30, 1931). "Configuration for the enlarged imaging of objects by electron beams" (http://
v3.espacenet.com/searchResults?locale=en_GB&PN=DE906737&compact=false&DB=EPODOC). Patent
DE906737.
8. Broglie, L. (1928). "La nouvelle dynamique des quanta". Électrons et Photons: Rapports et Discussions du
Cinquième Conseil de Physique. Solvay.
9. "A Brief History of the Microscopy Society of America" (http://www.microscopy.org/about/history.cfm).
microscopy.org.
10. "Dr. James Hillier, Biography" (http://comdir.bfree.on.ca/hillier/hilbio.htm). comdir.bfree.on.ca.
11. Hawkes, P. (Ed.) (1985). The beginnings of Electron Microscopy. Academic Press. ISBN 0120145782.
12. "Ernst Ruska, Nobel Prize Lecture" (http://nobelprize.org/nobel_prizes/physics/laureates/1986/ruska-lecture.html).
nobelprize.org.
13. Crewe, Albert V; Isaacson, M. and Johnson, D.; Johnson, D. (1969). "A Simple Scanning Electron Microscope". Rev.
Sci. Inst. 40 (2): 241–246. Bibcode:1969RScI...40..241C (http://adsabs.harvard.edu/abs/1969RScI...40..241C).
doi:10.1063/1.1683910 (https://doi.org/10.1063%2F1.1683910).
14. Crewe, Albert V; Wall, J. and Langmore, J., J; Langmore, J (1970). "Visibility of a single atom". Science. 168
(3937): 1338–1340. Bibcode:1970Sci...168.1338C (http://adsabs.harvard.edu/abs/1970Sci...168.1338C).
PMID 17731040 (https://www.ncbi.nlm.nih.gov/pubmed/17731040). doi:10.1126/science.168.3937.1338 (https://doi.
org/10.1126%2Fscience.168.3937.1338).
15. Meyer, Jannik C.; Girit, C. O.; Crommie, M. F.; Zettl, A. (2008). "Imaging and dynamics of light atoms and
molecules on graphene" (http://physics.berkeley.edu/research/zettl/pdf/350.Nature.454-Meyer.pdf) (PDF). Nature.
Berkeley. 454 (7202): 319–22. Bibcode:2008Natur.454..319M (http://adsabs.harvard.edu/abs/2008Natur.454..319M).
PMID 18633414 (https://www.ncbi.nlm.nih.gov/pubmed/18633414). arXiv:0805.3857 (https://arxiv.org/abs/0805.38
57)  . doi:10.1038/nature07094 (https://doi.org/10.1038%2Fnature07094). Retrieved 3 June 2012.
16. Fultz, B & Howe, J (2007). Transmission Electron Microscopy and Diffractometry of Materials. Springer. ISBN 3-
540-73885-1.
17. Murphy, Douglas B. (2002). Fundamentals of Light Microscopy and Electronic Imaging. New York: John Wiley &
Sons. ISBN 9780471234296.
18. Champness, P. E. (2001). Electron Diffraction in the Transmission Electron Microscope. Garland Science.
ISBN 978-1859961476.
19. Hubbard, A (1995). The Handbook of surface imaging and visualization. CRC Press. ISBN 0-8493-8911-9.
20. Egerton, R (2005). Physical principles of electron microscopy. Springer. ISBN 0-387-25800-0.
21. Rose, H H (2008). "Optics of high-performance electron Microscopes". Science and Technology of Advanced
Materials. 9: 014107. Bibcode:2008STAdM...9a4107R (http://adsabs.harvard.edu/abs/2008STAdM...9a4107R).
doi:10.1088/0031-8949/9/1/014107 (https://doi.org/10.1088%2F0031-8949%2F9%2F1%2F014107).
22. "The objective lens of a TEM, the heart of the electron microscope" (http://www.rodenburg.org/guide/t700.html).
rodenburg.org.
23. Williams, D & Carter, C. B. (1996). Transmission Electron Microscopy. 1 – Basics. Plenum Press. ISBN 0-306-
45324-X.
24. Rodenburg, J M. "The Vacuum System" (http://www.rodenburg.org/guide/t1400.html). rodenburg.org.
25. Ross, L. E, Dykstra, M (2003). Biological Electron Microscopy: Theory, techniques and troubleshooting. Springer.
ISBN 0306477491.
26. Chapman, S. K. (1986). Maintaining and Monitoring the Transmission Electron Microscope. Royal Microscopical
Society Microscopy Handbooks. 08. Oxford University Press. ISBN 0-19-856407-4.
27. Pulokas, James; Green, Carmen; Kisseberth, Nick; Potter, Clinton S.; Carragher, Bridget (1999). "Improving the
Positional Accuracy of the Goniometer on the Philips CM Series TEM". Journal of Structural Biology. 128 (3): 250–
256. PMID 10633064 (https://www.ncbi.nlm.nih.gov/pubmed/10633064). doi:10.1006/jsbi.1999.4181 (https://doi.or
g/10.1006%2Fjsbi.1999.4181).

28. Buckingham, J (1965). "Thermionic emission properties of a lanthanum hexaboride/rhenium cathode". British
Journal of Applied Physics. 16 (12): 1821. Bibcode:1965BJAP...16.1821B (http://adsabs.harvard.edu/abs/1965BJA
P...16.1821B). doi:10.1088/0508-3443/16/12/306 (https://doi.org/10.1088%2F0508-3443%2F16%2F12%2F306).
29. Orloff, J, ed. (1997). Handbook of Electron Optics. CRC-press. ISBN 0-8493-2513-7.
30. Reimer,L and Kohl, H (2008). Transmission Electron Microscopy: Physics of Image Formation. Springer. ISBN 0-
387-34758-5.
31. Cowley, J. M (1995). Diffraction physics. Elsevier Science B. V. ISBN 0-444-82218-6.
32. Kirkland, E (1998). Advanced computing in Electron Microscopy. Springer. ISBN 0-306-45936-1.
33. Hull, D. & Bacon, J (2001). Introduction to dislocations (4th ed.). Butterworth-Heinemann. ISBN 0-7506-4681-0.
34. Egerton, R. F. (1996). Electron Energy-loss Spectroscopy in the Electron Microscope. Springer. ISBN 978-0-306-
45223-9.
35. Cowley, J. M.; Moodie, A. F. (1957). "The Scattering of Electrons by Atoms and Crystals. I. A New Theoretical
Approach". Acta Crystallographica. 199 (3): 609–619. doi:10.1107/S0365110X57002194 (https://doi.org/10.1107%
2FS0365110X57002194).
36. Mast, Jan; Demeestere, Lien (2009). "Electron tomography of negatively stained complex viruses: application in
their diagnosis" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2649040). Diagnostic Pathology. 4: 5.
PMC 2649040 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2649040)  . PMID 19208223 (https://www.ncbi.nl
m.nih.gov/pubmed/19208223). doi:10.1186/1746-1596-4-5 (https://doi.org/10.1186%2F1746-1596-4-5).
37. Frank, J, ed. (2006). Electron tomography: methods for three-dimensional visualization of structures in the cell.
Springer. ISBN 978-0-387-31234-7.
38. B. D.A. Levin; et al. (2016). "Nanomaterial datasets to advance tomography in scanning transmission electron
microscopy" (http://www.nature.com/articles/sdata201641). Scientific Data. 3: 160041. doi:10.1038/sdata.2016.41 (h
ttps://doi.org/10.1038%2Fsdata.2016.41).
39. Kawase, Noboru; Kato, Mitsuro; Jinnai, Hiroshi; Jinnai, H (2007). "Transmission electron microtomography without
the 'missing wedge' for quantitative structural analysis". Ultramicroscopy. 107 (1): 8–15. PMID 16730409 (https://w
ww.ncbi.nlm.nih.gov/pubmed/16730409). doi:10.1016/j.ultramic.2006.04.007 (https://doi.org/10.1016%2Fj.ultramic.
2006.04.007).
40. Heidari, Hamed; Van den Broek, Wouter; Bals, Sara (2013). "Quantitative electron tomography: The effect of the
three-dimensional point spread function". Ultramicroscopy. 135: 1–5. PMID 23872036 (https://www.ncbi.nlm.nih.go
v/pubmed/23872036). doi:10.1016/j.ultramic.2013.06.005 (https://doi.org/10.1016%2Fj.ultramic.2013.06.005).
41. Cheville, NF; Stasko J (2014). "Techniques in Electron Microscopy of Animal Tissue" (http://vet.sagepub.com/conte
nt/51/1/28.abstract). Veterinary Pathology. 51 (1): 28–41. PMID 24114311 (https://www.ncbi.nlm.nih.gov/pubmed/2
4114311). doi:10.1177/0300985813505114 (https://doi.org/10.1177%2F0300985813505114).
42. Amzallag, Arnaud; Vaillant, Cédric; Jacob, Mathews; Unser, Michael; Bednar, Jan; Kahn, Jason D.; Dubochet,
Jacques; Stasiak, Andrzej; Maddocks, John H. (2006). "3D reconstruction and comparison of shapes of DNA
minicircles observed by cryo-electron microscopy" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1635295).
Nucleic Acids Research. 34 (18): e125. PMC 1635295 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1635295)  .
PMID 17012274 (https://www.ncbi.nlm.nih.gov/pubmed/17012274). doi:10.1093/nar/gkl675 (https://doi.org/10.109
3%2Fnar%2Fgkl675).
43. Porter, K & Blum, J (1953). "A study in Microtomy for Electron Microscopy". The Anatomical Record. 117 (4):
685–710. PMID 13124776 (https://www.ncbi.nlm.nih.gov/pubmed/13124776). doi:10.1002/ar.1091170403 (https://d
oi.org/10.1002%2Far.1091170403).
44. Phillips (1961). "Diamond knife ultra microtomy of metals and the structure of microtomed sections". British
Journal of Applied Physics. 12 (10): 554. Bibcode:1961BJAP...12..554P (http://adsabs.harvard.edu/abs/1961BJAP...1
2..554P). doi:10.1088/0508-3443/12/10/308 (https://doi.org/10.1088%2F0508-3443%2F12%2F10%2F308).
45. Alberts, Bruce (2008). Molecular biology of the cell (5th ed.). New York: Garland Science. ISBN 0815341113.
46. Bean, J. J., Saito, M., Fukami, S., Sato, H., Ikeda, S., Ohno, H., … Mckenna, K. P. (2017). Atomic structure and
electronic properties of MgO grain boundaries in tunnelling magnetoresistive devices. Nature Publishing Group.
https://doi.org/10.1038/srep45594
47. Baram, M. & Kaplan W. D. (2008). "Quantitative HRTEM analysis of FIB prepared specimens". Journal of
Microscopy. 232 (3): 395–05. PMID 19094016 (https://www.ncbi.nlm.nih.gov/pubmed/19094016).
doi:10.1111/j.1365-2818.2008.02134.x (https://doi.org/10.1111%2Fj.1365-2818.2008.02134.x).
48. Nebesářová1, Jana; Vancová, Marie (2007). "How to Observe Small Biological Objects in Low-Voltage Electron
Microscope" (http://journals.cambridge.org/action/displayFulltext?type=1&fid=1330184&jid=MAM&volumeId=13
&issueId=S03&aid=1330180). Microscopy and Microanalysis. 13 (3): 248–249. Retrieved 8 August 2016.
49. Drummy, Lawrence, F.; Yang, Junyan; Martin, David C. (2004). "Low-voltage electron microscopy of polymer and
organic molecular thin films". Ultramicroscopy. 99 (4): 247–256. PMID 15149719 (https://www.ncbi.nlm.nih.gov/pu
bmed/15149719). doi:10.1016/j.ultramic.2004.01.011 (https://doi.org/10.1016%2Fj.ultramic.2004.01.011).


50. Li, Z; Baker, ML; Jiang, W; Estes, MK; Prasad, BV (2009). "Rotavirus Architecture at Subnanometer Resolution" (h
ttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC2643745). Journal of Virology. 83 (4): 1754–1766. PMC 2643745 (ht
tps://www.ncbi.nlm.nih.gov/pmc/articles/PMC2643745)  . PMID 19036817 (https://www.ncbi.nlm.nih.gov/pubmed/
19036817). doi:10.1128/JVI.01855-08 (https://doi.org/10.1128%2FJVI.01855-08).
51. M.J. Zachman; et al. (2016). "Site-Specific Preparation of Intact Solid–Liquid Interfaces by Label-Free In Situ
Localization and Cryo-Focused Ion Beam Lift-Out" (https://www.cambridge.org/core/journals/microscopy-and-micr
oanalysis/article/div-classtitlesite-specific-preparation-of-intact-solidliquid-interfaces-by-label-free-span-classitalicin

External links
The National Center for Electron Microscopy, Berkeley, California, USA (http://ncem.lbl.gov)
The National Center for Macromolecular Imaging, Houston, Texas, USA (http://ncmi.bcm.tmc.edu)
The National Resource for Automated Molecular Microscopy, New York, USA (http://nramm.scripps.edu/)
Tutorial courses in Transmission Electron Microscopy (http://www.rodenburg.org/)
Cambridge University Teaching and Learning Package on TEM (http://www.msm.cam.ac.uk/doitpoms/tlplib/tem/index.php)
Online course on Transmission Electron Microscopy and Crystalline Imperfections (http://nanohub.org/resources/4092), Dr. Eric Stach (2008).
Transmission electron microscope simulator (http://tem-simulator.goldzoneweb.info/) (Teaching tool).
Animations and explanations on various types of microscopes, including electron microscopes (http://toutestquantique.fr/en/microscopy/) (Université Paris Sud)


Phase-contrast microscopy
From Wikipedia, the free encyclopedia

Phase-contrast microscopy is an optical-microscopy technique that converts phase shifts in light passing through a transparent specimen to brightness changes in the image. Phase shifts themselves are invisible, but become visible when shown as brightness variations.

When light waves travel through a medium other than vacuum, interaction with the medium causes the wave amplitude and phase to change in a manner dependent on properties of the medium. Changes in amplitude (brightness) arise from the scattering and absorption of light, which is often wavelength-dependent and may give rise to colors. Photographic equipment and the human eye are only sensitive to amplitude variations. Without special arrangements, phase changes are therefore invisible. Yet phase changes often carry important information.

Phase-contrast microscopy is particularly important in biology. It reveals many cellular structures that are not visible with a simpler bright-field microscope, as exemplified in the figure. These structures were made visible to earlier microscopists by staining, but this required additional preparation and killed the cells. The phase-contrast microscope made it possible for biologists to study living cells and how they proliferate through cell division.[1] After its invention in the early 1930s,[2] phase-contrast microscopy proved to be such an advance that its inventor, Frits Zernike, was awarded the Nobel Prize in Physics in 1953.[3]

A phase-contrast microscope. Uses: microscopic observation of unstained biological material. Inventor: Frits Zernike. Manufacturer: Zeiss, Nikon, Olympus and others. Related items: differential interference contrast microscopy, Hoffman modulation-contrast microscopy, quantitative phase-contrast microscopy.

The same cells imaged with traditional bright-field microscopy (left) and with phase-contrast microscopy (right).

Working principle
The basic principle to making phase changes visible in phase-contrast
microscopy is to separate the illuminating (background) light from the
specimen-scattered light (which makes up the foreground details) and to
manipulate these differently.

Dark field and phase contrast microscopies operating principle.

The ring-shaped illuminating light (green) that passes the condenser annulus is focused on the specimen by the condenser. Some of the illuminating light is scattered by the specimen (yellow). The remaining light is unaffected by the specimen and forms the background light (red). When observing an unstained biological specimen, the scattered light is weak and typically phase-shifted by −90° (due to both the
typical thickness of specimens and the refractive index difference between biological tissue and the surrounding
medium) relative to the background light. This leads to the foreground (blue vector) and background (red
vector) having nearly the same intensity, resulting in low image contrast.

In a phase-contrast microscope, image contrast is increased in two ways: by generating constructive


interference between scattered and background light rays in regions of the field of view that contain the
specimen, and by reducing the amount of background light that reaches the image plane. First, the background
light is phase-shifted by −90° by passing it through a phase-shift ring, which eliminates the phase difference
between the background and the scattered light rays.

When the light is then focused on


the image plane (where a camera or
eyepiece is placed), this phase shift
causes background and scattered
light rays originating from regions
of the field of view that contain the
sample (i.e., the foreground) to
constructively interfere, resulting in
an increase in the brightness of
these areas compared to regions that
do not contain the sample. Finally,
the background is dimmed ~70-90%
by a gray filter ring—this method
maximizes the amount of scattered
light generated by the illumination
(i.e., background) light, while
minimizing the amount of
illumination light that reaches the image plane. Some of the scattered light (which illuminates the entire surface
of the filter) will be phase-shifted and dimmed by the rings, but to a much lesser extent than the background
light (which only illuminates the phase-shift and gray filter rings).

The above describes negative phase contrast. In its positive form, the background light is instead phase-shifted
by +90°. The background light will thus be 180° out of phase relative to the scattered light. The scattered light
will then be subtracted from the background light to form an image with a darker foreground and a lighter
background, as shown in the first figure.[4][5][6][7][8][9]
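The interference picture above can be illustrated with a small numerical sketch. The complex amplitudes below are illustrative assumptions (a unit-amplitude background, a weak scattered wave at −90°, and a phase ring that shifts the background by −90° while dimming it by about 75% in intensity); they are not values taken from any particular instrument.

import numpy as np

# Illustrative phasor model of (negative) phase contrast; amplitudes are made up.
background = 1.0 + 0j                      # unscattered illumination
scattered = 0.2 * np.exp(-1j * np.pi / 2)  # weak specimen-scattered light at -90 deg

def intensity(field):
    """Detected intensity is the squared magnitude of the summed fields."""
    return abs(field) ** 2

# Ordinary bright field: foreground and background are nearly equal.
bf_foreground = intensity(background + scattered)
bf_background = intensity(background)

# Negative phase contrast: the ring shifts the background by -90 deg and dims it
# (here ~75% in intensity, i.e. an amplitude factor of 0.5).
ring = 0.5 * np.exp(-1j * np.pi / 2)
pc_foreground = intensity(background * ring + scattered)
pc_background = intensity(background * ring)

for name, fg, bg in [("bright field", bf_foreground, bf_background),
                     ("negative phase contrast", pc_foreground, pc_background)]:
    print(f"{name}: foreground {fg:.3f}, background {bg:.3f}, "
          f"contrast {(fg - bg) / bg:.0%}")

For these assumed numbers the bright-field contrast is only a few percent, while in the phase-contrast case the foreground becomes roughly twice as bright as the background, which is the qualitative behaviour described above.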

Related methods
The success of the phase-contrast microscope has led to a number of subsequent
phase-imaging methods. In 1952 Georges Nomarski patented what is today known
as differential interference contrast (DIC) microscopy.[10] It enhances contrast
by creating artificial shadows, as if the object is illuminated from the side. But DIC
microscopy and phase contrast microscopy both use polarized light, which is
unsuitable when the object or its container alter polarization. With the growing use
of polarizing plastic containers in cell biology, DIC microscopy is increasingly
replaced by Hoffman modulation contrast microscopy, invented by Robert
Cultured cells imaged by Hoffman in 1975.[11]
DIC microscopy
Traditional phase-contrast methods enhance contrast optically, blending brightness
and phase information in single image. Since the introduction of the digital camera
in the mid-1990s, several new digital phase-imaging methods have been developed, collectively known as
quantitative phase-contrast microscopy. These methods digitally create two separate images, an ordinary
bright-field image and a so-called phase-shift image. In each image point, the phase-shift image displays the
quantified phase shift induced by the object, which is proportional to the optical thickness of the object.[12][13]
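Because the measured phase shift is proportional to the optical thickness, a physical thickness (or, with known thickness, a refractive-index difference) can be estimated from it. The sketch below uses the standard relation Δφ = 2π(n_object − n_medium)·t/λ; the wavelength, refractive indices and phase value are hypothetical.

import math

def thickness_from_phase(delta_phi_rad, wavelength_nm, n_object, n_medium):
    """Convert a measured phase shift into a thickness using
    delta_phi = 2*pi*(n_object - n_medium)*t / wavelength."""
    return delta_phi_rad * wavelength_nm / (2 * math.pi * (n_object - n_medium))

# Hypothetical cell in a water-like medium imaged with green light.
t_nm = thickness_from_phase(delta_phi_rad=1.5, wavelength_nm=550,
                            n_object=1.38, n_medium=1.335)
print(f"estimated thickness: {t_nm:.0f} nm")  # roughly 2900 nm for these numbers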
https://en.wikipedia.org/wiki/Phase-contrast_microscopy 2/4
7/24/2017 Phase-contrast microscopy - Wikipedia

See also
Live cell imaging
Phase-contrast imaging
Phase-contrast X-ray imaging

References
1. "The phase contrast microscope" (https://www.nobelprize.org/educationa
l/physics/microscopes/phase). Nobel Media AB. A quantitative phase-contrast
2. Zernike, F. (1955). "How I Discovered Phase Contrast". Science. 121 microscopy image of cells in culture.
(3141): 345–349. Bibcode:1955Sci...121..345Z (http://adsabs.harvard.ed The height and color of an image
u/abs/1955Sci...121..345Z). PMID 13237991 (https://www.ncbi.nlm.nih.
point correspond to the optical
gov/pubmed/13237991). doi:10.1126/science.121.3141.345 (https://doi.o
rg/10.1126%2Fscience.121.3141.345). thickness, which only depends on the
3. "The Nobel Prize in Physics 1953" (https://www.nobelprize.org/nobel_pr object's thickness and the relative
izes/physics/laureates/1953). Nobel Media AB. refractive index. The volume of an
4. "Phase contrast microscopy" (http://www.phiab.se/technology/phase-cont object can thus be determined when
rast-microscopy). Phase Holographic Imaging AB. the difference in refractive index
5. "Introduction to Phase Contrast Microscopy" (http://www.microscopyu.c between the object and the
om/articles/phasecontrast/phasemicroscopy.html). Nikon MicroscopyU. surrounding media is known.
6. "Phase Contrast" (http://www.leica-microsystems.com/science-lab/phase
-contrast-making-unstained-phase-objects-visible). Leica Science Lab.
7. Frits Zernike (1942). "Phase contrast, a new method for the microscopic
observation of transparent objects part I". Physica. 9 (7): 686–698.
Bibcode:1942Phy.....9..686Z (http://adsabs.harvard.edu/abs/1942Phy.....
9..686Z). doi:10.1016/S0031-8914(42)80035-X (https://doi.org/10.101
6%2FS0031-8914%2842%2980035-X).
8. Frits Zernike (1942). "Phase contrast, a new method for the microscopic
observation of transparent objects part II". Physica. 9 (10): 974–980.
Bibcode:1942Phy.....9..974Z (http://adsabs.harvard.edu/abs/1942Phy.....
9..974Z). doi:10.1016/S0031-8914(42)80079-8 (https://doi.org/10.101
6%2FS0031-8914%2842%2980079-8).
9. Oscar Richards (1956). "Phase Microscopy 1954-56". Science. 124
(3226): 810–814. Bibcode:1956Sci...124..810R (http://adsabs.harvard.ed
u/abs/1956Sci...124..810R). doi:10.1126/science.124.3226.810 (https://d
oi.org/10.1126%2Fscience.124.3226.810).
10. US2924142 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&I
DX=US2924142), Georges Nomarski, "INTERFERENTIAL
POLARIZING DEVICE FOR STUDY OF PHASE OBJECTS"
11. US4200354 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&I
DX=US4200354), Robert Hoffman, "Microscopy systems with
rectangular illumination particularly adapted for viewing transparent
object"
12. Kemmler, M.; Fratz, M.; Giel, D.; Saum, N.; Brandenburg, A.;
Hoffmann, C. (2007). "Noninvasive time-dependent cytometry
monitoring by digital holography". Journal of Biomedical Optics. 12 (6):
064002. Bibcode:2007JBO....12f4002K (http://adsabs.harvard.edu/abs/2
007JBO....12f4002K). PMID 18163818 (https://www.ncbi.nlm.nih.gov/p
ubmed/18163818). doi:10.1117/1.2804926 (https://doi.org/10.1117%2F1.
2804926).
13. "Quantitative phase contrast microscopy" (http://www.phiab.se/technolo
gy/quantitative-phase-contrast-microscopy). Phase Holographic Imaging
AB.

External links

Optical Microscopy Primer — Phase Contrast Microscopy (http://micro.magnet.fsu.edu/primer/techniques/phasecontrast/phaseindex.html) by Florida State University
Phase contrast and dark field microscopes (http://toutestquantique.fr/en/dark-field-and-phase-contrast/) (Université Paris Sud)

Fluorescence microscope
From Wikipedia, the free encyclopedia

A fluorescence microscope is an optical microscope that uses


fluorescence and phosphorescence instead of, or in addition to,
reflection and absorption to study properties of organic or inorganic
substances.[1][2] The "fluorescence microscope" refers to any
microscope that uses fluorescence to generate an image, whether it is a simpler setup like an epifluorescence microscope or a more complicated design such as a confocal microscope, which uses optical sectioning to achieve better resolution of the fluorescence image.

On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric


Betzig, William Moerner and Stefan Hell for "the development of
super-resolved fluorescence microscopy," which brings "optical
microscopy into the nanodimension".[3][4]

An upright fluorescence microscope (Olympus BX61) with the fluorescent filter cube turret above the objective lenses, coupled with a digital camera.

Contents

1 Principle
1.1 Epifluorescence microscopy
2 Light sources
3 Sample preparation
3.1 Biological fluorescent stains
3.2 Immunofluorescence
3.3 Fluorescent proteins
4 Limitations
5 Sub-diffraction techniques
6 Fluorescence micrograph gallery
7 See also
8 References
9 External links

Fluorescent and confocal microscopes operating principle
Principle
The specimen is illuminated with light of a specific wavelength (or wavelengths) which is absorbed by the
fluorophores, causing them to emit light of longer wavelengths (i.e., of a different color than the absorbed
light). The illumination light is separated from the much weaker emitted fluorescence through the use of a
spectral emission filter. Typical components of a fluorescence microscope are a light source (xenon arc lamp or
mercury-vapor lamp are common; more advanced forms are high-power LEDs and lasers), the excitation filter,
the dichroic mirror (or dichroic beamsplitter), and the emission filter (see figure below). The filters and the
dichroic beamsplitter are chosen to match the spectral excitation and emission characteristics of the fluorophore
used to label the specimen.[1] In this manner, the distribution of a single fluorophore (color) is imaged at a time.
Multi-color images of several types of fluorophores must be composed by combining several single-color
images.[1]
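As a rough illustration of how a filter cube is matched to a fluorophore, the sketch below checks that an excitation band, a dichroic cutoff and an emission band are mutually consistent. The band edges and the GFP-like excitation/emission peaks are approximate, illustrative values, not a recommendation for any specific filter set.

def check_filter_set(excitation_band, dichroic_cutoff, emission_band,
                     fluorophore_ex_peak, fluorophore_em_peak):
    """Rough sanity check that a filter cube matches a fluorophore: the
    excitation band should contain the excitation peak and lie below the
    dichroic cutoff, the emission band should contain the emission peak and
    lie above the cutoff, and the two bands must not overlap (all in nm)."""
    ex_lo, ex_hi = excitation_band
    em_lo, em_hi = emission_band
    return (ex_lo <= fluorophore_ex_peak <= ex_hi
            and em_lo <= fluorophore_em_peak <= em_hi
            and ex_hi < dichroic_cutoff < em_lo)

# Example with a GFP-like fluorophore (excitation ~488 nm, emission ~509 nm)
# and an invented "blue excitation / green emission" cube.
print(check_filter_set(excitation_band=(450, 490), dichroic_cutoff=495,
                       emission_band=(500, 550),
                       fluorophore_ex_peak=488, fluorophore_em_peak=509))  # True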

Most fluorescence microscopes in use are epifluorescence microscopes, where excitation of the fluorophore
and detection of the fluorescence are done through the same light path (i.e. through the objective). These
microscopes are widely used in biology and are the basis for more advanced microscope designs, such as the
confocal microscope and the total internal reflection fluorescence microscope (TIRF).

Epifluorescence microscopy

The majority of fluorescence microscopes, especially those used in the


life sciences, are of the epifluorescence design shown in the diagram.
Light of the excitation wavelength is focused on the specimen through
the objective lens. The fluorescence emitted by the specimen is focused to the detector by the same objective that is used for the excitation; for greater resolution this requires an objective lens with a higher numerical aperture. Since most of the excitation light is transmitted through the specimen, only reflected excitatory light reaches the objective together with the emitted light, and the epifluorescence method therefore gives a high signal-to-noise ratio. The dichroic beamsplitter acts as a wavelength-specific filter, transmitting fluoresced light through to the eyepiece or detector, but reflecting any remaining excitation light back towards the source.

Schematic of a fluorescence microscope.
Light sources
Fluorescence microscopy requires intense, near-monochromatic illumination, which some widespread light sources, such as halogen lamps, cannot provide. Four main types of light source are used: xenon arc lamps or mercury-vapor lamps with an excitation filter, lasers, supercontinuum sources, and high-power LEDs. Lasers are most widely used for more complex fluorescence microscopy techniques like confocal microscopy and total internal reflection fluorescence microscopy, while xenon lamps, mercury lamps, and LEDs with a dichroic excitation filter are commonly used for widefield epifluorescence microscopes. By placing two microlens arrays into the illumination path of a widefield epifluorescence microscope,[5] highly uniform illumination with a coefficient of variation of 1–2% can be achieved.
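The coefficient of variation quoted above is simply the standard deviation of the illumination intensity divided by its mean across the field of view. A minimal sketch, using a synthetic intensity profile in place of real camera data:

import numpy as np

# Synthetic, roughly uniform illumination profile with ~1.5% ripple (made up).
rng = np.random.default_rng(0)
profile = 1.0 + 0.015 * rng.standard_normal((512, 512))

cv = profile.std() / profile.mean()
print(f"illumination CV: {cv:.1%}")  # about 1.5% for this synthetic field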

Sample preparation
In order for a sample to be suitable for fluorescence microscopy it must be fluorescent. There are several
methods of creating a fluorescent sample; the main techniques are labelling with fluorescent stains or, in the
case of biological samples, expression of a fluorescent protein. Alternatively the intrinsic fluorescence of a
sample (i.e., autofluorescence) can be used.[1] In the life sciences fluorescence microscopy is a powerful tool
which allows the specific and sensitive staining of a specimen in order to detect the distribution of proteins or
other molecules of interest. As a result, there is a diverse range of techniques for fluorescent staining of
biological samples.

Biological fluorescent stains

Many fluorescent stains have been designed for a range of biological molecules. Some of these are small
molecules which are intrinsically fluorescent and bind a biological molecule of interest. Major examples of
these are nucleic acid stains like DAPI and Hoechst (excited by UV wavelength light) and DRAQ5 and
DRAQ7 (optimally excited by red light) which all bind the minor groove of DNA, thus labeling the nuclei of
cells. Others are drugs or toxins which bind specific cellular structures and have been derivatised with a
fluorescent reporter. A major example of this class of fluorescent stain is phalloidin which is used to stain actin
fibres in mammalian cells.

There are many fluorescent molecules called fluorophores or fluorochromes such as fluorescein, Alexa Fluors
or DyLight 488, which can be chemically linked to a different molecule which binds the target of interest
within the sample.

Immunofluorescence

Immunofluorescence is a technique which uses the highly specific binding


of an antibody to its antigen in order to label specific proteins or other
molecules within the cell. A sample is treated with a primary antibody
specific for the molecule of interest. A fluorophore can be directly
conjugated to the primary antibody. Alternatively a secondary antibody,
conjugated to a fluorophore, which binds specifically to the first antibody
can be used. For example, a primary antibody raised in a mouse which
recognises tubulin combined with a secondary anti-mouse antibody
derivatised with a fluorophore could be used to label microtubules in a cell.

Fluorescent proteins

The modern understanding of genetics and the techniques available for


modifying DNA allow scientists to genetically modify proteins to also
carry a fluorescent protein reporter. In biological samples this allows a
scientist to directly make a protein of interest fluorescent. The protein
location can then be directly tracked, including in live cells.
A sample of herring sperm stained with SYBR green in a cuvette illuminated by blue light in an epifluorescence microscope. The SYBR green in the sample binds to the herring sperm DNA and, once bound, fluoresces, giving off green light when illuminated by blue light.

Limitations

Fluorophores lose their ability to fluoresce as they are illuminated in a process called photobleaching. Photobleaching occurs as the fluorescent molecules accumulate chemical damage from the electrons excited during fluorescence. Photobleaching can severely limit the time over which a sample can be observed by fluorescence microscopy. Several techniques exist to reduce photobleaching, such as using more robust fluorophores, minimizing illumination, or using photoprotective scavenger chemicals.

Fluorescence microscopy with fluorescent reporter proteins has enabled analysis of live cells by fluorescence
microscopy, however cells are susceptible to phototoxicity, particularly with short wavelength light.
Furthermore, fluorescent molecules have a tendency to generate reactive chemical species when under
illumination which enhances the phototoxic effect.

Unlike transmitted and reflected light microscopy techniques fluorescence microscopy only allows observation
of the specific structures which have been labeled for fluorescence. For example, observing a tissue sample
prepared with a fluorescent DNA stain by fluorescent microscopy only reveals the organization of the DNA
within the cells and reveals nothing else about the cell morphologies.

Sub-diffraction techniques
The wave nature of light limits the size of the spot to which light can be focused due to the diffraction limit.
This limitation was described in the 19th century by Ernst Abbe and limits an optical microscope's resolution to
approximately half of the wavelength of the light used. Fluorescence microscopy is central to many techniques
which aim to reach past this limit by specialized optical configurations.
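For orientation, Abbe's limit is often written as d ≈ λ/(2·NA). A one-line sketch with typical, assumed values (green emission and a high-numerical-aperture oil-immersion objective):

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe's diffraction limit for lateral resolution, d = lambda / (2 NA)."""
    return wavelength_nm / (2 * numerical_aperture)

print(f"{abbe_limit_nm(520, 1.4):.0f} nm")  # roughly 186 nm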

Several improvements in microscopy techniques were invented in the 20th century and resulted in increased resolution and contrast to some extent, but they did not overcome the diffraction limit. In 1978 the first theoretical ideas were developed to break this barrier by using a 4Pi microscope as a confocal laser scanning fluorescence microscope, in which the light is focused ideally from all sides onto a common focus that is used to scan the object by 'point-by-point' excitation combined with 'point-by-point' detection.[6] However, the

first experimental demonstration of the 4pi microscope took place in 1994.[7] 4Pi microscopy maximizes the
amount of available focusing directions by using two opposing objective lenses or Two-photon excitation
microscopy using redshifted light and multi-photon excitation.

Integrated correlative microscopy combines a fluorescence microscope with an electron microscope. This
allows one to visualize ultrastructure and contextual information with the electron microscope while using the
data from the fluorescence microscope as a labelling tool.[8]

The first technique to really achieve a sub-diffraction resolution was STED microscopy, proposed in 1994. This
method and all techniques following the RESOLFT concept rely on a strong non-linear interaction between
light and fluorescing molecules. The molecules are driven strongly between distinguishable molecular states at
each specific location, so that finally light can be emitted at only a small fraction of space, hence an increased
resolution.

In the 1990s another super-resolution microscopy method, based on wide-field microscopy, was also developed. Substantially improved size resolution of cellular nanostructures stained with a fluorescent marker was achieved by the development of SPDM localization microscopy and structured laser illumination (spatially modulated illumination, SMI).[9] Combining the principle of SPDM with SMI resulted in the development of the Vertico SMI microscope.[10][11] Single-molecule detection of normally blinking fluorescent dyes such as green fluorescent protein (GFP) can be achieved by using a further development of SPDM, the so-called SPDMphymod technology, which makes it possible to detect and count two different fluorescent molecule types at the molecular level (this technology is referred to as two-color localization microscopy or 2CLM).[12]

Alternatively, the advent of photoactivated localization microscopy could achieve similar results by relying on
blinking or switching of single molecules, where the fraction of fluorescing molecules is very small at each
time. This stochastic response of molecules on the applied light corresponds also to a highly nonlinear
interaction, leading to subdiffraction resolution.

Fluorescence micrograph gallery

A z-projection of an osteosarcoma cell phalloidin stained to


visualise actin filaments. The image was taken on a confocal
microscope and the subsequent deconvolution was done using
an experimentally derived point spread function.

Epifluorescent imaging of the three components in a dividing


human cancer cell. DNA is stained blue, a protein called
INCENP is green, and the microtubules are red. Each
fluorophore is imaged separately using a different
combination of excitation and emission filters, and the images
are captured sequentially using a digital CCD camera, then
overlaid to give a complete image.

Endothelial cells under the microscope. Nuclei are stained


blue with DAPI, microtubules are marked green by an
antibody bound to FITC and actin filaments are labeled red
with phalloidin bound to TRITC. Bovine pulmonary artery
endothelial (BPAE) cells

3D dual-color super-resolution microscopy with Her2 and


Her3 in breast cells, standard dyes: Alexa 488, Alexa 568.
LIMON microscopy

Human lymphocyte nucleus stained with DAPI with


chromosome 13 (green) and 21 (red) centromere probes
hybridized (Fluorescent in situ hybridization (FISH))

Yeast cell membrane visualized by some membrane proteins fused with RFP and GFP fluorescent markers. Superposition of light from both markers results in a yellow color.

Super-resolution microscopy: Single YFP molecule detection


in a human cancer cell. Typical distance measurements in the
15 nm range measured with a Vertico-SMI/SPDMphymod microscope

Super-resolution microscopy: Co-localization microscopy


(2CLM) with GFP and RFP fusion proteins (nucleus of a bone
cancer cell) 120.000 localized molecules in a wide-field area
(470 µm2) measured with a Vertico-SMI/SPDMphymod microscope

Fluorescence microscopy of DNA Expression in the Human


Wild-Type and P239S Mutant Palladin.

Fluorescence microscopy images of sun flares pathology in a


blood cell showing the affected areas in red.

See also
Correlative Light-Electron Microscopy
Green fluorescent protein (GFP)
Mercury-vapor lamp
Microscope
Scanning electron microscope#Cathodoluminescence
Stokes shift
Xenon arc lamp
References
1. Spring KR, Davidson MW. "Introduction to Fluorescence Microscopy" (http://www.microscopyu.com/articles/fluore
scence/fluorescenceintro.html). Nikon MicroscopyU. Retrieved 2008-09-28.
2. "The Fluorescence Microscope" (http://nobelprize.org/educational_games/physics/microscopes/fluorescence/).
Microscopes—Help Scientists Explore Hidden Worlds. The Nobel Foundation. Retrieved 2008-09-28.
3. Ritter, Karl; Rising, Malin (8 October 2014). "2 Americans, 1 German win chemistry Nobel" (http://apnews.excite.co
m/article/20141008/nobel-chemistry-e759dff699.html). Associated Press. Retrieved 8 October 2014.
4. Chang, Kenneth (8 October 2014). "2 Americans and a German Are Awarded Nobel Prize in Chemistry" (https://ww
w.nytimes.com/2014/10/09/science/nobel-prize-chemistry.html). New York Times. Retrieved 8 October 2014.
5. F.A.W. Coumans; E. van der Pol; L.W.M.M. Terstappen (2012). "Flat-top illumination profile in an epi-fluorescence
microscope by dual micro lens arrays" (http://onlinelibrary.wiley.com/doi/10.1002/cyto.a.22029/abstract;jsessionid=
7025063A6179E481B3187FDB98AD2B56.f03t04). Cytometry Part A. 81 (4): 324–331. PMID 22392641 (https://w
ww.ncbi.nlm.nih.gov/pubmed/22392641). doi:10.1002/cyto.a.22029 (https://doi.org/10.1002%2Fcyto.a.22029).
6. Cremer, C; Cremer, T (1978). "Considerations on a laser-scanning-microscope with high resolution and depth of
field" (http://www.kip.uni-heidelberg.de/AG_Cremer/pdf-files/Cremer_Micros_Acta_1978.pdf) (PDF). Microscopica
acta. 81 (1): 31–44. PMID 713859 (https://www.ncbi.nlm.nih.gov/pubmed/713859).
7. S.W. Hell, E.H.K. Stelzer, S. Lindek, C. Cremer; Stelzer; Lindek; Cremer (1994). "Confocal microscopy with an
increased detection aperture: type-B 4Pi confocal microscopy". Optics Letters. 19 (3): 222–224.
Bibcode:1994OptL...19..222H (http://adsabs.harvard.edu/abs/1994OptL...19..222H). PMID 19829598 (https://www.
ncbi.nlm.nih.gov/pubmed/19829598). doi:10.1364/OL.19.000222 (https://doi.org/10.1364%2FOL.19.000222).
8. Baarle, Kaitlin van. "Correlative microscopy: Opening up worlds of information with fluorescence" (http://blog.delm
ic.com/correlative-microscopy-opening-up-worlds-of-information-with-fluorescence). Retrieved 2017-02-16.
9. Hausmann, Michael; Schneider, Bernhard; Bradl, Joachim; Cremer, Christoph G. (1997), "High-precision distance
microscopy of 3D nanostructures by a spatially modulated excitation fluorescence microscope", in Bigio, Irving J;
Schneckenburger, Herbert; Slavik, Jan; et al., Optical Biopsies and Microscopic Techniques II (http://www.kip.uni-he
idelberg.de/AG_Cremer/sites/default/files/Bilder/pdf_1997/1997SPIE3197Hausmannp217.pdf) (PDF), Optical
Biopsies and Microscopic Techniques II, 3197, p. 217, doi:10.1117/12.297969 (https://doi.org/10.1117%2F12.29796
9)
10. Reymann, J; Baddeley, D; Gunkel, M; Lemmer, P; Stadter, W; Jegou, T; Rippe, K; Cremer, C; Birk, U (2008).
"High-precision structural analysis of subnuclear complexes in fixed and live cells via spatially modulated
illumination (SMI) microscopy" (http://www.kip.uni-heidelberg.de/AG_Cremer/sites/default/files/Bilder/pdf/Chrom
Res_Vertico-SMI-2008-printed_2008.pdf) (PDF). Chromosome research : an international journal on the molecular,
supramolecular and evolutionary aspects of chromosome biology. 16 (3): 367–82. PMID 18461478 (https://www.nc
bi.nlm.nih.gov/pubmed/18461478). doi:10.1007/s10577-008-1238-2 (https://doi.org/10.1007%2Fs10577-008-1238-
2).
11. Baddeley, D; Batram, C; Weiland, Y; Cremer, C; Birk, UJ (2003). "Nanostructure analysis using spatially modulated
illumination microscopy" (http://www.kip.uni-heidelberg.de/AG_Cremer/sites/default/files/Bilder/pdf_/cpu01077.pd
f) (PDF). Nature Protocols. 2 (10): 2640–6. PMID 17948007 (https://www.ncbi.nlm.nih.gov/pubmed/17948007).
doi:10.1038/nprot.2007.399 (https://doi.org/10.1038%2Fnprot.2007.399).
12. Gunkel, M; Erdel, F; Rippe, K; Lemmer, P; Kaufmann, R; Hörmann, C; Amberger, R; Cremer, C (2009). "Dual color
localization microscopy of cellular nanostructures" (http://www.kip.uni-heidelberg.de/AG_Cremer/pdf-files/BTJGun
kel.pdf) (PDF). Biotechnology journal. 4 (6): 927–38. PMID 19548231 (https://www.ncbi.nlm.nih.gov/pubmed/19548
231). doi:10.1002/biot.200900005 (https://doi.org/10.1002%2Fbiot.200900005).

External links
Fluorophores.org (http://www.fluorophores.org/), the database of fluorescent dyes
Microscopy Resource Center (http://www.olympusmicro.com/)
Fluorescence Microscopy at Leica Science Lab (http://www.leica-microsystems.com/science-lab/topics/fl
uorescence-microscopy/)
animations and explanations on various types of microscopes including fluorescent and confocal
microscopes (http://toutestquantique.fr/en/microscopy/) (Université Paris Sud)

Confocal microscopy
From Wikipedia, the free encyclopedia

Confocal microscopy, most frequently confocal laser scanning microscopy (CLSM), is an optical imaging technique for increasing optical resolution and contrast of a micrograph by means of adding a spatial pinhole placed at the confocal plane of the lens to eliminate out-of-focus light.[1] It enables the reconstruction of three-dimensional structures from sets of images obtained at different depths (a process known as optical sectioning) within a thick object. This technique has gained popularity in the scientific and industrial communities and typical applications are in life sciences, semiconductor inspection and materials science.

A conventional microscope "sees" as far into the specimen as the light can penetrate, while a confocal microscope only "sees" images one depth level at a time. In effect, the CLSM achieves a controlled and highly limited depth of focus.

Confocal microscopy (medical diagnostics): MeSH D018613; OPS-301 code 3-301 (http://ops.icd-code.de/ops/code/3-301.html).

Principle of confocal microscopy

Contents
1 Basic concept
2 Techniques used for horizontal scanning
3 Resolution enhancement
4 Uses
4.1 Biology and medicine
4.2 Optics and crystallography
5 Variants and enhancements
5.1 Improving axial resolution
5.2 Super resolution
5.3 Low-temperature operability
6 Images
7 History
7.1 The beginnings: 1940–1957
7.2 The Tandem-Scanning-Microscope
7.3 1969: The first confocal laser scanning microscope
7.4 1977–1985: point scanners with lasers and stage scanning
7.5 Starting 1985: Laser point scanners with beam scanning
8 See also
9 References
10 External links

Fluorescent and confocal microscopes operating principle

Basic concept

The principle of confocal imaging was patented in 1957 by Marvin


Minsky[2][3] and aims to overcome some limitations of traditional wide-
field fluorescence microscopes. In a conventional (i.e., wide-field)
fluorescence microscope, the entire specimen is flooded evenly in light
from a light source. All parts of the specimen in the optical path are
excited at the same time and the resulting fluorescence is detected by
the microscope's photodetector or camera including a large unfocused
background part. In contrast, a confocal microscope uses point illumination (see Point Spread Function) and a pinhole in an optically conjugate plane in front of the detector to eliminate out-of-focus signal – the name "confocal" stems from this configuration. As only light produced by fluorescence very close to the focal plane can be detected, the image's optical resolution, particularly in the sample depth direction, is much better than that of wide-field microscopes. However, as much of the light from sample fluorescence is blocked at the pinhole, this increased resolution is at the cost of decreased signal intensity – so long exposures are often required. To offset this drop in signal after the pinhole, the light intensity is detected by a sensitive detector, usually a photomultiplier tube (PMT) or avalanche photodiode, transforming the light signal into an electrical one that is recorded by a computer.[4]

Confocal point sensor principle from Minsky's patent

As only one point in the sample is illuminated at a time, 2D or 3D imaging requires scanning over a regular
raster (i.e., a rectangular pattern of parallel scanning lines) in the specimen. The beam is scanned across the
sample in the horizontal plane by using one or more (servo controlled) oscillating mirrors. This scanning
method usually has a low reaction latency and the scan speed can be varied. Slower scans provide a better
signal-to-noise ratio, resulting in better contrast and higher resolution.
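The benefit of slower scanning can be made concrete under the assumption of shot-noise-limited detection, where the signal-to-noise ratio grows as the square root of the number of photons collected during the pixel dwell time. The photon rate and dwell times below are hypothetical:

import math

def shot_noise_snr(photon_rate_per_s, dwell_time_s):
    """For shot-noise-limited detection the SNR of a pixel is sqrt(N), where
    N is the number of photons detected during the pixel dwell time."""
    n = photon_rate_per_s * dwell_time_s
    return math.sqrt(n)

# Doubling the dwell time (halving the scan speed) improves SNR by sqrt(2).
for dwell in (2e-6, 4e-6):  # 2 us vs 4 us per pixel (hypothetical values)
    print(f"dwell {dwell * 1e6:.0f} us -> SNR {shot_noise_snr(5e6, dwell):.1f}")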

The achievable thickness of the focal plane is defined mostly by the wavelength of the used light divided by the
numerical aperture of the objective lens, but also by the optical properties of the specimen. The thin optical
sectioning possible makes these types of microscopes particularly good at 3D imaging and surface profiling of
samples.

Successive slices make up a 'z-stack', which can either be processed by software to create a 3D image or be merged into a single 2D image (predominantly the maximum pixel intensity is taken; other common methods include using the standard deviation or summing the pixels).[1]
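A minimal sketch of the projections mentioned above, treating the z-stack as a 3D array and reducing it along the z axis (the random array stands in for real confocal data):

import numpy as np

# A z-stack is a 3D array (z, y, x) of optical sections.
rng = np.random.default_rng(1)
z_stack = rng.random((20, 256, 256))  # 20 optical sections of 256 x 256 pixels

mip = z_stack.max(axis=0)    # maximum intensity projection (most common)
std = z_stack.std(axis=0)    # standard-deviation projection
total = z_stack.sum(axis=0)  # sum projection

print(mip.shape, std.shape, total.shape)  # each is (256, 256)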

Confocal microscopy provides the capacity for direct, noninvasive,


serial optical sectioning of intact, thick, living specimens with a
minimum of sample preparation as well as a marginal improvement in
lateral resolution.[4] Biological samples are often treated with
fluorescent dyes to make selected objects visible. However, the actual
dye concentration can be low to minimize the disturbance of biological
systems: some instruments can track single fluorescent molecules. Also,
transgenic techniques can create organisms that produce their own
fluorescent chimeric molecules (such as a fusion of GFP, green
fluorescent protein with the protein of interest).

GFP fusion protein being expressed in Nicotiana benthamiana. The fluorescence is visible by confocal microscopy.

Techniques used for horizontal scanning

Four types of confocal microscopes are commercially available: Confocal laser scanning microscopes use multiple mirrors (typically 2 or 3 scanning linearly along the x and the y axis) to scan the laser across the sample and "descan" the image across a fixed pinhole and detector.

Spinning-disk (Nipkow disk) confocal microscopes use a series of moving pinholes on a disc to scan spots of
light. Since a series of pinholes scans an area in parallel each pinhole is allowed to hover over a specific area
for a longer amount of time thereby reducing the excitation energy needed to illuminate a sample when
compared to laser scanning microscopes. Decreased excitation energy reduces photo-toxicity and photo-
bleaching of a sample often making it the preferred system for imaging live cells or organisms.

Microlens enhanced or dual spinning disk confocal microscopes work under the same principles as spinning-
disk confocal microscopes except a second spinning disk containing micro-lenses is placed before the spinning
disk containing the pinholes. Every pinhole has an associated micro-lens. The micro-lenses act to capture a
broad band of light and focus it into each pinhole significantly increasing the amount of light directed into each
pinhole and reducing the amount of light blocked by the spinning disk. Microlens enhanced confocal
microscopes are therefore significantly more sensitive than standard spinning disk systems. Yokogawa Electric
invented this technology in 1992.[5]

Programmable array microscopes (PAM) use an electronically controlled spatial light modulator (SLM) that
produces a set of moving pinholes. The SLM is a device containing an array of pixels with some property
(opacity, reflectivity or optical rotation) of the individual pixels that can be adjusted electronically. The SLM
contains microelectromechanical mirrors or liquid crystal components. The image is usually acquired by a
charge coupled device (CCD) camera.

Each of these classes of confocal microscope have particular advantages and disadvantages. Most systems are
either optimized for recording speed (i.e. video capture) or high spatial resolution. Confocal laser scanning
microscopes can have a programmable sampling density and very high resolutions while Nipkow and PAM use
a fixed sampling density defined by the camera's resolution. Imaging frame rates are typically slower for single
point laser scanning systems than spinning-disk or PAM systems. Commercial spinning-disk confocal
microscopes achieve frame rates of over 50 per second[6] – a desirable feature for dynamic observations such as
live cell imaging.

In practice, Nipkow and PAM allow multiple pinholes scanning the same area in parallel[7] as long as the
pinholes are sufficiently far apart.

Cutting-edge development of confocal laser scanning microscopy now allows better than standard video rate
(60 frames per second) imaging by using multiple microelectromechanical scanning mirrors.

Confocal X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to
horizontal and vertical aiming, for example, when analyzing buried layers in a painting.[8]

Resolution enhancement
CLSM is a scanning imaging technique in which the resolution obtained is best explained by comparing it with
another scanning technique like that of the scanning electron microscope (SEM). CLSM has the advantage of
not requiring a probe to be suspended nanometers from the surface, as in an AFM or STM, for example, where
the image is obtained by scanning with a fine tip over a surface. The distance from the objective lens to the
surface (called the working distance) is typically comparable to that of a conventional optical microscope. It
varies with the system optical design, but working distances from hundreds of micrometres to several
millimeters are typical.

In CLSM a specimen is illuminated by a point laser source, and each volume element is associated with a
discrete scattering or fluorescence intensity. Here, the size of the scanning volume is determined by the spot
size (close to diffraction limit) of the optical system because the image of the scanning laser is not an infinitely
small point but a three-dimensional diffraction pattern. The size of this diffraction pattern and the focal volume
it defines is controlled by the numerical aperture of the system's objective lens and the wavelength of the laser
used. This can be seen as the classical resolution limit of conventional optical microscopes using wide-field
illumination. However, with confocal microscopy it is even possible to improve on the resolution limit of wide-
field illumination techniques because the confocal aperture can be closed down to eliminate higher orders of
the diffraction pattern. For example, if the pinhole diameter is set to 1 Airy unit then only the first order of the
diffraction pattern makes it through the aperture to the detector while the higher orders are blocked, thus

improving resolution at the cost of a slight decrease in brightness. In fluorescence observations, the resolution
limit of confocal microscopy is often limited by the signal to noise ratio caused by the small number of photons
typically available in fluorescence microscopy. One can compensate for this effect by using more sensitive
photodetectors or by increasing the intensity of the illuminating laser point source. Increasing the intensity of
illumination laser risks excessive bleaching or other damage to the specimen of interest, especially for
experiments in which comparison of fluorescence brightness is required. When imaging tissues which are
differentially refractive, such as the spongy mesophyll of plant leaves or other air-space containing tissues,
spherical aberrations that impair confocal image quality are often pronounced. Such aberrations however, can
be significantly reduced by mounting samples in optically transparent, non-toxic perfluorocarbons such as
perfluorodecalin, which readily infiltrates tissues and has a refractive index almost identical to that of water [1]
(http://www3.interscience.wiley.com/journal/123335070/abstract).
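The "1 Airy unit" setting mentioned above corresponds to a pinhole whose diameter equals the Airy-disk diameter 1.22·λ/NA, back-projected to the pinhole plane by the magnification between the sample and the pinhole. A small sketch with assumed, typical values:

def airy_unit_um(emission_wavelength_nm, numerical_aperture, total_magnification):
    """Airy-disk diameter in the sample plane is 1.22 * lambda / NA; the physical
    pinhole that passes '1 Airy unit' is that diameter times the magnification
    between the sample and the pinhole plane (result in micrometres)."""
    d_sample_nm = 1.22 * emission_wavelength_nm / numerical_aperture
    return d_sample_nm * total_magnification / 1000.0

# Hypothetical setup: 520 nm emission, 63x / NA 1.4 objective, no extra zoom.
print(f"1 AU pinhole is about {airy_unit_um(520, 1.4, 63):.0f} um")  # about 29 um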

Uses
CLSM is widely used in numerous biological science disciplines, from cell biology and genetics to
microbiology and developmental biology. It is also used in quantum optics and nano-crystal imaging and
spectroscopy.

Biology and medicine

Clinically, CLSM is used in the evaluation of various eye diseases, and


is particularly useful for imaging, qualitative analysis, and
quantification of endothelial cells of the cornea.[9] It is used for
localizing and identifying the presence of filamentary fungal elements
in the corneal stroma in cases of keratomycosis, enabling rapid
diagnosis and thereby early institution of definitive therapy. Research
into CLSM techniques for endoscopic procedures (endomicroscopy) is
also showing promise.[10] In the pharmaceutical industry, it was
recommended to follow the manufacturing process of thin film
pharmaceutical forms, to control the quality and uniformity of the drug
distribution.[11]

Example of a stack of confocal microscope images showing the distribution of actin filaments throughout a cell.

Optics and crystallography

CLSM is used as the data retrieval mechanism in some 3D optical data


storage systems and has helped determine the age of the Magdalen papyrus.

Variants and enhancements


Improving axial resolution

The point spread function of the pinhole is an ellipsoid, several times as long as it is wide. This limits the axial
resolution of the microscope. One technique of overcoming this is 4π microscopy where incident and or
emitted light are allowed to interfere from both above and below the sample to reduce the volume of the
ellipsoid. An alternative technique is confocal theta microscopy. In this technique the cone of illuminating
light and detected light are at an angle to each other (best results when they are perpendicular). The intersection
of the two point spread functions gives a much smaller effective sample volume. From this evolved the single
plane illumination microscope. Additionally deconvolution may be employed using an experimentally derived
point spread function to remove the out of focus light, improving contrast in both the axial and lateral planes.

Super resolution

There are confocal variants that achieve resolution below the diffraction limit such as stimulated emission
depletion microscopy (STED). Besides this technique a broad variety of other (not confocal based) super-
resolution techniques is available like PALM, (d)STORM, SIM, and so on. They all have their own advantages
like ease of use, resolution and the need for special equipment/buffers/fluorophores/... .

Low-temperature operability

To image samples at low temperature, two main approaches have been used, both based on the laser scanning
confocal microscopy architecture. One approach is to use a continuous flow cryostat: only the sample is at low
temperature and it is optically addressed through a transparent window.[12] Another possible approach is to
have part of the optics (especially the microscope objective) in a cryogenic storage dewar.[13] This second
approach, although more cumbersome, guarantees better mechanical stability and avoids the losses due to the
window.

Images

β-tubulin in Tetrahymena (a ciliated protozoan).

Partial surface profile of a 1-Euro coin, measured with a


Nipkow disk confocal microscope.

Reflection data for 1-Euro coin.

Cross section at y=200 through profile of 1-Euro coin.

Colour coded image of actin filaments in a cancer cell.

Green signal from anti-tubulin antibody (conjugated with Alexa Fluor 488) and nuclei (blue signal from DNA stained with DAPI) in root meristem cells of 4-day-old Arabidopsis thaliana (Col-0). Scale bar: 5 µm.

History
The beginnings: 1940–1957

In 1940 Hans Goldmann,


ophthalmologist in Bern,
Switzerland, developed a slit lamp
system to document eye
examinations.[14] This system is
considered by some later authors as
the first confocal optical
system.[15][16]
Scheme from Minsky's patent application showing the principle of the transmission confocal scanning microscope he built.

In 1943 Zyun Koana published a confocal system.[17][15] A figure in this publication clearly shows a confocal transmission beam path. In 1951 Hiroto Naora, a colleague of Koana,

described a confocal microscope in the journal Science for spectrophotometry.[18]

The first confocal scanning microscope was built by Marvin Minsky in 1955 and a patent was filed in 1957.
The scanning of the illumination point in the focal plane was achieved by moving the stage. No scientific
publication was submitted and no images made with it were preserved.[19][20]

The Tandem-Scanning-Microscope

In the 1960s, the Czechoslovak Mojmír Petráň from the Medical


Faculty of the Charles University in Plzeň developed the Tandem-
Scanning-Microscope, the first commercialized confocal microscope. It
was sold by a small company in Czechoslovakia and in the United
States by Tracor-Northern (later Noran) and used a rotating Nipkow
disk to generate multiple excitation and emission pinholes.[16][21]

The Czechoslovak patent was filed 1966 by Petráň and Milan


Hadravský, a Czechoslovak coworker. A first scientific publication with
data and images generated with this microscope was published in the
journal Science in 1967, authored by M. David Egger from Yale
University and Petráň.[22] As a footnote to this paper it is mentioned
that Petráň designed the microscope and supervised its construction and
that he was, in part, a "research associate" at Yale. A second publication
from 1968 described the theory and the technical details of the
instrument and had Hadravský and Robert Galambos, the head of the group at Yale, as additional authors.[23] In 1970 the US patent was granted. It was filed in 1967.[24]

Scheme of Petráň's Tandem-Scanning-Microscope. Red bar added to indicate the Nipkow-Disk.

1969: The first confocal laser scanning microscope

In 1969 and 1971, M. David Egger and Paul Davidovits from Yale University, published two papers describing
the first confocal laser scanning microscope.[25][26] It was a point scanner, meaning just one illumination spot
was generated. It used epi-Illumination-reflection microscopy for the observation of nerve tissue. A 5 mW
Helium-Neon-Laser with 633 nm light was reflected by a semi-transparent mirror towards the objective. The
objective was a simple lens with a focal length of 8.5 mm. As opposed to all earlier and most later systems, the
sample was scanned by movement of this lens (objective scanning), leading to a movement of the focal point.
Reflected light came back to the semitransparent mirror, the transmitted part was focused by another lens on the
detection pinhole behind which a photomultiplier tube was placed. The signal was visualized by a CRT of an
oscilloscope, the cathode ray was moved simultaneously with the objective. A special device allowed to make
Polaroid photos, three of which were shown in the 1971 publication.

The authors speculate about fluorescent dyes for in vivo investigations. They cite Minsky‘s patent, thank Steve
Baer, at the time a doctoral student at the Albert Einstein School of Medicine in New York City where he
developed a confocal line scanning microscope,[27] for suggesting to use a laser with ‚Minsky‘s microscope‘
and thank Galambos, Hadravsky and Petráň for discussions leading to the development of their microscope.
The motivation for their development was that in the Tandem-Scanning-Microscope only a fraction of 10−7 of
the illumination light participates in generating the image in the eye piece. Thus, image quality was not
sufficient for most biological investigations.[15][28]

1977–1985: point scanners with lasers and stage scanning

In 1977 Colin J. R. Sheppard and Amarjyoti Choudhury, Oxford, UK, published a theoretical analysis of
confocal and laser-scanning microscopes.[29] It is probably the first publication using the term "confocal
microscope".[15][28]

In 1978, the brothers Christoph Cremer and Thomas Cremer published a design for a confocal laser-scanning-
microscope using fluorescent excitation with electronic autofocus. They also suggested a laser point
illumination by using a „4π-point-hologramme“.[28][30] This CLSM design combined the laser scanning method
with the 3D detection of biological objects labeled with fluorescent markers for the first time.

In 1978 and 1980, the Oxford-group around Colin Sheppard and Tony Wilson described a confocal with epi-
laser-illumination, stage scanning and photomultiplier tubes as detectors. The stage could move along the
optical axis, allowing optical serial sections.[28]

In 1979 Fred Brakenhoff and coworkers demonstrated that the theoretical advantages of optical sectioning and
resolution improvement are indeed achievable in practice. In 1985 this group became the first to publish
convincing images taken on a confocal microscope that were able to answer biological questions.[31] Shortly
after many more groups started using confocal microscopy to answer scientific questions that until now had
remained a mystery due to technological limitations.

In 1983 I. J. Cox and C. Sheppard from Oxford published the first work in which a confocal microscope was
controlled by a computer. The first commercial laser scanning microscope, the stage-scanner SOM-25 was
offered by Oxford Optoelectronics (after several take-overs acquired by BioRad) starting in 1982. It was based
on the design of the Oxford group.[16][32]

Starting 1985: Laser point scanners with beam scanning

In the mid-1980s, William Bradshaw Amos and John Graham White and colleagues working at the Laboratory
of Molecular Biology in Cambridge built the first confocal beam scanning microscope.[33] [34] The stage with
the sample was not moving, instead the illumination spot was, allowing faster image acquisition: four images
per second with 512 lines each. Hugely magnified intermediate images, due to a 1-2 meter long beam path,
allowed the use of a conventional iris diaphragm as a ‘pinhole’, with diameters ~ 1 mm. First micrographs were
taken with long-term exposure on film before a digital camera was added. A further improvement allowed
zooming into the preparation for the first time. Zeiss, Leitz and Cambridge Instruments had no interest in a
commercial production. The Medical Research Council (MRC) finally sponsored development of a prototype.
The design was acquired by Bio-Rad, amended with computer control and commercialized as ‘MRC 500’. The
successor MRC 600 was later the basis for the development of the first two-photon-fluorescent microscope
developed 1990 at Cornell University.[31]

Developments at the University of Stockholm around the same time led to a commercial CLSM distributed by the Swedish company Sarastro. The venture was acquired in 1990 by Molecular Dynamics,[35] but the CLSM was eventually discontinued. In Germany, Heidelberg Instruments, founded in 1984, developed a CLSM which was initially meant for industrial applications rather than biology. This instrument was taken over in 1990 by Leica
Lasertechnik. Zeiss already had a non-confocal flying-spot laser scanning microscope on the market which was
upgraded to a confocal. A report from 1990,[36] mentioning “some” manufacturers of confocals lists: Sarastro,
Technical Instrument, Meridian Instruments, Bio-Rad, Leica, Tracor-Northern and Zeiss.[31]

In 1989, Fritz Karl Preikschat, with his son Ekhard Preikschat, invented the scanning laser diode microscope for particle-size analysis.[37][38] They co-founded Lasentec to commercialize it. In 2001, Lasentec was acquired by Mettler Toledo (NYSE: MTD).[39] About ten thousand systems have been installed globally, mostly in the pharmaceutical industry, to provide in-situ control of the crystallization process in large purification systems.


See also
Deconvolution
Fluorescence microscope
Laser scanning confocal microscopy
Live cell imaging
Microscope objective lens
Microscope slide
Optical microscope
Optical sectioning
Photodetector
Point spread function
Stimulated emission depletion microscope
Super-resolution microscopy
Two-photon excitation microscopy: Although they use a related technology (both are laser scanning microscopes), multiphoton fluorescence microscopes are not strictly confocal microscopes. The term confocal arises from the presence of a diaphragm in the conjugated focal plane (confocal). This diaphragm is usually absent in multiphoton microscopes.
Total internal reflection fluorescence microscope (TIRF)

References
1. Pawley JB (editor) (2006). Handbook of Biological Confocal Microscopy (3rd ed.). Berlin: Springer. ISBN 0-387-
25921-X.
2. Filed in 1957 and granted 1961. US 3013467 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US30
13467)
3. Memoir on Inventing the Confocal Scanning Microscope (http://web.media.mit.edu/~minsky/papers/ConfocalMemoi
r.html), Scanning 10 (1988), pp128–138.
4. Fellers TJ, Davidson MW (2007). "Introduction to Confocal Microscopy" (http://www.olympusconfocal.com/theory/
confocalintro.html). Olympus Fluoview Resource Center. National High Magnetic Field Laboratory. Retrieved
2007-07-25.
5. "Analytics for US Patent No. 5162941, Confocal microscope" (http://www.patentbuddy.com/Patent/5162941).
6. "Data Sheet of NanoFocus µsurf spinning disk confocal white light microscope" (http://www.nanofocus.com/product
s/usurf/usurf-explorer/).
7. "Data Sheet of Sensofar 'PLu neox' Dual technology sensor head combining confocal and Interferometry techniques,
as well as Spectroscopic Reflectometry" (http://www.sensofar.com/products/products_neox.html).
8. Vincze L (2005). "Confocal X-ray Fluorescence Imaging and XRF Tomography for Three Dimensional Trace
Element Microanalysis" (http://journals.cambridge.org/action/displayFulltext?type=1&fid=326128&jid=MAM&vol
umeId=11&issueId=S02&aid=326127). Microscopy and Microanalysis. 11 (Supplement 2).
doi:10.1017/S1431927605503167 (https://doi.org/10.1017%2FS1431927605503167).
9. Patel DV, McGhee CN (2007). "Contemporary in vivo confocal microscopy of the living human cornea using white
light and laser scanning techniques: a major review". Clin. Experiment. Ophthalmol. 35 (1): 71–88. PMID 17300580
(https://www.ncbi.nlm.nih.gov/pubmed/17300580). doi:10.1111/j.1442-9071.2007.01423.x (https://doi.org/10.1111%
2Fj.1442-9071.2007.01423.x).
10. Hoffman A, Goetz M, Vieth M, Galle PR, Neurath MF, Kiesslich R (2006). "Confocal laser endomicroscopy:
technical status and current indications". Endoscopy. 38 (12): 1275–83. PMID 17163333 (https://www.ncbi.nlm.nih.
gov/pubmed/17163333). doi:10.1055/s-2006-944813 (https://doi.org/10.1055%2Fs-2006-944813).
11. Le Person S, Puigalli JR, Baron M, Roques M Near infrared drying of pharmaceutical thin film: experimental
analysis of internal mass transport Chemical Engineering and Processing 37 , 257-263 , 1998
12. Hirschfeld, V.; Hubner, C.G. (2010). "A sensitive and versatile laser scanning confocal optical microscope for single-
molecule fluorescence at 77 K". Review of Scientific Instruments. 81 (11): 113705. doi:10.1063/1.3499260 (https://do
i.org/10.1063%2F1.3499260).
13. Grazioso, F.; Patton, B. R.; Smith, J.M. (2010). "A high stability beam-scanning confocal optical microscope for low
temperature operation". Review of Scientific Instruments. 81 (9): 093705–4. doi:10.1063/1.3484140 (https://doi.org/1
0.1063%2F1.3484140).
14. Hans Goldmann (1939). "Spaltlampenphotographie und –photometrie". Ophthalmologica. 98 (5/6): 257–270.
doi:10.1159/000299716 (https://doi.org/10.1159%2F000299716). Note: Volume 98 is assigned to the year 1939,
however on the first page of the article January 1940 is listed as publication date.
15. Colin JR Sheppard (3 November 2009). "Confocal Microscopy. The Development of a Modern Microscopy".
Imaging & Microscopy.online (http://www.imaging-git.com/science/light-microscopy/confocal-microscopy)
16. Barry R. Masters: Confocal Microscopy And Multiphoton Excitation Microscopy. The Genesis of Live Cell Imaging. SPIE Press, Bellingham, Washington, USA 2006, ISBN 978-0-8194-6118-6, pp. 120–121.


17. Zyun Koana (1942). Journal of the Illumination Engineering Institute. 26 (8): 371–385. The article is available on the website of the journal (https://www.jstage.jst.go.jp/browse/jieij1917). The PDF file labeled "P359 - 402" is about 19 MB in size and also contains neighboring articles from the same issue. Figure 1b of the article shows the scheme of a confocal transmission beam path.
18. Naora, Hiroto (1951). "Microspectrophotometry and cytochemical analysis of nucleic acids.". Science. 114 (2959):
279–280. Bibcode:1951Sci...114..279N (http://adsabs.harvard.edu/abs/1951Sci...114..279N). PMID 14866220 (http
s://www.ncbi.nlm.nih.gov/pubmed/14866220). doi:10.1126/science.114.2959.279 (https://doi.org/10.1126%2Fscienc
e.114.2959.279).
19. Marvin Minsky: Microscopy Apparatus. US Patent 3,013,467 (http://patimg2.uspto.gov/.piw?Docid=03013467&PageNum=1&Rtype=&SectionNum=&idkey=NONE), filed 7 November 1957, granted 19 December 1961.
20. Marvin Minsky (1988). "Memoir on inventing the confocal scanning microscope". Scanning. 10 (4): 128–138.
doi:10.1002/sca.4950100403 (https://doi.org/10.1002%2Fsca.4950100403).
21. Guy Cox: Optical Imaging Techniques in Cell Biology. 1. edition. CRC Press, Taylor & Francis Group, Boca Raton,
FL, USA 2006, ISBN 0-8493-3919-7, pages 115-122.
22. Egger MD, Petrăn M (July 1967). "New reflected-light microscope for viewing unstained brain and ganglion cells".
Science. 157 (786): 305–7. Bibcode:1967Sci...157..305E (http://adsabs.harvard.edu/abs/1967Sci...157..305E).
PMID 6030094 (https://www.ncbi.nlm.nih.gov/pubmed/6030094). doi:10.1126/science.157.3786.305 (https://doi.or
g/10.1126%2Fscience.157.3786.305).
23. MOJMÍR PETRÁŇ; MILAN HADRAVSKÝ; M. DAVID EGGER; ROBERT GALAMBOS (1968). "Tandem-
Scanning Reflected-Light Microscope". Journal of the Optical Society of America. 58 (5): 661–664.
Bibcode:1968JOSA...58..661P (http://adsabs.harvard.edu/abs/1968JOSA...58..661P). doi:10.1364/JOSA.58.000661
(https://doi.org/10.1364%2FJOSA.58.000661).
24. Mojmír Petráň, Milan Hadravský: Method and Arrangement for Improving the Resolving Power and Contrast. online (http://www.google.com/patents/US3517980?printsec=drawing&dq=ininventor:%22Milan+Hadravsky%22&ei=g0qlUOe3H8fe4QTMuIGADA#v=onepage&q&f=false) at Google Patents, filed 4 November 1967, granted 30 June 1970.
25. Davidovits, P.; Egger, M. D. (1969). "Scanning laser microscope.". Nature. 223 (5208): 831.
Bibcode:1969Natur.223..831D (http://adsabs.harvard.edu/abs/1969Natur.223..831D). PMID 5799022 (https://www.n
cbi.nlm.nih.gov/pubmed/5799022). doi:10.1038/223831a0 (https://doi.org/10.1038%2F223831a0).
26. Davidovits, P.; Egger, M. D. (1971). "Scanning laser microscope for biological investigations.". Applied Optics. 10
(7): 1615–1619. Bibcode:1971ApOpt..10.1615D (http://adsabs.harvard.edu/abs/1971ApOpt..10.1615D).
PMID 20111173 (https://www.ncbi.nlm.nih.gov/pubmed/20111173). doi:10.1364/AO.10.001615 (https://doi.org/10.1
364%2FAO.10.001615).
27. Barry R. Masters: Confocal Microscopy And Multiphoton Excitation Microscopy. The Genesis of Live Cell Imaging.
SPIE Press, Bellingham, Washington, USA 2006, ISBN 978-0-8194-6118-6, pp. 124-125.
28. Shinya Inoué (2006). "Chapter 1: Foundations of Confocal Scanned Imaging in Light Microscopy". In James
Pawley. Handbook of Biological Confocal Microscopy (3. ed.). Springer Science and Business Media LLC. pp. 1–19.
ISBN 978-0-387-25921-5.
29. C.J.R. Sheppard, A. Choudhury: Image Formation in the Scanning Microscope. Optica Acta: International Journal of Optics. 24, 1977, pp. 1051–1073, doi:10.1080/713819421 (https://dx.doi.org/10.1080%2F713819421).
30. C. Cremer, T. Cremer: Considerations on a laser-scanning-microscope with high resolution and depth of field. Microscopica Acta. 81 (1), September 1978, pp. 31–44, ISSN 0044-376X (https://www.worldcat.org/search?fq=x0:jrnl&q=n2:0044-376X). PMID 713859 (https://www.ncbi.nlm.nih.gov/pubmed/713859).
31. Amos, W.B.; White, J.G. (2003). "How the Confocal Laser Scanning Microscope entered Biological Research".
Biology of the Cell. 95 (6): 335–342. PMID 14519550 (https://www.ncbi.nlm.nih.gov/pubmed/14519550).
doi:10.1016/S0248-4900(03)00078-9 (https://doi.org/10.1016%2FS0248-4900%2803%2900078-9).
32. Cox, I. J.; Sheppard, C. J. (1983). "Scanning optical microscope incorporating a digital framestore and
microcomputer.". Applied Optics. 22 (10): 1474. Bibcode:1983ApOpt..22.1474C (http://adsabs.harvard.edu/abs/1983
ApOpt..22.1474C). PMID 18195988 (https://www.ncbi.nlm.nih.gov/pubmed/18195988). doi:10.1364/ao.22.001474
(https://doi.org/10.1364%2Fao.22.001474).
33. White, J. G. (1987). "An evaluation of confocal versus conventional imaging of biological structures by fluorescence
light microscopy" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2114888). The Journal of Cell Biology. 105 (1):
41–48. ISSN 0021-9525 (https://www.worldcat.org/issn/0021-9525). PMC 2114888 (https://www.ncbi.nlm.nih.gov/p
mc/articles/PMC2114888)  . PMID 3112165 (https://www.ncbi.nlm.nih.gov/pubmed/3112165).
doi:10.1083/jcb.105.1.41 (https://doi.org/10.1083%2Fjcb.105.1.41).
34. Anon (2005). "Dr John White FRS" (https://web.archive.org/web/20151117112042/https://royalsociety.org/people/jo
hn-white-12519/). royalsociety.org. London: Royal Society. Archived from the original (https://royalsociety.org/peop
le/john-white-12519/) on 2015-11-17.
35. Brent Johnson (1 February 1999). "Image Is Everything". The Scientist. online (http://www.the-scientist.com/?article
s.view/articleNo/19279/title/Image-Is-Everything/)


36. Diana Morgan (23 July 1990). "Confocal Microscopes Widen Cell Biology Career Horizons". The Scientist. online
(http://www.the-scientist.com/?articles.view/articleNo/11261/title/Confocal-Microscopes-Widen-Cell-Biology-Caree
r-Horizons/)
37. "Apparatus and method for particle analysis" (http://www.google.com/patents/US4871251).
38. "Apparatus and method for particle analysis" (http://www.google.com/patents/US5012118).
39. Mettler-Toledo International Inc. "Particle Size Distribution Analysis" (http://www.mt.com/us/en/home/products/L1_AutochemProducts/FBRM-PVM-Particle-System-Characterization.html).

External links
Virtual CLSM (Java-based) (http://micro.magnet.fsu.edu/primer/virtual/confocal/index.html)
animations and explanations on various types of microscopes including fluorescent and confocal
microscopes (http://toutestquantique.fr/en/microscopy/). (Université Paris Sud)


Crystallography
From Wikipedia, the free encyclopedia

Crystallography is the experimental science of determining the arrangement of atoms in the crystalline solids (see crystal structure). The word "crystallography" derives from the Greek words crystallon "cold drop, frozen drop", with its meaning extending to all solids with some degree of transparency, and graphein "to write". In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming that 2014 would be the International Year of Crystallography.[1] X-ray crystallography is used to determine the structure of large biomolecules such as proteins. Before the development of X-ray diffraction crystallography (see below), the study of crystals was based on physical measurements of their geometry. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. This physical measurement is carried out using a goniometer. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established.

[Figure: A crystalline solid: atomic resolution image of strontium titanate. Brighter atoms are strontium and darker ones are titanium.]

Crystallographic methods now depend on analysis of the diffraction patterns of a sample targeted by a beam of
some type. X-rays are most commonly used; other beams used include electrons or neutrons. This is facilitated
by the wave properties of the particles. Crystallographers often explicitly state the type of beam used, as in the
terms X-ray crystallography, neutron diffraction and electron diffraction. These three types of radiation interact
with the specimen in different ways.

X-rays interact with the spatial distribution of electrons in the sample.


Electrons are charged particles and therefore interact with the total charge distribution of both the atomic
nuclei and the electrons of the sample.
Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition, the
magnetic moment of neutrons is non-zero. They are therefore also scattered by magnetic fields. When
neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high
noise levels. However, the material can sometimes be treated to substitute deuterium for hydrogen.

Because of these different forms of interaction, the three types of radiation are suitable for different
crystallographic studies.

Contents
1 Theory
2 Notation
3 Techniques
4 In materials science
5 Biology
6 Reference Literature in Crystallography
7 Scientists of note
8 See also
9 References
10 External links

Theory

An image of a small object is made using a lens to focus the beam, similar to a lens in a microscope. However,
the wavelength of visible light (about 4000 to 7000 ångström) is three orders of magnitude longer than the
length of typical atomic bonds and atoms themselves (about 1 to 2 Å). Therefore, obtaining information about
the spatial arrangement of atoms requires the use of radiation with shorter wavelengths, such as X-ray or
neutron beams. Employing shorter wavelengths implied abandoning microscopy and true imaging, however,
because there exists no material from which a lens capable of focusing this type of radiation can be created.
(Nevertheless, scientists have had some success focusing X-rays with microscopic Fresnel zone plates made
from gold, and by critical-angle reflection inside long tapered capillaries.)[2] Diffracted X-ray or neutron beams
cannot be focused to produce images, so the sample structure must be reconstructed from the diffraction
pattern. Sharp features in the diffraction pattern arise from periodic, repeating structure in the sample, which
are often very strong due to coherent reflection of many photons from many regularly spaced instances of
similar structure, while non-periodic components of the structure result in diffuse (and usually weak) diffraction
features - areas with a higher density and repetition of atom order tend to reflect more light toward one point in
space when compared to those areas with fewer atoms and less repetition.

Because of their highly ordered and repetitive structure, crystals give diffraction patterns of sharp Bragg
reflection spots, and are ideal for analyzing the structure of solids.
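To make the link between periodic structure and sharp diffraction features concrete, the short Python sketch below (a toy illustration with invented parameters) builds a one-dimensional periodic "electron density", takes the magnitude of its Fourier transform (the quantity a diffraction experiment records, the phases being lost) and shows that the intensity concentrates in sharp peaks spaced by 2π/a for a lattice spacing a.

    import numpy as np

    # Toy 1D "crystal": a row of Gaussian "atoms" with lattice spacing a.
    n_points = 4096      # samples along the 1D box (illustrative choice)
    length = 100.0       # box length, arbitrary units
    a = 5.0              # lattice spacing
    sigma = 0.3          # width of each atomic density peak

    x = np.linspace(0.0, length, n_points, endpoint=False)
    density = np.zeros_like(x)
    for center in np.arange(a / 2, length, a):   # periodic row of atoms
        density += np.exp(-0.5 * ((x - center) / sigma) ** 2)

    # A diffraction experiment measures |F(q)|^2, the squared magnitude of the
    # Fourier transform of the density; the phases of F(q) are not recorded.
    F = np.fft.rfft(density)
    q = 2.0 * np.pi * np.fft.rfftfreq(n_points, d=length / n_points)
    intensity = np.abs(F) ** 2

    # Sharp peaks appear only near the reciprocal-lattice points q = 2*pi*h/a.
    peaks = q[intensity > 0.01 * intensity[1:].max()]
    print("strong peaks near q =", np.round(peaks[:8], 3))
    print("expected spacing 2*pi/a =", round(2.0 * np.pi / a, 3))

Non-periodic contributions, by contrast, spread their intensity over many q values, which is the diffuse background described above.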

Notation
Coordinates in square brackets such as [100] denote a direction vector (in real space).
Coordinates in angle brackets or chevrons such as <100> denote a family of directions which are related
by symmetry operations. In the cubic crystal system for example, <100> would mean [100], [010], [001]
or the negative of any of those directions.
Miller indices in parentheses such as (100) denote a plane of the crystal structure, and regular repetitions
of that plane with a particular spacing. In the cubic system, the normal to the (hkl) plane is the direction
[hkl], but in lower-symmetry cases, the normal to (hkl) is not parallel to [hkl].
Indices in curly brackets or braces such as {100} denote a family of planes and their normals which are equivalent in cubic materials due to symmetry operations, much the way angle brackets denote a family of directions. In non-cubic materials, <hkl> is not necessarily perpendicular to {hkl}. (A small sketch after this list illustrates the cubic case.)
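As a rough illustration of this notation, the Python snippet below (a minimal, self-contained sketch) enumerates the <100> and <110> families of directions in a cubic crystal by applying every index permutation and sign change, which are the symmetry operations referred to above.

    from itertools import permutations, product

    def direction_family(h, k, l):
        """All symmetry-equivalent directions <hkl> in a cubic crystal:
        every permutation of the indices combined with every choice of signs."""
        members = set()
        for perm in permutations((h, k, l)):
            for signs in product((1, -1), repeat=3):
                members.add(tuple(s * p for s, p in zip(signs, perm)))
        return sorted(members)

    print("<100> family:", direction_family(1, 0, 0))            # 6 directions
    print("<110> family size:", len(direction_family(1, 1, 0)))  # 12 directions

    # In the cubic system the normal to the (hkl) plane is the [hkl] direction,
    # so the normal to (100) is [100]; in lower-symmetry lattices this fails.

In the same spirit, {100} in a cubic material collects the planes whose normals are the six <100> directions.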

Techniques
Some materials that have been analyzed crystallographically, such as proteins, do not occur naturally as
crystals. Typically, such molecules are placed in solution and allowed to slowly crystallize through vapor
diffusion. A drop of solution containing the molecule, buffer, and precipitants is sealed in a container with a
reservoir containing a hygroscopic solution. Water in the drop diffuses to the reservoir, slowly increasing the
concentration and allowing a crystal to form. If the concentration were to rise more quickly, the molecule
would simply precipitate out of solution, resulting in disorderly granules rather than an orderly and hence
usable crystal.

Once a crystal is obtained, data can be collected using a beam of radiation. Although many universities that
engage in crystallographic research have their own X-ray producing equipment, synchrotrons are often used as
X-ray sources, because of the purer and more complete patterns such sources can generate. Synchrotron sources
also have a much higher intensity of X-ray beams, so data collection takes a fraction of the time normally
necessary at weaker sources. Complementary neutron crystallography techniques are used to identify the
positions of hydrogen atoms, since X-rays only interact very weakly with light elements such as hydrogen.

Producing an image from a diffraction pattern requires sophisticated mathematics and often an iterative process
of modelling and refinement. In this process, the mathematically predicted diffraction patterns of an
hypothesized or "model" structure are compared to the actual pattern generated by the crystalline sample.
Ideally, researchers make several initial guesses, which through refinement all converge on the same answer.
Models are refined until their predicted patterns match to as great a degree as can be achieved without radical
revision of the model. This is a painstaking process, made much easier today by computers.
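The agreement between a model and the measured data is commonly summarized by a residual such as the crystallographic R-factor, R = Σ| |Fobs| - |Fcalc| | / Σ|Fobs|, taken over the measured reflections. The Python sketch below only illustrates that bookkeeping: the structure-factor amplitudes are invented numbers, and in a real refinement Fcalc would be recomputed from the atomic model at every cycle.

    def r_factor(f_obs, f_calc):
        """Crystallographic R-factor: sum |dF| / sum |F_obs| over reflections."""
        num = sum(abs(abs(fo) - abs(fc)) for fo, fc in zip(f_obs, f_calc))
        den = sum(abs(fo) for fo in f_obs)
        return num / den

    # Invented amplitudes for a handful of reflections (illustration only).
    f_obs          = [120.0, 85.0, 60.0, 42.0, 15.0]
    f_calc_initial = [100.0, 95.0, 50.0, 55.0, 10.0]   # poor starting model
    f_calc_refined = [118.0, 86.5, 61.0, 40.5, 14.0]   # after refinement

    print("R before refinement: %.3f" % r_factor(f_obs, f_calc_initial))
    print("R after refinement:  %.3f" % r_factor(f_obs, f_calc_refined))
    # A lower R means the predicted pattern matches the measured one better.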


The mathematical methods for the analysis of diffraction data only apply to patterns, which in turn result only
when waves diffract from orderly arrays. Hence crystallography applies for the most part only to crystals, or to
molecules which can be coaxed to crystallize for the sake of measurement. In spite of this, a certain amount of
molecular information can be deduced from patterns that are generated by fibers and powders, which while not
as perfect as a solid crystal, may exhibit a degree of order. This level of order can be sufficient to deduce the
structure of simple molecules, or to determine the coarse features of more complicated molecules. For example,
the double-helical structure of DNA was deduced from an X-ray diffraction pattern that had been generated by
a fibrous sample.

In materials science
Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects
of the crystalline arrangement of atoms is often easy to see macroscopically, because the natural shapes of
crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects.
The understanding of crystal structures is an important prerequisite for understanding crystallographic defects.
Mostly, materials do not occur as a single crystal, but in poly-crystalline form (i.e., as an aggregate of small
crystals with different orientations). Because of this, the powder diffraction method, which takes diffraction
patterns of polycrystalline samples with a large number of crystals, plays an important role in structural
determination.

Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat,
platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the
plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms
can be studied by crystallographic texture measurements.

In another example, iron transforms from a body-centered cubic (bcc) structure to a face-centered cubic (fcc)
structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc
structure; thus the volume of the iron decreases when this transformation occurs.

Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. X-ray or neutron diffraction can be used to identify which patterns are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory and geometry.

Biology
X-ray crystallography is the primary method for determining the molecular conformations of biological
macromolecules, particularly protein and nucleic acids such as DNA and RNA. In fact, the double-helical
structure of DNA was deduced from crystallographic data. The first crystal structure of a macromolecule was
solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis.[3] The
Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins and other biological
macromolecules. Computer programs such as RasMol or Pymol can be used to visualize biological molecular
structures. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve
a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions
and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly
even off many light isotopes, including hydrogen and deuterium. Electron crystallography has been used to
determine some protein structures, most notably membrane proteins and viral capsids.

Reference Literature in Crystallography


The International Tables for Crystallography[4] is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that cover analysis methods and the mathematical procedures for determining organic structure through X-ray crystallography, electron diffraction, and neutron diffraction. The International Tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are:

Vol A - Space Group Symmetry

Vol A1 - Symmetry Relations Between Space Groups

Vol B - Reciprocal Space

Vol C - Mathematical, Physical, and Chemical Tables

Vol D - Physical Properties of Crystals

Vol E - Subperiodic Groups

Vol F - Crystallography of Biological Macromolecules

Vol G - Definition and Exchange of Crystallographic Data

Scientists of note
William Astbury
William Barlow
C. Arnold Beevers
John Desmond Bernal
William Henry Bragg
William Lawrence Bragg
Auguste Bravais
Glenn H. Brown
Martin Julian Buerger
Francis Crick
Pierre Curie
Peter Debye
Johann Deisenhofer
Boris Delone
Gautam R. Desiraju
Jack Dunitz
David Eisenberg
Paul Peter Ewald
Evgraf Stepanovich Fedorov
Rosalind Franklin
Georges Friedel
Paul Heinrich von Groth
René Just Haüy
Wayne Hendrickson
Carl Hermann
Johann Friedrich Christian Hessel
Dorothy Crowfoot Hodgkin
Robert Huber
Isabella Karle
Jerome Karle
Aaron Klug
Max von Laue
Otto Lehmann
Michael Levitt
Henry Lipson
Kathleen Lonsdale
Ernest-François Mallard
Charles-Victor Mauguin
William Hallowes Miller
Friedrich Mohs
Paul Niggli
Louis Pasteur
Arthur Lindo Patterson
Max Perutz
Friedrich Reinitzer
Hugo Rietveld
Jean-Baptiste L. Romé de l'Isle
Michael Rossmann
Paul Scherrer
Arthur Moritz Schönflies
Dan Shechtman
George M. Sheldrick
Tej P. Singh
Nicolas Steno
Constance Tipper
Daniel Vorländer
Christian Samuel Weiss
Don Craig Wiley
Ralph Walter Graystone Wyckoff
Ada Yonath

See also
Abnormal grain growth
Atomic packing factor
Beevers–Lipson strip
Condensed matter physics
Crystal engineering
Crystal growth
Crystal optics
Crystal structure
Crystallite
Crystallization processes
Crystallographic database
Crystallographic point group
Crystallographic group
Dynamical theory of diffraction
Electron crystallography
Euclidean plane isometry
Fixed points of isometry groups in Euclidean space
Fractional coordinates
Group action
International Year of Crystallography
Laser-heated pedestal growth
Materials science
Metallurgy
Mineralogy
Modeling of polymer crystals
Neutron crystallography
Neutron diffraction at OPAL
Neutron diffraction at the ILL
NMR crystallography
Permutation group
Point group
Precession electron diffraction
Quantum mineralogy
Quasicrystal
Solid state chemistry
Space group
Symmetric group
References
1. UN announcement "International Year of Crystallography" (http://www.iycr2014.org/about/resolution). iycr2014.org.
12 July 2012
2. Snigirev, A. (2007). "Two-step hard X-ray focusing combining Fresnel zone plate and single-bounce ellipsoidal
capillary". Journal of Synchrotron Radiation. 14 (Pt 4): 326–330. PMID 17587657 (https://www.ncbi.nlm.nih.gov/pu
bmed/17587657). doi:10.1107/S0909049507025174 (https://doi.org/10.1107%2FS0909049507025174).
3. Kendrew, J. C.; Bodo, G.; Dintzis, H. M.; Parrish, R. G.; Wyckoff, H.; Phillips, D. C. (1958). "A Three-Dimensional
Model of the Myoglobin Molecule Obtained by X-Ray Analysis". Nature. 181 (4610): 662–6.
Bibcode:1958Natur.181..662K (http://adsabs.harvard.edu/abs/1958Natur.181..662K). PMID 13517261 (https://www.
ncbi.nlm.nih.gov/pubmed/13517261). doi:10.1038/181662a0 (https://doi.org/10.1038%2F181662a0).
4. Prince, E. (2006). International Tables for Crystallography Vol. C: Mathematical, Physical and Chemical Tables.
Wiley. ISBN 978-1-4020-4969-9.

External links
American Crystallographic Association (http://www.AmerCrystalAssn.org/)
Learning Crystallography (http://www.xtal.iqfr.csic.es/Cristalografia/index-en.html)
Crystal Lattice Structures (https://web.archive.org/web/20080324193801/http://cst-www.nrl.navy.mil:80/
lattice/spcgrp/)
100 Years of Crystallography (http://richannel.org/celebrating-crystallography) Royal Institution
animation
Vega Science Trust Interviews on Crystallography (http://www.vega.org.uk) Freeview video interviews
with Max Perutz, Robert Huber and Aaron Klug.
Commission on Crystallographic Teaching, Pamphlets (http://www.iucr.org/iucr-top/comm/cteach/pamph
lets.html)
Ames Laboratory, US DOE Crystallography Research Resources (http://www.mcbmm.ameslab.gov/inde
x.html)
International Union of Crystallography (http://www.iucr.org/)
Web Portal of Open Access Crystallography Resources (http://nanocrystallography.net)
Interactive Crystallography Timeline (http://rigb.org/our-history/history-of-research/crystallography-time
line) from the Royal Institution
Nature Milestones in Crystallography (http://www.nature.com/milestones/milecrystal/index.html)
Crystallography in the 21st century (http://journals.iucr.org/a/issues/2015/06/00/me0590/index.html)
(editorial in Acta Crystallographica Section A)


X-ray crystallography
From Wikipedia, the free encyclopedia

X-ray crystallography is a technique used for determining the atomic and molecular
structure of a crystal, in which the crystalline atoms cause a beam of incident X-rays to
diffract into many specific directions. By measuring the angles and intensities of these
diffracted beams, a crystallographer can produce a three-dimensional picture of the
density of electrons within the crystal. From this electron density, the mean positions of
the atoms in the crystal can be determined, as well as their chemical bonds, their disorder,
and various other information.

Since many materials can form crystals—such as salts, metals, minerals, semiconductors, as well as various inorganic, organic, and biological molecules—X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the chief method for characterizing the atomic structure of new materials and in discerning materials that appear similar by other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.

[Figure: X-ray crystallography provides an atomistic model of zeolite, an aluminosilicate.]

In a single-crystal X-ray diffraction measurement, a crystal is mounted on a goniometer. The goniometer is used to position the
crystal at selected orientations. The crystal is illuminated with a finely focused monochromatic beam of X-rays, producing a
diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different orientations are
converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier
transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the
crystals are too small, or not uniform enough in their internal makeup.

X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be
produced by scattering electrons or neutrons, which are likewise interpreted by Fourier transformation. If single crystals of
sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods
include fiber diffraction, powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS). If the
material under investigation is only available in the form of nanocrystalline powders or suffers from poor crystallinity, the
methods of electron crystallography can be applied for determining the atomic structure.

For all above mentioned X-ray diffraction methods, the scattering is elastic; the scattered X-rays have the same wavelength as
the incoming X-ray. By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, rather than
the distribution of its atoms.

Contents
1 History
1.1 Early scientific history of crystals and X-rays
1.2 X-ray diffraction
1.3 Scattering
1.4 Development from 1912 to 1920
1.5 Cultural and aesthetic importance
2 Contributions to chemistry and material science
2.1 Mineralogy and metallurgy
2.2 Early organic and small biological molecules
2.3 Biological macromolecular crystallography
3 Relationship to other scattering techniques
3.1 Elastic vs. inelastic scattering
3.2 Other X-ray techniques
3.3 Electron and neutron diffraction
4 Methods
4.1 Overview of single-crystal X-ray diffraction
4.1.1 Procedure
4.1.2 Limitations
4.2 Crystallization

4.3 Data collection


4.3.1 Mounting the crystal
4.3.2 X-ray sources
4.3.2.1 Rotating anode
4.3.2.2 Synchrotron radiation
4.3.2.3 Free electron laser
4.3.3 Recording the reflections
4.4 Data analysis
4.4.1 Crystal symmetry, unit cell, and image scaling
4.4.2 Initial phasing
4.4.3 Model building and phase refinement
4.4.4 Disorder
4.5 Deposition of the structure
5 Diffraction theory
5.1 Intuitive understanding by Bragg's law
5.2 Scattering as a Fourier transform
5.3 Friedel and Bijvoet mates
5.4 Ewald's sphere
5.5 Patterson function
5.6 Advantages of a crystal
6 Nobel Prizes for X-ray crystallography
7 Applications of X-ray diffraction
7.1 X-ray method for investigation of drugs
7.2 X-ray method for investigation of textile fibers and polymers
7.3 X-ray method for investigation of bones
8 See also
9 References
10 Further reading
10.1 International Tables for Crystallography
10.2 Bound collections of articles
10.3 Textbooks
10.4 Applied computational data analysis
10.5 Historical
11 External links
11.1 Tutorials
11.2 Primary databases
11.3 Derivative databases
11.4 Structural validation

History
Early scientific history of crystals and X-rays

Crystals, though long admired for their regularity and symmetry, were not investigated
scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu
de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal
symmetry of snowflake crystals was due to a regular packing of spherical water
particles.[1]

The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal,[2] and René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which remain in use today for identifying crystal faces. Haüy's study led to the correct idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions that are not necessarily perpendicular. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel,[3] Auguste Bravais,[4] Evgraf Fedorov,[5] Arthur Schönflies[6] and (belatedly) William Barlow (1894). From the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography;[7] however, the available data were too scarce in the 1880s to accept his models as conclusive.

[Figure: Drawing of square (Figure A, above) and hexagonal (Figure B, below) packing from Kepler's work, Strena seu de Nive Sexangula.]

Wilhelm Röntgen discovered X-rays in 1895, just as the studies of crystal symmetry were being concluded. Physicists were initially uncertain of the nature of X-rays, but soon suspected (correctly) that they were waves of electromagnetic radiation, in other words, another form of light. At that time, the wave model of light—specifically, the Maxwell theory of electromagnetic radiation—was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. However, X-rays are composed of photons, and thus are not only waves of electromagnetic radiation but also exhibit particle-like properties. Albert Einstein introduced the photon concept in 1905,[8] but it was not broadly accepted until 1922,[9][10] when Arthur Compton confirmed it by the scattering of X-rays from electrons.[11] Therefore, these particle-like properties of X-rays, such as their ionization of gases, caused William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation.[12][13][14][15] Nevertheless, Bragg's view was not broadly accepted and the observation of X-ray diffraction by Max von Laue in 1912[16] confirmed for most scientists that X-rays were a form of electromagnetic radiation.

[Figure: As shown by X-ray crystallography, the hexagonal symmetry of snowflakes results from the tetrahedral arrangement of hydrogen bonds about each water molecule. The water molecules are arranged similarly to the silicon atoms in the tridymite polymorph of SiO2. The resulting crystal structure has hexagonal symmetry when viewed along a principal axis.]

[Figure: X-ray crystallography shows the arrangement of water molecules in ice, revealing the hydrogen bonds (1) that hold the solid together. Few other methods can determine the structure of matter with such precision (resolution).]

X-ray diffraction

Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions, determined by Bragg's law:

2d sin θ = nλ

Here d is the spacing between diffracting planes, θ is the incident angle, n is any integer, and λ is the wavelength of the beam. These specific directions appear as spots on the diffraction pattern called reflections. Thus, X-ray diffraction results from an electromagnetic wave (the X-ray) impinging on a regular array of scatterers (the repeating arrangement of atoms within the crystal).

[Figure: The incoming beam (coming from upper left) causes each scatterer to re-radiate a small portion of its intensity as a spherical wave. If scatterers are arranged symmetrically with a separation d, these spherical waves will be in sync (add constructively) only in directions where their path-length difference 2d sin θ equals an integer multiple of the wavelength λ. In that case, part of the incoming beam is deflected by an angle 2θ, producing a reflection spot in the diffraction pattern.]

X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1–100 angstroms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the later 17th century. The first artificial diffraction gratings for visible light were constructed by David Rittenhouse in 1787, and Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength (typically, 5500 angstroms) to observe diffraction from crystals. Prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty.
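As a quick numeric check of Bragg's law (a sketch with illustrative inputs: the common laboratory Cu Kα wavelength of about 1.5406 Å and an assumed plane spacing of 3.0 Å, neither taken from the text), the Python snippet below lists the angles at which successive diffraction orders n are allowed.

    import math

    def bragg_angles(d_spacing, wavelength, max_order=5):
        """Return (n, theta in degrees, 2*theta in degrees) for each order n
        allowed by Bragg's law  2 * d * sin(theta) = n * lambda."""
        results = []
        for n in range(1, max_order + 1):
            s = n * wavelength / (2.0 * d_spacing)
            if s > 1.0:      # sin(theta) cannot exceed 1: no reflection
                break
            theta = math.degrees(math.asin(s))
            results.append((n, theta, 2.0 * theta))
        return results

    # Illustrative values: Cu K-alpha wavelength and a 3.0 angstrom spacing.
    for n, theta, two_theta in bragg_angles(d_spacing=3.0, wavelength=1.5406):
        print("n=%d  theta=%.2f deg  2theta=%.2f deg" % (n, theta, two_theta))

Orders for which nλ would exceed 2d are geometrically forbidden, which is why only a finite set of reflections is observed for a given wavelength.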


The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald
and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this
model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators.
Von Laue realized that electromagnetic radiation of a shorter wavelength was needed to observe such small spacings, and
suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two
technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and
record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots
arranged in a pattern of intersecting circles around the spot produced by the central beam.[16][17] Von Laue developed a law that
connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the
Nobel Prize in Physics in 1914.[18]

Scattering

As described in the mathematical derivation below, the X-ray scattering is determined by the density of electrons within the
crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson
scattering, the interaction of an electromagnetic ray with a free electron. This model is generally adopted to describe the
polarization of the scattered radiation.

The intensity of Thomson scattering for one particle with mass m and charge q is:[19]

I = I₀ (q⁴ / m²c⁴) · (1 + cos² 2θ) / 2

Hence the atomic nuclei, which are much heavier than an electron, contribute negligibly to the scattered X-rays.
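Because the Thomson intensity scales as 1/m², the relative contribution of a nucleus can be estimated from a simple mass ratio. The sketch below (plain arithmetic with standard physical constants; values chosen only for illustration) compares an electron with a proton, the lightest nucleus.

    # Thomson scattering intensity scales as q^4 / m^2 for a particle of
    # charge q and mass m, so for equal charge the ratio is (m_heavy/m_light)^2.
    m_e = 9.10938e-31   # electron mass, kg
    m_p = 1.67262e-27   # proton mass, kg

    ratio = (m_p / m_e) ** 2
    print("electron / proton scattering intensity ratio: %.2e" % ratio)
    # ~3.4e6: even the lightest nucleus scatters millions of times more weakly.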

Development from 1912 to 1920

After Von Laue's pioneering research, the field developed rapidly, most notably by
physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913,
the younger Bragg developed Bragg's law, which connects the observed scattering with
reflections from evenly spaced planes within the crystal.[20][21][22] The Braggs, father
and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The
earliest structures were generally simple and marked by one-dimensional symmetry.
However, as computational and experimental methods improved over the next decades, it
became feasible to deduce reliable atomic positions for more complicated two- and three-
dimensional arrangements of atoms in the unit-cell.
The potential of X-ray crystallography for determining the structure of molecules and minerals—then only known vaguely from chemical and hydrodynamic experiments—was realized immediately. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt.[23][24][25] The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds.[26] The structure of diamond was solved in the same year,[27][28] proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was 1.52 angstroms. Other early structures included copper,[29] calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2)[30] in 1914; spinel (MgAl2O4) in 1915;[31][32] the rutile and anatase forms of titanium dioxide (TiO2) in 1916;[33] and pyrochroite Mn(OH)2 and, by extension, brucite Mg(OH)2 in 1919.[34][35] Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure became known in 1920.[36]

[Figure: Although diamonds (top left) and graphite (top right) are identical in chemical composition—being both pure carbon—X-ray crystallography revealed that the arrangement of their atoms (bottom) accounts for their different properties. In diamond, the carbon atoms are arranged tetrahedrally and held together by single covalent bonds, making it strong in all directions. By contrast, graphite is composed of stacked sheets. Within the sheet, the bonding is covalent and has hexagonal symmetry, but there are no covalent bonds between the sheets, making graphite easy to cleave into flakes.]

The structure of graphite was solved in 1916[37] by the related method of powder diffraction,[38] which was developed by Peter
Debye and Paul Scherrer and, independently, by Albert Hull in 1917.[39] The structure of graphite was determined from single-
crystal diffraction in 1924 by two groups independently.[40][41] Hull also used the powder method to determine the structures of
various metals, such as iron[42] and magnesium.[43]


Cultural and aesthetic importance

In what has been called his scientific autobiography, The Development of X-ray Analysis, Sir William Lawrence Bragg
mentioned that he believed the field of crystallography was particularly welcoming to women because the techno-aesthetics of
the molecular structures resembled textiles and household objects. Bragg was known to compare crystal formation to "curtains,
wallpapers, mosaics, and roses".[44]

In 1951, the Festival Pattern Group at the Festival of Britain hosted a collaborative group of textile manufacturers and
experienced crystallographers to design lace and prints based on the X-ray crystallography of insulin, china clay, and
hemoglobin. One of the leading scientists of the project was Dr. Helen Megaw (1907–2002), the Assistant Director of Research
at the Cavendish Laboratory in Cambridge at the time. Megaw is credited as one of the central figures who took inspiration from
crystal diagrams and saw their potential in design.[45] In 2008, the Wellcome Collection in London curated an exhibition on the
Festival Pattern Group called "From Atom to Patterns".[45]

Contributions to chemistry and material science


X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies
revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding
of carbon in the diamond structure,[27] the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV),[46]
and the resonance observed in the planar carbonate group[30] and in aromatic molecules.[47] Kathleen Lonsdale's 1928 structure
of hexamethylbenzene[48] established the hexagonal symmetry of benzene and showed a clear difference in bond length between
the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had
profound consequences for the development of chemistry.[49] Her conclusions were anticipated by William Henry Bragg, who
published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular
replacement.[47][50]

Also in the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely
structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an
understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide.

The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray
crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-
metal double bonds,[51][52][53] metal-metal quadruple bonds,[54][55][56] and three-center, two-electron bonds.[57] X-ray
crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly
covalent character of hydrogen bonds.[58] In the field of organometallic chemistry, the X-ray structure of ferrocene initiated
scientific studies of sandwich compounds,[59][60] while that of Zeise's salt stimulated research into "back bonding" and metal-pi
complexes.[61][62][63][64] Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry,
particularly in clarifying the structures of the crown ethers and the principles of host-guest chemistry.

In material sciences, many complicated inorganic and organometallic systems have been analyzed using single-crystal methods,
such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the
pharmaceutical industry, due to recent problems with polymorphs. The major factors affecting the quality of single-crystal
structures are the crystal's size and regularity; recrystallization is a commonly used technique to improve these factors in small-
molecule crystals. The Cambridge Structural Database contains over 800,000 structures as of September 2016; over 99% of
these structures were determined by X-ray diffraction.

Mineralogy and metallurgy

Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and
metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in
1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that,
as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended
these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray
crystallography to metallurgy likewise occurred in the mid-1920s.[66][67][68][69][70][71] Most notably, Linus Pauling's structure of
the alloy Mg2Sn[72] led to his theory of the stability and structure of complex ionic crystals.[73]

On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of
Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar,
pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of
Hawaiian volcanoes.[65]

Early organic and small biological molecules

The first structure of an organic compound, hexamethylenetetramine, was solved in 1923.[74] This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes.[75][76][77][78][79][80][81][82][83] In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine,[84] a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.

[Figure: First X-ray diffraction view of Martian soil – CheMin analysis reveals feldspar, pyroxenes, olivine and more (Curiosity rover at "Rocknest", October 17, 2012).[65]]

X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.[85]

[Figure: The three-dimensional structure of penicillin, solved by Dorothy Crowfoot Hodgkin in 1945. The green, white, red, yellow and blue spheres represent atoms of carbon, hydrogen, oxygen, sulfur and nitrogen, respectively.]

Biological macromolecular crystallography

Crystal structures of proteins (which are irregular and hundreds of times larger than
cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm
whale myoglobin by Sir John Cowdery Kendrew,[86] for which he shared the Nobel Prize
in Chemistry with Max Perutz in 1962. Since that success, 132055 X-ray crystal
structures of proteins, nucleic acids and other biological molecules have been
determined.[87] For comparison, the nearest competing method in terms of structures
analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved 11904
chemical structures.[88] Moreover, crystallography can solve structures of arbitrarily large
molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70
kDa). X-ray crystallography is now used routinely by scientists to determine how a
pharmaceutical drug interacts with its protein target and what changes might improve it.[89] However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other means to solubilize them in isolation, and such detergents often interfere with crystallization. Such membrane proteins are a large component of the genome, and include many proteins of great physiological importance, such as ion channels and receptors.[90][91] Helium cryogenics are used to prevent radiation damage in protein crystals.[92]

[Figure: Ribbon diagram of the structure of myoglobin, showing colored alpha helices. Such proteins are long, linear molecules with thousands of atoms; yet the relative position of each atom has been determined with sub-atomic resolution by X-ray crystallography. Since it is difficult to visualize all the atoms at once, the ribbon shows the rough path of the protein polymer from its N-terminus (blue) to its C-terminus (red).]

On the other end of the size scale, even relatively small molecules may pose challenges for the resolving power of X-ray crystallography. The structure assigned in 1991 to the antibiotic isolated from a marine organism, diazonamide A (C40H34Cl2N6O6, molar mass 765.65 g/mol), proved to be incorrect by the classical proof of structure: a synthetic sample was not identical to the natural product. The mistake was attributed to the inability of X-ray crystallography to distinguish between the correct -OH / >NH and the interchanged -NH2 / -O- groups in the incorrect structure.[93] With advances in instrumentation, however, it is now routinely possible to distinguish between such similar groups using modern single-crystal X-ray diffractometers.

Relationship to other scattering techniques


Elastic vs. inelastic scattering

X-ray crystallography is a form of elastic scattering; the outgoing X-rays have the same energy, and thus same wavelength, as
the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the
incoming X-ray to the crystal, e.g., by exciting an inner-shell electron to a higher energy level. Such inelastic scattering reduces
the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such excitations of
matter, but not in determining the distribution of scatterers within the matter, which is the goal of X-ray crystallography.


X-rays range in wavelength from 10 to 0.01 nanometers; a typical wavelength used for crystallography is 1 Å (0.1 nm), which is
on the scale of covalent chemical bonds and the radius of a single atom. Longer-wavelength photons (such as ultraviolet
radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength
photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter,
producing particle-antiparticle pairs. Therefore, X-rays are the "sweet spot" for wavelength when determining atomic-resolution
structures from the scattering of electromagnetic radiation.
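As a quick numerical check of this wavelength "sweet spot" (an illustrative Python sketch, not part of the original article; the constants are the standard physical values), the relation λ = hc/E gives roughly 1 Å at about 12.4 keV and the familiar Cu Kα line near 1.54 Å:

# Photon energy <-> wavelength, illustrating why ~1 Å X-rays (about 12 keV)
# match the length scale of chemical bonds.
PLANCK_EV_S = 4.135667696e-15      # Planck constant in eV*s
SPEED_OF_LIGHT_M_S = 2.99792458e8  # speed of light in m/s

def wavelength_angstrom(energy_keV):
    """Return the photon wavelength in angstroms for a given energy in keV."""
    energy_eV = energy_keV * 1.0e3
    wavelength_m = PLANCK_EV_S * SPEED_OF_LIGHT_M_S / energy_eV
    return wavelength_m * 1.0e10

print(wavelength_angstrom(12.4))   # ~1.00 Å, a typical crystallographic wavelength
print(wavelength_angstrom(8.05))   # ~1.54 Å, roughly the Cu Kα line of a laboratory tube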

Other X-ray techniques

Other forms of elastic X-ray scattering include powder diffraction, Small-Angle X-ray Scattering (SAXS) and several types of
X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general,
single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently
large and regular crystal, which is not always available.

These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor
deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-
ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction.
Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in
structural studies of very rapid events (Time resolved crystallography). However, it is not as well-suited as monochromatic
scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple
atomic arrangements.

The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is
too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the
diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used[94] to
interpret the back reflection Laue photograph.

Electron and neutron diffraction

Other particles, such as electrons and neutrons, may be used to produce a diffraction pattern. Although electron, neutron, and X-
ray scattering are based on different physical processes, the resulting diffraction patterns are analyzed using the same coherent
diffraction imaging techniques.

As derived below, the electron density within the crystal and the diffraction patterns are related by a simple mathematical
method, the Fourier transform, which allows the density to be calculated relatively easily from the patterns. However, this works
only if the scattering is weak, i.e., if the scattered beams are much less intense than the incoming beam. Weakly scattered beams
pass through the remainder of the crystal without undergoing a second scattering event. Such re-scattered waves are called
"secondary scattering" and hinder the analysis. Any sufficiently thick crystal will produce secondary scattering, but since X-rays
interact relatively weakly with the electrons, this is generally not a significant concern. By contrast, electron beams may produce
strong secondary scattering even for relatively thin crystals (>100 nm). Since this thickness corresponds to the diameter of many
viruses, a promising direction is the electron diffraction of isolated macromolecular assemblies, such as viral capsids and
molecular machines, which may be carried out with a cryo-electron microscope. Moreover, the strong interaction of electrons
with matter (about 1000 times stronger than for X-rays) allows determination of the atomic structure of extremely small
volumes. The field of applications for electron crystallography ranges from biomolecules such as membrane proteins, through organic thin films, to the complex structures of (nanocrystalline) intermetallic compounds and zeolites.

Neutron diffraction is an excellent method for structure determination, although it has been difficult to obtain intense,
monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although sources
producing neutrons by spallation are becoming increasingly available. Being uncharged, neutrons scatter much more readily
from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is very useful for observing the positions of
light atoms with few electrons, especially hydrogen, which is essentially invisible in the X-ray diffraction. Neutron scattering
also has the remarkable property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy
water, D2O.

Methods
Overview of single-crystal X-ray diffraction

The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays
strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a
diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated.[95] Each spot
is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal.

For single crystals of sufficient purity and regularity, X-ray diffraction data can determine
the mean chemical bond lengths and angles to within a few thousandths of an angstrom
and to within a few tenths of a degree, respectively. The atoms in a crystal are not static,
but oscillate about their mean positions, usually by less than a few tenths of an angstrom.
X-ray crystallography allows measuring the size of these oscillations.

Procedure

The technique of single-crystal X-ray crystallography has three basic steps. The first—
and often most difficult—step is to obtain an adequate crystal of the material under study.
The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions),
pure in composition and regular in structure, with no significant internal imperfections
such as cracks or twinning.

In the second step, the crystal is placed in an intense beam of X-rays, usually of a single
wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the
crystal is gradually rotated, previous reflections disappear and new ones appear; the
intensity of every spot is recorded at every orientation of the crystal. Multiple data sets
may have to be collected, with each set covering slightly more than half a full rotation of
the crystal and typically containing tens of thousands of reflections.
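As a back-of-the-envelope illustration of the bookkeeping involved (all numbers below are hypothetical, not taken from the article), the number of images in one data set follows directly from the total rotation range and the oscillation width per image:

import math

# Hypothetical data-collection parameters for a single sweep.
total_rotation_deg = 182.0   # slightly more than half a full rotation
oscillation_deg = 0.5        # crystal rotation per recorded image

n_images = math.ceil(total_rotation_deg / oscillation_deg)
print(n_images)   # 364 images for this sweep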
Workflow for solving the structure of a molecule by X-ray crystallography.

In the third step, these data are combined computationally with complementary chemical
information to produce and refine a model of the arrangement of atoms within the crystal.
The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database.

Limitations

As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray
crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-
ray crystallography—"small-molecule" (which includes continuous inorganic solids) and "macromolecular" crystallography—
are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric
unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density.
By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures
are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather
than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray
crystallography has proven possible even for viruses with hundreds of thousands of atoms. Though X-ray crystallography can normally be performed only if the sample is in crystal form, new research has been done into sampling non-crystalline forms of samples.[96]

Crystallization

Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the
structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be
obtained from natural or synthetic materials, such as samples of metals, minerals or other
macroscopic materials. The regularity of such crystals can sometimes be improved with
macromolecular crystal annealing[97][98][99] and other methods. However, in many cases,
obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution
structure.[100]
A protein crystal seen under a microscope. Crystals used in X-ray crystallography may be smaller than a millimeter across.

Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules
generally have many degrees of freedom and their crystallization must be carried out so as to
maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has
been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules
remain folded.

Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of its component
molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or
amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a
microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-
quality crystal.[101] The solution conditions that favor the first step (nucleation) are not always
the same conditions that favor the second step (subsequent growth). The crystallographer's goal
is to identify solution conditions that favor the development of a single, large crystal, since larger
crystals offer improved resolution of the molecule. Consequently, the solution conditions should
disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal
forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in
the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever.
Another approach involves crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, therefore yielding different rates of concentration change for different precipitant/protein mixtures. The technique relies on bringing the protein directly into the nucleation zone by mixing the protein with the appropriate amount of precipitant to prevent the diffusion of water out of the drop.

It is extremely difficult to predict good conditions for nucleation or growth of well-ordered crystals.[102] In practice, favorable conditions are identified by screening; a very large batch of
the molecules is prepared, and a wide variety of crystallization solutions are tested.[103]
Hundreds, even thousands, of solution conditions are generally tried before finding the successful
one. The various conditions can use one or more physical mechanisms to lower the solubility of
the molecule; for example, some may change the pH, some contain salts of the Hofmeister series
or chemicals that lower the dielectric constant of the solution, and still others contain large
polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects.
It is also common to try several temperatures for encouraging crystallization, or to gradually
lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops on the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (on the order of 1 microliter).[104]

Three methods of preparing crystals. A: Hanging drop. B: Sitting drop. C: Microdialysis.
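The savings quoted above are simple arithmetic; the sketch below illustrates them with hypothetical numbers (the plate size and protein concentration are assumptions, not values from the text):

# Hypothetical screening numbers, for illustration only.
n_conditions = 96                # one crystallization plate
protein_conc_mg_per_ml = 10.0    # assumed crystallization-grade stock concentration

def protein_used_mg(drop_volume_nl):
    """Total protein consumed by one plate of crystallization trials."""
    total_volume_ml = n_conditions * drop_volume_nl * 1.0e-6   # nL -> mL
    return total_volume_ml * protein_conc_mg_per_ml

print(protein_used_mg(100.0))    # robot-dispensed 100 nL drops: ~0.096 mg
print(protein_used_mg(1000.0))   # hand-pipetted 1 µL drops:    ~0.96 mg (10-fold more)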

Several factors are known to inhibit or mar crystallization. The growing crystals are generally held at a constant temperature and
protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization
solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less
likely, due to entropy. Ironically, molecules that tend to self-assemble into regular helices are often unwilling to assemble into
crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple
orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals.
Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule;
even small changes in molecular properties can lead to large differences in crystallization behavior.

Data collection

Mounting the crystal

The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of
mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Nowadays,
crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and
attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen.[105] This freezing
reduces the radiation damage of the X-rays, as well as the noise in the Bragg peaks due to thermal motion (the Debye-Waller
effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a
cryoprotectant solution before freezing.[106] Unfortunately, this pre-soak may itself cause the crystal to crack, ruining it for
crystallography. Generally, successful cryo-conditions are identified by trial and error.

The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated.
Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers
accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer",
which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an
axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are
aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be
swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis
only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.
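The crystal orientation produced by such a goniometer can be modelled as a product of rotations about the instrument axes. The sketch below is a simplified illustration only; the exact axis conventions vary between instruments and are assumed here (ω along z, κ tilted by ~50°, φ along the loop axis):

import numpy as np

def rotation(axis, angle_deg):
    """Rotation matrix about a unit axis by angle_deg (Rodrigues' formula)."""
    axis = np.array(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    t = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

# Assumed axis geometry (not a specific instrument's convention).
omega_axis = np.array([0.0, 0.0, 1.0])
kappa_axis = rotation([1.0, 0.0, 0.0], 50.0) @ omega_axis   # ~50 degrees from omega
phi_axis = np.array([0.0, 0.0, 1.0])                        # coincides with omega when kappa = 0

def orientation(omega, kappa, phi):
    """Overall crystal orientation for the given goniometer angles (degrees)."""
    return rotation(omega_axis, omega) @ rotation(kappa_axis, kappa) @ rotation(phi_axis, phi)

# With kappa = 0, rotations about omega and phi are about the same axis and simply add.
print(np.allclose(orientation(10.0, 0.0, 5.0), orientation(15.0, 0.0, 0.0)))   # True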


X-ray sources

Rotating anode

Small-scale data collection can be performed on a local X-ray tube source, typically coupled with an image plate detector. These have the advantage of being (relatively) inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the light produced is limited by the anode material, typically copper. Further, intensity is limited by the power applied and the cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily, due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 µm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and runs with ~2 kW of electron beam power. The more expensive variety has a rotating-anode source that runs with ~14 kW of e-beam power.

Animation showing the five motions possible with a four-circle kappa goniometer. The rotations about each of the four angles φ, κ, ω and 2θ leave the crystal within the X-ray beam, but change the crystal orientation. The detector (red box) can be slid closer or further away from the crystal, allowing higher resolution data to be taken (if closer) or better discernment of the Bragg peaks (if further away).

X-rays are generally filtered (by use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the
crystal. The filtering not only simplifies the data analysis, but also removes radiation that
degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube)
or with a clever arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or for samples with large unit cells (over 150 Å).

Synchrotron radiation

Synchrotron radiation sources are some of the brightest light sources on Earth, and are among the most powerful tools available to X-ray crystallographers. The X-ray beams are generated in large machines called synchrotrons. These machines accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields.

Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption.
Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena.
The largest component of each synchrotron is its electron storage ring. This ring is actually not a perfect circle, but a many-sided
polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons’ path is
bent, they emit bursts of energy in the form of X-rays.

The use of synchrotron radiation frequently imposes specific requirements on X-ray crystallography. The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryo-crystallography protects the sample from radiation damage by freezing the crystal at liquid-nitrogen temperatures (~100 K).[107] On the other hand, synchrotron radiation frequently has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments that maximize the anomalous signal. This is critical in experiments such as SAD and MAD.

Free electron laser

Recently, free-electron lasers have been developed for use in X-ray crystallography.[108] These are the brightest X-ray sources currently available, with the X-rays arriving in femtosecond bursts. The intensity of the source is such that atomic-resolution diffraction patterns can be resolved for crystals otherwise too small for collection. However, the intense light source also destroys the sample,[109] requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected in order to get a complete data set. This method, serial femtosecond crystallography, has been used to solve a number of protein crystal structures, sometimes noting differences with equivalent structures collected from synchrotron sources.[110]

Recording the reflections

When a crystal is mounted and exposed to an intense beam of X-rays, it scatters the X-rays into a pattern of spots or reflections
that can be observed on a screen behind the crystal. A similar pattern may be seen by shining a laser pointer at a compact disc.
The relative intensities of these spots provide the information to determine the arrangement of molecules within the crystal in
atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector or with a charge-
coupled device (CCD) image sensor. The peaks at small angles correspond to low-
resolution data, whereas those at high angles represent high-resolution data; thus, an
upper limit on the eventual resolution of the structure can be determined from the first
few images. Some measures of diffraction quality can be determined at this point, such as
the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some
pathologies of the crystal that would render it unfit for solving the structure can also be
diagnosed quickly at this point.

One image of spots is insufficient to reconstruct the whole crystal; it represents only a
small slice of the full Fourier transform. To collect all the necessary information, the
crystal must be rotated step-by-step through 180°, with an image recorded at every step;
actually, slightly more than 180° is required to cover reciprocal space, due to the
curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller
angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space.

An X-ray diffraction pattern of a crystallized enzyme. The pattern of spots (reflections) and the relative strength of each spot (intensities) can be used to determine the structure of the enzyme.

Multiple data sets may be necessary for certain phasing methods. For example, MAD phasing requires that the scattering be recorded at least three (and usually four, for
redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one
data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.[111]

Data analysis

Crystal symmetry, unit cell, and image scaling

The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted
into a three-dimensional model of the electron density; the conversion uses the mathematical technique of Fourier transforms,
which is explained below. Each spot corresponds to a different type of variation in the electron density; the crystallographer must
determine which variation corresponds to which spot (indexing), the relative strengths of the spots in different images (merging
and scaling) and how the variations should be combined to yield the total electron density (phasing).

Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image
peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e.,
its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be
observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules which are almost
always chiral. Indexing is generally accomplished using an autoindexing routine.[112] Having assigned symmetry, the data is then
integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the
very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also
includes error estimates and measures of partiality (what part of a given reflection was recorded on that image)).

A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The first step is to merge
and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the relative
images so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the
peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection
and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections
multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar are the
measured intensities of symmetry-equivalent reflections, thus assessing the quality of the data.
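A minimal sketch of such a reliability index is shown below; it computes the common Rmerge-style statistic Σ|I − ⟨I⟩| / ΣI over groups of symmetry-equivalent observations. The grouping convention and the toy data are purely illustrative:

from collections import defaultdict

def r_merge(observations):
    """observations: iterable of (hkl, intensity) pairs in which symmetry-equivalent
    reflections have already been mapped onto the same hkl key."""
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)

    numerator = 0.0
    denominator = 0.0
    for intensities in groups.values():
        mean_i = sum(intensities) / len(intensities)
        numerator += sum(abs(i - mean_i) for i in intensities)
        denominator += sum(intensities)
    return numerator / denominator

# Toy data: two symmetry-equivalent groups, each measured several times.
data = [((1, 0, 0), 105.0), ((1, 0, 0), 95.0), ((1, 0, 0), 100.0),
        ((0, 2, 1), 48.0), ((0, 2, 1), 52.0)]
print(round(r_merge(data), 3))   # 0.035 for this toy data set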

Initial phasing

The data collected from a diffraction experiment is a reciprocal space representation of the crystal lattice. The position of each
diffraction 'spot' is governed by the size and shape of the unit cell, and the inherent symmetry within the crystal. The intensity of
each diffraction 'spot' is recorded, and this intensity is proportional to the square of the structure factor amplitude. The structure
factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an
interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer
to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known
as the phase problem. Initial phase estimates can be obtained in a variety of ways:

Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen
atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better

than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships
between certain groups of reflections.[113][114]
Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to
determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to
generate electron density maps.[115]
Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of
an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths
(far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously
diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous
scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a medium rich in selenomethionine, which contains selenium atoms. A MAD experiment can then be
conducted around the absorption edge, which should then yield the position of any methionine residues within the protein,
providing initial phases.[116]
Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the
crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases.
Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-
crystallization (growing the crystals in the presence of a heavy atom). As in MAD phasing, the changes in the scattering
amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures
were solved, it has largely been superseded by MAD phasing with selenomethionine.[115]

Model building and phase refinement

Having obtained initial phases, an initial model can be built. This model can be used to
refine the phases, leading to an improved model, and so on. Given a model of some
atomic positions, these positions and their respective Debye-Waller factors (or B-factors,
accounting for the thermal motion of the atom) can be refined to fit the observed
diffraction data, ideally yielding a better set of phases. A new model can then be fit to the
new electron density map and a further round of refinement is carried out. This continues
until the correlation between the diffraction data and the model is maximized. The
agreement is measured by an R-factor defined as

    R = Σ | |Fobs| - |Fcalc| | / Σ |Fobs|,

where F is the structure factor. A similar quality criterion is Rfree, which is calculated
from a subset (~10%) of reflections that were not included in the structure refinement.
Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. Phase bias is a serious problem in such iterative model building. Omit maps are a common technique used to check for this.

A protein crystal structure at 2.7 Å resolution. The mesh encloses the region in which the electron density exceeds a given threshold. The straight segments represent chemical bonds between the non-hydrogen atoms of an arginine (upper left), a tyrosine (lower left), a disulfide bond (upper right, in yellow), and some peptide groups (running left-right in the middle). The two curved green tubes represent spline fits to the polypeptide backbone.

It may not be possible to observe every atom in the asymmetric unit. In many cases, disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent
structure deduced for the molecule was incorrect, or changed. For example, proteins may
be cleaved or undergo post-translational modifications that were not detected prior to the
crystallization.
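A hedged sketch of how the R-factor defined above, and its cross-validation counterpart Rfree, might be computed from observed and calculated structure-factor amplitudes; the ~10% free set mirrors the description in the text, and all data here are synthetic:

import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum of | |Fobs| - |Fcalc| | divided by the sum of |Fobs|."""
    f_obs = np.abs(np.asarray(f_obs))
    f_calc = np.abs(np.asarray(f_calc))
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

rng = np.random.default_rng(0)
f_obs = rng.uniform(10.0, 100.0, size=1000)         # synthetic "observed" amplitudes
f_calc = f_obs * rng.normal(1.0, 0.2, size=1000)    # a model that fits imperfectly

# Hold out ~10% of reflections as the free set, never used in refinement.
free_mask = rng.random(1000) < 0.10
r_work = r_factor(f_obs[~free_mask], f_calc[~free_mask])
r_free = r_factor(f_obs[free_mask], f_calc[free_mask])
print(round(r_work, 3), round(r_free, 3))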

Disorder

A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but
in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed
interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch
isomerism.[117] Disorder is modelled with respect to the relative population of the components, often only two, and their identity.
In structures of large molecules and ions, solvent and counterions are often disordered.

Deposition of the structure


Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the
Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic
compounds) or the Protein Data Bank (for protein structures). Many structures obtained in private commercial ventures to
crystallize medicinally relevant proteins are not deposited in public crystallographic databases.

Diffraction theory
The main goal of X-ray crystallography is to determine the density of electrons f(r) throughout the crystal, where r represents the
three-dimensional position vector within the crystal. To do this, X-ray scattering is used to collect data about its Fourier
transform F(q), which is inverted mathematically to obtain the density defined in real space, using the formula

    f(r) = (1/(2π)^3) ∫ F(q) e^{iq·r} dq
where the integral is taken over all values of q. The three-dimensional real vector q represents a point in reciprocal space, that is,
to a particular oscillation in the electron density as one moves in the direction in which q points. The length of q corresponds to 2π divided by the wavelength of the oscillation. The corresponding formula for a Fourier transform will be used below

    F(q) = ∫ f(r) e^{-iq·r} dr
where the integral is summed over all possible values of the position vector r within the crystal.

The Fourier transform F(q) is generally a complex number, and therefore has a magnitude |F(q)| and a phase φ(q) related by the
equation

    F(q) = |F(q)| e^{iφ(q)}
The intensities of the reflections observed in X-ray diffraction give us the magnitudes |F(q)| but not the phases φ(q). To obtain
the phases, full sets of reflections are collected with known alterations to the scattering, either by modulating the wavelength past
a certain absorption edge or by adding strongly scattering (i.e., electron-dense) metal atoms such as mercury. Combining the
magnitudes and phases yields the full Fourier transform F(q), which may be inverted to obtain the electron density f(r).

Crystals are often idealized as being perfectly periodic. In that ideal case, the atoms are positioned on a perfect lattice, the
electron density is perfectly periodic, and the Fourier transform F(q) is zero except when q belongs to the reciprocal lattice (the
so-called Bragg peaks). In reality, however, crystals are not perfectly periodic; atoms vibrate about their mean position, and there
may be disorder of various types, such as mosaicity, dislocations, various point defects, and heterogeneity in the conformation of
crystallized molecules. Therefore, the Bragg peaks have a finite width and there may be significant diffuse scattering, a
continuum of scattered X-rays that fall between the Bragg peaks.

Intuitive understanding by Bragg's law

An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given
reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the
atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and let
their spacing be noted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly
(mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively
(constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ:

    2d sin θ = nλ
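For a worked example of this Bragg condition (the d-spacings and wavelength below are chosen only for illustration):

import math

def bragg_angle_deg(d_spacing_angstrom, wavelength_angstrom=1.5406, order=1):
    """Angle theta (degrees) satisfying n*lambda = 2*d*sin(theta),
    or None if the reflection is unreachable at this wavelength."""
    s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

for d in (3.0, 1.5, 0.9):                 # hypothetical sets of lattice planes
    print(d, bragg_angle_deg(d))          # theta grows as the plane spacing shrinks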

A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been
identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and
angles of the unit-cell, as well as its space group. Since Bragg's law does not interpret the relative intensities of the reflections,
however, it is generally inadequate to solve for the arrangement of atoms within the unit-cell; for that, a Fourier transform
method must be carried out.

Scattering as a Fourier transform

The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, let it be
represented here as a scalar wave. We also ignore the complication of the time dependence of the wave and just concentrate on
the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the strength of the incoming wave at
time t=0 is given by

    A e^{i kin·r}
At position r within the sample, let there be a density of scatterers f(r); these scatterers should produce a scattered spherical
wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV
about r:

    amplitude of scattered wave = A e^{i kin·r} S f(r) dV
where S is the proportionality constant.

Let's consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike the screen at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin| = |kout|. From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase

    e^{i kout·(rscreen - r)}

The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal

    A S ∫ f(r) e^{i kin·r} e^{i kout·(rscreen - r)} dV,

which may be written as a Fourier transform

    A S e^{i kout·rscreen} ∫ f(r) e^{-iq·r} dV = A S e^{i kout·rscreen} F(q),

where q = kout - kin. The measured intensity of the reflection will be the square of this amplitude and hence proportional to

    |F(q)|^2
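A minimal numerical illustration of this relation: for a handful of point-like scatterers the integral reduces to the sum F(q) = Σ_j f_j e^{-i q·r_j}, and the observable is |F(q)|^2. The positions and scattering weights below are made up:

import numpy as np

# Hypothetical point scatterers (coordinates in angstroms) and crude atomic weights.
positions = np.array([[0.0, 0.0, 0.0],
                      [1.5, 0.0, 0.0],
                      [0.0, 2.0, 1.0]])
scattering_factors = np.array([6.0, 8.0, 6.0])

def structure_factor(q):
    """F(q) = sum_j f_j * exp(-i q . r_j) for point scatterers."""
    phases = np.exp(-1j * positions @ np.asarray(q, dtype=float))
    return np.sum(scattering_factors * phases)

q = np.array([2.0 * np.pi / 1.5, 0.0, 0.0])   # a q vector probing the 1.5 Å spacing
F = structure_factor(q)
print(abs(F) ** 2)   # the measured intensity is proportional to |F(q)|^2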

Friedel and Bijvoet mates

For every reflection corresponding to a point q in the reciprocal space, there is another reflection of the same intensity at the
opposite point -q. This opposite reflection is known as the Friedel mate of the original reflection. This symmetry results from the
mathematical fact that the density of electrons f(r) at a position r is always a real number. As noted above, f(r) is the inverse
transform of its Fourier transform F(q); however, such an inverse transform is a complex number in general. To ensure that f(r)
is real, the Fourier transform F(q) must be such that the Friedel mates F(−q) and F(q) are complex conjugates of one another.
Thus, F(−q) has the same magnitude as F(q) but the opposite phase, i.e., φ(−q) = −φ(q).

The equality of their magnitudes ensures that the Friedel mates have the same intensity |F|2. This symmetry allows one to
measure the full Fourier transform from only half the reciprocal space, e.g., by rotating the crystal slightly more than 180°
instead of a full 360° revolution. In crystals with significant symmetry, even more reflections may have the same intensity
(Bijvoet mates); in such cases, even less of the reciprocal space may need to be measured. In favorable cases of high symmetry,
sometimes only 90° or even only 45° of data are required to completely explore the reciprocal space.

The Friedel-mate constraint can be derived from the definition of the inverse Fourier transform

    f(r) = (1/(2π)^3) ∫ F(q) e^{iq·r} dq = (1/(2π)^3) ∫ |F(q)| e^{i(φ(q) + q·r)} dq

Since Euler's formula states that e^{ix} = cos(x) + i sin(x), the inverse Fourier transform can be separated into a sum of a purely real part and a purely imaginary part

    f(r) = (1/(2π)^3) ∫ |F(q)| cos(φ(q) + q·r) dq + i (1/(2π)^3) ∫ |F(q)| sin(φ(q) + q·r) dq = Icos(r) + i Isin(r)

The function f(r) is real if and only if the second integral Isin is zero for all values of r. In turn, this is true if and only if the above constraint is satisfied,

    |F(−q)| = |F(q)|  and  φ(−q) = −φ(q),

since the change of variables q → −q then maps the sine integrand to its negative, so that Isin = −Isin, which implies that Isin = 0.
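This symmetry is easy to verify numerically: the discrete Fourier transform of any real-valued density obeys F(−q) = F(q)*, so the Friedel mates have equal magnitudes. A small sketch with a synthetic density grid:

import numpy as np

rng = np.random.default_rng(1)
density = rng.random((8, 8, 8))           # any real-valued "electron density" grid

F = np.fft.fftn(density)
q = (2, 5, 1)                             # an arbitrary reciprocal-grid index
minus_q = tuple(-i % 8 for i in q)        # the Friedel mate -q on the FFT grid

print(np.isclose(F[minus_q], np.conj(F[q])))     # True: F(-q) = F(q)*
print(np.isclose(abs(F[minus_q]), abs(F[q])))    # True: equal intensities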

Ewald's sphere

Each X-ray diffraction image represents only a slice, a spherical slice of reciprocal space, as may be seen by the Ewald sphere
construction. Both kout and kin have the same length, due to the elastic scattering, since the wavelength has not changed.
Therefore, they may be represented as two radial vectors in a sphere in reciprocal space, which shows the values of q that are
sampled in a given diffraction image. Since there is a slight spread in the wavelengths of the incoming X-ray beam, the values of |F(q)| can be measured only for q vectors located between the two spheres corresponding to those radii. Therefore, to
obtain a full set of Fourier transform data, it is necessary to rotate the crystal through slightly more than 180°, or sometimes less
if sufficient symmetry is present. A full 360° rotation is not needed because of a symmetry intrinsic to the Fourier transforms of
real functions (such as the electron density), but "slightly more" than 180° is needed to cover all of reciprocal space within a
given resolution because of the curvature of the Ewald sphere. In practice, the crystal is rocked by a small amount (0.25-1°) to
incorporate reflections near the boundaries of the spherical Ewald's shells.

Patterson function

A well-known result of Fourier transforms is the autocorrelation theorem, which states that the autocorrelation c(r) of a function f(r),

    c(r) = ∫ f(x) f(x + r) dx,

has a Fourier transform C(q) that is the squared magnitude of F(q):

    C(q) = |F(q)|^2
Therefore, the autocorrelation function c(r) of the electron density (also known as the Patterson function[118]) can be computed
directly from the reflection intensities, without computing the phases. In principle, this could be used to determine the crystal
structure directly; however, it is difficult to realize in practice. The autocorrelation function corresponds to the distribution of
vectors between atoms in the crystal; thus, a crystal of N atoms in its unit cell may have N(N-1) peaks in its Patterson function.
Given the inevitable errors in measuring the intensities, and the mathematical difficulties of reconstructing atomic positions from
the interatomic vectors, this technique is rarely used to solve structures, except for the simplest crystals.
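Because C(q) = |F(q)|^2, a Patterson-style map can be computed directly from the measured intensities by an inverse Fourier transform, with no phases needed. A toy one-dimensional sketch (the synthetic "density" is purely illustrative):

import numpy as np

# Toy 1-D "electron density": three point-like atoms on a 64-point unit cell.
density = np.zeros(64)
density[[5, 20, 33]] = [6.0, 8.0, 6.0]

F = np.fft.fft(density)
intensities = np.abs(F) ** 2                    # what a diffraction experiment measures

patterson = np.real(np.fft.ifft(intensities))   # circular autocorrelation of the density
peaks = np.argsort(patterson)[::-1][:7]
print(sorted(peaks))   # the origin plus the six interatomic vectors (±13, ±15, ±28 modulo 64)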

Advantages of a crystal

In principle, an atomic structure could be determined from applying X-ray scattering to non-crystalline samples, even to a single
molecule. However, crystals offer a much stronger signal due to their periodicity. A crystalline sample is by definition periodic; a
crystal is composed of many unit cells repeated indefinitely in three independent directions. Such periodic systems have a
Fourier transform that is concentrated at periodically repeating points in reciprocal space known as Bragg peaks; the Bragg
peaks correspond to the reflection spots observed in the diffraction image. Since the amplitude at these reflections grows linearly
with the number N of scatterers, the observed intensity of these spots should grow quadratically, like N². In other words, using a
crystal concentrates the weak scattering of the individual unit cells into a much more powerful, coherent reflection that can be
observed above the noise. This is an example of constructive interference.
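The N² scaling at a Bragg condition, versus the roughly N-scaled intensity from the same atoms at random positions, can be demonstrated in a few lines (a toy one-dimensional lattice with arbitrary parameters):

import numpy as np

def intensity(positions, q):
    """|sum_j exp(-i q x_j)|^2 for unit point scatterers."""
    return abs(np.sum(np.exp(-1j * q * positions))) ** 2

spacing = 5.0                       # toy lattice constant, arbitrary units
q_bragg = 2.0 * np.pi / spacing     # a q satisfying the Bragg/Laue condition

rng = np.random.default_rng(2)
for n in (100, 1000, 10000):
    lattice = spacing * np.arange(n)                  # perfectly periodic "crystal"
    random_pts = rng.uniform(0.0, n * spacing, n)     # same atoms at random positions
    print(n, intensity(lattice, q_bragg), round(intensity(random_pts, q_bragg), 1))
# The periodic sample gives N^2 (1e4, 1e6, 1e8); the disordered one stays of order N.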

In a liquid, powder or amorphous sample, molecules within that sample are in random orientations. Such samples have a
continuous Fourier spectrum that uniformly spreads its amplitude thereby reducing the measured signal intensity, as is observed
in SAXS. More importantly, the orientational information is lost. Although theoretically possible, it is experimentally difficult to
obtain atomic-resolution structures of complicated, asymmetric molecules from such rotationally averaged data. An intermediate
case is fiber diffraction in which the subunits are arranged periodically in at least one dimension.

Nobel Prizes for X-ray crystallography


Year  Laureate                        Prize      Rationale
1914  Max von Laue                    Physics    "For his discovery of the diffraction of X-rays by crystals",[119] an important step in the development of X-ray spectroscopy.
1915  William Henry Bragg             Physics    "For their services in the analysis of crystal structure by means of X-rays"[120]
1915  William Lawrence Bragg          Physics    "For their services in the analysis of crystal structure by means of X-rays"[120]
1962  Max F. Perutz                   Chemistry  "for their studies of the structures of globular proteins"[121]
1962  John C. Kendrew                 Chemistry  "for their studies of the structures of globular proteins"[121]
1962  James Dewey Watson              Medicine   "For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"[122]
1962  Francis Harry Compton Crick     Medicine   "For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"[122]
1962  Maurice Hugh Frederick Wilkins  Medicine   "For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"[122]
1964  Dorothy Hodgkin                 Chemistry  "For her determinations by X-ray techniques of the structures of important biochemical substances"[123]
1972  Stanford Moore                  Chemistry  "For their contribution to the understanding of the connection between chemical structure and catalytic activity of the active centre of the ribonuclease molecule"[124]
1972  William H. Stein                Chemistry  "For their contribution to the understanding of the connection between chemical structure and catalytic activity of the active centre of the ribonuclease molecule"[124]
1976  William N. Lipscomb             Chemistry  "For his studies on the structure of boranes illuminating problems of chemical bonding"[125]
1985  Jerome Karle                    Chemistry  "For their outstanding achievements in developing direct methods for the determination of crystal structures"[126]
1985  Herbert A. Hauptman             Chemistry  "For their outstanding achievements in developing direct methods for the determination of crystal structures"[126]
1988  Johann Deisenhofer              Chemistry  "For their determination of the three-dimensional structure of a photosynthetic reaction centre"[127]
1988  Hartmut Michel                  Chemistry  "For their determination of the three-dimensional structure of a photosynthetic reaction centre"[127]
1988  Robert Huber                    Chemistry  "For their determination of the three-dimensional structure of a photosynthetic reaction centre"[127]
1997  John E. Walker                  Chemistry  "For their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)"[128]
2003  Roderick MacKinnon              Chemistry  "For discoveries concerning channels in cell membranes [...] for structural and mechanistic studies of ion channels"[129]
2003  Peter Agre                      Chemistry  "For discoveries concerning channels in cell membranes [...] for the discovery of water channels"[129]
2006  Roger D. Kornberg               Chemistry  "For his studies of the molecular basis of eukaryotic transcription"[130]
2009  Ada E. Yonath                   Chemistry  "For studies of the structure and function of the ribosome"[131]
2009  Thomas A. Steitz                Chemistry  "For studies of the structure and function of the ribosome"[131]
2009  Venkatraman Ramakrishnan        Chemistry  "For studies of the structure and function of the ribosome"[131]
2012  Brian Kobilka                   Chemistry  "For studies of G-protein-coupled receptors"[132]

Applications of X-ray diffraction


X-ray diffraction has wide and varied applications in the chemical, biochemical, physical, materials and mineralogical sciences. Laue said that it 'has extended the power of observing minute structure ten thousand times beyond that of the optical microscope'. X-ray diffraction produced a microscope with atomic resolution which shows the atoms and their electron
distribution. X-ray diffraction, electron diffraction, and neutron diffraction give information about the structure of matter, crystalline and non-crystalline, at the atomic and molecular level. In addition, these methods probe the properties of all materials, inorganic, organic or biological. Owing to the importance and variety of applications of diffraction by crystals, a large number of Nobel Prizes have been awarded for studies involving X-rays.[133]

X-ray method for investigation of drugs

X-ray diffraction has been used for the identification of antibiotic drugs, including eight β-lactam antibiotics (ampicillin sodium, penicillin G procaine, cefalexin, ampicillin trihydrate, benzathine penicillin, benzylpenicillin sodium, cefotaxime sodium, ceftriaxone sodium), three tetracyclines (doxycycline hydrochloride, oxytetracycline dihydrate, tetracycline hydrochloride) and two macrolides (azithromycin, erythromycin estolate). Each of these drugs has a unique XRD pattern that makes its identification possible.[134]

X-ray method for investigation of textile fibers and polymers

Forensic examination of any trace evidence is based upon Locard's exchange principle. This states that every contact leaves a
trace. In practice, even though a transfer of material has taken place, it may be impossible to detect, because the amount
transferred is very small.[135] Textile fibers are a mixture of crystalline and amorphous substances. Therefore, the measurement of the degree of crystallinity gives useful data in the characterization of fibers using X-ray diffractometry. It has been reported that X-ray diffraction was used to identify a "crystalline" deposit which was found on a chair. The deposit was found to be amorphous, but the diffraction pattern present matched that of polymethylmethacrylate. Pyrolysis mass spectrometry later identified the deposit as polymethyl cyanoacrylate.[136]

X-ray method for investigation of bones

Hiller investigated the effects of heating and burning on bone mineral using X-ray diffraction (XRD) techniques. The bone samples were heated at temperatures of 500, 700, and 900 °C for 15 and 45 min. The results show that bone crystals began to change during the first 15 min of heating at 500 °C and above. At higher temperatures, the thickness and shape of the bone crystals appeared stabilized, but when the samples were heated at a lower temperature or for a shorter period, XRD traces showed extreme changes in crystal parameters.[137]

See also
Beevers–Lipson strip
Bragg diffraction
Crystallographic database
Crystallographic point groups
Difference density map
Electron diffraction
Energy Dispersive X-Ray Diffraction
Henderson limit
International Year of Crystallography
John Desmond Bernal
Multipole density formalism
Neutron diffraction
Powder diffraction
Ptychography
Scherrer equation
Small angle X-ray scattering (SAXS)
Structure determination
Ultrafast x-ray
Wide angle X-ray scattering (WAXS)

References
1. Kepler J (1611). Strena seu de Nive Sexangula (http://www.thelatinlibrary.com/kepler/strena.html). Frankfurt: G. Tampach. ISBN 3-321-
00021-0.
2. Steno N (1669). De solido intra solidum naturaliter contento dissertationis prodromus. Florentiae.
3. Hessel JFC (1831). Kristallometrie oder Kristallonomie und Kristallographie. Leipzig.
4. Bravais A (1850). "Mémoire sur les systèmes formés par des points distribués regulièrement sur un plan ou dans l'espace". Journal de
l'Ecole Polytechnique. 19: 1.
5. Shafranovskii I I & Belov N V (1962). Paul Ewald, ed. "E. S. Fedorov" (http://www.iucr.org/iucr-top/publ/50YearsOfXrayDiffraction/fe
dorov.pdf) (PDF). 50 Years of X-Ray Diffraction. Springer: 351. ISBN 90-277-9029-9.
6. Schönflies A (1891). Kristallsysteme und Kristallstruktur. Leipzig.
7. Barlow W (1883). "Probable nature of the internal symmetry of crystals". Nature. 29 (738): 186. Bibcode:1883Natur..29..186B (http://a
dsabs.harvard.edu/abs/1883Natur..29..186B). doi:10.1038/029186a0 (https://doi.org/10.1038%2F029186a0). See also Barlow, William
(1883). "Probable Nature of the Internal Symmetry of Crystals". Nature. 29 (739): 205. Bibcode:1883Natur..29..205B (http://adsabs.har
vard.edu/abs/1883Natur..29..205B). doi:10.1038/029205a0 (https://doi.org/10.1038%2F029205a0). Sohncke, L. (1884). "Probable
Nature of the Internal Symmetry of Crystals". Nature. 29 (747): 383. Bibcode:1884Natur..29..383S (http://adsabs.harvard.edu/abs/1884
Natur..29..383S). doi:10.1038/029383a0 (https://doi.org/10.1038%2F029383a0). Barlow, WM. (1884). "Probable Nature of the Internal
Symmetry of Crystals". Nature. 29 (748): 404. Bibcode:1884Natur..29..404B (http://adsabs.harvard.edu/abs/1884Natur..29..404B).
doi:10.1038/029404b0 (https://doi.org/10.1038%2F029404b0).

8. Einstein A (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [A Heuristic
Model of the Creation and Transformation of Light]. Annalen der Physik (in German). 17 (6): 132. Bibcode:1905AnP...322..132E (htt

Further reading
International Tables for Crystallography

Theo Hahn, ed. (2002). International Tables for Crystallography. Volume A, Space-group Symmetry (5th ed.). Dordrecht:
Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-6590-9.
Michael G. Rossmann; Eddy Arnold, eds. (2001). International Tables for Crystallography. Volume F, Crystallography of
biological molecules. Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-
7923-6857-6.
Theo Hahn, ed. (1996). International Tables for Crystallography. Brief Teaching Edition of Volume A, Space-group
Symmetry (4th ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-
7923-4252-6.

Bound collections of articles

Charles W. Carter; Robert M. Sweet., eds. (1997). Macromolecular Crystallography, Part A (Methods in Enzymology, v.
276). San Diego: Academic Press. ISBN 0-12-182177-3.
Charles W. Carter Jr.; Robert M. Sweet., eds. (1997). Macromolecular Crystallography, Part B (Methods in Enzymology, v.
277). San Diego: Academic Press. ISBN 0-12-182178-1.
A. Ducruix; R. Giegé, eds. (1999). Crystallization of Nucleic Acids and Proteins: A Practical Approach (2nd ed.). Oxford:
Oxford University Press. ISBN 0-19-963678-8.

Textbooks
B.E. Warren (1969). X-ray Diffraction. New York. ISBN 0-486-66317-5.
Blow D (2002). Outline of Crystallography for Biologists. Oxford: Oxford University Press. ISBN 0-19-851051-9.
Burns G.; Glazer A M (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 0-
12-145761-3.
Clegg W (1998). Crystal Structure Determination (Oxford Chemistry Primer). Oxford: Oxford University Press. ISBN 0-
19-855901-1.
Cullity B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing
Company. ISBN 0-534-55396-6.
Drenth J (1999). Principles of Protein X-Ray Crystallography. New York: Springer-Verlag. ISBN 0-387-98587-5.
Giacovazzo C (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 0-19-855578-4.
Glusker JP; Lewis M; Rossi M (1994). Crystal Structure Analysis for Chemists and Biologists. New York: VCH
Publishers. ISBN 0-471-18543-4.
Massa W (2004). Crystal Structure Determination. Berlin: Springer. ISBN 3-540-20644-2.
McPherson A (1999). Crystallization of Biological Macromolecules. Cold Spring Harbor, NY: Cold Spring Harbor
Laboratory Press. ISBN 0-87969-617-6.
McPherson A (2003). Introduction to Macromolecular Crystallography. John Wiley & Sons. ISBN 0-471-25122-4.
McRee DE (1993). Practical Protein Crystallography. San Diego: Academic Press. ISBN 0-12-486050-8.
O'Keeffe M; Hyde B G (1996). Crystal Structures; I. Patterns and Symmetry. Washington, DC: Mineralogical Society of
America, Monograph Series. ISBN 0-939950-40-5.
Rhodes G (2000). Crystallography Made Crystal Clear. San Diego: Academic Press. ISBN 0-12-587072-8. PDF copy of select chapters (http://www.chem.uwec.edu/Chem406_F06/Pages/lecture_notes/lect07/Crystallography_Rhodes.pdf)
Rupp B (2009). Biomolecular Crystallography: Principles, Practice and Application to Structural Biology. New York:
Garland Science. ISBN 0-8153-4081-8.
Zachariasen WH (1945). Theory of X-ray Diffraction in Crystals. New York: Dover Publications. LCCN 67026967 (http
s://lccn.loc.gov/67026967).

Applied computational data analysis

Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography.
ISBN 0-19-855577-6.

Historical

Bijvoet JM, Burgers WG, Hägg G, eds. (1969). Early Papers on Diffraction of X-rays by Crystals. I. Utrecht: published
for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V.
Bijvoet JM; Burgers WG; Hägg G, eds. (1972). Early Papers on Diffraction of X-rays by Crystals. II. Utrecht: published
for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V.
Bragg W L; Phillips D C & Lipson H (1992). The Development of X-ray Analysis. New York: Dover. ISBN 0-486-67316-
2.
Ewald, PP, and numerous crystallographers, eds. (1962). Fifty Years of X-ray Diffraction. Utrecht: published for the
International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V. ISBN 978-1-4615-9963-0.
doi:10.1007/978-1-4615-9961-6 (https://doi.org/10.1007%2F978-1-4615-9961-6).
Ewald, P. P., editor 50 Years of X-Ray Diffraction (http://www.iucr.org/iucr-top/publ/50YearsOfXrayDiffraction/)
(Reprinted in pdf format for the IUCr XVIII Congress, Glasgow, Scotland, International Union of Crystallography).
Friedrich W (1922). "Die Geschichte der Auffindung der Röntgenstrahlinterferenzen". Die Naturwissenschaften. 10 (16):
363. Bibcode:1922NW.....10..363F (http://adsabs.harvard.edu/abs/1922NW.....10..363F). doi:10.1007/BF01565289 (http
s://doi.org/10.1007%2FBF01565289).
Lonsdale, K (1949). Crystals and X-rays. New York: D. van Nostrand.
"The Structures of Life". U.S. Department of Health and Human Services. 2007.

External links
Tutorials

Learning Crystallography (http://www.xtal.iqfr.csic.es/Cristalografia/index-en.html)


Simple, non-technical introduction (https://web.archive.org/web/20120303025301/http://stein.bioch.dundee.ac.uk/~charlie/index.php?section=1)
The Crystallography Collection (http://richannel.org/collections/2013/crystallography), video series from the Royal
Institution
"Small Molecule Crystallization" (http://acaschool.iit.edu/lectures04/JLiangXtal.pdf) (PDF) at Illinois Institute of Technology website
International Union of Crystallography (http://iucr.org/)
Crystallography 101 (http://www.ruppweb.org/Xray/101index.html)
Interactive structure factor tutorial (http://www.ysbl.york.ac.uk/~cowtan/sfapplet/sfintro.html), demonstrating properties of
the diffraction pattern of a 2D crystal.
Picturebook of Fourier Transforms (http://www.ysbl.york.ac.uk/~cowtan/fourier/fourier.html), illustrating the relationship
between crystal and diffraction pattern in 2D.
Lecture notes on X-ray crystallography and structure determination (http://www.chem.uwec.edu/Chem406_F06/Pages/lect
notes.html#lecture7)
Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis (http://nanohub.org/resources/5580)
by Richard J. Matyi
Interactive Crystallography Timeline (http://rigb.org/our-history/history-of-research/crystallography-timeline) from the
Royal Institution

Primary databases

Crystallography Open Database (COD)


Protein Data Bank (https://web.archive.org/web/20150418160606/http://www.rcsb.org/pdb/home/home.do) (PDB)
Nucleic Acid Databank (http://ndbserver.rutgers.edu/) (NDB)
Cambridge Structural Database (http://www.ccdc.cam.ac.uk/products/csd/) (CSD)
Inorganic Crystal Structure Database (http://www.fiz-karlsruhe.de/icsd.html) (ICSD)
Biological Macromolecule Crystallization Database (https://web.archive.org/web/20070601170003/http://xpdb.nist.gov:80
60/BMCD4/) (BMCD)

Derivative databases
PDBsum (http://www.ebi.ac.uk/thornton-srv/databases/pdbsum/)
Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules (http://www.proteopedia.org/)
RNABase (https://web.archive.org/web/20070426104437/http://www.rnabase.org/)
HIC-Up database of PDB ligands (http://xray.bmc.uu.se/hicup/)
Structural Classification of Proteins database
CATH Protein Structure Classification
List of transmembrane proteins with known 3D structure (http://blanco.biomol.uci.edu/Membrane_Proteins_xtal.html)
Orientations of Proteins in Membranes database

Structural validation

MolProbity structural validation suite (http://molprobity.biochem.duke.edu/)



ProSA-web (https://prosa.services.came.sbg.ac.at/prosa.php)
NQ-Flipper (https://flipper.services.came.sbg.ac.at/) (check for unfavorable rotamers of Asn and Gln residues)
DALI server (http://www.ebi.ac.uk/dali/) (identifies proteins similar to a given protein)


Crystallization
From Wikipedia, the free encyclopedia

Crystallization is the (natural or artificial) process by which a solid forms, where the atoms or molecules are highly organized into a structure known as a crystal. Some of the ways by which crystals form are through precipitating from a solution, melting, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, and, in the case of liquid crystals, time of fluid evaporation.

Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc.

The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances).

Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal.

Contents
1 Process
2 In nature
3 Methods
3.1 Typical equipment
4 Thermodynamic view
5 Dynamics
5.1 Nucleation
5.1.1 Primary nucleation
5.1.2 Secondary nucleation
5.2 Growth
5.3 Size distribution
6 Main crystallization processes
6.1 Cooling crystallization
6.1.1 Application
6.1.2 Cooling crystallizers

6.2 Evaporative crystallization


6.2.1 Evaporative crystallizers
6.3 DTB crystallizer
7 See also
8 References
9 Further reading
10 External links

Process
The crystallization process consists of two major events, nucleation and crystal growth, which are driven by thermodynamic properties as well as chemical properties. In crystallization, nucleation is the step where the solute molecules or atoms dispersed in the solvent start to gather into clusters, on the microscopic scale (elevating solute concentration in a small region), that become stable under the current operating conditions. These stable clusters constitute the nuclei. Therefore, the clusters need to reach a critical size in order to become stable nuclei. Such critical size is dictated by many different factors (temperature, supersaturation, etc.). It is at the stage of nucleation that the atoms or molecules arrange in a defined and periodic manner that defines the crystal structure — note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties of the crystal (size and shape), although those are a result of the internal crystal structure.

Time-lapse of growth of a citric acid crystal. The video covers an area of 2.0 by 1.5 mm and was captured over 7.2 min.

The crystal growth is the subsequent size increase of the nuclei that succeed in achieving the critical cluster
size. Crystal growth is a dynamic process occurring in equilibrium where solute molecules or atoms precipitate
out of solution, and dissolve back into solution. Supersaturation is one of the driving forces of crystallization,
as the solubility of a species is an equilibrium process quantified by Ksp. Depending upon the conditions, either
nucleation or growth may be predominant over the other, dictating crystal size.

Many compounds have the ability to crystallize with some having different crystal structures, a phenomenon
called polymorphism. Each polymorph is in fact a different thermodynamic solid state and crystal polymorphs
of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between
facets and facet growth rates), melting point, etc. For this reason, polymorphism is of major importance in
industrial manufacture of crystalline products. Additionally, crystal phases can sometimes be interconverted by
varying factors such as temperature.

In nature
There are many examples of natural processes that involve crystallization.

Geological time scale process examples include:

Natural (mineral) crystal formation (see also gemstone);


Stalactite/stalagmite and ring formation.

Usual time scale process examples include:

Snowflake formation;


Honey crystallization (nearly all types of honey crystallize).


Methods
Crystal formation can be divided into two types, where the first type of
crystals are composed of a cation and anion, also known as a salt, such
as sodium acetate. The second type of crystals are composed of
uncharged species, for example menthol.[1]

Crystal formation can be achieved by various methods, such as: cooling,


evaporation, addition of a second solvent to reduce the solubility of the
solute (technique known as antisolvent or drown-out), solvent layering,
sublimation, changing the cation or anion, as well as other methods.

The formation of a supersaturated solution does not guarantee crystal


formation, and often a seed crystal or scratching the glass is required to
form nucleation sites.
Snowflakes are a very well-known example, where subtle differences in crystal growth conditions result in different geometries.

A typical laboratory technique for crystal formation is to dissolve the solid in a solution in which it is partially soluble, usually at high temperatures to obtain supersaturation. The hot mixture is then filtered to remove any insoluble impurities. The filtrate is allowed to cool slowly. Crystals that form are then filtered and washed with a solvent in which they are not soluble but which is miscible with the mother liquor. The process is then repeated to increase the purity, in a technique known as recrystallization.

Typical equipment

Equipment for the main industrial processes for crystallization.

1. Tank crystallizers. Tank crystallization is an old method still used in some specialized cases. Saturated solutions, in tank crystallization, are allowed to cool in open tanks. After a period of time the mother liquor is drained and the crystals removed. Nucleation and size of crystals are difficult to control. Typically, labor costs are very high.

Thermodynamic view
The crystallization process appears to violate the second principle of thermodynamics. Whereas most processes
that yield more orderly results are achieved by applying heat, crystals usually form at lower temperatures—
especially by supercooling. However, due to the release of the heat of fusion during crystallization, the entropy
of the universe increases, thus this principle remains unaltered.

The molecules within a pure, perfect crystal, when heated by an external source, will become liquid. This occurs at a sharply defined temperature (different for each type of crystal). As it liquefies, the complicated architecture of the crystal collapses. Melting occurs because the entropy (S) gain in the system by spatial randomization of the molecules has overcome the enthalpy (H) loss due to breaking the crystal packing forces:
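T·ΔS_fus > ΔH_fus   (equivalently, ΔG_fus = ΔH_fus − T·ΔS_fus < 0)

where ΔH_fus and ΔS_fus denote the enthalpy and entropy of fusion. This Gibbs free-energy form is a standard way of writing the condition implied here; below the melting temperature the inequality reverses, ΔG_fus becomes positive, and crystallization rather than melting is favoured.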

Regarding crystals, there are no exceptions to this rule. Similarly, when the molten crystal is cooled, the molecules will return to their crystalline form once the temperature falls below the turning point. This is because the thermal randomization of the surroundings compensates for the loss of entropy that results from the reordering of molecules within the system. Such liquids that crystallize on cooling are the exception rather than the rule.

The nature of a crystallization process is governed by both thermodynamic and


kinetic factors, which can make it highly variable and difficult to control. Factors
such as impurity level, mixing regime, vessel design, and cooling profile can have
a major impact on the size, number, and shape of crystals produced.

Dynamics
As mentioned above, a crystal is formed following a well-defined pattern, or
structure, dictated by forces acting at the molecular level. As a consequence,
during its formation process the crystal is in an environment where the solute
concentration reaches a certain critical value, before changing status. Solid
formation, impossible below the solubility threshold at the given temperature and
pressure conditions, may then take place at a concentration higher than the
theoretical solubility level. The difference between the actual value of the solute
concentration at the crystallization limit and the theoretical (static) solubility
threshold is called supersaturation and is a fundamental factor in crystallization.
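In quantitative treatments this driving force is commonly expressed as the concentration difference Δc = c − c*, the supersaturation ratio S = c/c*, or the relative supersaturation σ = (c − c*)/c* = S − 1, where c is the actual solute concentration and c* is the equilibrium solubility at the same temperature and pressure.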

Nucleation

Nucleation is the initiation of a phase change in a small region, such as the


formation of a solid crystal from a liquid solution. It is a consequence of rapid
local fluctuations on a molecular scale in a homogeneous phase that is in a state of
metastable equilibrium. Total nucleation is the sum effect of two categories of
nucleation – primary and secondary.

Primary nucleation

Primary nucleation is the initial formation of a crystal where there are no other
crystals present or where, if there are crystals present in the system, they do not
have any influence on the process. This can occur in two conditions. The first is
homogeneous nucleation, which is nucleation that is not influenced in any way by
solids. These solids include the walls of the crystallizer vessel and particles of any
foreign substance. The second category, then, is heterogeneous nucleation. This occurs when solid particles of foreign substances cause an increase in the rate of nucleation that would otherwise not be seen without the existence of these foreign particles. Homogeneous nucleation rarely occurs in practice due to the high energy necessary to begin nucleation without a solid surface to catalyse the nucleation.

Low-temperature SEM magnification series for a snow crystal. The crystals are captured, stored, and sputter coated with platinum at cryo-temperatures for imaging.

Primary nucleation (both homogeneous and heterogeneous) has been modelled with the following:[2]

B = dN/dt = k_n (c − c*)^n

where:

B is the number of nuclei formed per unit volume per unit time.
N is the number of nuclei per unit volume.
k_n is a rate constant.
c is the instantaneous solute concentration.
c* is the solute concentration at saturation.
(c − c*) is also known as supersaturation.
n is an empirical exponent that can be as large as 10, but generally ranges between 3 and 4.

Secondary nucleation

Secondary nucleation is the formation of nuclei attributable to the influence of the existing microscopic crystals
in the magma.[3] Simply put, secondary nucleation is when crystal growth is initiated with contact of other
existing crystals or "seeds."[4] The first type of known secondary crystallization is attributable to fluid shear, the
other due to collisions between already existing crystals with either a solid surface of the crystallizer or with
other crystals themselves. Fluid shear nucleation occurs when liquid travels across a crystal at a high speed, sweeping away nuclei that would otherwise be incorporated into a crystal, causing the swept-away nuclei to
become new crystals. Contact nucleation has been found to be the most effective and common method for
nucleation. The benefits include the following:[3]

Low kinetic order and rate-proportional to supersaturation, allowing easy control without unstable
operation.
Occurs at low supersaturation, where growth rate is optimum for good quality.
Low necessary energy at which crystals strike avoids the breaking of existing crystals into new crystals.
The quantitative fundamentals have already been isolated and are being incorporated into practice.

The following model, although somewhat simplified, is often used to model secondary nucleation:[2]

B = dN/dt = k_1 M_T^j (c − c*)^b

where:

k_1 is a rate constant.
M_T is the suspension density.
j is an empirical exponent that can range up to 1.5, but is generally 1.
b is an empirical exponent that can range up to 5, but is generally 2.
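As a rough numerical illustration of how these two empirical rate laws behave, the short Python sketch below evaluates B = k_n (c − c*)^n and B = k_1 M_T^j (c − c*)^b for a few supersaturation values; all constants (k_n, n, k_1, j, b, M_T, c*) are arbitrary placeholder values chosen only to show the trend, not data from the text.

# Illustrative evaluation of the empirical primary and secondary nucleation
# rate laws quoted above.  Every numerical constant is an arbitrary placeholder.

def primary_nucleation_rate(c, c_star, k_n=1.0e3, n=3.5):
    """Nuclei formed per unit volume per unit time by primary nucleation."""
    return k_n * (c - c_star) ** n

def secondary_nucleation_rate(c, c_star, m_t, k_1=1.0e2, j=1.0, b=2.0):
    """Nuclei formed per unit volume per unit time by secondary (contact) nucleation."""
    return k_1 * (m_t ** j) * (c - c_star) ** b

if __name__ == "__main__":
    c_star = 0.29   # saturation concentration (arbitrary units)
    m_t = 0.15      # suspension density (arbitrary units)
    for c in (0.30, 0.32, 0.35):
        print(f"c - c* = {c - c_star:.2f}: "
              f"primary = {primary_nucleation_rate(c, c_star):.3g}, "
              f"secondary = {secondary_nucleation_rate(c, c_star, m_t):.3g}")

Because n is typically much larger than b, the primary term only dominates at high supersaturation, which matches the observation above that contact (secondary) nucleation is the most common mechanism at the low supersaturations used for good crystal quality.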

Growth

Once the first small crystal, the nucleus, forms it acts as a


convergence point (if unstable due to supersaturation) for molecules
of solute touching – or adjacent to – the crystal so that it increases its
own dimension in successive layers. The pattern of growth resembles
the rings of an onion, as shown in the picture, where each colour
indicates the same mass of solute; this mass creates increasingly thin
layers due to the increasing surface area of the growing crystal. The
supersaturated solute mass the original nucleus may capture in a time
unit is called the growth rate, expressed in kg/(m²·h), and is a constant specific to the process. Growth rate is influenced by several physical factors, such as surface tension of solution, pressure, temperature, relative crystal velocity in the solution, Reynolds number, and so forth.

The main values to control are therefore:

Supersaturation value, as an index of the quantity of solute available for the growth of the crystal;
Total crystal surface in unit fluid mass, as an index of the capability of the solute to fix onto the crystal;
Retention time, as an index of the probability of a molecule of solute to come into contact with an
existing crystal;
Flow pattern, again as an index of the probability of a molecule of solute to come into contact with an
existing crystal (higher in laminar flow, lower in turbulent flow, but the reverse applies to the probability
of contact).

The first value is a consequence of the physical characteristics of the solution, while the others define a
difference between a well- and poorly designed crystallizer.
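Like nucleation, the overall growth rate is frequently correlated with supersaturation through a simple empirical power law (a common engineering approximation rather than something stated explicitly above): R_G = k_g (c − c*)^g, where R_G is the mass deposited per unit crystal surface area per unit time (the growth rate in kg/(m²·h) defined earlier), k_g is a growth coefficient and g is an empirical exponent, usually between 1 and 2.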

Size distribution

The appearance and size range of a crystalline product is extremely important in crystallization. If further
processing of the crystals is desired, large crystals with uniform size are important for washing, filtering,
transportation, and storage, because large crystals are easier to filter out of a solution than small crystals. Also,
larger crystals have a smaller surface area to volume ratio, leading to a higher purity. This higher purity is due
to less retention of mother liquor which contains impurities, and a smaller loss of yield when the crystals are
washed to remove the mother liquor. The theoretical crystal size distribution can be estimated as a function of
operating conditions with a fairly complicated mathematical process called population balance theory (using
population balance equations).
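A minimal sketch of what population balance theory gives in the simplest special case is shown below: for an idealized MSMPR (mixed-suspension, mixed-product-removal) crystallizer with size-independent growth rate G and residence time τ, the population density is n(L) = n0·exp(−L/(G·τ)). The MSMPR assumption and all numerical values are illustrative, not taken from the text.

# Crystal size distribution for an idealized MSMPR crystallizer (assumed model):
# population density n(L) = n0 * exp(-L / (G * tau)) for size-independent growth.
import math

def population_density(L, n0, G, tau):
    """Number of crystals per unit volume per unit crystal size at size L."""
    return n0 * math.exp(-L / (G * tau))

if __name__ == "__main__":
    n0 = 1.0e12    # nuclei population density (assumed value)
    G = 1.0e-8     # linear growth rate in m/s (assumed value)
    tau = 3600.0   # residence time in s (assumed value)
    # For this distribution the mass-dominant crystal size is L_d = 3 * G * tau.
    print(f"mass-dominant size: {3 * G * tau * 1e6:.0f} micrometres")
    for L in (10e-6, 50e-6, 100e-6, 200e-6):
        print(f"L = {L * 1e6:5.0f} um -> n(L) = {population_density(L, n0, G, tau):.3e}")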

Main crystallization processes


Some of the important factors influencing solubility are:

Concentration
Temperature
Polarity
Ionic strength

So one may identify two main families of crystallization processes:

Cooling crystallization
Evaporative crystallization

This division is not really clear-cut, since hybrid systems exist, where cooling is performed through evaporation, thus obtaining at the same time a concentration of the solution.

A crystallization process often referred to in chemical engineering is the fractional crystallization. This is not a
different process, rather a special application of one (or both) of the above.

Cooling crystallization

Application

Most chemical compounds, dissolved in most solvents, show the so-called direct solubility, that is, the solubility threshold increases with temperature.

Solubility of the system Na2SO4 – H2O

So, whenever the conditions are favourable, crystal formation results from simply cooling the solution. Here cooling is a relative term: austenite crystals in a steel form well above 1000 °C. An example of this crystallization process is the production of Glauber's salt, a crystalline form of sodium sulfate. In the diagram, where equilibrium temperature is on the x-axis and equilibrium concentration (as mass percent of solute in saturated solution) is on the y-axis, it is clear that sulfate solubility quickly decreases below 32.5 °C. Assuming a saturated solution at 30 °C, by cooling it to 0 °C (note that this is possible thanks to the freezing-point depression), the precipitation of a mass of sulfate occurs corresponding to the change in solubility from 29% (equilibrium value at 30 °C) to approximately 4.5% (at 0 °C) – actually a larger crystal mass is precipitated, since sulfate entrains hydration water, and this has the side effect of increasing the final concentration.
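Using the numbers quoted in this paragraph (29% solute at 30 °C, about 4.5% at 0 °C, as mass percent of solute in saturated solution), one can estimate the crystal yield per kilogram of feed; the sketch below does this on an anhydrous-Na2SO4 basis only and deliberately ignores the hydration water of Glauber's salt, which, as noted above, makes the real recovered crystal mass larger.

# Rough anhydrous-basis yield estimate for cooling crystallization of sodium sulfate,
# using the solubility figures given in the text (29% at 30 degC, ~4.5% at 0 degC).
# Hydrate (Glauber's salt) formation is ignored, so this underestimates the crystal mass.

def cooling_yield(feed_kg, w_hot, w_cold):
    """Mass of solute crystallized when a saturated solution is cooled.

    feed_kg: mass of saturated feed solution (kg)
    w_hot:   solute mass fraction at the starting temperature
    w_cold:  solute mass fraction at the final temperature
    Assumes no solvent evaporates and no hydrate forms.
    """
    solute = feed_kg * w_hot
    solvent = feed_kg - solute
    solute_still_dissolved = solvent * w_cold / (1.0 - w_cold)
    return solute - solute_still_dissolved

if __name__ == "__main__":
    crystals = cooling_yield(1.0, 0.29, 0.045)
    print(f"about {crystals * 1000:.0f} g of anhydrous Na2SO4 per kg of feed solution")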

There are limitations in the use of cooling crystallization:

Many solutes precipitate in hydrate form at low temperatures: in the previous example this is acceptable,
and even useful, but it may be detrimental when, for example, the mass of water of hydration to reach a
stable hydrate crystallization form is more than the available water: a single block of hydrate solute will
be formed – this occurs in the case of calcium chloride;
Maximum supersaturation will take place in the coldest points. These may be the heat exchanger tubes
which are sensitive to scaling, and heat exchange may be greatly reduced or discontinued;
A decrease in temperature usually implies an increase of the viscosity of a solution. Too high a viscosity
may give hydraulic problems, and the laminar flow thus created may affect the crystallization dynamics.
It is of course not applicable to compounds having reverse solubility, a term to indicate that solubility
increases with temperature decrease (an example occurs with sodium sulfate where solubility is reversed
above 32.5 °C).

Cooling crystallizers

The simplest cooling crystallizers are tanks provided with a mixer for
internal circulation, where temperature decrease is obtained by heat
exchange with an intermediate fluid circulating in a jacket. These simple
machines are used in batch processes, as in processing of pharmaceuticals
and are prone to scaling. Batch processes normally provide a relatively
variable quality of product along the batch.

The Swenson-Walker crystallizer is a model, specifically conceived by


Swenson Co. around 1920, having a semicylindric horizontal hollow
trough in which a hollow screw conveyor or some hollow discs, in which a
refrigerating fluid is circulated, plunge during rotation on a longitudinal
axis. The refrigerating fluid is sometimes also circulated in a jacket around
the trough. Crystals precipitate on the cold surfaces of the screw/discs,
from which they are removed by scrapers and settle on the bottom of the
trough. The screw, if provided, pushes the slurry towards a discharge port.

Vertical cooling crystallizer in a beet sugar factory

A common practice is to cool the solutions by flash evaporation: when a liquid at a given temperature T0 is transferred into a chamber at a pressure P1 such that the liquid saturation temperature T1 at P1 is lower than T0, the liquid will release heat according to the temperature difference, and a quantity of solvent evaporates whose total latent heat of vaporization equals the difference in enthalpy. In simple words, the liquid is cooled by evaporating a part of it.
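An approximate energy balance for this flash-cooling step (a simplified estimate that neglects heat losses and property variations with temperature) is m_evap ≈ m·c_p·(T0 − T1)/ΔH_vap, where m is the mass of solution, c_p its specific heat, ΔH_vap the latent heat of vaporization of the solvent, and m_evap the mass of solvent that has to evaporate to cool the liquid from T0 to T1.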

In the sugar industry, vertical cooling crystallizers are used to exhaust the molasses in the last crystallization
stage downstream of vacuum pans, prior to centrifugation. The massecuite enters the crystallizers at the top,
and cooling water is pumped through pipes in counterflow.

Evaporative crystallization

Another option is to obtain, at an approximately constant temperature, the precipitation of the crystals by
increasing the solute concentration above the solubility threshold. To obtain this, the solute/solvent mass ratio is
increased using the technique of evaporation. This process is of course insensitive to change in temperature (as
long as hydration state remains unchanged).

All considerations on control of crystallization parameters are the same as for the cooling models.

Evaporative crystallizers


Most industrial crystallizers are of the evaporative type, such as the very large sodium chloride and sucrose
units, whose production accounts for more than 50% of the total world production of crystals. The most
common type is the forced circulation (FC) model (see evaporator). A pumping device (a pump or an axial
flow mixer) keeps the crystal slurry in homogeneous suspension throughout the tank, including the exchange
surfaces; by controlling pump flow, control of the contact time of the crystal mass with the supersaturated
solution is achieved, together with reasonable velocities at the exchange surfaces. The Oslo, mentioned above,
is a refining of the evaporative forced circulation crystallizer, now equipped with a large crystals settling zone
to increase the retention time (usually low in the FC) and to roughly separate heavy slurry zones from clear
liquid. Evaporative crystallizers tend to yield larger average crystal size and narrow the crystal size distribution curve.[5]

DTB crystallizer

Whichever the form of the crystallizer, to achieve an effective process control it is important to control the retention time and the crystal mass, to obtain the optimum conditions in terms of crystal specific surface and the fastest possible growth. This is achieved by a separation – to put it simply – of the crystals from the liquid mass, in order to manage the two flows in a different way. The practical way is to perform a gravity settling to be able to extract (and possibly recycle separately) the (almost) clear liquid, while managing the mass flow around the crystallizer to obtain a precise slurry density elsewhere. A typical example is the DTB (Draft Tube and Baffle) crystallizer, an idea of Richard Chisum Bennett (a Swenson engineer and later President of Swenson) at the end of the 1950s. The DTB crystallizer (see images) has an internal circulator, typically an axial flow mixer – yellow – pushing upwards in a draft tube, while outside the crystallizer there is a settling area in an annulus; in it the exhaust solution moves upwards at a very low velocity, so that large crystals settle – and return to the main circulation – while only the fines, below a given grain size, are extracted and eventually destroyed by increasing or decreasing temperature, thus creating additional supersaturation. A quasi-perfect control of all parameters is achieved, as DTB crystallizers offer superior control over crystal size and characteristics.[6] This crystallizer, and the derivative models (Krystal, CSC, etc.), could be the ultimate solution if not for a major limitation in the evaporative capacity, due to the limited diameter of the vapour head and the relatively low external circulation not allowing large amounts of energy to be supplied to the system.

[Figures: DTB crystallizer; schematic of a DTB]

See also
Abnormal grain growth
Chiral resolution by crystallization
Crystal habit
Crystal structure
Crystallite
Fractional crystallization (chemistry)
Igneous differentiation
Laser heated pedestal growth
Micro-pulling-down
Protein crystallization
Pumpable ice technology
Quasicrystal
Recrystallization (chemistry)
Recrystallization (metallurgy)
Seed crystal
Single crystal
Symplectite
Vitrification
X-ray crystallography

References

1. Lin, Yibin (2008). "An Extensive Study of Protein Phase Diagram Modification: Increasing Macromolecular Crystallizability by Temperature Screening". Crystal Growth & Design. 8 (12): 4277. doi:10.1021/cg800698p (https://doi.org/10.1021%2Fcg800698p).
2. Tavare, N.S. (1995). Industrial Crystallization. Plenum Press, New York.
3. McCabe & Smith (2000). Unit Operations of Chemical Engineering. McGraw-Hill, New York.
4. "Crystallization" (http://www.reciprocalnet.org/edumodules/crystallization/). www.reciprocalnet.org. Retrieved 2017-01-03.
5. "Submerge Circulating Crystallizers - Thermal Kinetics Engineering, PLLC" (http://thermalkinetics.net/evaporation-equipment/submerge-circulating-crystallizer). Thermal Kinetics Engineering, PLLC. Retrieved 2017-01-03.
6. "Draft Tube Baffle (DTB) Crystallizer - Swenson Technology" (http://www.swensontechnology.com/equipment/draft-tube-baffle-dtb-crystallizer/). Swenson Technology. Retrieved 2017-01-03.

Further reading
A. Mersmann, Crystallization Technology Handbook (2001) CRC; 2nd ed. ISBN 0-8247-0528-9
Tine Arkenbout-de Vroome, Melt Crystallization Technology (1995) CRC ISBN 1-56676-181-6
"Small Molecule Crystallization" (http://acaschool.iit.edu/lectures04/JLiangXtal.pdf) (PDF) at Illinois
Institute of Technology website
Glynn P.D. and Reardon E.J. (1990) "Solid-solution aqueous-solution equilibria: thermodynamic theory
and representation". Amer. J. Sci. 290, 164–201.
Geankoplis, C.J. (2003) "Transport Processes and Separation Process Principles". 4th Ed. Prentice-Hall
Inc.
S.J. Jancic, P.A.M. Grootscholten: “Industrial Crystallization”, Textbook, Delft University Press and
Reidel Publishing Company, Delft, The Netherlands, 1984.

External links
Batch Crystallization (http://www.ashemorris.com/crystallisation.aspx)
Industrial Crystallization (http://www.cheresources.com/cryst.shtml)

Retrieved from "https://en.wikipedia.org/w/index.php?title=Crystallization&oldid=790905100"


Thin film
From Wikipedia, the free encyclopedia

A thin film is a layer of material ranging from fractions of a nanometer (monolayer) to several micrometers in
thickness. The controlled synthesis of materials as thin films (a process referred to as deposition) is a
fundamental step in many applications. A familiar example is the household mirror, which typically has a thin
metal coating on the back of a sheet of glass to form a reflective interface. The process of silvering was once
commonly used to produce mirrors, while more recently the metal layer is deposited using techniques such as
sputtering. Advances in thin film deposition techniques during the 20th century have enabled a wide range of
technological breakthroughs in areas such as magnetic recording media, electronic semiconductor devices,
LEDs, optical coatings (such as antireflective coatings), hard coatings on cutting tools, and for both energy
generation (e.g. thin film solar cells) and storage (thin-film batteries). It is also being applied to
pharmaceuticals, via thin-film drug delivery. A stack of thin films is called a multilayer.

In addition to their applied interest, thin films play an important role in the development and study of materials
with new and unique properties. Examples include multiferroic materials, and superlattices that allow the study
of quantum confinement by creating two-dimensional electron states.

Contents
1 Deposition
1.1 Chemical deposition
1.2 Physical deposition
1.3 Growth modes
1.4 Epitaxy
2 Applications
2.1 Decorative coatings
2.2 Optical coatings
2.3 Protective coatings
2.4 Electrically operating coatings
2.5 Thin-film photovoltaic cells
2.6 Thin-film batteries
3 See also
4 References
5 Further reading
5.1 Textbooks
5.2 Historical

Deposition
The act of applying a thin film to a surface is thin-film deposition – any technique for depositing a thin film of
material onto a substrate or onto previously deposited layers. "Thin" is a relative term, but most deposition
techniques control layer thickness within a few tens of nanometres. Molecular beam epitaxy and atomic layer
deposition allow a single layer of atoms to be deposited at a time.

It is useful in the manufacture of optics (for reflective, anti-reflective coatings or self-cleaning glass, for
instance), electronics (layers of insulators, semiconductors, and conductors form integrated circuits), packaging
(i.e., aluminium-coated PET film), and in contemporary art (see the work of Larry Bell). Similar processes are
sometimes used where thickness is not important: for instance, the purification of copper by electroplating, and
the deposition of silicon and enriched uranium by a CVD-like process after gas-phase processing.

Deposition techniques fall into two broad categories, depending on whether the process is primarily chemical
or physical.[1]

Chemical deposition

Here, a fluid precursor undergoes a chemical change at a solid surface, leaving a solid layer. An everyday
example is the formation of soot on a cool object when it is placed inside a flame. Since the fluid surrounds the
solid object, deposition happens on every surface, with little regard to direction; thin films from chemical
deposition techniques tend to be conformal, rather than directional.

Chemical deposition is further categorized by the phase of the precursor:

Plating relies on liquid precursors, often a solution of water with a salt of the metal to be deposited. Some
plating processes are driven entirely by reagents in the solution (usually for noble metals), but by far the most
commercially important process is electroplating. It was not commonly used in semiconductor processing for
many years, but has seen a resurgence with more widespread use of chemical-mechanical polishing techniques.

Chemical solution deposition (CSD) or chemical bath deposition (CBD) uses a liquid precursor, usually a
solution of organometallic powders dissolved in an organic solvent. This is a relatively inexpensive, simple thin
film process that is able to produce stoichiometrically accurate crystalline phases. This technique is also known
as the sol-gel method because the 'sol' (or solution) gradually evolves towards the formation of a gel-like
diphasic system.

Spin coating, or spin casting, uses a liquid precursor or sol-gel precursor deposited onto a smooth, flat substrate, which is subsequently spun at high velocity to centrifugally spread the solution over the substrate. The speed at which the solution is spun and the viscosity of the sol determine the ultimate thickness of the deposited film. Repeated depositions can be carried out to increase the thickness of films as desired. Thermal treatment is often carried out in order to crystallize the amorphous spin-coated film. Such crystalline films can exhibit certain preferred orientations after crystallization on single crystal substrates.[2]
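The spin-speed dependence is often summarized by an empirical power law, h ≈ k·ω^(-b) with b close to 1/2, where k and b are process-specific calibration constants. A hedged sketch (all constants below are assumptions, not values from the text):

```python
# Hedged sketch of the often-quoted empirical spin-coating scaling h ~ k * omega**(-b).
# k and b are assumed, process-specific calibration constants, not values from the text.
import math

def spin_coat_thickness_m(rpm, k=5.0e-6, b=0.5):
    """Rough final film thickness [m] for a given spin speed [rpm].
    k : lumps viscosity, solids loading and evaporation rate (assumed)
    b : empirical exponent, commonly close to 0.5 (assumed)
    """
    omega = rpm * 2.0 * math.pi / 60.0   # angular velocity [rad/s]
    return k * omega ** (-b)

# Doubling the spin speed thins the film by roughly a factor of sqrt(2):
print(spin_coat_thickness_m(2000.0) / spin_coat_thickness_m(4000.0))  # ~1.41
```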

Dip coating is similar to spin coating in that a liquid precursor or sol-gel precursor is deposited on a substrate,
but in this case the substrate is completely submerged in the solution and then withdrawn under controlled
conditions. By controlling the withdrawal speed, the evaporation conditions (principally the humidity,
temperature) and the volatility/viscosity of the solvent, the film thickness, homogeneity and nanoscopic
morphology are controlled. There are two evaporation regimes: the capillary zone at very low withdrawal
speeds, and the draining zone at faster evaporation speeds.[3]
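In the draining regime the entrained thickness is commonly estimated with the Landau-Levich relation, h = 0.94 (ηU)^(2/3) / (γ^(1/6) (ρg)^(1/2)). A hedged sketch with assumed, water-like fluid properties (not values from the text):

```python
# Hedged sketch: Landau-Levich estimate of the film entrained in the "draining" regime,
# h = 0.94 * (eta*U)**(2/3) / (gamma**(1/6) * (rho*g)**(1/2)).
def landau_levich_thickness_m(eta, U, gamma, rho, g=9.81):
    """Entrained film thickness [m] for withdrawal speed U [m/s].
    eta   : dynamic viscosity [Pa s]
    gamma : surface tension [N/m]
    rho   : density [kg/m^3]
    """
    return 0.94 * (eta * U) ** (2.0 / 3.0) / (gamma ** (1.0 / 6.0) * (rho * g) ** 0.5)

# Example: a water-like sol withdrawn at 1 mm/s leaves a wet film of roughly 1-2 micrometres
print(landau_levich_thickness_m(eta=1.0e-3, U=1.0e-3, gamma=0.07, rho=1000.0))
```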

Chemical vapor deposition (CVD) generally uses a gas-phase precursor, often a halide or hydride of the
element to be deposited. In the case of MOCVD, an organometallic gas is used. Commercial techniques often
use very low pressures of precursor gas.

Plasma enhanced CVD (PECVD) uses an ionized vapor, or plasma, as a precursor. Unlike the soot example
above, commercial PECVD relies on electromagnetic means (electric current, microwave excitation), rather
than a chemical reaction, to produce a plasma.

Atomic layer deposition (ALD) uses gaseous precursors to deposit conformal thin films one layer at a time. The process is split into two half-reactions, run in sequence and repeated for each layer, in order to ensure total layer saturation before beginning the next layer. Therefore, one reactant is deposited first, and then the second reactant is deposited, during which a chemical reaction occurs on the substrate, forming the desired composition. As a result of this stepwise nature, the process is slower than CVD; however, it can be run at low temperatures, unlike CVD.
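Because each saturated cycle adds a fixed increment, film thickness in ALD is usually planned simply as growth-per-cycle times the number of cycles. A hedged sketch (the growth-per-cycle figure is an assumed, typical-order illustration, not a value from the text):

```python
# Hedged sketch: ALD thickness planning from an assumed growth per cycle (GPC).
import math

def ald_thickness_nm(cycles, growth_per_cycle_nm=0.1):
    """Total film thickness [nm] after a number of ideal, fully saturated cycles."""
    return cycles * growth_per_cycle_nm

def cycles_needed(target_nm, growth_per_cycle_nm=0.1):
    """Whole number of cycles needed to reach (at least) a target thickness [nm]."""
    return math.ceil(target_nm / growth_per_cycle_nm)

print(ald_thickness_nm(300))   # 30.0 nm after 300 cycles
print(cycles_needed(25.0))     # 250 cycles for a 25 nm film
```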

Physical deposition

Physical deposition uses mechanical, electromechanical or thermodynamic means to produce a thin film of
solid. An everyday example is the formation of frost. Since most engineering materials are held together by
relatively high energies, and chemical reactions are not used to store these energies, commercial physical
deposition systems tend to require a low-pressure vapor environment to function properly; most can be
classified as physical vapor deposition (PVD).

The material to be deposited is placed in an energetic, entropic environment, so that particles of material escape
its surface. Facing this source is a cooler surface which draws energy from these particles as they arrive,
allowing them to form a solid layer. The whole system is kept in a vacuum deposition chamber, to allow the
particles to travel as freely as possible. Since particles tend to follow a straight path, films deposited by
physical means are commonly directional, rather than conformal.

Examples of physical deposition include the following. A thermal evaporator uses an electric resistance heater to melt the material and raise its vapor pressure to a useful range. This is done in a high vacuum, both to allow the vapor to reach the substrate without reacting with or scattering against other gas-phase atoms in the chamber, and to reduce the incorporation of impurities from the residual gas in the vacuum chamber. Obviously, only materials with a much higher vapor pressure than the heating element can be deposited without contamination of the film. Molecular beam epitaxy is a particularly sophisticated form of thermal evaporation.

An electron beam evaporator fires a high-energy beam from an electron gun to boil a small spot of material;
since the heating is not uniform, lower vapor pressure materials can be deposited. The beam is usually bent
through an angle of 270° in order to ensure that the gun filament is not directly exposed to the evaporant flux.
Typical deposition rates for electron beam evaporation range from 1 to 10 nanometres per second.
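Given the 1 to 10 nm/s rates quoted above, the deposition time is simply thickness divided by rate; a minimal hedged sketch (the target thickness and chosen rate are assumed examples, not values from the text):

```python
# Hedged sketch: deposition time at a constant rate, within the 1-10 nm/s range quoted above.
def deposition_time_s(thickness_nm, rate_nm_per_s):
    """Time [s] needed to grow a film of the given thickness at a constant rate."""
    return thickness_nm / rate_nm_per_s

print(deposition_time_s(500.0, 5.0))   # a 500 nm layer at 5 nm/s takes ~100 s
```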

In molecular beam epitaxy (MBE), slow streams of an element can be directed at the substrate, so that material
deposits one atomic layer at a time. Compounds such as gallium arsenide are usually deposited by repeatedly
applying a layer of one element (i.e., gallium), then a layer of the other (i.e., arsenic), so that the process is
chemical, as well as physical. The beam of material can be generated by either physical means (that is, by a
furnace) or by a chemical reaction (chemical beam epitaxy).

Sputtering relies on a plasma (usually a noble gas, such as argon) to knock material from a "target" a few atoms
at a time. The target can be kept at a relatively low temperature, since the process is not one of evaporation,
making this one of the most flexible deposition techniques. It is especially useful for compounds or mixtures,
where different components would otherwise tend to evaporate at different rates. Note that sputtering's step coverage is more or less conformal. It is also widely used for optical media: the manufacture of all formats of CD, DVD, and BD relies on this technique. It is fast and provides good thickness control. Presently, nitrogen and oxygen gases are also being used in sputtering.

Pulsed laser deposition systems work by an ablation process. Pulses of focused laser light vaporize the surface
of the target material and convert it to plasma; this plasma usually reverts to a gas before it reaches the
substrate.[4]

Cathodic arc deposition (arc-PVD) which is a kind of ion beam deposition where an electrical arc is created
that literally blasts ions from the cathode. The arc has an extremely high power density resulting in a high level
of ionization (30–100%), multiply charged ions, neutral particles, clusters and macro-particles (droplets). If a
reactive gas is introduced during the evaporation process, dissociation, ionization and excitation can occur
during interaction with the ion flux and a compound film will be deposited.

Electrohydrodynamic deposition (electrospray deposition) is a relatively new process of thin film deposition.
The liquid to be deposited, either in the form of nano-particle solution or simply a solution, is fed to a small
capillary nozzle (usually metallic) which is connected to a high voltage. The substrate on which the film has to
be deposited is connected to ground. Under the influence of the electric field, the liquid coming out of the nozzle takes a conical shape (Taylor cone), and at the apex of the cone a thin jet emanates which disintegrates into very fine, positively charged droplets at the Rayleigh charge limit. The droplets keep getting smaller and ultimately are deposited on the substrate as a uniform thin layer.

Growth modes

Frank-van-der-Merwe growth[5][6][7] ("layer-by-layer"). In this growth mode the adsorbate-surface and adsorbate-adsorbate interactions are balanced. This type of growth requires lattice matching, and is hence considered an "ideal" growth mechanism.

Stranski–Krastanov growth[8] ("joint islands" or "layer-plus-island"). In this growth mode the adsorbate-surface interactions are stronger than adsorbate-adsorbate interactions.

Volmer-Weber growth[9] ("isolated islands"). In this growth mode the adsorbate-adsorbate interactions are stronger than the adsorbate-surface interactions, hence "islands" are formed right away.

[Figures: schematics of the Frank-van-der-Merwe, Stranski-Krastanow and Volmer-Weber growth modes]

Epitaxy

A subset of thin film deposition processes and applications is focused on the so-called epitaxial growth of materials, the deposition of crystalline thin films that grow following the crystalline structure of the substrate. The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner". It can be translated as "arranging upon".

The term homoepitaxy refers to the specific case in which a film of the same material is grown on a crystalline substrate. This technology is used, for instance, to grow a film which is more pure than the substrate, has a lower density of defects, and to fabricate layers having different doping levels. Heteroepitaxy refers to the case in which the film being deposited is different than the substrate.

Techniques used for epitaxial growth of thin films include molecular beam epitaxy, chemical vapor deposition,
and pulsed laser deposition.[10]

Applications
Decorative coatings

The usage of thin films for decorative coatings probably represents their oldest application. This encompasses ca. 100 nm thin gold leaf that was already used in ancient India more than 5000 years ago. It may also be understood as any form of painting, although this kind of work is generally considered an arts craft rather than an engineering or scientific discipline. Today, thin film materials of variable thickness and high refractive index, like titanium dioxide, are often applied for decorative coatings on glass, for instance, causing a rainbow-color appearance like oil on water. In addition, opaque gold-colored surfaces may be prepared by sputtering either gold or titanium nitride.

Optical coatings

These layers serve in both reflective and refractive systems. Large-area (reflective) mirrors became available during the 19th century and were produced by sputtering of metallic silver or aluminum on glass. Refractive lenses for optical instruments like cameras and microscopes typically exhibit aberrations, i.e. non-ideal refractive behavior. While large sets of lenses had to be lined up along the optical path previously, nowadays the coating of optical lenses with transparent multilayers of titanium dioxide, silicon nitride or silicon oxide, etc. may correct these aberrations. A well-known example of the progress in optical systems brought by thin film technology is the only a few mm wide lens in smartphone cameras. Other examples are given by anti-reflection coatings on eyeglasses or solar panels.
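Anti-reflection layers are designed around quarter-wave optical thicknesses. A hedged sketch of the standard single-layer design rules (the design wavelength, substrate index and MgF2 index below are assumed illustration values, not taken from the text):

```python
# Hedged sketch: textbook single-layer quarter-wave antireflection design rules.
import math

def ideal_coating_index(n_ambient, n_substrate):
    """Coating index giving zero reflectance at the design wavelength: sqrt(n1*n3)."""
    return math.sqrt(n_ambient * n_substrate)

def quarter_wave_thickness_nm(wavelength_nm, n_coating):
    """Physical thickness [nm] giving a quarter-wave optical thickness."""
    return wavelength_nm / (4.0 * n_coating)

n_ideal = ideal_coating_index(1.0, 1.52)        # ~1.23 for crown glass in air (assumed n = 1.52)
print(n_ideal)
print(quarter_wave_thickness_nm(550.0, 1.38))   # ~100 nm of MgF2 (n ~ 1.38) designed for 550 nm
```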

Protective coatings

Thin films are often deposited to protect an underlying workpiece from external influences. The protection may operate by minimizing contact with the exterior medium in order to reduce diffusion from the medium to the workpiece or vice versa. For instance, plastic lemonade bottles are frequently coated with anti-diffusion layers to avoid the out-diffusion of CO2, into which the carbonic acid introduced into the beverage under high pressure decomposes. Another example is thin TiN films in microelectronic chips separating electrically conducting aluminum lines from the embedding insulator SiO2 in order to suppress the formation of Al2O3. Often, thin films serve as protection against abrasion between mechanically moving parts. Examples of the latter application are diamond-like carbon (DLC) layers used in car engines or thin films made of nanocomposites.

Electrically operating coatings

Thin layers of elemental metals like copper, aluminum, gold or silver, etc., and alloys have found numerous applications in electrical devices. Due to their high electrical conductivity they are able to carry electrical currents or supply voltages. Thin metal layers serve in conventional electrical systems, for instance as Cu layers on printed circuit boards, as the outer ground conductor in coaxial cables, and in various other forms such as sensors.[12] A major field of application became their use in integrated circuits, where the electrical network among active and passive devices like transistors and capacitors is built up from thin Al or Cu layers. These layers have thicknesses in the range of a few 100 nm up to a few µm, and they are often embedded into a few nm thin titanium nitride layers in order to block a chemical reaction with the surrounding dielectric like SiO2. The figure shows a micrograph of a laterally structured TiN/Al/TiN metal stack in a microelectronic chip.[11]

[Figure: laterally structured metal layer of an integrated circuit[11]]

Thin-film photovoltaic cells

Thin-film technologies are also being developed as a means of substantially reducing the cost of solar cells. The rationale for this is that thin-film solar cells are cheaper to manufacture owing to their reduced material costs, energy costs, handling costs and capital costs. This is especially represented in the use of printed electronics (roll-to-roll) processes. Other thin-film technologies, still in an early stage of ongoing research or with limited commercial availability, are often classified as emerging or third-generation photovoltaic cells and include organic, dye-sensitized, and polymer solar cells, as well as quantum dot, copper zinc tin sulfide, nanocrystal and perovskite solar cells.

Thin-film batteries

Thin-film printing technology is being used to apply solid-state lithium polymers to a variety of substrates to
create unique batteries for specialized applications. Thin-film batteries can be deposited directly onto chips or
chip packages in any shape or size. Flexible batteries can be made by printing onto plastic, thin metal foil, or
paper.[13]

See also
Coating
Dual Polarisation Interferometry
Ellipsometry
Hydrogenography
Kelvin probe force microscope
Layer by layer
Microfabrication
Organic LED
Sarfus
Thin-film interference
Thin-film optics
Thin-film solar cell

References

1. Functional Polymer Films Eds. R. Advincula and W. Knoll – Wiley, 2011, ISBN 978-3527321902.
2. Hanaor, D.; Triani, G.; Sorrell, C.C. (2011). "Morphology and photocatalytic activity of highly oriented mixed phase titanium dioxide thin films". Surface and Coatings Technology. 205 (12): 3658–3664. doi:10.1016/j.surfcoat.2011.01.007 (https://doi.org/10.1016%2Fj.surfcoat.2011.01.007).
3. Faustini, M.; Drisko, G. L.; Boissiere, C.; Grosso, D. (2014). "Liquid deposition approaches to self-assembled periodic nanomasks". Scripta Materialia. https://www.researchgate.net/publication/256103949_Liquid_deposition_approaches_to_self-assembled_periodic_nanomasks
4. Rashidian Vaziri, M R. "Monte Carlo simulation of the subsurface growth mode during pulsed laser deposition" (htt
p://scitation.aip.org/content/aip/journal/jap/110/4/10.1063/1.3624768). Journal of Applied Physics. 110: 043304.
doi:10.1063/1.3624768 (https://doi.org/10.1063%2F1.3624768).
5. Frank, F. C.; van der Merwe, J. H. (1949). "One-Dimensional Dislocations. I. Static Theory". Proceedings of the
Royal Society of London. Series A, Mathematical and Physical Sciences. 198 (1053): 205–216.
Bibcode:1949RSPSA.198..205F (http://adsabs.harvard.edu/abs/1949RSPSA.198..205F). JSTOR 98165 (https://ww
w.jstor.org/stable/98165). doi:10.1098/rspa.1949.0095 (https://doi.org/10.1098%2Frspa.1949.0095).
6. Frank, F. C.; van der Merwe, J. H. (1949). "One-Dimensional Dislocations. II. Misfitting Monolayers and Oriented
Overgrowth". Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences. 198
(1053): 216–225. Bibcode:1949RSPSA.198..216F (http://adsabs.harvard.edu/abs/1949RSPSA.198..216F).
JSTOR 98166 (https://www.jstor.org/stable/98166). doi:10.1098/rspa.1949.0096 (https://doi.org/10.1098%2Frspa.19
49.0096).
7. Frank, F. C.; van der Merwe, J. H. (1949). "One-Dimensional Dislocations. III. Influence of the Second Harmonic
Term in the Potential Representation, on the Properties of the Model". Proceedings of the Royal Society of London.
Series A, Mathematical and Physical Sciences. 200 (1060): 125–134. Bibcode:1949RSPSA.200..125F (http://adsabs.
harvard.edu/abs/1949RSPSA.200..125F). JSTOR 98394 (https://www.jstor.org/stable/98394).
doi:10.1098/rspa.1949.0163 (https://doi.org/10.1098%2Frspa.1949.0163).
8. Stranski, I. N.; Krastanov, L. (1938). "Zur Theorie der orientierten Ausscheidung von Ionenkristallen aufeinander".
Sitzungsber. Akad. Wiss. Wien. Math.-Naturwiss. 146: 797–810.
9. Volmer, M.; Weber, A. (1926). "Keimbildung in übersättigten Gebilden". Z. Phys. Chem. 119: 277–301.
10. Vaziri, M R R; et al. "Microscopic description of the thermalization process during pulsed laser deposition of
aluminium in the presence of argon background gas" (http://iopscience.iop.org/article/10.1088/0022-3727/43/42/425
205/meta#). Journal of Physics D: Applied Physics. 43: 425205. doi:10.1088/0022-3727/43/42/425205 (https://doi.or
g/10.1088%2F0022-3727%2F43%2F42%2F425205).
11. M. Birkholz; K.-E. Ehwald; D. Wolansky; I. Costina; C. Baristyran-Kaynak; M. Fröhlich; H. Beyer; A. Kapp; F.
Lisdat (2010). "Corrosion-resistant metal layers from a CMOS process for bioelectronic applications" (https://www.r
esearchgate.net/profile/Mario_Birkholz/publication/230817001_Corrosion-resistant_metal_layers_from_a_CMOS_p
rocess_for_bioelectronic_applications/links/0a85e52dfb001d07b5000000.pdf) (PDF). Surf. Coat. Technol. 204 (12–
13): 2055–2059. doi:10.1016/j.surfcoat.2009.09.075 (https://doi.org/10.1016%2Fj.surfcoat.2009.09.075).
12. G. Korotchenkov (2013). "Thin metal films". Handbook of Gas Sensor Materials. Integrated Analytical Systems.
Springer. pp. 153–166. ISBN 978-1-4614-7164-6.
13. Flexible Cell Construction (http://www.mpoweruk.com/cell_construction.htm#flexible). Mpoweruk.com. Retrieved
on 2012-01-15.

Further reading
Textbooks

M. Birkholz, with contributions by P.F. Fewster and C. Genzel (2005). Thin Film Analysis by X-Ray Scattering. Weinheim: Wiley-VCH. ISBN 978-3-527-31052-4. Table of contents (https://www.researchgate.net/profile/Mario_Birkholz/publication/260086272_Table_of_contents__Preface/file/e2e7e52f7724f3575f.pdf)
M. Ohring (2001). Materials Science of Thin Films (2nd ed.). Boston: Academic Press.
ISBN 9780125249751.
K. Seshan, ed. (2012). Handbook of Thin Film Deposition (3rd ed.). Amsterdam: Elsevier. ISBN 978-1-
4377-7873-1.

Historical
D.M. Mattox (2003). The Foundations of Vacuum Coating Technology (http://www.svc.org/assets/file/HI
STORYA.PDF) (PDF). Norwich: Noyes / William Andrew Publishing. ISBN 0-8155-1495-6.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Thin_film&oldid=788345079"


Epitaxy
From Wikipedia, the free encyclopedia

Epitaxy refers to the deposition of a crystalline overlayer on a crystalline substrate.

The overlayer is called an epitaxial film or epitaxial layer. The term epitaxy comes from the Greek roots epi
(ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner". It can be translated as "arranging
upon". For most technological applications, it is desired that the deposited material form a crystalline overlayer
that has one well-defined orientation with respect to the substrate crystal structure (single-domain epitaxy).

Epitaxial films may be grown from gaseous or liquid precursors. Because the substrate acts as a seed crystal,
the deposited film may lock into one or more crystallographic orientations with respect to the substrate crystal.
If the overlayer either forms a random orientation with respect to the substrate or does not form an ordered
overlayer, it is termed non-epitaxial growth. If an epitaxial film is deposited on a substrate of the same
composition, the process is called homoepitaxy; otherwise it is called heteroepitaxy.

Contents
1 Types
2 Applications
3 Methods
3.1 Vapor-phase
3.2 Liquid-phase
3.3 Solid-phase
4 Molecular-beam epitaxy
5 Doping
6 Minerals
6.1 Isomorphic minerals
6.2 Polymorphic minerals
6.3 Rutile on hematite
6.4 Hematite on magnetite
7 See also
8 References
9 External links

Types
Homoepitaxy is a kind of epitaxy performed with only one material, in which a crystalline film is grown on a
substrate or film of the same material. This technology is used to grow a film which is more pure than the
substrate and to fabricate layers having different doping levels. In academic literature, homoepitaxy is often
abbreviated to "homoepi".

Heteroepitaxy is a kind of epitaxy performed with materials that are different from each other. In
heteroepitaxy, a crystalline film grows on a crystalline substrate or film of a different material. This technology
is often used to grow crystalline films of materials for which crystals cannot otherwise be obtained and to
fabricate integrated crystalline layers of different materials. Examples include silicon on sapphire, gallium nitride (GaN) on sapphire, aluminium gallium indium phosphide (AlGaInP) on gallium arsenide (GaAs), diamond on iridium,[1] and graphene on hexagonal boron nitride.[2]
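Heteroepitaxy is only practical when the film and substrate lattices are reasonably well matched, so a routine first check is the lattice misfit. A minimal sketch, using one common sign convention and approximate literature lattice constants chosen only for illustration (they are not taken from this text):

```python
# Hedged sketch: lattice misfit between an epitaxial film and its substrate,
# using the common convention f = (a_film - a_substrate) / a_substrate.
# The lattice constants are approximate literature values, given only as an illustration.
LATTICE_A_ANGSTROM = {
    "Si": 5.431,
    "Ge": 5.658,
    "GaAs": 5.653,
}

def lattice_misfit(film, substrate, table=LATTICE_A_ANGSTROM):
    a_f, a_s = table[film], table[substrate]
    return (a_f - a_s) / a_s

for film, sub in [("Ge", "Si"), ("GaAs", "Si")]:
    print(f"{film} on {sub}: misfit ~ {100 * lattice_misfit(film, sub):.1f} %")
# ~4.2 % for Ge/Si and ~4.1 % for GaAs/Si: large enough that strain relaxation
# (misfit dislocations or island growth) sets in beyond a critical thickness.
```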

Heterotopotaxy is a process similar to heteroepitaxy except that thin film growth is not limited to two-
dimensional growth; the substrate is similar only in structure to the thin-film material.

Pendeo-epitaxy is a process in which the heteroepitaxial film grows vertically and laterally at the same time. In 2D crystal heterostructures, graphene nanoribbons embedded in hexagonal boron nitride[3] are an example of pendeo-epitaxy.

Epitaxy is used in silicon-based manufacturing processes for bipolar junction transistors (BJTs) and modern complementary metal–oxide–semiconductors (CMOS), but it is particularly important for compound semiconductors such as gallium arsenide. Manufacturing issues include control of the amount and uniformity of the deposition's resistivity and thickness, the cleanliness and purity of the surface and the chamber atmosphere, the prevention of the typically much more highly doped substrate wafer's diffusion of dopant to the new layers, imperfections of the growth process, and protecting the surfaces during manufacture and handling.

Applications
Epitaxy is used in nanotechnology and in semiconductor fabrication. Indeed, epitaxy is the only affordable
method of high quality crystal growth for many semiconductor materials. In surface science, epitaxy is used to
create and study monolayer and multilayer films of adsorbed organic molecules on single crystalline surfaces.
Adsorbed molecules form ordered structures on atomically flat terraces of single crystalline surfaces and can
directly be observed via scanning tunnelling microscopy.[4] In contrast, surface defects and their geometry have
significant influence on the adsorption of organic molecules.[5]

Methods
Epitaxial silicon is usually grown using vapor-phase epitaxy (VPE), a modification of chemical vapor
deposition. Molecular-beam and liquid-phase epitaxy (MBE and LPE) are also used, mainly for compound
semiconductors. Solid-phase epitaxy is used primarily for crystal-damage healing.

Vapor-phase

Silicon is most commonly deposited from silicon tetrachloride and hydrogen at approximately 1200 to 1250 °C:[6]

SiCl4(g) + 2H2(g) ↔ Si(s) + 4HCl(g)

This reaction is reversible, and the growth rate depends strongly upon the proportion of the two source gases.
Growth rates above 2 micrometres per minute produce polycrystalline silicon, and negative growth rates
(etching) may occur if too much hydrogen chloride byproduct is present. (In fact, hydrogen chloride may be
added intentionally to etch the wafer.) An additional etching reaction competes with the deposition reaction:

SiCl4(g) + Si(s) ↔ 2SiCl2(g)

Silicon VPE may also use silane, dichlorosilane, and trichlorosilane source gases. For instance, the silane
reaction occurs at 650 °C in this way:

SiH4 → Si + 2H2

This reaction does not inadvertently etch the wafer, and takes place at lower temperatures than deposition from
silicon tetrachloride. However, it will form a polycrystalline film unless tightly controlled, and it allows
oxidizing species that leak into the reactor to contaminate the epitaxial layer with unwanted compounds such as
silicon dioxide.

VPE is sometimes classified by the chemistry of the source gases, such as hydride VPE and metalorganic VPE.

Liquid-phase

Liquid phase epitaxy (LPE) is a method to grow semiconductor crystal layers from the melt on solid substrates.
This happens at temperatures well below the melting point of the deposited semiconductor. The semiconductor
is dissolved in the melt of another material. At conditions that are close to the equilibrium between dissolution
and deposition, the deposition of the semiconductor crystal on the substrate is relatively fast and uniform. The
most used substrate is indium phosphide (InP). Other substrates like glass or ceramic can be applied for special
applications. To facilitate nucleation, and to avoid tension in the grown layer the thermal expansion coefficient
of substrate and grown layer should be similar.

Solid-phase

Solid Phase Epitaxy (SPE) is a transition between the amorphous and crystalline phases of a material. It is
usually done by first depositing a film of amorphous material on a crystalline substrate. The substrate is then
heated to crystallize the film. The single crystal substrate serves as a template for crystal growth. The annealing
step used to recrystallize or heal silicon layers amorphized during ion implantation is also considered one type
of Solid Phase Epitaxy. The Impurity segregation and redistribution at the growing crystal-amorphous layer
interface during this process is used to incorporate low-solubility dopants in metals and Silicon.[7]

Molecular-beam epitaxy
In molecular beam epitaxy (MBE), a source material is heated to produce an evaporated beam of particles.
These particles travel through a very high vacuum (10−8 Pa; practically free space) to the substrate, where they
condense. MBE has lower throughput than other forms of epitaxy. This technique is widely used for growing
periodic groups III, IV, and V semiconductor crystals.[8][9]
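The quoted pressure of 10^-8 Pa can be put in perspective with the standard kinetic-theory mean free path, λ = k_B T / (√2 π d² p). The sketch below uses an assumed molecular diameter (roughly that of N2) purely as an illustration:

```python
# Hedged sketch: kinetic-theory mean free path of residual gas molecules,
# lambda = k_B * T / (sqrt(2) * pi * d**2 * p).
# The molecular diameter is an assumed, typical value (~0.37 nm, roughly N2).
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def mean_free_path_m(pressure_pa, temperature_k=300.0, molecule_diameter_m=3.7e-10):
    return K_B * temperature_k / (math.sqrt(2) * math.pi * molecule_diameter_m**2 * pressure_pa)

for p in (1e5, 1e-1, 1e-8):   # atmospheric, rough vacuum, MBE-grade ultra-high vacuum
    print(f"p = {p:.0e} Pa  ->  mean free path ~ {mean_free_path_m(p):.3g} m")
# At 1e-8 Pa the mean free path is of order 1e6 m, far larger than any chamber,
# which is why the beam travels essentially collision-free to the substrate.
```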

Doping
An epitaxial layer can be doped during deposition by adding impurities to the source gas, such as arsine,
phosphine or diborane. The concentration of impurity in the gas phase determines its concentration in the
deposited film. As in chemical vapor deposition (CVD), impurities change the deposition rate. Additionally, the
high temperatures at which CVD is performed may allow dopants to diffuse into the growing layer from other
layers in the wafer («out-diffusion»). Also, dopants in the source gas, liberated by evaporation or wet etching of
the surface, may diffuse into the epitaxial layer («autodoping»). The dopant profiles of underlying layers
change as well, however not as significantly.

Minerals
In mineralogy epitaxy is the overgrowth of one mineral on another in an orderly way, such that certain crystal
directions of the two minerals are aligned. This occurs when some planes in the lattices of the overgrowth and
the substrate have similar spacings between atoms.[10]

If the crystals of both minerals are well formed so that the directions of the crystallographic axes are clear then
the epitaxic relationship can be deduced just by a visual inspection.[10]

Sometimes many separate crystals form the overgrowth on a single substrate, and then if there is epitaxy all the
overgrowth crystals will have a similar orientation. The reverse, however, is not necessarily true. If the
overgrowth crystals have a similar orientation there is probably an epitaxic relationship, but it is not certain.[10]

Some authors[11] consider that overgrowths of a second generation of the same mineral species should also be
considered as epitaxy, and this is common terminology for semiconductor scientists who induce epitaxic
growth of a film with a different doping level on a semiconductor substrate of the same material. For naturally
produced minerals, however, the International Mineralogical Association (IMA) definition requires that the two
minerals be of different species.[12]
Another man-made application of epitaxy is the making of artificial snow using silver iodide, which is possible because hexagonal silver iodide and ice have similar cell dimensions.[11]

Isomorphic minerals

Minerals that have the same structure (isomorphic minerals) may have
epitaxic relations. An example is albite NaAlSi3O8 on microcline
KAlSi3O8. Both these minerals are triclinic, with space group 1, and
with similar unit cell parameters, a = 8.16 Å, b = 12.87 Å, c = 7.11 Å, α
= 93.45°, β = 116.4°, γ = 90.28° for albite and a = 8.5784 Å, b = 12.96
Å, c = 7.2112 Å, α = 90.3°, β = 116.05°, γ = 89° for microcline.
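As a quick numerical aside (a sketch only, reusing the cell parameters quoted above), the relative differences between the corresponding cell edges of the two feldspars can be computed directly; they are within a few percent, which is what makes the epitaxic overgrowth plausible:

```python
# Hedged sketch: percentage difference between the albite and microcline unit-cell
# edges quoted in the text, as a crude measure of how well the two lattices match.
albite     = {"a": 8.16,   "b": 12.87, "c": 7.11}    # cell edges in angstroms (values from the text)
microcline = {"a": 8.5784, "b": 12.96, "c": 7.2112}

for axis in ("a", "b", "c"):
    diff = 100.0 * (microcline[axis] - albite[axis]) / albite[axis]
    print(f"{axis}: {diff:+.1f} % difference")
# Roughly +5 % along a and about 1 % or less along b and c, so certain planes of the
# two feldspars have similar enough atomic spacings for oriented overgrowth.
```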

Polymorphic minerals

Minerals that have the same composition but different structures (polymorphic minerals) may also have epitaxic relations. Examples are pyrite and marcasite, both FeS2, and sphalerite and wurtzite, both ZnS.[10]

[Figure: rutile epitaxial on hematite, nearly 6 cm long. Bahia, Brazil]
Rutile on hematite

Some pairs of minerals that are not related structurally or compositionally may also exhibit epitaxy. A common example is rutile TiO2 on hematite Fe2O3.[10][13] Rutile is tetragonal and hematite is trigonal, but there are directions of similar spacing between the atoms in the (100) plane of rutile (perpendicular to the a axis) and the (001) plane of hematite (perpendicular to the c axis). In epitaxy these directions tend to line up with each other, resulting in the axis of the rutile overgrowth being parallel to the c axis of hematite, and the c axis of rutile being parallel to one of the axes of hematite.[10]

Hematite on magnetite

Another example is hematite Fe3+2O3 on magnetite Fe2+Fe3+2O4. The magnetite structure is based on close packed oxygen anions stacked in an ABC-ABC sequence. In this packing the close-packed layers are parallel to (111) (a plane that symmetrically "cuts off" a corner of a cube). The hematite structure is based on close-packed oxygen anions stacked in an AB-AB sequence, which results in a crystal with hexagonal symmetry.[14]

[Figure: hematite pseudomorph after magnetite, with terraced epitaxial faces. La Rioja, Argentina]

If the cations were small enough to fit into a truly close-packed structure of oxygen anions then the spacing
between the nearest neighbour oxygen sites would be the same for both species. The radius of the oxygen ion,
however, is only 1.36 Å[15] and the Fe cations are big enough to cause some variations. The Fe radii vary from
0.49 Å to 0.92 Å,[16] depending on the charge (2+ or 3+) and the coordination number (4 or 8). Nevertheless,
the O spacings are similar for the two minerals hence hematite can readily grow on the (111) faces of
magnetite, with hematite (001) parallel to magnetite (111).[14]

See also
Atomic layer epitaxy

Epi wafer
Exchange bias
Gallium nitride
Heterojunction
Island growth
Nano-RAM
Quantum cascade laser
Selective area epitaxy
Silicon on sapphire
Single event upset
Topotaxy
VCSEL
Wake Shield Facility
Zhores Ivanovich Alferov

References

1. M. Schreck et al., Appl. Phys. Lett. 78, 192 (2001); doi:10.1063/1.1337648 (https://dx.doi.org/10.1063%2F1.133764
8)
2. Tang, Shujie; Wang, Haomin; Wang, Huishan (2015). "Silane-catalysed fast growth of large single-crystalline
graphene on hexagonal boron nitride". Nature Communications: 6499. doi:10.1038/ncomms7499 (https://doi.org/10.
1038%2Fncomms7499).
3. Chen, Lingxiu; He, Li; Wang, Huishan (2017). "Oriented graphene nanoribbons embedded in hexagonal boron
nitride trenches". Nature Communications: 14703. doi:10.1038/ncomms14703 (https://doi.org/10.1038%2Fncomms1
4703).
4. Waldmann, T. (2011). "Growth of an oligopyridine adlayer on Ag(100) – A scanning tunnelling microscopy study".
Physical Chemistry Chemical Physics. 13: 20724. doi:10.1039/C1CP22546D (https://doi.org/10.1039%2FC1CP2254
6D).
5. Waldmann, T. (2012). "The role of surface defects in large organic molecule adsorption: substrate configuration
effects". Physical Chemistry Chemical Physics. 14: 10726–31. Bibcode:2012PCCP...1410726W (http://adsabs.harvar
d.edu/abs/2012PCCP...1410726W). PMID 22751288 (https://www.ncbi.nlm.nih.gov/pubmed/22751288).
doi:10.1039/C2CP40800G (https://doi.org/10.1039%2FC2CP40800G).
6. Morgan, D. V.; Board, K. (1991). An Introduction To Semiconductor Microtechnology (2nd ed.). Chichester, West
Sussex, England: John Wiley & Sons. p. 23. ISBN 0471924784.
7. J.S. Custer et al., J. Appl. Phys., Vol. 75, No. 6, 15 March 1994
8. A. Y. Cho, "Growth of III\–V semiconductors by molecular beam epitaxy and their properties," Thin Solid Films,
vol. 100, pp. 291-317, 1983.
9. Cheng, K.-Y., "Molecular beam epitaxy technology of III-V compound semiconductors for optoelectronic
applications," Proceedings of the IEEE , vol.85, no.11, pp.1694-1714, Nov 1997 URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=649646&isnumber=14175
10. Rakovan, John (2006) Rocks & Minerals 81:317 to 320
11. White and Richards (2010) Rocks & Minerals 85:173 to 176
12. Acta Crystallographica Section A Crystal Physics, Diffraction, Theoretical and General Crystallography Volume 33,
Part 4 (July 1977)
13. http://www.mineral-forum.com
14. Nesse, William (2000). Introduction to Mineralogy. Oxford University Press. Page 79
15. Klein and Hurlbut (1993) Manual of Mineralogy 21st Edition. Wiley
16. Imperial College Database abulafia.mt.ic.ac.uk (http://abulafia.mt.ic.ac.uk/shannon/ptable.php)

Jaeger, Richard C. (2002). "Film Deposition". Introduction to Microelectronic Fabrication (2nd ed.).
Upper Saddle River: Prentice Hall. ISBN 0-201-44494-1.

External links
epitaxy.net (http://www.epitaxy.net/): a central forum for the epitaxy-communities
Deposition processes (http://www.memsnet.org/mems/processes/deposition.html)
CrystalXE.com (http://www.crystalxe.com/): a specialized software in epitaxy
Retrieved from "https://en.wikipedia.org/w/index.php?title=Epitaxy&oldid=781601548"


Crystal growth
From Wikipedia, the free encyclopedia

A crystal is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. Crystal growth is a major stage of a crystallization process, and consists in the addition of new atoms, ions, or polymer strings into the characteristic arrangement of a crystalline Bravais lattice. The growth typically follows an initial stage of either homogeneous or heterogeneous (surface catalyzed) nucleation, unless a "seed" crystal, purposely added to start the growth, was already present.

The action of crystal growth yields a crystalline solid whose atoms or molecules are typically close packed, with fixed positions in space relative to each other. The crystalline state of matter is characterized by a distinct structural rigidity and very high resistance to deformation (i.e. changes of shape and/or volume). Most crystalline solids have high values both of Young's modulus and of the shear modulus of elasticity. This contrasts with most liquids or fluids, which have a low shear modulus, and typically exhibit the capacity for macroscopic viscous flow.

[Sidebar: Crystallization series. Concepts: Crystallization · Crystal growth · Recrystallization · Seed crystal · Protocrystalline · Single crystal. Methods and technology: Boules · Bridgman–Stockbarger technique · Czochralski process · Fractional crystallization · Fractional freezing · Hydrothermal synthesis · Laser-heated pedestal growth · Crystal bar process. Fundamentals: Nucleation · Crystal · Crystal structure · Solid]

Contents

1 Introduction
2 Discontinuity
3 Nucleation
4 Mechanisms of growth
4.1 Driving force
5 Morphology
5.1 Diffusion-control
6 See also
6.1 Crystal growth simulation
7 References

Introduction
Crystalline solids are typically formed by cooling and solidification from the liquid state. According to the Ehrenfest classification of first-order phase transitions, there is a discontinuous change in volume (and thus a discontinuity in the slope or first derivative with respect to temperature, dV/dT) at the melting point. Within this context, the crystal and melt are distinct phases with an interfacial discontinuity having a surface of tension with a positive surface energy. Thus, a metastable parent phase is always stable with respect to the nucleation of small embryos or droplets from a daughter phase, provided it has a positive surface of tension. Such first-order transitions must proceed by the advancement of an interfacial region whose structure and properties vary discontinuously from the parent phase.[1][2][3][4]

[Figure: quartz is one of the several thermodynamically stable crystalline forms of silica, SiO2]

The process of nucleation and growth generally occurs in two different stages. In the first nucleation stage, a
small nucleus containing the newly forming crystal is created. Nucleation occurs relatively slowly as the initial
crystal components must impinge on each other in the correct orientation and placement for them to adhere and
form the crystal. After crystal nucleation, the second stage of growth rapidly ensues. Crystal growth spreads
outwards from the nucleating site. In this faster process, the elements which form the motif add to the growing
crystal in a prearranged system, the crystal lattice, started in crystal nucleation. As first pointed out by Frank,
perfect crystals would only grow exceedingly slowly. Real crystals grow comparatively rapidly because they
contain dislocations (and other defects), which provide the necessary growth points, thus providing the
necessary catalyst for structural transformation and long-range order formation.[5]

Discontinuity
The conditions of a homogeneous environment are often approximated to but rarely ever realized. Crystal
growth always involves some form of transport of matter or heat (or both). And homogeneous conditions for
the transport process can only exist for spherical, cylindrical, or infinite plane surfaces. A polyhedral crystal
cannot grow (remaining polyhedral) with uniform levels of supersaturation (or supercooling) over its faces. In
general, the supersaturation is greatest at its corners. This refutes the assumption that the growth rate is a
function of orientation and local supersaturation.

Thus, the crystal face must grow as a whole. The growth rate of the entire face is determined by the driving
force (level of supersaturation) at the point of emergence of the predominant point of growth (e.g. a dislocation,
a foreign particle acting as catalyst, or crystal twin). The defect-free habit face can thus resist a finite level of
supersaturation without any growth at all.

Josiah Willard Gibbs was the first to point out that in the growth of a perfect crystal, the first derivative of the
free energy with respect to mass becomes periodically undefinable — at each time that an additional layer on
the crystal face is completed. There is discontinuity in the chemical potential at each such point.

In one sense, the crystal can then be in equilibrium with environments having a range of chemical potentials. In
another sense, it is not in equilibrium. There are available states of lower free energy. But any free energy
barrier must be passed by a fluctuation, or nucleation process, in order to access it. The fundamental
thermodynamic effect of a screw dislocation is to eliminate this discontinuity in the chemical potential, by
making it impossible to ever complete a single crystal face.

Nucleation
Nucleation can be either homogeneous, without the influence of foreign particles, or heterogeneous, with the
influence of foreign particles. Generally, heterogeneous nucleation takes place more quickly since the foreign
particles act as a scaffold for the crystal to grow on, thus eliminating the necessity of creating a new surface and
the incipient surface energy requirements.

Heterogeneous nucleation can take place by several methods. Some of the most typical are small inclusions, or
cuts, in the container the crystal is being grown on. This includes scratches on the sides and bottom of
glassware. A common practice in crystal growing is to add a foreign substance, such as a string or a rock, to the
solution, thereby providing nucleation sites for facilitating crystal growth and reducing the time to fully
crystallize.
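Classical nucleation theory, which is not developed in this text but underlies the statements above, quantifies why heterogeneous nucleation wins: the foreign surface lowers the energy barrier by a wetting factor. A hedged sketch with assumed, purely illustrative values of the interfacial energy, volumetric driving force and contact angle:

```python
# Hedged sketch of classical nucleation theory (standard textbook relations, not taken
# from the text): critical radius and barrier for a spherical nucleus, and the reduction
# of the barrier for heterogeneous nucleation on a substrate with contact angle theta.
import math

def critical_radius(gamma, dg_v):
    """r* = 2*gamma / dg_v, with gamma the interfacial energy [J/m^2]
    and dg_v the bulk free-energy gain per volume [J/m^3] (both assumed inputs)."""
    return 2.0 * gamma / dg_v

def homogeneous_barrier(gamma, dg_v):
    """dG* = 16*pi*gamma**3 / (3*dg_v**2)."""
    return 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2)

def heterogeneous_barrier(gamma, dg_v, theta_deg):
    """Homogeneous barrier scaled by the wetting factor f(theta) <= 1,
    which is why foreign particles, strings and scratches nucleate crystals first."""
    t = math.radians(theta_deg)
    f = (2.0 + math.cos(t)) * (1.0 - math.cos(t))**2 / 4.0
    return homogeneous_barrier(gamma, dg_v) * f

# Illustrative (assumed) numbers: gamma = 0.1 J/m^2, dg_v = 1e8 J/m^3, theta = 60 deg
print(critical_radius(0.1, 1e8))                 # ~2 nm critical nucleus
print(heterogeneous_barrier(0.1, 1e8, 60.0)
      / homogeneous_barrier(0.1, 1e8))           # ~0.16: barrier cut to ~16 %
```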

The number of nucleating sites can also be controlled in this manner. If a brand-new piece of glassware or a
plastic container is used, crystals may not form because the container surface is too smooth to allow
heterogeneous nucleation. On the other hand, a badly scratched container will result in many lines of small
crystals. To achieve a moderate number of medium-sized crystals, a container which has a few scratches works best. Likewise, adding small previously made crystals, or seed crystals, to a crystal-growing project will provide nucleating sites to the solution. The addition of only one seed crystal should result in a larger single crystal.

Some important features during growth are the arrangement, the origin of growth, the interface form (important for the driving force), and the final size. When the origin of growth is only in one direction for all the crystals, it can result in the material becoming very anisotropic (different properties in different directions). The interface form determines the additional free energy for each volume of crystal growth.

Lattice arrangement in metals often takes the structure of body-centered cubic, face-centered cubic, or hexagonal close packed. The final size of the crystal is important for the mechanical properties of materials (for example, in metals it is widely acknowledged that large crystals can stretch further due to the longer deformation path and thus lower internal stresses).

[Figure: silver crystal growing on a ceramic substrate]

Mechanisms of growth
The interface between a crystal and its vapor can be
molecularly sharp at temperatures well below the melting
point. An ideal crystalline surface grows by the spreading
of single layers, or equivalently, by the lateral advance of
the growth steps bounding the layers. For perceptible
growth rates, this mechanism requires a finite driving force
(or degree of supercooling) in order to lower the nucleation
barrier sufficiently for nucleation to occur by means of
thermal fluctuations.[6] In the theory of crystal growth from
the melt, Burton and Cabrera have distinguished between
two major mechanisms:[7][8][9]

Non-uniform lateral growth. The surface advances by the lateral motion of steps which are one interplanar spacing in height (or some integral multiple thereof). An element of surface undergoes no change and does not advance normal to itself except during the passage of a step, and then it advances by the step height. It is useful to consider the step as the transition between two adjacent regions of a surface which are parallel to each other and thus identical in configuration — displaced from each other by an integral number of lattice planes. Note here the distinct possibility of a step in a diffuse surface, even though the step height would be much smaller than the thickness of the diffuse surface.

Uniform normal growth. The surface advances normal to itself without the necessity of a stepwise growth mechanism. This means that in the presence of a sufficient thermodynamic driving force, every element of surface is capable of a continuous change contributing to the advancement of the interface. For a sharp or discontinuous surface, this continuous change may be more or less uniform over large areas of each successive new layer. For a more diffuse surface, a continuous growth mechanism may require change over several successive layers simultaneously.

[Figures: cubic crystals typical of the rock-salt structure; time-lapse of the growth of a citric acid crystal covering an area of 2.0 by 1.5 mm over 7.2 min]

Non-uniform lateral growth is a geometrical motion of steps — as opposed to motion of the entire surface
normal to itself. Alternatively, uniform normal growth is based on the time sequence of an element of surface.
In this mode, there is no motion or change except when a step passes via a continual change. The prediction of
which mechanism will be operative under any set of given conditions is fundamental to the understanding of
crystal growth. Two criteria have been used to make this prediction:

Whether or not the surface is diffuse. A diffuse surface is one in which the change from one phase to
another is continuous, occurring over several atomic planes. This is in contrast to a sharp surface for
which the major change in property (e.g. density or composition) is discontinuous, and is generally
confined to a depth of one interplanar distance.[10][11]
Whether or not the surface is singular. A singular surface is one in which the surface tension as a
function of orientation has a pointed minimum. Growth of singular surfaces is known to require steps, whereas it is generally held that non-singular surfaces can continuously advance normal to themselves.[12]

Driving force

Consider next the necessary requirements for the appearance of lateral growth. It is evident that the lateral
growth mechanism will be found when any area in the surface can reach a metastable equilibrium in the
presence of a driving force. It will then tend to remain in such an equilibrium configuration until the passage of
a step. Afterward, the configuration will be identical except that each part of the step will have advanced by the step height. If the surface cannot reach equilibrium in the presence of a driving force, then it will continue to advance without waiting for the lateral motion of steps.

Thus, Cahn concluded that the distinguishing feature is the ability of the surface to reach an equilibrium state in
the presence of the driving force. He also concluded that for every surface or interface in a crystalline medium,
there exists a critical driving force, which, if exceeded, will enable the surface or interface to advance normal to
itself, and, if not exceeded, will require the lateral growth mechanism.

Thus, for sufficiently large driving forces, the interface can move uniformly without the benefit of either a
heterogeneous nucleation or screw dislocation mechanism. What constitutes a sufficiently large driving force
depends upon the diffuseness of the interface, so that for extremely diffuse interfaces, this critical driving force
will be so small that any measurable driving force will exceed it. Alternatively, for sharp interfaces, the critical
driving force will be very large, and most growth will occur by the lateral step mechanism.

Note that in a typical solidification or crystallization process, the thermodynamic driving force is dictated by
the degree of supercooling.
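To put an order of magnitude on this driving force, it is commonly estimated with the standard approximation ΔG_v ≈ ΔH_f ΔT / T_m, where ΔH_f is the latent heat of fusion per unit volume, T_m the equilibrium melting temperature and ΔT the supercooling. The short Python sketch below evaluates this estimate; the numerical constants are illustrative, generic-metal values and are not taken from this article.

def solidification_driving_force(delta_h_f, t_m, delta_t):
    """Volumetric driving force, Delta G_v ~ Delta H_f * Delta T / T_m, in J/m^3."""
    return delta_h_f * delta_t / t_m

# Illustrative, order-of-magnitude constants for a generic metal (assumed values):
delta_h_f = 2.7e9   # J/m^3, latent heat of fusion per unit volume
t_m = 1700.0        # K, equilibrium melting temperature
for delta_t in (1.0, 10.0, 100.0):
    dg = solidification_driving_force(delta_h_f, t_m, delta_t)
    print(f"supercooling {delta_t:6.1f} K -> driving force {dg:.2e} J/m^3")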

Morphology
It is generally believed that the mechanical and other properties of the crystal are also pertinent to the subject
matter, and that crystal morphology provides the missing link between growth kinetics and physical properties.
The necessary thermodynamic apparatus was provided by Josiah Willard Gibbs' study of heterogeneous
equilibrium. He provided a clear definition of surface energy, by which the concept of surface tension is made
applicable to solids as well as liquids. He also appreciated that an anisotropic surface free energy implied a
non-spherical equilibrium shape, which should be thermodynamically defined as the shape which minimizes
the total surface free energy.[13]

It may be instructional to note that whisker growth provides the link between the mechanical phenomenon of
high strength in whiskers and the various growth mechanisms which are responsible for their fibrous
morphologies. (Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile
strength of any materials known.) Some mechanisms produce defect-free whiskers, while others may have
single screw dislocations along the main axis of growth, producing high-strength whiskers.

The mechanism behind whisker growth is not well understood, but seems to be encouraged by compressive
mechanical stresses, including mechanically induced stresses, stresses induced by diffusion of different
elements, and thermally induced stresses. Metal whiskers differ from metallic dendrites in several respects.
Dendrites are fern-shaped, like the branches of a tree, and grow across the surface of the metal. In contrast,
whiskers are fibrous and project at a right angle to the surface of growth, or substrate.

Silver sulfide whiskers growing out of surface-mount resistors.

Diffusion-control

Very commonly when the supersaturation (or degree of supercooling) is high, and sometimes even when it is
not high, growth kinetics may be diffusion-controlled. Under such conditions, the polyhedral crystal form will
be unstable; it will sprout protrusions at its corners and edges, where the degree of supersaturation is at its
highest level. The tips of these protrusions will clearly be the points of highest supersaturation. It is generally
believed that the protrusion will become longer (and thinner at the tip) until the effect of interfacial free energy
in raising the chemical potential slows the tip growth and maintains a constant value for the tip thickness.

NASA animation of dendrite formation in microgravity.

Manganese dendrites on a limestone bedding plane from Solnhofen, Germany. Scale in mm.

In the subsequent tip-thickening process, there should be a corresponding instability of shape. Minor bumps or
"bulges" should be exaggerated — and develop into rapidly growing side branches. In such an unstable (or
metastable) situation, minor degrees of anisotropy should be sufficient to determine directions of significant
branching and growth. The most appealing aspect of this argument, of course, is that it yields the primary
morphological features of dendritic growth.

See also
Abnormal grain growth
Chvorinov's rule
Cloud condensation nuclei
Crystal structure
Crystallization
Czochralski process
Dendrite (crystal)
Dendrite (metal)
Diana's Tree
Fractional crystallization
Ice nucleus
Laser-heated pedestal growth
Manganese nodule
Micro-pulling-down
Monocrystalline whisker
Protocrystalline
Recrystallization (chemistry)
Seed crystal
Single crystal
Whisker (metallurgy)
Crystal growth simulation
    Kinetic Monte Carlo surface growth method

References
1. Atkins, P.W., Physical Chemistry (W.H. Freeman & Co., New York, 1997) ISBN 0-7167-3465-6
2. Hilliard, J; Cahn, J (1958). "On the nature of the interface between a solid metal and its melt". Acta Metallurgica. 6
(12): 772. doi:10.1016/0001-6160(58)90052-X (https://doi.org/10.1016%2F0001-6160%2858%2990052-X).
3. Cahn, J (1960). "Theory of crystal growth and interface motion in crystalline materials". Acta Metallurgica. 8 (8):
554. doi:10.1016/0001-6160(60)90110-3 (https://doi.org/10.1016%2F0001-6160%2860%2990110-3).
4. Cahn, J; Hillig, W; Sears, G (1964). "The molecular mechanism of solidification". Acta Metallurgica. 12 (12): 1421.
doi:10.1016/0001-6160(64)90130-0 (https://doi.org/10.1016%2F0001-6160%2864%2990130-0).
5. Frank, F. C. (1949). "The influence of dislocations on crystal growth". Discussions of the Faraday Society. 5: 48.
doi:10.1039/DF9490500048 (https://doi.org/10.1039%2FDF9490500048).
6. Volmer, M., "Kinetik der Phasenbildung", T. Steinkopf, Dresden (1939)
7. Burton, W. K.; Cabrera, N. (1949). "Crystal growth and surface structure. Part I". Discussions of the Faraday
Society. 5: 33. doi:10.1039/DF9490500033 (https://doi.org/10.1039%2FDF9490500033).
8. Burton, W. K.; Cabrera, N. (1949). "Crystal growth and surface structure. Part II". Discuss. Faraday Soc. 5: 40–48.
doi:10.1039/DF9490500040 (https://doi.org/10.1039%2FDF9490500040).
9. E.M. Aryslanova, A.V.Alfimov, S.A. Chivilikhin, "Model of porous aluminum oxide growth in the initial stage of
anodization" (http://nanojournal.ifmo.ru/en/articles-2/volume4/4-5/physics/paper01/), Nanosystems: physics,
chemistry, mathematics, October 2013, Volume 4, Issue 5, pp 585
10. Burton, W. K.; Cabrera, N.; Frank, F. C. (1951). "The Growth of Crystals and the Equilibrium Structure of their
Surfaces". Philosophical Transactions of the Royal Society A. 243 (866): 299. Bibcode:1951RSPTA.243..299B (htt
p://adsabs.harvard.edu/abs/1951RSPTA.243..299B). doi:10.1098/rsta.1951.0006 (https://doi.org/10.1098%2Frsta.195
1.0006).
11. Jackson, K.A. (1958) in Growth and Perfection of Crystals, Doremus, R.H., Roberts, B.W. and Turnbull, D. (eds.).
Wiley, New York.
12. Cabrera, N. (1959). "The structure of crystal surfaces". Discussions of the Faraday Society. 28: 16.
doi:10.1039/DF9592800016 (https://doi.org/10.1039%2FDF9592800016).
13. Gibbs, J.W. (1874–1878) On the Equilibrium of Heterogeneous Substances, Collected Works, Longmans, Green &
Co., New York. PDF (http://web.mit.edu/jwk/www/docs/Gibbs1875-1878-Equilibrium_of_Heterogeneous_Substanc
es.pdf), archive.org (https://archive.org/details/Onequilibriumhe00GibbA)

Protein crystallization
From Wikipedia, the free encyclopedia

Protein crystallization is the process of formation of a protein crystal.


While some protein crystals have been observed in nature,[1] protein
crystallization is predominantly used for scientific or industrial
purposes, most notably for study by X-ray crystallography. Like many
other types of molecules, proteins can be prompted to form crystals
when the solution in which they are dissolved becomes supersaturated.
Under these conditions, individual protein molecules can pack in a
repeating array, held together by noncovalent interactions.[2] These
crystals can then be used in structural biology to study the molecular
structure of the protein, or for various industrial or biotechnological
purposes.

Proteins are biological macromolecules and function in an aqueous


environment, so protein crystallization is predominantly carried out in
water. Protein crystallization is generally considered challenging due to the restrictions of the aqueous
environment, difficulties in obtaining high-quality protein samples, as well as sensitivity of protein samples to
temperature, pH, ionic strength, and other factors. Proteins vary greatly in their physicochemical characteristics,
and so crystallization of a particular protein is rarely predictable. Determination of appropriate crystallization
conditions for a given protein often requires empirical testing of many conditions before a successful
crystallization condition is found.

Crystals of proteins grown on the U.S. Space Shuttle or Russian Space Station, Mir.

Contents
1 Development of protein crystallization
2 Principles of protein crystallization
2.1 Crystallization conditions
3 Methods of protein crystallization
3.1 Vapor diffusion
3.2 Microbatch
3.3 Microdialysis
3.4 Free-interface diffusion
4 Specialized protein crystallization techniques
5 Technologies that assist with protein crystallization
5.1 High throughput crystallization screening
5.2 Protein engineering
6 Alternatives
7 Applications of protein crystallization
8 See also
9 References
10 External links

Development of protein crystallization


Crystallization of protein molecules has been known for over 150 years.[3]


In 1934, John Desmond Bernal and his student Dorothy Hodgkin discovered that protein crystals surrounded by
their mother liquor gave better diffraction patterns than dried crystals. Using pepsin, they were the first to
discern the diffraction pattern of a wet, globular protein. Prior to Bernal and Hodgkin, protein crystallography
had only been performed in dry conditions with inconsistent and unreliable results.[4]

In 1958, the structure of myoglobin, determined by X-ray crystallography, was first reported by John
Kendrew.[5] Kendrew shared the 1962 Nobel Prize in Chemistry with Max Perutz for this discovery.

Principles of protein crystallization


The solubility of protein molecules is subject to many factors,
especially the interaction with other compounds in solution. Most
proteins are soluble at physiological conditions, but as the concentration
of solutes rises, the protein becomes less soluble, driving it to crystallize
or precipitate. This phenomenon is known as "salting out".
Counterintuitively, at very low solute concentrations, proteins also
become less soluble, because some solutes are necessary for the protein
to remain in solution. This converse phenomenon is known as "salting
in". Most protein crystallization techniques form crystals by salting out
the protein into crystals, although some experimental setups can
produce crystals using the salting in effect.

The goal of crystallization is to produce a well-ordered crystal that is lacking in contaminants while still large
enough to provide a diffraction pattern when exposed to X-rays. This diffraction pattern can then be analyzed to
discern the protein's tertiary structure. Protein crystallization is inherently difficult because of the fragile nature
of protein crystals. Proteins have irregularly shaped surfaces, which results in the formation of large channels
within any protein crystal. Therefore, the noncovalent bonds that hold together the lattice must often be formed
through several layers of solvent molecules.[2]

Lysozyme crystals observed through polarizing filter.

In addition to overcoming the inherent fragility of protein crystals, a number of environmental factors must also
be overcome. Due to the molecular variations between individual proteins, conditions unique to each protein
must be obtained for a successful crystallization. Therefore, attempting to crystallize a protein without a proven
protocol can be very challenging and time consuming.

Crystallization conditions

Many factors influence the likelihood of crystallization of a protein sample. Some of these factors include
protein purity, pH, concentration of protein, temperature, precipitants and additives. The more homogeneous
the protein solution is, the more likely that it will crystallize. Typically, protein samples above 97% purity are
considered suitable for crystallization, although high purity is neither necessary nor sufficient for
crystallization. Solution pH can be very important and in extreme cases can result in different packing
orientations. Buffers, such as Tris-HCl, are often necessary for the maintenance of a particular pH.[6]
Precipitants, such as ammonium sulfate or polyethylene glycol, are usually used to promote the formation of
protein crystals.[2]

Methods of protein crystallization


Vapor diffusion


Vapor diffusion is the most commonly employed method of protein


crystallization. In this method, a droplet containing purified protein, buffer, and precipitant is allowed to
equilibrate with a larger reservoir containing similar buffers and precipitants in higher concentrations. Initially,
the droplet of protein solution contains comparatively low precipitant and protein concentrations, but as the
drop and reservoir equilibrate, the precipitant and protein concentrations increase in the drop. If the appropriate
crystallization solutions are used for a given protein, crystal growth will occur in the drop.[2][7] This method is
used because it allows gentle and gradual changes in the concentrations of protein and precipitant, which aid in
the growth of large and well-ordered crystals.

Vapor diffusion can be performed in either hanging-drop or sitting-drop


format. Hanging-drop apparatus involve a drop of protein solution
placed on an inverted cover slip, which is then suspended above the
reservoir. Sitting-drop crystallization apparatus place the drop on a
pedestal that is separated from the reservoir. Both of these methods
require sealing of the environment so that equilibration between the
drop and reservoir can occur.[2][8]
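To make the equilibration concrete, the Python sketch below estimates the end point of an idealized hanging- or sitting-drop experiment: water is assumed to leave the drop until its precipitant concentration matches the reservoir, so protein and precipitant are concentrated by the same factor. The 1:1 mixing ratio, the effectively infinite reservoir and the water-only transfer are simplifying assumptions for illustration, not statements from the text.

def equilibrated_drop(protein_conc, protein_vol, reservoir_conc, reservoir_vol_in_drop):
    """Idealized end point of a vapor-diffusion experiment.

    The drop is mixed from protein solution and reservoir solution; water then
    leaves the drop until its precipitant concentration equals that of the
    (effectively infinite) reservoir, concentrating protein and precipitant by
    the same factor.
    """
    initial_vol = protein_vol + reservoir_vol_in_drop
    initial_precipitant = reservoir_conc * reservoir_vol_in_drop / initial_vol
    initial_protein = protein_conc * protein_vol / initial_vol

    factor = reservoir_conc / initial_precipitant   # concentration factor at equilibrium
    return {
        "final_volume": initial_vol / factor,
        "final_protein_conc": initial_protein * factor,
        "final_precipitant_conc": reservoir_conc,
    }

# Example: 1 uL of 10 mg/mL protein mixed 1:1 with a 2.0 M precipitant reservoir.
print(equilibrated_drop(protein_conc=10.0, protein_vol=1.0,
                        reservoir_conc=2.0, reservoir_vol_in_drop=1.0))

For a 1:1 drop, this model predicts that both protein and precipitant concentrations roughly double relative to the freshly mixed drop as it shrinks to about half its initial volume.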

Microbatch

Microbatch is a type of batch crystallization method in which all the components are directly combined into a
single, supersaturated protein solution, which is then left undisturbed. The technique can be miniaturized by
immersing protein droplets as small as 1 µl into an inert oil. The oil prevents evaporation of the sample. This is
the so-called 'microbatch' method. Besides the very limited amounts of sample needed, the latter method has
the further advantage that the samples are protected from airborne contamination, as they are never exposed to
the air during the experiment.

Three methods of preparing crystals. A: Hanging drop. B: Sitting drop. C: Microdialysis.

Microdialysis

Microdialysis takes advantage of a semi-permeable membrane, across which small molecules and ions can
pass, while proteins and large polymers cannot cross. By establishing a gradient of solute concentration across
the membrane and allowing the system to progress toward equilibrium, the system can slowly move toward
supersaturation, at which point protein crystals may form.

Microdialysis can produce crystals by salting out, employing high concentrations of salt or other small
membrane-permeable compounds that decrease the solubility of the protein. Very occasionally, some proteins
can be crystallized by dialysis salting in, by dialyzing against pure water, removing solutes, driving self-
association and crystallization.

Free-interface diffusion

This technique brings together protein and precipitation solutions without premixing them; instead, they are
injected through either side of a channel and allowed to equilibrate through diffusion. The two solutions come
into contact in a reagent chamber, both at their maximum concentrations, initiating spontaneous nucleation. As
the system comes into equilibrium, the level of supersaturation decreases, favouring crystal growth.[9]


Specialized protein crystallization techniques


Some proteins present unique challenges for crystallization. Membrane proteins frequently require the addition
of a detergent for isolation and crystallization, and tend to form "very small, weakly (x-ray) diffracting,
radiation-sensitive crystals".[10] Proteins that form fibres need to be stabilized in a monomeric form. Small
proteins can have poor solubility in water and require specialized crystallization techniques.[11]

Technologies that assist with protein crystallization


High throughput crystallization screening

High-throughput methods exist to help streamline the large number of experiments required to explore the
various conditions that are necessary for successful crystal growth. Numerous commercial kits are available
that supply preassembled screening reagents designed to promote successful crystallization. Using such a kit, a
scientist avoids the effort of preparing the individual screening solutions and devising appropriate
crystallization conditions from scratch.

Liquid-handling robots can be used to set up and automate large numbers of crystallization experiments
simultaneously. What would otherwise be a slow and potentially error-prone process carried out by a human can
be accomplished efficiently and accurately with an automated system. Robotic crystallization systems use the
same components described above, but carry out each step of the procedure quickly and with a large number of
replicates. Each experiment utilizes tiny amounts of solution, and the advantage of the smaller size is two-fold:
the smaller sample sizes not only cut down on the expenditure of purified protein, but the smaller amounts of
solution lead to quicker crystallizations. Each experiment is monitored by a camera which detects crystal growth.[7]
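As a toy illustration of how such screens are laid out, the Python sketch below enumerates a simple two-dimensional grid of conditions (pH versus precipitant concentration) and maps it onto the wells of a 96-well plate, the kind of plan a liquid-handling robot would then execute. The specific pH values, precipitant and plate geometry are arbitrary assumptions made for the example.

from itertools import product
import string

# A toy coarse screen: 8 pH values x 12 precipitant concentrations = 96 conditions.
ph_values = [4.5 + 0.5 * i for i in range(8)]     # pH 4.5 ... 8.0, one per plate row A-H
peg_percent = [5 + 2 * j for j in range(12)]      # 5 ... 27 % (w/v) precipitant, columns 1-12

plate = {}
for (row, ph), (col, peg) in product(zip(string.ascii_uppercase[:8], ph_values),
                                     zip(range(1, 13), peg_percent)):
    plate[f"{row}{col}"] = {"pH": ph, "precipitant_percent_w_v": peg}

# A liquid-handling robot would dispense reservoir solutions well by well from this plan.
print(len(plate), "conditions")
print("A1  ->", plate["A1"])
print("H12 ->", plate["H12"])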

Protein engineering

Techniques of molecular biology, especially molecular cloning, recombinant protein expression, and site-
directed mutagenesis can be employed to engineer and produce proteins with increased propensity to
crystallize. Frequently, problematic cysteine residues can be replaced by alanine to avoid disulfide-mediated
aggregation, and residues such as lysine, glutamate, and glutamine can be changed to alanine to reduce intrinsic
protein flexibility, which can hinder crystallization.
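As a small illustration of how such mutation targets can be short-listed computationally, the Python sketch below scans a protein sequence for short runs of the high-entropy residues mentioned above (lysine, glutamate, glutamine) that could be candidates for mutation to alanine. The window length, the residue set and the example sequence are illustrative choices, not a validated design protocol.

HIGH_ENTROPY = set("KEQ")   # lysine, glutamate, glutamine (one-letter codes)

def candidate_patches(sequence, window=3):
    """Return (1-based position, fragment) pairs where `window` consecutive
    residues are all in HIGH_ENTROPY, as rough candidates for mutation to alanine."""
    hits = []
    for i in range(len(sequence) - window + 1):
        fragment = sequence[i:i + window]
        if all(residue in HIGH_ENTROPY for residue in fragment):
            hits.append((i + 1, fragment))
    return hits

# Toy example sequence (not a real protein):
print(candidate_patches("MKTAYEKEQLLKGFKEKQAADE"))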

Alternatives
Some proteins do not fold properly outside their native environment (e.g. proteins that are part of the cell
membrane, such as ion channels and G-protein coupled receptors); others have structures that are altered by
interacting proteins or that switch between different states. All of these conditions prevent crystal growth or
give crystal structures which do not represent the natural structure of the protein. In order to determine the 3D
structure of proteins which are hard to crystallize, researchers may use nuclear magnetic resonance, also known
as protein NMR, which is best suited to small proteins, or transmission electron microscopy, which is best
suited to large proteins or protein complexes.

Applications of protein crystallization


Protein crystallization is required for structural analysis by X-ray diffraction, neutron diffraction, and some
techniques of electron microscopy. These techniques can be used to determine the molecular structure of the
protein. For the better part of the 20th century, progress in determining protein structure was slow due to the
difficulty inherent in crystallizing proteins. When the Protein Data Bank was founded in 1971, it contained only
seven structures.[12] Since then, the pace at which protein structures are being discovered has grown
exponentially, with the PDB surpassing 20,000 structures in 2003, and containing over 100,000 as of 2014 (htt
p://www.rcsb.org/pdb/statistics/contentGrowthChart.do?content=total&seqid=100).

Crystallization of proteins can also be useful in the formulation of proteins for pharmaceutical purposes.[13]

See also
Crystal engineering
Crystal growth
Crystal optics
Crystal system
Crystallization processes
Crystallographic database
Crystallographic group
Diffraction
Electron crystallography
Electron diffraction
Neutron crystallography
Neutron diffraction
X-ray diffraction
References
1. Doye, J. P. K., and Poon, W. C. K. (2006) Current Opinion in Colloid & Interface Science 11, 40–46
2. Rhodes, G. (2006) Crystallography Made Crystal Clear, Third Edition: A Guide for Users of Macromolecular
Models, 3rd Ed., Academic Press
3. McPherson, A. (1991) Journal of Crystal Growth 110, 1–10
4. Tulinksi, A (1999). "The Protein Structure Project, 1950-1959: First Concerted Effort Of a Protein Structure
Determination In the U.S". The Rigaku Journal. 16.
5. Kendrew, J. C., Bodo, G., Dintzis, H. M., Parrish, R. G., Wyckoff, H., and Phillips, D. C. (1958)
Nature 181, 662–666
6. Branden C, Tooze J (1999). Introduction to Protein Structure (https://books.google.com/books?id=MERrAAAAMA
AJ&cd=1&dq=Introduction+to+Protein+Structure+branden&q=#search_anchor). New York: Garland. pp. 374–376.
ISBN 9780815303442.
7. "The Crystal Robot" (http://www.eurekalert.org/features/doe/2000-12/drnl-tcr061902.php). December 2000.
Retrieved 2003-02-18.
8. McRee, D (1993). Practical Protein Crystallography (https://books.google.com/books?id=FfxqAAAAMAAJ&dq=P
ractical+Protein+Crystallography). San Diego: Academic Press. pp. 1–23. ISBN 978-0-12-486052-0.
9. Rupp, Bernhard (20 October 2009). Biomolecular Crystallography: Principles, Practice, and Application to
Structural Biology (https://books.google.ch/books?id=gTAWBAAAQBAJ&dq=free+interface+diffusion+crystallizat
ion&hl=fr&source=gbs_navlinks_s). Garland Science. p. 800. Retrieved 28 December 2016.
10. Liszewski, Kathy (1 October 2015). "Dissecting the Structure of Membrane Proteins" (http://www.genengnews.com/
gen-articles/dissecting-the-structure-of-membrane-proteins/5583/). Genetic Engineering & Biotechnology News. 35
(17): 14.(subscription required)
11. Teeter MM, Hendrickson WA (1979). "Highly ordered crystals of the plant seed protein crambin". J Mol Biol. 127
(2): 219–23. PMID 430565 (https://www.ncbi.nlm.nih.gov/pubmed/430565). doi:10.1016/0022-2836(79)90242-0 (ht
tps://doi.org/10.1016%2F0022-2836%2879%2990242-0).
12. Berman H, Westbrook J, Feng Z, Gilliland G, Bhat T, Weissig H, Shindyalov I, Bourne P (2000). "The Protein Data
Bank" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC102472). Nucleic Acids Research. 28 (1): 235–242.
PMC 102472 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC102472)  . PMID 10592235 (https://www.ncbi.nlm.n
ih.gov/pubmed/10592235). doi:10.1093/nar/28.1.235 (https://doi.org/10.1093%2Fnar%2F28.1.235).
13. Jen, A., and Merkle, H. P. (2001) Diamonds in the Rough: Protein Crystals from a Formulation Perspective Pharm
Res 18, 1483–1488

External links
"Protein Crystallization and Dumb Luck". An essay on the haphazard side of protein crystallization by
Bob Cudney: http://www.rigaku.com/downloads/journal/Vol16.2.1999/cudney.pdf
Owens, Ray. "Protein Crystals" (http://www.backstagescience.com/videos/protein_factory.html).
Backstage Science. Brady Haran.
This page was reproduced (with modifications) with expressed consent from Dr. A. Malcolm Campbell.
As of 2010, the original page can be found at
http://www.bio.davidson.edu/Courses/Molbio/MolStudents/spring2003/Kogoy/protein.html

Protein Data Bank


From Wikipedia, the free encyclopedia

The Protein Data Bank (PDB) is a crystallographic database for the three-dimensional structural data of large
biological molecules, such as proteins and nucleic acids. The data, typically obtained by X-ray crystallography,
NMR spectroscopy, or, increasingly, cryo-electron microscopy, and submitted by biologists and biochemists
from around the world, are freely accessible on the Internet via the websites of its member organisations
(PDBe,[1] PDBj,[2] and RCSB[3]). The PDB is overseen by an organization called the Worldwide Protein Data
Bank, wwPDB.

The PDB is a key resource in areas of structural biology, such as structural genomics. Most major scientific
journals, and some funding agencies, now require scientists to submit their structure data to the PDB. Many
other databases use protein structures deposited in the PDB. For example, SCOP and CATH classify protein
structures, while PDBsum provides a graphic overview of PDB entries using information from other sources,
such as Gene ontology.[4][5]

Protein Data Bank
Description: protein structure data (X-ray crystallography, NMR structure determination)
Data format: PDB
Website: www.wwpdb.org, www.pdbe.org, www.rcsb.org/pdb, www.pdbj.org

Contents
1 History
2 Contents
3 File format
4 Viewing the data
5 See also
6 References
7 External links

History
Two forces converged to initiate the PDB: 1) a small but growing collection of sets of protein structure data
determined by X-ray diffraction; and 2) the newly available (1968) molecular graphics display, the Brookhaven
RAster Display (BRAD), to visualize these protein structures in 3-D. In 1969, with the sponsorship of Walter
Hamilton at the Brookhaven National Laboratory, Edgar Meyer (Texas A&M University) began to write
software to store atomic coordinate files in a common format to make them available for geometric and
graphical evaluation. By 1971, one of Meyer's programs, SEARCH, enabled researchers to remotely access
information from the database to study protein structures offline.[6] SEARCH was instrumental in enabling
networking, thus marking the functional beginning of the PDB.

Upon Hamilton's death in 1973, Tom Koetzle took over direction of the PDB for the subsequent 20 years. In
January 1994, Joel Sussman of Israel's Weizmann Institute of Science was appointed head of the PDB. In
October 1998,[7] the PDB was transferred to the Research Collaboratory for Structural Bioinformatics
(RCSB);[8] the transfer was completed in June 1999. The new director was Helen M. Berman of Rutgers
University (one of the member institutions of the RCSB).[9] In 2003, with the formation of the wwPDB, the


PDB became an international organization. The founding members are PDBe (Europe),[1] RCSB (USA), and
PDBj (Japan).[2] The BMRB[10] joined in 2006. Each of the four members of wwPDB can act as deposition,
data processing and distribution centers for PDB data. The data processing refers to the fact that wwPDB staff
review and annotate each submitted entry.[11] The data are then automatically checked for plausibility (the
source code[12] for this validation software has been made available to the public at no charge).

Contents
The PDB database is updated weekly (UTC+0 Wednesday). Likewise, the PDB holdings list[13] is also updated
weekly. As of 14 March 2017, the breakdown of current holdings is as follows:

Experimental Method | Proteins | Nucleic Acids | Protein/Nucleic Acid complexes | Other | Total
X-ray diffraction   | 106595   | 1820          | 5471                           | 4     | 113890
NMR                 | 10296    | 1190          | 241                            | 8     | 11735
Electron microscopy | 1021     | 30            | 367                            | 0     | 1418
Hybrid              | 99       | 3             | 2                              | 1     | 105
Other               | 181      | 4             | 6                              | 13    | 204
Total:              | 118192   | 3047          | 6087                           | 26    | 127352

103,514 structures in the PDB have a structure factor file.
9,057 structures have an NMR restraint file.
2,826 structures in the PDB have a chemical shifts file.
1,403 structures in the PDB have a 3DEM map file deposited in EM Data Bank.

Examples of protein structures from the PDB (created with UCSF Chimera).

These data show that most structures are determined by X-ray diffraction, but about 10% of structures are now
determined by protein NMR. When using X-ray diffraction, approximations of the coordinates of the atoms of
the protein are obtained, whereas estimations of the distances between pairs of atoms of the protein are found
through NMR experiments. Therefore, the final conformation of the protein is obtained, in the latter case, by
solving a distance geometry problem. A few proteins are determined by cryo-electron microscopy.

Rate of Protein Structure Determination by Method and Year.
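As a minimal illustration of the distance geometry step mentioned above, the Python sketch below recovers Cartesian coordinates (up to rotation and reflection) from a complete matrix of exact pairwise distances using classical multidimensional scaling. Real NMR structure determination works with incomplete, bounded distance restraints and is far more involved; this is only a toy version of the underlying mathematical problem.

import numpy as np

def coordinates_from_distances(distances):
    """Classical multidimensional scaling: recover 3-D coordinates (up to
    rotation and reflection) from a complete matrix of pairwise distances."""
    d = np.asarray(distances, dtype=float)
    n = d.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    gram = -0.5 * centering @ (d ** 2) @ centering     # Gram matrix of centered coordinates
    eigvals, eigvecs = np.linalg.eigh(gram)
    top = np.argsort(eigvals)[::-1][:3]                # keep the three largest modes
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

# Four points with all pairwise distances equal to 1 (a regular tetrahedron):
distances = np.ones((4, 4)) - np.eye(4)
coords = coordinates_from_distances(distances)
print(round(float(np.linalg.norm(coords[0] - coords[1])), 3))   # ~1.0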

The significance of the structure factor files, mentioned above, is that, for PDB structures determined by X-ray
diffraction that have a structure file, the electron density map may be viewed. The data of such structures is
stored on the "electron density server".[14][15]


In the past, the number of structures in the PDB has grown at an approximately exponential rate, passing the
100 registered structures milestone in 1982, the 1,000 in 1993, the 10,000 in 1999, and the 100,000 in
2014.[16][17] However, since 2007, the rate of accumulation of new protein structures appears to have plateaued.

File format
The file format initially used by the PDB was called the PDB file format. This original format was restricted by
the width of computer punch cards to 80 characters per line. Around 1996, the "macromolecular
Crystallographic Information file" format, mmCIF, which is an extension of the CIF format, started to be phased
in. mmCIF is now the master format for the PDB archive.[18] An XML version of this format, called PDBML,
was described in 2005.[19] The structure files can be downloaded in any of these three formats. In fact,
individual files are easily downloaded into graphics packages using web addresses:

For PDB format files, use, e.g., http://www.pdb.org/pdb/files/4hhb.pdb.gz or


http://pdbe.org/download/4hhb
For PDBML (XML) files, use, e.g., http://www.pdb.org/pdb/files/4hhb.xml.gz or
http://pdbe.org/pdbml/4hhb

The "4hhb" is the PDB identifier. Each structure published in PDB receives a four-character alphanumeric
identifier, its PDB ID. (This cannot be used as an identifier for biomolecules, because often several structures
for the same molecule—in different environments or conformations—are contained in PDB with different PDB
IDs.)
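As an illustration of how these flat files can be read programmatically, the Python sketch below downloads the PDB-format file for entry 4hhb using the URL pattern quoted above and counts ATOM records per chain, relying on the fixed column layout of the legacy PDB format. The choice of entry and the lack of error handling are illustrative simplifications.

import gzip
import urllib.request
from collections import Counter

# Download the gzipped PDB-format file for entry 4hhb, using the URL pattern quoted above.
url = "http://www.pdb.org/pdb/files/4hhb.pdb.gz"
with urllib.request.urlopen(url) as response:
    text = gzip.decompress(response.read()).decode("ascii", errors="replace")

# The legacy PDB format is column-oriented: the record name occupies columns 1-6
# and the chain identifier of an ATOM record sits in column 22 (0-based index 21).
atoms_per_chain = Counter(line[21] for line in text.splitlines() if line.startswith("ATOM"))

for chain, count in sorted(atoms_per_chain.items()):
    print(f"chain {chain}: {count} atoms")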

Viewing the data


The structure files may be viewed using one of several free and open source computer programs, including
Jmol, Pymol, VMD, and Rasmol. Other non-free, shareware programs include ICM-Browser,[20] MDL Chime,
UCSF Chimera, Swiss-PDB Viewer,[21] StarBiochem[22] (a Java-based interactive molecular viewer with
integrated search of protein databank), Sirius, and VisProt3DS[23] (a tool for Protein Visualization in 3D
stereoscopic view in anaglyth and other modes), and Discovery Studio. The RCSB PDB website contains an
extensive list of both free and commercial molecule visualization programs and web browser plugins.

See also
Crystallographic database
Protein structure
Protein structure prediction
Protein structure database
PDBREPORT lists all anomalies (also errors) in PDB structures
PDBsum — extracts data from other databases about PDB structures
Proteopedia — a collaborative 3D encyclopedia of proteins and other molecules

References
1. PDBe Protein Data Bank in Europe (http://www.pdbe.org/)
2. Welcome to PDBj - Home (http://www.pdbj.org/)
3. http://www.rcsb.org/
4. Berman, H. M. (January 2008). "The Protein Data Bank: a historical perspective" (http://journals.iucr.org/a/issues/20
08/01/00/sc5004/sc5004.pdf) (PDF). Acta Crystallographica Section A. A64 (1): 88–95. PMID 18156675 (https://ww
w.ncbi.nlm.nih.gov/pubmed/18156675). doi:10.1107/S0108767307035623 (https://doi.org/10.1107%2FS010876730
7035623).


5. Laskowski RA, Hutchinson EG, Michie AD, Wallace AC, Jones ML, Thornton JM (December 1997). "PDBsum: a
Web-based database of summaries and analyses of all PDB structures". Trends Biochem. Sci. 22 (12): 488–90.
PMID 9433130 (https://www.ncbi.nlm.nih.gov/pubmed/9433130). doi:10.1016/S0968-0004(97)01140-7 (https://doi.
org/10.1016%2FS0968-0004%2897%2901140-7).
6. Meyer EF (1997). "The first years of the Protein Data Bank" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC21437
43). Protein Science. Cambridge University Press. 6 (7): 1591–1597. PMC 2143743 (https://www.ncbi.nlm.nih.gov/p
mc/articles/PMC2143743)  . PMID 9232661 (https://www.ncbi.nlm.nih.gov/pubmed/9232661).
doi:10.1002/pro.5560060724 (https://doi.org/10.1002%2Fpro.5560060724).
7. Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE (January 2000).
"The Protein Data Bank" (http://nar.oxfordjournals.org/cgi/content/full/28/1/235). Nucleic Acids Res. 28 (1): 235–
242. PMC 102472 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC102472)  . PMID 10592235 (https://www.ncbi.
nlm.nih.gov/pubmed/10592235). doi:10.1093/nar/28.1.235 (https://doi.org/10.1093%2Fnar%2F28.1.235).
8. RCSB | Research Collaboratory for Structural Bioinformatics (http://home.rcsb.org/)
9. "RCSB PDB Newsletter Archive" (http://www.rcsb.org/pdb/static.do?p=general_information/news_publications/new
sletters/newsletter.html). RCSB Protein Data Bank.
10. BMRB - Biological Magnetic Resonance Bank (http://www.bmrb.wisc.edu)
11. Curry E, Freitas A, O'Riáin S (2010). "The Role of Community-Driven Data Curation for Enterprises". In D. Wood.
Linking Enterprise Data (https://books.google.com/books?id=DsMrnk9-4NsC&lpg=PA25&pg=PA25#v=onepage&q
&f=false). Boston, MA: Springer US. pp. 25–47. ISBN 978-1-441-97664-2.
12. PDB Validation Suite (http://sw-tools.pdb.org/apps/VAL/index.html)
13. "PDB Current Holdings Breakdown" (http://www.rcsb.org/pdb/statistics/holdings.do). RCSB.
14. "The Uppsala Electron Density Server" (http://eds.bmc.uu.se/eds/index.html). Uppsala University. Retrieved
2013-04-06.
15. Kleywegt GJ, Harris MR, Zou JY, Taylor TC, Wählby A, Jones TA (Dec 2004). "The Uppsala Electron-Density
Server". Acta Crystallogr D. 60 (Pt 12 Pt 1): 2240–2249. PMID 15572777 (https://www.ncbi.nlm.nih.gov/pubmed/1
5572777). doi:10.1107/S0907444904013253 (https://doi.org/10.1107%2FS0907444904013253).
16. Anon (2014). "Hard data: It has been no small feat for the Protein Data Bank to stay relevant for 100,000 structures".
Nature. 509 (7500): 260. PMID 24834514 (https://www.ncbi.nlm.nih.gov/pubmed/24834514). doi:10.1038/509260a
(https://doi.org/10.1038%2F509260a).
17. "Content Growth Report" (http://www.rcsb.org/pdb/statistics/contentGrowthChart.do?content=total&seqid=100).
RCSB PDB. Retrieved 2013-04-06.
18. http://wwpdb.org/news/news?year=2013#22-May-2013
19. Westbrook J, Ito N, Nakamura H, Henrick K, Berman HM (April 2005). "PDBML: the representation of archival
macromolecular structure data in XML" (http://bioinformatics.oxfordjournals.org/cgi/reprint/21/7/988.pdf) (PDF).
Bioinformatics. 21 (7): 988–992. PMID 15509603 (https://www.ncbi.nlm.nih.gov/pubmed/15509603).
doi:10.1093/bioinformatics/bti082 (https://doi.org/10.1093%2Fbioinformatics%2Fbti082).
20. "ICM-Browser" (http://www.molsoft.com/icm_browser.html). Molsoft L.L.C. Retrieved 2013-04-06.
21. "Swiss PDB Viewer" (http://www.expasy.ch/spdbv/mainpage.html). Swiss Institute of Bioinformatics. Retrieved
2013-04-06.
22. STAR: Biochem - Home (http://web.mit.edu/star/biochem)
23. "VisProt3DS" (http://www.molsystems.com/vp3ds.html). Molecular Systems Ltd. Retrieved 2013-04-06.

External links
The Worldwide Protein Data Bank (wwPDB) (http://www.wwpdb.org/) — parent site to regional hosts
(below)
RCSB Protein Data Bank (http://www.rcsb.org) (USA)
PDBe (http://www.pdbe.org/) (Europe)
PDBj (http://www.pdbj.org/) (Japan)
BMRB, Biological Magnetic Resonance Data Bank (http://www.bmrb.wisc.edu/) (USA)
wwPDB Documentation (http://www.wwpdb.org/docs.html) — documentation on both the PDB and
PDBML file formats
Looking at Structures (http://www.rcsb.org/pdb/static.do?p=education_discussion/Looking-at-Structures/
intro.html) — The RCSB's introduction to crystallography
PDBsum Home Page (http://www.ebi.ac.uk/thornton-srv/databases/pdbsum/) — Extracts data from other
databases about PDB structures.
Nucleic Acid Database, NDB (http://ndbserver.rutgers.edu/index.html) — a PDB mirror especially for
searching for nucleic acids
Introductory PDB tutorial sponsored by PDB (http://www.openhelix.com/pdb)

PDBe: Quick Tour on EBI Train OnLine (http://www.ebi.ac.uk/training/online/course/pdbe-quick-tour)

Density functional theory


From Wikipedia, the free encyclopedia

Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics,
chemistry and materials science to investigate the electronic structure (principally the ground state) of many-
body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a
many-electron system can be determined by using functionals, i.e. functions of another function, which in this
case is the spatially dependent electron density. Hence the name density functional theory comes from the use
of functionals of the electron density. DFT is among the most popular and versatile methods available in
condensed-matter physics, computational physics, and computational chemistry.

DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not
considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations
used in the theory were greatly refined to better model the exchange and correlation interactions.
Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–
Fock theory and its descendants that include electron correlation.

Despite recent improvements, there are still difficulties in using density functional theory to properly describe
intermolecular interactions (of critical importance to understanding chemical reactions), especially van der
Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant
interactions and some other strongly correlated systems; and in calculations of the band gap and
ferromagnetism in semiconductors.[1] Its incomplete treatment of dispersion can adversely affect the accuracy
of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by
dispersion (e.g. interacting noble gas atoms)[2] or where dispersion competes significantly with other effects
(e.g. in biomolecules).[3] The development of new DFT methods designed to overcome this problem, by
alterations to the functional[4] or by the inclusion of additive terms,[5][6][7][8] is a current research topic.

Contents
1 Overview of method
1.1 Origins
2 Derivation and formalism
3 Relativistic density functional theory (explicit functional forms)
4 Approximations (exchange-correlation functionals)
5 Generalizations to include magnetic fields
6 Applications
7 Thomas–Fermi model
8 Hohenberg–Kohn theorems
9 Pseudo-potentials
10 Electron smearing
11 Software supporting DFT
12 See also
13 Lists
14 References
15 Key papers
16 External links

Overview of method


In the context of computational materials science, ab initio (from first principles) DFT calculations allow the
prediction and calculation of material behaviour on the basis of quantum mechanical considerations, without
requiring higher order parameters such as fundamental material properties. In contemporary DFT techniques
the electronic structure is evaluated using a potential acting on the system’s electrons. This DFT potential is
constructed as the sum of external potentials (Vext), which is determined solely by the structure and the
elemental composition of the system, and an effective potential (Veff ), which represents inter-electronic
interactions. Thus, a problem for a representative super-cell of a material with n electrons can be studied as a
set of n one-electron Schrödinger-like equations, which are also known as the Kohn–Sham equations.[9]

Origins

Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of
materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the
framework of the two Hohenberg–Kohn theorems (H–K).[10] The original H–K theorems held only for non-
degenerate ground states in the absence of a magnetic field, although they have since been generalized to
encompass these.[11][12]

The first H–K theorem demonstrates that the ground state properties of a many-electron system are uniquely
determined by an electron density that depends on only 3 spatial coordinates. It set down the groundwork for
reducing the many-body problem of N electrons with 3N spatial coordinates to 3 spatial coordinates, through
the use of functionals of the electron density. This theorem has since been extended to the time-dependent
domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited
states.

The second H–K theorem defines an energy functional for the system and proves that the correct ground state
electron density minimizes this energy functional.

In work that later won Kohn the Nobel Prize in Chemistry, the H–K theorem was further developed by Walter
Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-
body problem of interacting electrons in a static external potential is reduced to a tractable problem of non-
interacting electrons moving in an effective potential. The effective potential includes the external potential and
the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions.
Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the
local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas,
which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform
electron gas. Non-interacting systems are relatively easy to solve as the wavefunction can be represented as a
Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The
exchange-correlation part of the total-energy functional remains unknown and must be approximated.

Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original H-K
theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for
the kinetic energy of the non-interacting system.

Derivation and formalism


As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen
as fixed (the Born–Oppenheimer approximation), generating a static external potential V in which the electrons
are moving. A stationary electronic state is then described by a wavefunction Ψ(r_1, …, r_N) satisfying the
many-electron time-independent Schrödinger equation

\hat{H}\Psi = \left[\hat{T} + \hat{V} + \hat{U}\right]\Psi = \left[\sum_{i=1}^{N}\left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_{i=1}^{N} V(\mathbf{r}_i) + \sum_{i<j}^{N} U(\mathbf{r}_i,\mathbf{r}_j)\right]\Psi = E\Psi,

where, for the N-electron system, H is the Hamiltonian, E is the total energy, T is the kinetic energy, V is the
potential energy from the external field due to positively charged nuclei, and U is the electron-electron
interaction energy. The operators T and U are called universal operators, as they are the same for any N-
electron system, while V is system dependent. This complicated many-particle equation is not separable into
simpler single-particle equations because of the interaction term U.

There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion
of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more
sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with
these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to
larger, more complex systems.

Here DFT provides an appealing alternative, being much more versatile as it provides a way to systematically
map the many-body problem, with U, onto a single-body problem without U. In DFT the key variable is the
electron density n(r), which for a normalized Ψ is given by

n(\mathbf{r}) = N \int d^3r_2 \cdots \int d^3r_N\, \Psi^*(\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N)\,\Psi(\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N).

This relation can be reversed, i.e., for a given ground-state density n_0(r) it is possible, in principle, to calculate
the corresponding ground-state wavefunction Ψ_0(r_1, …, r_N). In other words, Ψ_0 is a unique functional of
n_0,[10]

\Psi_0 = \Psi[n_0],

and consequently the ground-state expectation value of an observable O is also a functional of n_0:

O[n_0] = \langle \Psi[n_0] \,|\, \hat{O} \,|\, \Psi[n_0] \rangle.

In particular, the ground-state energy is a functional of n_0:

E_0 = E[n_0] = \langle \Psi[n_0] \,|\, \hat{T} + \hat{V} + \hat{U} \,|\, \Psi[n_0] \rangle,

where the contribution of the external potential \langle \Psi[n_0] | \hat{V} | \Psi[n_0] \rangle can be written explicitly in terms of the
ground-state density n_0:

V[n_0] = \int V(\mathbf{r})\, n_0(\mathbf{r})\, d^3r.

More generally, the contribution of the external potential \langle \Psi | \hat{V} | \Psi \rangle can be written explicitly in terms of the
density n:

V[n] = \int V(\mathbf{r})\, n(\mathbf{r})\, d^3r.

The functionals T[n] and U[n] are called universal functionals, while V[n] is called a non-universal functional,
as it depends on the system under study. Having specified a system, i.e., having specified V, one then has to
minimize the functional

E[n] = T[n] + U[n] + \int V(\mathbf{r})\, n(\mathbf{r})\, d^3r


with respect to n(r), assuming one has got reliable expressions for T[n] and U[n]. A successful minimization
of the energy functional will yield the ground-state density n_0 and thus all other ground-state observables.

The variational problems of minimizing the energy functional E[n] can be solved by applying the Lagrangian
method of undetermined multipliers.[13] First, one considers an energy functional that doesn't explicitly have an
electron-electron interaction energy term,

E_s[n] = \langle \Psi_s[n] \,|\, \hat{T}_s + \hat{V}_s \,|\, \Psi_s[n] \rangle,

where T_s denotes the kinetic energy operator and V_s is an external effective potential in which the particles are
moving, so that n_s(r) = n(r).

Thus, one can solve the so-called Kohn–Sham equations of this auxiliary non-interacting system,

\left[-\frac{\hbar^2}{2m}\nabla^2 + V_s(\mathbf{r})\right]\varphi_i(\mathbf{r}) = \varepsilon_i\,\varphi_i(\mathbf{r}),

which yields the orbitals φ_i that reproduce the density n(r) of the original many-body system,

n(\mathbf{r}) = n_s(\mathbf{r}) = \sum_{i=1}^{N} |\varphi_i(\mathbf{r})|^2.

The effective single-particle potential can be written in more detail as

V_s(\mathbf{r}) = V(\mathbf{r}) + \int \frac{e^2\, n_s(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3r' + V_{XC}[n_s(\mathbf{r})],

where the second term denotes the so-called Hartree term describing the electron-electron Coulomb repulsion,
while the last term V_XC is called the exchange-correlation potential. Here, V_XC includes all the many-particle
interactions. Since the Hartree term and V_XC depend on n(r), which depends on the φ_i, which in turn depend
on V_s, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way.
Usually one starts with an initial guess for n(r), then calculates the corresponding V_s and solves the Kohn–
Sham equations for the φ_i. From these one calculates a new density and starts again. This procedure is then
repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is
an alternative approach to this.
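The self-consistency cycle just described can be summarized schematically in Python. In the sketch below, build_effective_potential, solve_kohn_sham and density_from_orbitals are hypothetical placeholder callables standing in for the real numerical machinery (Hartree and exchange-correlation terms, basis sets, eigensolvers) of an actual DFT code; only the iterate-and-mix structure of the loop is taken from the text.

import numpy as np

def scf_loop(n_init, build_effective_potential, solve_kohn_sham,
             density_from_orbitals, mixing=0.3, tol=1e-6, max_iter=100):
    """Schematic Kohn-Sham self-consistency loop.

    The three callables are hypothetical placeholders for the real steps of a DFT code:
        build_effective_potential(n) -> V_s   (external + Hartree + XC terms)
        solve_kohn_sham(V_s)         -> occupied orbitals
        density_from_orbitals(orbs)  -> new density n(r)
    """
    n = np.asarray(n_init, dtype=float)
    for iteration in range(1, max_iter + 1):
        v_s = build_effective_potential(n)        # V_s depends on the current density
        orbitals = solve_kohn_sham(v_s)           # solve the single-particle equations
        n_new = density_from_orbitals(orbitals)   # rebuild the density from the orbitals

        if np.max(np.abs(n_new - n)) < tol:       # self-consistency reached
            return n_new, iteration
        n = (1.0 - mixing) * n + mixing * n_new   # simple linear mixing for stability
    raise RuntimeError("SCF loop did not converge within max_iter iterations")

Production codes replace the simple linear mixing shown here with more elaborate schemes (e.g. Pulay/DIIS mixing) to accelerate convergence.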

NOTE 1: The one-to-one correspondence between the electron density and the single-particle potential is not
smooth; it contains non-analytic structure (singularities, cuts and branches). This may indicate a limitation of
the hope of representing the exchange-correlation functional in a simple analytic form.

NOTE 2: It is possible to extend the DFT idea to the case of the Green function G instead of the density n. This
leads to the Luttinger–Ward functional (or similar kinds of functionals), written as E[G]. However, E[G] is
determined not at its minimum but at its extremum, so there may be some theoretical and practical difficulties.

NOTE 3: There is no one-to-one correspondence between the one-body density matrix n(r, r′) and the one-body
potential V(r, r′). (Recall that all the eigenvalues of n(r, r′) are unity.) In other words, such an approach ends up
with a theory similar to Hartree-Fock (or hybrid) theory.

Relativistic density functional theory (explicit functional forms)



The same theorems can be proven in the case of relativistic electrons, thereby providing a generalization of
DFT for the relativistic case. Unlike the nonrelativistic theory, in the relativistic case it is possible to derive a
few exact and explicit formulas for the relativistic density functional.

Consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. The Hamiltonian H for a
relativistic electron moving in the Coulomb potential can be chosen in the following form (atomic units are
used):

H = c\,\boldsymbol{\alpha}\cdot\mathbf{p} + eV + mc^2\beta,

where V = -eZ/r is the Coulomb potential of a point-like nucleus, p is the momentum operator of the electron,
and e, m and c are the electron charge, the electron mass and the speed of light in vacuum, respectively; finally,
α and β are the set of Dirac matrices,

\boldsymbol{\alpha} = \begin{pmatrix} 0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \end{pmatrix}, \qquad \beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}.

To find the eigenfunctions and corresponding energies, one solves the eigenfunction equation

H\Psi = E\Psi,

where Ψ is a four-component wavefunction and E is the associated eigenenergy.


It is demonstrated in the article[14] that application of the virial theorem to the eigenfunction equation produces
an explicit formula for the eigenenergy of any bound state, and analogously the virial theorem applied to the
eigenfunction equation with the squared Hamiltonian[15] (see also references therein) yields a second such
formula. Both of these formulas represent density functionals. The former formula can easily be generalized to
the multi-electron case.[16]

Approximations (exchange-correlation functionals)


The major problem with DFT is that the exact functionals for exchange and correlation are not known except
for the free electron gas. However, approximations exist which permit the calculation of certain physical
quantities quite accurately.[17] In physics the most widely used approximation is the local-density
approximation (LDA), where the functional depends only on the density at the coordinate where the functional
is evaluated:

E_{XC}^{\mathrm{LDA}}[n] = \int \epsilon_{XC}(n)\, n(\mathbf{r})\, d^3r.

The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include
electron spin:

E_{XC}^{\mathrm{LSDA}}[n_\uparrow, n_\downarrow] = \int \epsilon_{XC}(n_\uparrow, n_\downarrow)\, n(\mathbf{r})\, d^3r.


Highly accurate formulae for the exchange-correlation energy density ε_XC(n_↑, n_↓) have been constructed from
quantum Monte Carlo simulations of jellium.[18]

The LDA assumes that the density is the same everywhere. Because of this, the LDA has a tendency to
underestimate the exchange energy and overestimate the correlation energy.[19] The errors due to the exchange
and correlation parts tend to compensate each other to a certain degree. To correct for this tendency, it is
common to expand the exchange-correlation energy in terms of the gradient of the density in order to account
for the non-homogeneity of the true electron density. This allows for corrections based on the changes in
density away from the coordinate. These expansions are referred to as generalized gradient approximations
(GGA)[20][21][22] and have the following form:

E_{XC}^{\mathrm{GGA}}[n_\uparrow, n_\downarrow] = \int \epsilon_{XC}(n_\uparrow, n_\downarrow, \nabla n_\uparrow, \nabla n_\downarrow)\, n(\mathbf{r})\, d^3r.

Using the latter (GGA), very good results for molecular geometries and ground-state energies have been
achieved.

Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after
the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the
second derivative of the electron density (the Laplacian) whereas GGA includes only the density and its first
derivative in the exchange-correlation potential.

Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a
further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second
derivative) of the density.

Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact
exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid
functionals.

Generalizations to include magnetic fields


The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e.
a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and
wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories:
current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these
theories, the functional used for the exchange and correlation must be generalized to include more than just the
electron density. In current density functional theory, developed by Vignale and Rasolt,[12] the functionals
become dependent on both the electron density and the paramagnetic current density. In magnetic field density
functional theory, developed by Salsbury, Grayce and Harris,[23] the functionals depend on the electron density
and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these
theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily
implementable computationally. Recently, Pan and Sahni[24] extended the Hohenberg-Kohn theorem to
non-constant magnetic fields using the density and the current density as fundamental variables.

Applications
In general, density functional theory finds increasingly broad application in the chemical and material sciences
for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT
computational methods are applied for the study of systems to synthesis and processing parameters. In such
systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions.
Examples of contemporary DFT applications include studying the effects of dopants on phase transformation
behavior in oxides, magnetic behaviour in dilute magnetic semiconductor materials and the study of magnetic

and electronic behavior in ferroelectrics and dilute magnetic


semiconductors.[1][25] Also, it has been shown that DFT gives good results in the prediction of the sensitivity of
some nanostructures to environmental pollutants like SO2[26] or acrolein,[27] as well as in the prediction of
mechanical properties.[28]

In practice, Kohn–Sham theory can be applied in several distinct ways depending


on what is being investigated. In solid state calculations, the local density
approximations are still commonly used along with plane wave basis sets, as an
electron gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular
calculations, however, more sophisticated functionals are needed, and a huge variety of exchange-correlation
functionals have been developed for chemical applications. Some of these are inconsistent with the uniform
electron gas approximation, however, they must reduce to LDA in the electron gas limit.

C60 with isosurface of ground-state electron density as calculated with DFT.

Among physicists, probably the most widely used functional is
the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized-gradient parametrization of the free
electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase
molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name
Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP
which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is
combined with the exact energy from Hartree–Fock theory. Along with the component exchange and
correlation funсtionals, three parameters define the hybrid functional, specifying how much of the exact
exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a 'training set' of
molecules. Unfortunately, although the results obtained with these functionals are usually sufficiently accurate
for most applications, there is no systematic way of improving them (in contrast to some of the traditional
wavefunction-based methods like configuration interaction or coupled cluster theory). Hence in the current
DFT approach it is not possible to estimate the error of the calculations without comparing them to other
methods or experiments.
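
To illustrate how this choice of functional enters a routine calculation, the sketch below runs a small molecular computation with the open-source PySCF package; the geometry, basis set and list of functionals are arbitrary choices for the example and are not taken from the text above.

# A minimal Kohn-Sham DFT sketch using PySCF (assumed installed, e.g. pip install pyscf).
# The molecule, basis set, and functionals below are illustrative choices only.
from pyscf import gto, dft

# Water molecule, coordinates in Angstrom.
mol = gto.M(
    atom="""O  0.000  0.000  0.000
            H  0.757  0.586  0.000
            H -0.757  0.586  0.000""",
    basis="cc-pvdz",
)

for xc in ["lda,vwn", "pbe", "b3lyp"]:   # a local, a GGA, and a hybrid functional
    mf = dft.RKS(mol)        # restricted Kohn-Sham calculation
    mf.xc = xc               # select the exchange-correlation functional
    energy = mf.kernel()     # run the self-consistent field cycle
    print(f"{xc:10s} total energy = {energy:.6f} Hartree")

The point of the loop is only that switching functionals changes nothing in the workflow except the exchange-correlation choice; the accuracy of each result still has to be judged against other methods or experiment, as noted above.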

Thomas–Fermi model
The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Thomas and Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space, with two electrons in every h^3 of volume.[29] For each element of coordinate space volume d^3r we can fill out a sphere of momentum space up to the Fermi momentum p_F(r).[30]

Equating the number of electrons in coordinate space to that in phase space gives

n(\mathbf{r}) = \frac{8\pi}{3h^{3}}\, p_{F}^{3}(\mathbf{r}).

Solving for p_F and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density,

T_{\mathrm{TF}}[n] = C_{F} \int n^{5/3}(\mathbf{r})\, d^{3}r,


where

C_{F} = \frac{3 h^{2}}{10 m_{e}} \left( \frac{3}{8\pi} \right)^{2/3}.

As such, they were able to calculate the energy of an atom using this kinetic energy functional combined with
the classical expressions for the nuclear-electron and electron-electron interactions (which can both also be
represented in terms of the electron density).
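
Written out explicitly, in standard notation and with the nuclear charge denoted Z (a symbol not used in the text above), the resulting Thomas–Fermi energy functional for an atom reads

E_{\mathrm{TF}}[n] = C_{F} \int n^{5/3}(\mathbf{r})\, d^{3}r \; - \; Z e^{2} \int \frac{n(\mathbf{r})}{r}\, d^{3}r \; + \; \frac{e^{2}}{2} \iint \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^{3}r\, d^{3}r',

where the second term is the classical nucleus-electron attraction and the third is the classical electron-electron (Hartree) repulsion; minimizing this functional over densities that integrate to the correct number of electrons gives the Thomas–Fermi estimate of the atomic energy.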

Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom, a consequence of the Pauli principle. An exchange energy functional was added by Dirac in 1928.

However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and those due to the complete neglect of electron correlation.

Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic energy functional.

The kinetic energy functional can be improved by adding the Weizsäcker (1935) correction:[31][32]

T_{\mathrm{W}}[n] = \frac{\hbar^{2}}{8 m_{e}} \int \frac{|\nabla n(\mathbf{r})|^{2}}{n(\mathbf{r})}\, d^{3}r.

Hohenberg–Kohn theorems
The Hohenberg-Kohn theorems relate to any system consisting of electrons moving under the influence of an
external potential.

Theorem 1. The external potential (and hence the total energy) is a unique functional of the electron density.

If two systems of electrons, one trapped in a potential v1(r) and the other in v2(r), have the same ground-state density n(r), then v1(r) - v2(r) is necessarily a constant.

Corollary: the ground state density uniquely determines the potential and thus all properties of the system, including the many-body wave function. In particular, the "HK" functional, defined as F[n] = T[n] + U[n] (the sum of the kinetic energy and the electron-electron interaction energy), is a universal functional of the density (not depending explicitly on the external potential).

Theorem 2. The functional that delivers the ground state energy of the system gives the lowest energy if and only if the input density is the true ground state density.

For any positive integer N and potential v(r), a density functional F[n] exists such that E_{v,N}[n] = F[n] + ∫ v(r) n(r) d³r obtains its minimal value at the ground-state density of N electrons in the potential v(r). The minimal value of E_{v,N}[n] is then the ground state energy of this system.

Pseudo-potentials
The many electron Schrödinger equation can be very much simplified if electrons are divided in two groups:
valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play
a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the
nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially
in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained largely forgotten until the late 1950s.

Ab initio Pseudo-potentials

A crucial step toward more realistic pseudo-potentials was given by Topp and Hopfield[33] and more recently
Cronin, who suggested that the pseudo-potential should be adjusted such that they describe the valence charge
density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo wave-functions to coincide with the true valence wave functions beyond a certain distance r_l. The pseudo wave-functions are also forced to have the same norm as the true valence wave-functions, and these conditions can be written as

R_{l}^{\mathrm{PP}}(r) = R_{nl}^{\mathrm{AE}}(r) \quad \text{for } r > r_{l},
\int_{0}^{r_{l}} \left| R_{l}^{\mathrm{PP}}(r) \right|^{2} r^{2}\, dr = \int_{0}^{r_{l}} \left| R_{nl}^{\mathrm{AE}}(r) \right|^{2} r^{2}\, dr,

where R_l(r) is the radial part of the wavefunction with angular momentum l, and PP and AE denote, respectively, the pseudo wave-function and the true (all-electron) wave-function. The index n in the true wave-functions denotes the valence level. The distance beyond which the true and the pseudo wave-functions are equal, r_l, is also l-dependent.

Electron smearing
The electrons of a system will occupy the lowest Kohn-Sham eigenstates up to a given energy level according to the Aufbau principle. This corresponds to the step-like Fermi-Dirac distribution at absolute zero. If there are several degenerate or close-to-degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. to allow fractional occupancies.[34] One approach is to assign a finite temperature to the electron Fermi-Dirac distribution. Other approaches are to assign a cumulative Gaussian distribution to the electrons or to use the Methfessel-Paxton method.[35][36]
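
As a concrete illustration of the finite-temperature variant, the sketch below computes fractional Fermi-Dirac occupations for a set of made-up eigenvalues; the chemical potential is found by bisection so that the occupations sum to the required number of electrons. The eigenvalues, electron count and smearing width are arbitrary choices, not values from any particular code.

import numpy as np

def fermi_dirac_occupations(eigenvalues, n_electrons, kT=0.01, spin_degeneracy=2.0):
    """Fractional occupations from Fermi-Dirac smearing at electronic temperature kT."""
    eigenvalues = np.asarray(eigenvalues, dtype=float)

    def total_electrons(mu):
        return (spin_degeneracy / (np.exp((eigenvalues - mu) / kT) + 1.0)).sum()

    # Bisection for the chemical potential mu that conserves the electron count.
    lo, hi = eigenvalues.min() - 10 * kT, eigenvalues.max() + 10 * kT
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        if total_electrons(mu) < n_electrons:
            lo = mu
        else:
            hi = mu
    occupations = spin_degeneracy / (np.exp((eigenvalues - mu) / kT) + 1.0)
    return occupations, mu

# Nearly degenerate levels around the Fermi energy receive fractional occupation,
# which damps the occupation oscillations described above.
levels = [-0.50, -0.30, -0.101, -0.100, -0.099, 0.20]   # hypothetical eigenvalues (Hartree)
occ, mu = fermi_dirac_occupations(levels, n_electrons=6, kT=0.02)
print("mu =", round(mu, 4), "occupations =", np.round(occ, 3))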

Software supporting DFT


DFT is supported by many quantum chemistry and solid-state physics software packages, often along with other methods.

See also
Basis set (chemistry)
Dynamical mean field theory
Gas in a box
Harris functional
Helium atom
Kohn–Sham equations
Local density approximation
Molecule
Molecular design software
Molecular modelling
Quantum chemistry
Thomas–Fermi model

Time-dependent density functional theory

Lists
List of quantum chemistry and solid state physics software
List of software for molecular mechanics modeling

References
1. Assadi, M.H.N; et al. (2013). "Theoretical study on copper's energetics and magnetism in TiO2 polymorphs".
Journal of Applied Physics. 113 (23): 233913. Bibcode:2013JAP...113w3913A (http://adsabs.harvard.edu/abs/2013J
AP...113w3913A). arXiv:1304.1854 (https://arxiv.org/abs/1304.1854)  . doi:10.1063/1.4811539 (https://doi.org/10.1
063%2F1.4811539).
2. Van Mourik, Tanja; Gdanitz, Robert J. (2002). "A critical note on density functional theory studies on rare-gas
dimers". Journal of Chemical Physics. 116 (22): 9620–9623. Bibcode:2002JChPh.116.9620V (http://adsabs.harvard.
edu/abs/2002JChPh.116.9620V). doi:10.1063/1.1476010 (https://doi.org/10.1063%2F1.1476010).
3. Vondrášek, Jiří; Bendová, Lada; Klusák, Vojtěch; Hobza, Pavel (2005). "Unexpectedly strong energy stabilization
inside the hydrophobic core of small protein rubredoxin mediated by aromatic residues: correlated ab initio quantum
chemical calculations". Journal of the American Chemical Society. 127 (8): 2615–2619. PMID 15725017 (https://ww
w.ncbi.nlm.nih.gov/pubmed/15725017). doi:10.1021/ja044607h (https://doi.org/10.1021%2Fja044607h).
4. Grimme, Stefan (2006). "Semiempirical hybrid density functional with perturbative second-order correlation".
Journal of Chemical Physics. 124 (3): 034108. Bibcode:2006JChPh.124c4108G (http://adsabs.harvard.edu/abs/2006
JChPh.124c4108G). PMID 16438568 (https://www.ncbi.nlm.nih.gov/pubmed/16438568). doi:10.1063/1.2148954 (ht
tps://doi.org/10.1063%2F1.2148954).
5. Zimmerli, Urs; Parrinello, Michele; Koumoutsakos, Petros (2004). "Dispersion corrections to density functionals for
water aromatic interactions". Journal of Chemical Physics. 120 (6): 2693–2699. Bibcode:2004JChPh.120.2693Z (htt
p://adsabs.harvard.edu/abs/2004JChPh.120.2693Z). PMID 15268413 (https://www.ncbi.nlm.nih.gov/pubmed/152684
13). doi:10.1063/1.1637034 (https://doi.org/10.1063%2F1.1637034).
6. Grimme, Stefan (2004). "Accurate description of van der Waals complexes by density functional theory including
empirical corrections". Journal of Computational Chemistry. 25 (12): 1463–1473. PMID 15224390 (https://www.ncb
i.nlm.nih.gov/pubmed/15224390). doi:10.1002/jcc.20078 (https://doi.org/10.1002%2Fjcc.20078).
7. Von Lilienfeld, O. Anatole; Tavernelli, Ivano; Rothlisberger, Ursula; Sebastiani, Daniel (2004). "Optimization of
effective atom centered potentials for London dispersion forces in density functional theory". Physical Review
Letters. 93 (15): 153004. Bibcode:2004PhRvL..93o3004V (http://adsabs.harvard.edu/abs/2004PhRvL..93o3004V).
PMID 15524874 (https://www.ncbi.nlm.nih.gov/pubmed/15524874). doi:10.1103/PhysRevLett.93.153004 (https://do
i.org/10.1103%2FPhysRevLett.93.153004).
8. Tkatchenko, Alexandre; Scheffler, Matthias (2009). "Accurate Molecular Van Der Waals Interactions from Ground-
State Electron Density and Free-Atom Reference Data". Physical Review Letters. 102 (7): 073005.
Bibcode:2009PhRvL.102g3005T (http://adsabs.harvard.edu/abs/2009PhRvL.102g3005T). PMID 19257665 (https://
www.ncbi.nlm.nih.gov/pubmed/19257665). doi:10.1103/PhysRevLett.102.073005 (https://doi.org/10.1103%2FPhys
RevLett.102.073005).
9. Hanaor, D.A.H.; Assadi, M.H.N; Li, S; Yu, A; Sorrell, C.C. (2012). "Ab initio study of phase stability in doped
TiO2". Computational Mechanics. 50 (2): 185–194. doi:10.1007/s00466-012-0728-4 (https://doi.org/10.1007%2Fs00
466-012-0728-4).
10. Hohenberg, Pierre; Walter Kohn (1964). "Inhomogeneous electron gas". Physical Review. 136 (3B): B864–B871.
Bibcode:1964PhRv..136..864H (http://adsabs.harvard.edu/abs/1964PhRv..136..864H).
doi:10.1103/PhysRev.136.B864 (https://doi.org/10.1103%2FPhysRev.136.B864).
11. Levy, Mel (1979). "Universal variational functionals of electron densities, first-order density matrices, and natural
spin-orbitals and solution of the v-representability problem". Proceedings of the National Academy of Sciences.
United States National Academy of Sciences. 76 (12): 6062–6065. Bibcode:1979PNAS...76.6062L (http://adsabs.har
vard.edu/abs/1979PNAS...76.6062L). doi:10.1073/pnas.76.12.6062 (https://doi.org/10.1073%2Fpnas.76.12.6062).
12. Vignale, G.; Mark Rasolt (1987). "Density-functional theory in strong magnetic fields". Physical Review Letters.
American Physical Society. 59 (20): 2360–2363. Bibcode:1987PhRvL..59.2360V (http://adsabs.harvard.edu/abs/198
7PhRvL..59.2360V). PMID 10035523 (https://www.ncbi.nlm.nih.gov/pubmed/10035523).
doi:10.1103/PhysRevLett.59.2360 (https://doi.org/10.1103%2FPhysRevLett.59.2360).
13. Kohn, W.; Sham, L. J. (1965). "Self-consistent equations including exchange and correlation effects". Physical
Review. 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K (http://adsabs.harvard.edu/abs/1965PhRv..140.11
33K). doi:10.1103/PhysRev.140.A1133 (https://doi.org/10.1103%2FPhysRev.140.A1133).


14. M. Brack (1983), "Virial theorems for relativistic spin-½ and spin-0 particles", Phys. Rev. D, 27: 1950,
doi:10.1103/physrevd.27.1950 (https://doi.org/10.1103%2Fphysrevd.27.1950)
15. K. Koshelev (2015). "About density functional theory interpretation". arXiv:0812.2919 (https://arxiv.org/abs/0812.29
19)  [quant-ph (https://arxiv.org/archive/quant-ph)].
16. K. Koshelev (2007). "Alpha variation problem and q-factor definition". arXiv:0707.1146 (https://arxiv.org/abs/0707.
1146)  [physics.atom-ph (https://arxiv.org/archive/physics.atom-ph)].
17. Kieron Burke; Lucas O. Wagner (2013). "DFT in a nutshell". International Journal of Quantum Chemistry. 113 (2):
96. doi:10.1002/qua.24259 (https://doi.org/10.1002%2Fqua.24259).
18. John P. Perdew; Adrienn Ruzsinszky; Jianmin Tao; Viktor N. Staroverov; Gustavo Scuseria; Gábor I. Csonka (2005).
"Prescriptions for the design and selection of density functional approximations: More constraint satisfaction with
fewer fits". Journal of Chemical Physics. 123 (6): 062201. Bibcode:2005JChPh.123f2201P (http://adsabs.harvard.ed
u/abs/2005JChPh.123f2201P). PMID 16122287 (https://www.ncbi.nlm.nih.gov/pubmed/16122287).
doi:10.1063/1.1904565 (https://doi.org/10.1063%2F1.1904565).
19. Becke, Axel D. (2014-05-14). "Perspective: Fifty years of density-functional theory in chemical physics" (http://scita
tion.aip.org/content/aip/journal/jcp/140/18/10.1063/1.4869598). The Journal of Chemical Physics. 140 (18):
18A301. Bibcode:2014JChPh.140rA301B (http://adsabs.harvard.edu/abs/2014JChPh.140rA301B). ISSN 0021-9606
(https://www.worldcat.org/issn/0021-9606). PMID 24832308 (https://www.ncbi.nlm.nih.gov/pubmed/24832308).
doi:10.1063/1.4869598 (https://doi.org/10.1063%2F1.4869598).
20. Perdew, John P; Chevary, J A; Vosko, S H; Jackson, Koblar, A; Pederson, Mark R; Singh, D J; Fiolhais, Carlos
(1992). "Atoms, molecules, solids, and surfaces: Applications of the generalized gradient approximation for
exchange and correlation". Physical Review B. 46 (11): 6671. Bibcode:1992PhRvB..46.6671P (http://adsabs.harvard.
edu/abs/1992PhRvB..46.6671P). doi:10.1103/physrevb.46.6671 (https://doi.org/10.1103%2Fphysrevb.46.6671).
21. Becke, Axel D (1988). "Density-functional exchange-energy approximation with correct asymptotic behavior".
Physical Review A. 38 (6): 3098. Bibcode:1988PhRvA..38.3098B (http://adsabs.harvard.edu/abs/1988PhRvA..38.30
98B). PMID 9900728 (https://www.ncbi.nlm.nih.gov/pubmed/9900728). doi:10.1103/physreva.38.3098 (https://doi.o
rg/10.1103%2Fphysreva.38.3098).
22. Langreth, David C; Mehl, M J (1983). "Beyond the local-density approximation in calculations of ground-state
electronic properties". Physical Review B. 28 (4): 1809. Bibcode:1983PhRvB..28.1809L (http://adsabs.harvard.edu/a
bs/1983PhRvB..28.1809L). doi:10.1103/physrevb.28.1809 (https://doi.org/10.1103%2Fphysrevb.28.1809).
23. Grayce, Christopher; Robert Harris (1994). "Magnetic-field density-functional theory". Physical Review A. 50 (4):
3089–3095. Bibcode:1994PhRvA..50.3089G (http://adsabs.harvard.edu/abs/1994PhRvA..50.3089G).
PMID 9911249 (https://www.ncbi.nlm.nih.gov/pubmed/9911249). doi:10.1103/PhysRevA.50.3089 (https://doi.org/1
0.1103%2FPhysRevA.50.3089).
24. Viraht, Xiao-Yin (2012). "Hohenberg-Kohn theorem including electron spin". Physical Review A. 86 (4): 042502.
Bibcode:2012PhRvA..86d2502P (http://adsabs.harvard.edu/abs/2012PhRvA..86d2502P).
doi:10.1103/physreva.86.042502 (https://doi.org/10.1103%2Fphysreva.86.042502).
25. Segall, M.D.; Lindan, P.J (2002). "First-principles simulation: ideas, illustrations and the CASTEP code". Journal of
Physics: Condensed Matter. 14 (11): 2717. Bibcode:2002JPCM...14.2717S (http://adsabs.harvard.edu/abs/2002JPC
M...14.2717S). doi:10.1088/0953-8984/14/11/301 (https://doi.org/10.1088%2F0953-8984%2F14%2F11%2F301).
26. Somayeh. F. Rastegar, Hamed Soleymanabadi (2014-01-01). "Theoretical investigation on the selective detection of
SO2 molecule by AlN nanosheets" (http://link.springer.com/article/10.1007/s00894-014-2439-6). Journal of
Molecular Modeling. 20 (9). doi:10.1007/s00894-014-2439-6 (https://doi.org/10.1007%2Fs00894-014-2439-6).
27. Somayeh F. Rastegar, Hamed Soleymanabadi (2013-01-01). "DFT studies of acrolein molecule adsorption on
pristine and Al- doped graphenes" (http://link.springer.com/article/10.1007/s00894-013-1898-5). Journal of
Molecular Modeling. 19 (9): 3733–40. PMID 23793719 (https://www.ncbi.nlm.nih.gov/pubmed/23793719).
doi:10.1007/s00894-013-1898-5 (https://doi.org/10.1007%2Fs00894-013-1898-5).
28. Music, D.; Geyer, R.W.; Schneider, J.M. (2016). "Recent progress and new directions in density functional theory
based design of hard coatings". Surface & Coatings Technology. 286: 178. doi:10.1016/j.surfcoat.2015.12.021 (http
s://doi.org/10.1016%2Fj.surfcoat.2015.12.021).
29. (Parr & Yang 1989, p. 47)
30. March, N. H. (1992). Electron Density Theory of Atoms and Molecules. Academic Press. p. 24. ISBN 0-12-470525-
1.
31. Weizsäcker, C. F. v. (1935). "Zur Theorie der Kernmassen". Zeitschrift für Physik. 96 (7–8): 431–58.
Bibcode:1935ZPhy...96..431W (http://adsabs.harvard.edu/abs/1935ZPhy...96..431W). doi:10.1007/BF01337700 (http
s://doi.org/10.1007%2FBF01337700).
32. (Parr & Yang 1989, p. 127)
33. Topp, William C.; Hopfield, John J. (1973-02-15). "Chemically Motivated Pseudopotential for Sodium" (http://link.a
ps.org/doi/10.1103/PhysRevB.7.1295). Physical Review B. 7 (4): 1295–1303. Bibcode:1973PhRvB...7.1295T (http://
adsabs.harvard.edu/abs/1973PhRvB...7.1295T). doi:10.1103/PhysRevB.7.1295 (https://doi.org/10.1103%2FPhysRev
B.7.1295).


34. Michelini, M. C.; Pis Diez, R.; Jubert, A. H. (25 June 1998). "A Density Functional Study of Small Nickel Clusters"
(http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-461X(1998)70:4/5%3C693::AID-QUA15%3E3.0.CO;2-3/ep
df). International Journal of Quantum Chemistry. 70 (4–5): 694. doi:10.1002/(SICI)1097-
461X(1998)70:4/5<693::AID-QUA15>3.0.CO;2-3 (https://doi.org/10.1002%2F%28SICI%291097-461X%281998%
2970%3A4%2F5%3C693%3A%3AAID-QUA15%3E3.0.CO%3B2-3). Retrieved 21 October 2016.
35. "Finite temperature approaches -- smearing methods" (http://cms.mpi.univie.ac.at/vasp/vasp/Finite_temperature_app
roaches_smearing_methods.html). VASP the GUIDE. Retrieved 21 October 2016.
36. Tong, Lianheng. "Methfessel-Paxton Approximation to Step Function" (http://cms.mpi.univie.ac.at/vasp/vasp/Finite_
temperature_approaches_smearing_methods.html). Metal CONQUEST. Retrieved 21 October 2016.

Key papers
Parr, R. G.; Yang, W. (1989). Density-Functional Theory of Atoms and Molecules (https://books.google.c
om/books?id=mGOpScSIwU4C&printsec=frontcover&dq=Density-Functional+Theory+of+Atoms+and
+Molecules&cd=1#v=onepage&q). New York: Oxford University Press. ISBN 0-19-504279-4.
Thomas, L. H. (1927). "The calculation of atomic fields". Proc. Camb. Phil. Soc. 23 (5): 542–548.
Bibcode:1927PCPS...23..542T (http://adsabs.harvard.edu/abs/1927PCPS...23..542T).
doi:10.1017/S0305004100011683 (https://doi.org/10.1017%2FS0305004100011683).
Hohenberg, P.; Kohn, W. (1964). "Inhomogeneous Electron Gas". Physical Review. 136 (3B): B864.
Bibcode:1964PhRv..136..864H (http://adsabs.harvard.edu/abs/1964PhRv..136..864H).
doi:10.1103/PhysRev.136.B864 (https://doi.org/10.1103%2FPhysRev.136.B864).
Kohn, W.; Sham, L. J. (1965). "Self-Consistent Equations Including Exchange and Correlation Effects".
Physical Review. 140 (4A): A1133. Bibcode:1965PhRv..140.1133K (http://adsabs.harvard.edu/abs/1965P
hRv..140.1133K). doi:10.1103/PhysRev.140.A1133 (https://doi.org/10.1103%2FPhysRev.140.A1133).
Becke, Axel D. (1993). "Density-functional thermochemistry. III. The role of exact exchange". The
Journal of Chemical Physics. 98 (7): 5648. Bibcode:1993JChPh..98.5648B (http://adsabs.harvard.edu/ab
s/1993JChPh..98.5648B). doi:10.1063/1.464913 (https://doi.org/10.1063%2F1.464913).
Lee, Chengteh; Yang, Weitao; Parr, Robert G. (1988). "Development of the Colle-Salvetti correlation-
energy formula into a functional of the electron density". Physical Review B. 37 (2): 785.
Bibcode:1988PhRvB..37..785L (http://adsabs.harvard.edu/abs/1988PhRvB..37..785L).
doi:10.1103/PhysRevB.37.785 (https://doi.org/10.1103%2FPhysRevB.37.785).
Burke, Kieron; Werschnik, Jan; Gross, E. K. U. (2005). "Time-dependent density functional theory: Past,
present, and future". The Journal of Chemical Physics. 123 (6): 062206. Bibcode:2005JChPh.123f2206B
(http://adsabs.harvard.edu/abs/2005JChPh.123f2206B). arXiv:cond-mat/0410362 (https://arxiv.org/abs/c
ond-mat/0410362)  . doi:10.1063/1.1904586 (https://doi.org/10.1063%2F1.1904586).

External links
Walter Kohn, Nobel Laureate (http://www.vega.org.uk/video/programme/23) Freeview video interview
with Walter on his work developing density functional theory by the Vega Science Trust.
Capelle, Klaus (2002). "A bird's-eye view of density-functional theory". arXiv:cond-mat/0211443 (http
s://arxiv.org/abs/cond-mat/0211443)  .
Walter Kohn, Nobel Lecture (http://nobelprize.org/chemistry/laureates/1998/kohn-lecture.pdf)
Density functional theory on arxiv.org (http://xstructure.inr.ac.ru/x-bin/theme3.py?level=1&index1=4477
65)
FreeScience Library -> Density Functional Theory (http://freescience.info/books.php?id=30)
Argaman, Nathan; Makov, Guy (1998). "Density Functional Theory -- an introduction". American
Journal of Physics. 68 (2000): 69–79. Bibcode:2000AmJPh..68...69A (http://adsabs.harvard.edu/abs/200
0AmJPh..68...69A). arXiv:physics/9806013 (https://arxiv.org/abs/physics/9806013)  .
doi:10.1119/1.19375 (https://doi.org/10.1119%2F1.19375).
Electron Density Functional Theory – Lecture Notes (http://www.fh.huji.ac.il/~roib/LectureNotes/DFT/D
FT_Course_Roi_Baer.pdf)
Density Functional Theory through Legendre Transformation (http://ptp.ipap.jp/link?PTP/92/833/)pdf (ht
tp://ann.phys.sci.osaka-u.ac.jp/~kotani/pap/924-09.pdf)
Kieron Burke : Book On DFT : " THE ABC OF DFT " http://dft.uci.edu/doc/g1.pdf
Modeling Materials Continuum, Atomistic and Multiscale Techniques, Book (http://www.modelingmater
ials.org/the-books)

Retrieved from "https://en.wikipedia.org/w/index.php?title=Density_functional_theory&oldid=790175581"


Complex systems
From Wikipedia, the free encyclopedia

Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies,
relationships, or interactions between their parts or between a given system and its environment. Systems that
are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence,
spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide
variety of fields, the commonalities among them have become the topic of their own independent area of
research.

Contents
1 Overview
2 Key concepts
2.1 Systems
2.2 Complexity
2.3 Networks
2.4 Nonlinearity
2.5 Emergence
2.5.1 Spontaneous order and self-organization
2.6 Adaptation
3 History
4 Applications of complex systems
4.1 Complexity in practice
4.2 Complexity management
4.3 Complexity economics
4.4 Complexity and education
4.5 Complexity and modeling
4.6 Complexity and chaos theory
4.7 Complexity and network science
4.8 General form of complexity computation
5 Notable figures
6 See also
7 References
8 Further reading
9 External links

Overview
The term complex systems often refers to the study of complex systems, which is an approach to science that
investigates how relationships between a system's parts give rise to its collective behaviors and how the system
interacts and forms relationships with its environment.[1] The study of complex systems regards collective, or
system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood
as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts
and the individual interactions between them.

As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the
study of self-organization from physics, that of spontaneous order from the social sciences, chaos from
mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad


term encompassing a research approach to problems in many diverse disciplines, including statistical physics,
information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics,
psychology, and biology.

Key concepts
Systems

Complex systems is chiefly concerned with the behaviors and


properties of systems. A system, broadly defined, is a set of entities
that, through their interactions, relationships, or dependencies,
form a unified whole. It is always defined in terms of its boundary,
which determines the entities that are or are not part of the system.
Entities lying outside the system then become part of the system's
environment.

A system can exhibit properties that produce behaviors which are distinct from the properties and behaviors of its parts; these system-wide or global properties and behaviors are characteristics of how the system interacts with or appears to its environment, or of how its parts behave (say, in response to external stimuli) by virtue of being within the system. The notion of behavior implies that the study of systems is also concerned with processes that take place over time (or, in mathematics, some other phase space parameterization). Because of their broad, interdisciplinary applicability, systems concepts play a central role in complex systems.

Open systems have input and output flows, representing exchanges of matter, energy or information with their surroundings.

As a field of study, complex systems is a subset of systems theory. General systems theory focuses similarly on
the collective behaviors of interacting entities, but it studies a much broader class of systems, including non-
complex systems where traditional reductionist approaches may remain viable. Indeed, systems theory seeks to
explore and describe all classes of systems, and the invention of categories that are useful to researchers across
widely varying fields is one of systems theory's main objectives.

As it relates to complex systems, systems theory contributes an emphasis on the way relationships and
dependencies between a system's parts can determine system-wide properties. It also contributes the
interdisciplinary perspective of the study of complex systems: the notion that shared properties link systems
across disciplines, justifying the pursuit of modeling approaches applicable to complex systems wherever they
appear. Specific concepts important to complex systems, such as emergence, feedback loops, and adaptation,
also originate in systems theory.

Complexity

Systems exhibit complexity when difficulties with modeling them are endemic. This means their behaviors
cannot be understood apart from the very properties that make them difficult to model, and they are governed
entirely, or almost entirely, by the behaviors those properties produce. Any modeling approach that ignores
such difficulties or characterizes them as noise, then, will necessarily produce models that are neither accurate
nor useful. As yet no fully general theory of complex systems has emerged for addressing these problems, so
researchers must solve them in domain-specific contexts. Researchers in complex systems address these
problems by viewing the chief task of modeling to be capturing, rather than reducing, the complexity of their
respective systems of interest.

While no generally accepted exact definition of complexity exists yet, there are many archetypal examples of
complexity. Systems can be complex if, for instance, they have chaotic behavior (behavior that exhibits extreme
sensitivity to initial conditions), or if they have emergent properties (properties that are not apparent from their


components in isolation but which result from the relationships and dependencies they form when placed
together in a system), or if they are computationally intractable to model (if they depend on a number of
parameters that grows too rapidly with respect to the size of the system).

Networks

The interacting components of a complex system form a network, which is a collection of discrete objects and
relationships between them, usually depicted as a graph of vertices connected by edges. Networks can describe
the relationships between individuals within an organization, between logic gates in a circuit, between genes in
gene regulatory networks, or between any other set of related entities.

Networks often describe the sources of complexity in complex systems. Studying complex systems as networks
therefore enables many useful applications of graph theory and network science. Some complex systems, for
example, are also complex networks, which have properties such as power-law degree distributions that readily
lend themselves to emergent or chaotic behavior. The fact that the number of edges in a complete graph grows
quadratically in the number of vertices sheds additional light on the source of complexity in large networks: as
a network grows, the number of relationships between entities quickly dwarfs the number of entities in the
network.
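
The quadratic growth is easy to check directly; the toy sketch below uses the networkx library and arbitrary graph sizes, and simply confirms that a complete graph on n vertices has n(n-1)/2 edges.

import networkx as nx

for n in (10, 100, 1000):
    g = nx.complete_graph(n)                       # every pair of vertices is connected
    print(n, "vertices ->", g.number_of_edges(), "edges")   # equals n * (n - 1) // 2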

Nonlinearity

Complex systems often have nonlinear behavior, meaning they may


respond in different ways to the same input depending on their state or
context. In mathematics and physics, nonlinearity describes systems in
which a change in the size of the input does not produce a proportional
change in the size of the output. For a given change in input, such systems
may yield significantly greater than or less than proportional changes in
output, or even no output at all, depending on the current state of the
system or its parameter values.

Of particular interest to complex systems are nonlinear dynamical systems, which are systems of differential equations that have one or more nonlinear terms. Some nonlinear dynamical systems, such as the Lorenz system, can produce a mathematical phenomenon known as chaos. Chaos as it applies to complex systems refers to the sensitive dependence on initial conditions, or "butterfly effect," that a complex system can exhibit. In such a system, small changes to initial conditions can lead to dramatically different outcomes. Chaotic behavior can therefore be extremely hard to model numerically, because small rounding errors at an intermediate stage of computation can cause the model to generate completely inaccurate output. Furthermore, if a complex system returns to a state similar to one it held previously, it may behave completely differently in response to exactly the same stimuli, so chaos also poses challenges for extrapolating from past experience.

A sample solution in the Lorenz attractor when ρ = 28, σ = 10, and β = 8/3.
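
To make this sensitivity concrete, the sketch below integrates the Lorenz system twice from initial conditions that differ by 10^-8 and tracks how quickly the two trajectories separate. The parameter values are the classic σ = 10, ρ = 28, β = 8/3 quoted in the figure caption above; the perturbation size and integration settings are arbitrary choices for the example.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 4001)
a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
b = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# The distance between the two trajectories grows roughly exponentially until it
# saturates at the size of the attractor, despite the tiny initial difference.
distance = np.linalg.norm(a.y - b.y, axis=0)
for t in (0, 10, 20, 30, 40):
    i = np.argmin(np.abs(t_eval - t))
    print(f"t = {t:4.1f}   separation = {distance[i]:.3e}")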

Emergence

Another common feature of complex systems is the presence of emergent behaviors and properties: these are
traits of a system which are not apparent from its components in isolation but which result from the
interactions, dependencies, or relationships they form when placed together in a system. Emergence broadly
describes the appearance of such behaviors and properties, and has applications to systems studied in both the
social and physical sciences. While emergence is often used to refer only to the appearance of unplanned
organized behavior in a complex system, emergence can also refer to the breakdown of organization; it
describes any phenomena which are difficult or even impossible to predict from the smaller entities that make
up the system.


One example of complex system whose emergent properties have


been studied extensively is cellular automata. In a cellular
automaton, a grid of cells, each having one of finitely many states,
evolves over time according to a simple set of rules. These rules
guide the "interactions" of each cell with its neighbors. Although
the rules are only defined locally, they have been shown capable of
producing globally interesting behavior, for example in Conway's
Game of Life.
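
A minimal sketch of such locally defined rules is a single update step of Conway's Game of Life, written here in plain NumPy with periodic (wrapped) boundaries; the grid size and the glider used as the initial pattern are arbitrary choices for the example.

import numpy as np

def life_step(grid):
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell becomes alive with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1   # a glider
for _ in range(4):
    grid = life_step(grid)
print(grid)   # after four steps the glider has moved one cell diagonally

Even though the rule only looks at a cell and its immediate neighbours, patterns such as gliders and glider guns emerge that move and interact across the whole grid.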

Spontaneous order and self-organization


When emergence describes the appearance of unplanned order, it is spontaneous order (in the social sciences) or self-organization (in physical sciences). Spontaneous order can be seen in herd behavior, whereby a group of individuals coordinates their actions without centralized planning. Self-organization can be seen in the global symmetry of certain crystals, for instance the apparent radial symmetry of snowflakes, which arises from purely local attractive and repulsive forces both between water molecules and between water molecules and their surrounding environment.

Gosper's Glider Gun creating "gliders" in the cellular automaton Conway's Game of Life[2]

Adaptation

Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity
to change and learn from experience. Examples of complex adaptive systems include the stock market, social
insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the
developing embryo, manufacturing businesses and any human social group-based endeavor in a cultural and
social system such as political parties or communities.

History
The earliest precursor to modern complex systems
theory can be found in the classical political
economy of the Scottish Enlightenment, later
developed by the Austrian school of economics,
which argues that order in market systems is
spontaneous (or emergent) in that it is the result of
human action, but not the execution of any human
design.[3][4]

Building on this, the Austrian school developed, from the 19th to the early 20th century, the economic calculation problem, along with the concept of dispersed knowledge, which were to fuel debates against the then-dominant Keynesian economics. This debate would notably lead economists, politicians and other parties to explore the question of computational complexity.

A perspective on the development of complexity science: http://www.art-sciencefactory.com/complexity-map_feb09.html

A pioneer in the field, and inspired by Karl Popper's and Warren Weaver's works, Nobel prize economist and
philosopher Friedrich Hayek dedicated much of his work, from early to the late 20th century, to the study of
complex phenomena,[5] not constraining his work to human economies but venturing into other fields such as
psychology,[6] biology and cybernetics. Gregory Bateson played a key role in establishing the connection
between anthropology and systems theory; he recognized that the interactive parts of cultures function much
like ecosystems.


In mathematics, arguably the largest contribution to the study of complex systems was the discovery of chaos in
deterministic systems, a feature of certain dynamical systems that is strongly related to nonlinearity.[7]

The notion of self-organizing systems is tied to work in nonequilibrium thermodynamics, including that
pioneered by chemist and Nobel laureate Ilya Prigogine in his study of dissipative structures. Even older is the
work by Hartree, Fock and their co-workers on the quantum-chemistry equations and the later calculations of the structure of molecules, which can be regarded as one of the earliest examples of emergence and emergent wholes in science.

The first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984.[8] Early
Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson,
economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb
Anderson.[9] Today, there are over 50 institutes and research centers focusing on complex systems.

Applications of complex systems


Complexity in practice

The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves
compartmentalisation: dividing a large system into separate parts. Organizations, for instance, divide their work
into departments that each deal with separate issues. Engineering systems are often designed using modular
components. However, modular designs become susceptible to failure when issues arise that bridge the
divisions.

Complexity management

As projects and acquisitions become increasingly complex, companies and governments are challenged to find
effective ways to manage mega-acquisitions such as the Army Future Combat Systems. Acquisitions such as
the FCS rely on a web of interrelated parts which interact unpredictably. As acquisitions become more network-
centric and complex, businesses will be forced to find ways to manage complexity while governments will be
challenged to provide effective governance to ensure flexibility and resiliency.[10]

Complexity economics

Over the last decades, within the emerging field of complexity economics new predictive tools have been
developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989
and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and
the Harvard economist Ricardo Hausmann. Based on the ECI, Hausmann, Hidalgo and their team of The
Observatory of Economic Complexity have produced GDP forecasts for the year 2020.

Complexity and education

Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of
using complexity science as a frame to extend methodological applications for physics education research,"
finding that "framing a social network analysis within a complexity science perspective offers a new and
powerful applicability across a broad range of PER topics."[11]

Complexity and modeling

One of Friedrich Hayek's main contributions to early complexity theory is his distinction between the human
capacity to predict the behaviour of simple systems and its capacity to predict the behaviour of complex
systems through modeling. He believed that economics and the sciences of complex phenomena in general,
which in his view included biology, psychology, and so on, could not be modeled after the sciences that deal


with essentially simple phenomena like physics.[12] Hayek would notably explain that complex phenomena,
through modeling, can only allow pattern predictions, compared with the precise predictions that can be made
out of non-complex phenomena.[13]

Complexity and chaos theory

Complexity theory is rooted in chaos theory, which in turn has its origins more than a century ago in the work
of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated
information, rather than as an absence of order.[14] Chaotic systems remain deterministic, though their long-
term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and
of the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly
accurate predictions about the future of the system, though in practice this is impossible to do with arbitrary
accuracy. Ilya Prigogine argued[15] that complexity is non-deterministic, and gives no way whatsoever to
precisely predict the future.[16]

The emergence of complexity theory shows a domain between deterministic order and randomness which is complex.[17] This is referred to as the "edge of chaos".[18]

When one analyzes complex systems, sensitivity to initial conditions, for


example, is not an issue as important as it is within chaos theory, in which
it prevails. As stated by Colander,[19] the study of complexity is the
opposite of the study of chaos. Complexity is about how a huge number of
extremely complicated and dynamic sets of relationships can generate
some simple behavioral patterns, whereas chaotic behavior, in the sense of
deterministic chaos, is the result of a relatively small number of non-linear
interactions.[17]

A plot of the Lorenz attractor.

Therefore, the main difference between chaotic systems and complex systems is their history.[20] Chaotic systems do not rely on their history as complex ones do. Chaotic behaviour pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from
equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and
unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents."[21] In a
sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence
of historical dependence. Many real complex systems are, in practice and over long but finite time periods,
robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic
integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.

Complexity and network science

A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions.[22][23][24] For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Its resilience to failures was studied using percolation theory.[25] Other examples are social networks, airline networks,[26] biological networks and climate networks.[27] Networks can also fail and recover spontaneously; for modeling this phenomenon see ref. [28]. Interacting complex systems can be modeled as networks of networks; for their breakdown and recovery properties see refs. [29][30].
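
The sketch below illustrates the percolation-style robustness question on a random graph; the graph model, its size and the failure fractions are arbitrary choices for the example and are not the networks studied in the cited papers. Nodes are removed at random and the size of the largest connected component is tracked.

import random
import networkx as nx

random.seed(0)
g = nx.erdos_renyi_graph(n=2000, p=0.003)   # a random graph with mean degree of about 6

for failure_fraction in (0.0, 0.3, 0.6, 0.9):
    damaged = g.copy()
    removed = random.sample(list(damaged.nodes()), int(failure_fraction * damaged.number_of_nodes()))
    damaged.remove_nodes_from(removed)
    components = list(nx.connected_components(damaged)) if damaged.number_of_nodes() else []
    giant = max(components, key=len) if components else set()
    print(f"removed {failure_fraction:.0%} of nodes -> largest component has {len(giant)} nodes")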

General form of complexity computation



The computational law of reachable optimality[31] is proposed as a general form of computation for an ordered system. It holds that complexity computation is a compound computation of optimal choice and of an optimality-driven reaching pattern over time, underlying any specific experience path of the ordered system within the general limitation of system integrity.

The computational law of reachable optimality has four key components as described below.

1. Reachability of Optimality: Any intended optimality shall be reachable. Unreachable optimality has no
meaning for a member in the ordered system and even for the ordered system itself.

2. Prevailing and Consistency: Maximizing reachability to explore best available optimality is the prevailing
computation logic for all members in the ordered system and is accommodated by the ordered system.

3. Conditionality: Realizable tradeoff between reachability and optimality depends primarily upon the initial
bet capacity and how the bet capacity evolves along with the payoff table update path triggered by bet behavior
and empowered by the underlying law of reward and punishment. Precisely, it is a sequence of conditional
events where the next event happens upon reached status quo from experience path.

4. Robustness: The more challenges a reachable optimality can accommodate, the more robust it is in terms of path integrity.

There are also four computation features in the law of reachable optimality.

1. Optimal Choice: Computation in realizing Optimal Choice can be very simple or very complex. A simple
rule in Optimal Choice is to accept whatever is reached, Reward As You Go (RAYG). A Reachable Optimality
computation reduces into optimizing reachability when RAYG is adopted. The Optimal Choice computation
can be more complex when multiple Nash equilibrium (NE) strategies are present in a reached game.

2. Initial Status: Computation is assumed to start at a beginning of interest, even though the absolute beginning of an ordered system in nature may not, and need not, be present. An assumed neutral Initial Status facilitates an artificial or simulated computation and is not expected to change the prevalence of any findings.

3. Territory: An ordered system shall have a territory where the universal computation sponsored by the
system will produce an optimal solution still within the territory.

4. Reaching Pattern: The forms of Reaching Pattern in the computation space, or the Optimality Driven
Reaching Pattern in the computation space, primarily depend upon the nature and dimensions of measure space
underlying a computation space and the law of punishment and reward underlying the realized experience path
of reaching. There are five basic forms of experience path we are interested in, persistently positive
reinforcement experience path, persistently negative reinforcement experience path, mixed persistent pattern
experience path, decaying scale experience path and selection experience path.

The compound computation in selection experience path includes current and lagging interaction, dynamic
topological transformation and implies both invariance and variance characteristics in an ordered system's
experience path.

In addition, the computational law of reachable optimality sets out the boundary between the complexity model, the chaotic model and the determination model. When RAYG is the Optimal Choice computation, and the reaching pattern is a persistently positive experience path, a persistently negative experience path, or a mixed persistent pattern experience path, the underlying computation is a simple system computation adopting determination rules. If the reaching pattern has no persistent pattern experienced in the RAYG regime, the underlying computation hints that there is a chaotic system. When the optimal choice computation involves non-RAYG computation, it is a complexity computation driving the compound effect.

Notable figures

Christopher Alexander
Gregory Bateson
Ludwig von Bertalanffy
Samuel Bowles
Paul Cilliers
Murray Gell-Mann
Arthur Iberall
Stuart Kauffman
Cris Moore
Bill McKelvey (http://www.billmckelvey.org/)
Jerry Sabloff
Geoffrey West
Yaneer Bar-Yam
Walter Clemens, Jr.

See also

Chaos theory
Cybernetics
Cognitive modeling
Cognitive Science
Complex adaptive system
Complex networks
Complexity
Complexity economics
Decision engineering
Dual-phase evolution
Dynamical system
Dynamical systems theory
Emergence
Enterprise systems engineering
Generative sciences
Homeokinetics
Interdependent networks
Invisible hand
Mixed reality
Multi-agent system
Network Science
Nonlinearity
Pattern-oriented modeling
Percolation Theory
Process architecture
Systems theory in anthropology
Self-organization
Sociology and complexity science
Volatility, uncertainty, complexity and ambiguity

References

1. Bar-Yam, Yaneer (2002). "General Features of Complex Systems" (http://www.eolss.net/sample-chapters/c15/E1-29-


01-00.pdf) (PDF). Encyclopedia of Life Support Systems. EOLSS UNESCO Publishers, Oxford, UK. Retrieved
16 September 2014.
2. Daniel Dennett (1995), Darwin's Dangerous Idea, Penguin Books, London, ISBN 978-0-14-016734-4, ISBN 0-14-
016734-X
3. Ferguson, Adam (1767). An Essay on the History of Civil Society (http://oll.libertyfund.org/index.php?option=com_s
taticxt&staticfile=show.php%3Ftitle=1428&Itemid=28). London: T. Cadell. Part the Third, Section II, p. 205.
4. Friedrich Hayek, "The Results of Human Action but Not of Human Design" in New Studies in Philosophy, Politics,
Economics, Chicago: University of Chicago Press, 1978, pp. 96–105.
5. Bruce J. Caldwell, Popper and Hayek: Who influenced whom? (http://www.unites.uqam.ca/philo/pdf/Caldwell_2003
-01.pdf), Karl Popper 2002 Centenary Congress, 2002.
6. Friedrich von Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology, The
University of Chicago Press, 1952.
7. History of Complex Systems (http://www.irit.fr/COSI/training/complexity-tutorial/history-of-complex-systems.htm)
8. Ledford, H (2015). "How to solve the world's biggest problems" (http://www.nature.com/news/how-to-solve-the-wor
ld-s-biggest-problems-1.18367). Nature. 525 (7569): 308–311. doi:10.1038/525308a (https://doi.org/10.1038%2F525
308a).
9. Waldrop, M. M. (1993). Complexity: The emerging science at the edge of order and chaos. (https://books.google.co
m/books/about/Complexity.html?id=JTRJxYK_tZsC) Simon and Schuster.


10. CSIS paper: "Organizing for a Complex World: The Way Ahead (http://csis.org/files/publication/090410_Organizing
_for_a_Complex_World_The_Way_Ahead_0.pdf)
11. Forsman, Jonas; Moll, Rachel; Linder, Cedric (2014). "Extending the theoretical framing for physics education
research: An illustrative application of complexity science" (http://link.aps.org/doi/10.1103/PhysRevSTPER.10.0201
22). Physical Review Special Topics - Physics Education Research. 10 (2). doi:10.1103/PhysRevSTPER.10.020122
(https://doi.org/10.1103%2FPhysRevSTPER.10.020122). http://hdl.handle.net/10613/2583.
12. Reason Magazine - The Road from Serfdom (http://www.reason.com/news/show/33304.html)
13. Friedrich August von Hayek - Prize Lecture (http://nobelprize.org/nobel_prizes/economics/laureates/1974/hayek-lect
ure.html)
14. Hayles, N. K. (1991). Chaos Bound: Orderly Disorder in Contemporary Literature and Science. Cornell University
Press, Ithaca, NY.
15. Prigogine, I. (1997). The End of Certainty, The Free Press, New York.
16. See also D. Carfì (2008). "Superpositions in Prigogine approach to irreversibility" (http://cab.unime.it/journals/index.
php/AAPP/article/view/384/0). AAPP: Physical, Mathematical, and Natural Sciences. 86 (1): 1–13..
17. Cilliers, P. (1998). Complexity and Postmodernism: Understanding Complex Systems, Routledge, London.
18. Per Bak (1996). How Nature Works: The Science of Self-Organized Criticality, Copernicus, New York, U.S.
19. Colander, D. (2000). The Complexity Vision and the Teaching of Economics, E. Elgar, Northampton, Massachusetts.
20. Buchanan, M. (2000). Ubiquity : Why catastrophes happen, three river press, New-York.
21. Gell-Mann, M. (1995). What is Complexity? Complexity 1/1, 16-19
22. Dorogovtsev, S.N.; Mendes, J.F.F. (2003). "Evolution of Networks".
doi:10.1093/acprof:oso/9780198515906.001.0001 (https://doi.org/10.1093%2Facprof%3Aoso%2F9780198515906.0
01.0001).
23. Fortunato, Santo (2011). "Reuven Cohen and Shlomo Havlin: Complex Networks". Journal of Statistical Physics.
142 (3): 640–641. ISSN 0022-4715 (https://www.worldcat.org/issn/0022-4715). doi:10.1007/s10955-011-0129-7 (htt
ps://doi.org/10.1007%2Fs10955-011-0129-7).
24. Newman, Mark (2010). "Networks". doi:10.1093/acprof:oso/9780199206650.001.0001 (https://doi.org/10.1093%2F
acprof%3Aoso%2F9780199206650.001.0001).
25. Cohen, Reuven; Erez, Keren; ben-Avraham, Daniel; Havlin, Shlomo (2001). "Cohen, Erez, ben-Avraham, and
Havlin Reply:". Physical Review Letters. 87 (21). Bibcode:2001PhRvL..87u9802C (http://adsabs.harvard.edu/abs/20
01PhRvL..87u9802C). ISSN 0031-9007 (https://www.worldcat.org/issn/0031-9007).
doi:10.1103/PhysRevLett.87.219802 (https://doi.org/10.1103%2FPhysRevLett.87.219802).
26. Barrat, A.; Barthelemy, M.; Pastor-Satorras, R.; Vespignani, A. (2004). "The architecture of complex weighted
networks" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC374315). Proceedings of the National Academy of
Sciences. 101 (11): 3747–3752. ISSN 0027-8424 (https://www.worldcat.org/issn/0027-8424). PMC 374315 (https://
www.ncbi.nlm.nih.gov/pmc/articles/PMC374315)  . PMID 15007165 (https://www.ncbi.nlm.nih.gov/pubmed/15007
165). doi:10.1073/pnas.0400087101 (https://doi.org/10.1073%2Fpnas.0400087101).
27. Yamasaki, K.; Gozolchiani, A.; Havlin, S. (2008). "Climate Networks around the Globe are Significantly Affected by
El Niño". Physical Review Letters. 100 (22): 228501. ISSN 0031-9007 (https://www.worldcat.org/issn/0031-9007).
PMID 18643467 (https://www.ncbi.nlm.nih.gov/pubmed/18643467). doi:10.1103/PhysRevLett.100.228501 (https://d
oi.org/10.1103%2FPhysRevLett.100.228501).
28. Majdandzic, Antonio; Podobnik, Boris; Buldyrev, Sergey V.; Kenett, Dror Y.; Havlin, Shlomo; Eugene Stanley, H.
(2013). "Spontaneous recovery in dynamical networks". Nature Physics. 10 (1): 34–38. ISSN 1745-2473 (https://ww
w.worldcat.org/issn/1745-2473). doi:10.1038/nphys2819 (https://doi.org/10.1038%2Fnphys2819).
29. Gao, Jianxi; Buldyrev, Sergey V.; Stanley, H. Eugene; Havlin, Shlomo (2011). "Networks formed from
interdependent networks". Nature Physics. 8 (1): 40–48. Bibcode:2012NatPh...8...40G (http://adsabs.harvard.edu/ab
s/2012NatPh...8...40G). ISSN 1745-2473 (https://www.worldcat.org/issn/1745-2473). doi:10.1038/nphys2180 (http
s://doi.org/10.1038%2Fnphys2180).
30. Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley,
H.; Havlin, Shlomo (2016). "Multiple tipping points and optimal repairing in interacting networks". Nature
Communications. 7: 10850. ISSN 2041-1723 (https://www.worldcat.org/issn/2041-1723).
doi:10.1038/ncomms10850 (https://doi.org/10.1038%2Fncomms10850).
31. Wenliang Wang (2015). Pooling Game Theory and Public Pension Plan. ISBN 978-1507658246. Chapter 4.

Further reading
Bazin, A. (2014). (https://www.academia.edu/attachments/34737324/download_file?st=MTQxNzA5Mzg
yNywxMDguMjYuMTIzLjE2MSwxMzMzMjk5MA%3D%3D&s=swp-toolbar&ct=MTQxNzA5MzgyN
yw2OTU5MCwxMzMzMjk5MA==)Defeating ISIS and Their Complex Way of War Small Wars Journal.
Syed M. Mehmud (2011), A Healthcare Exchange Complexity Model (http://predictivemodeler.com/sitec
ontent/book/Ch06_Applications/Actuarial/HEC_Model/Healthcare%20Exchange%20Complexity%20M

odel%20-%20Report%20-%20Aug2011.pdf)
Chu, D.; Strand, R.; Fjelland, R. (2003). "Theories of complexity". Complexity. 8 (3): 19–30.
doi:10.1002/cplx.10059 (https://doi.org/10.1002%2Fcplx.10059).
L.A.N. Amaral and J.M. Ottino, Complex networks — augmenting the framework for the study of
complex system (http://amaral-lab.org/media/publication_pdfs/Amaral-2004-Eur.Phys.J.B-38-147.pdf),
2004.
Gell-Mann, Murray (1995). "Let's Call It Plectics" (http://www.santafe.edu/~mgm/Site/Publications_file
s/MGM%20118.pdf) (PDF). Complexity. 1 (5).
Nigel Goldenfeld and Leo P. Kadanoff, Simple Lessons from Complexity (http://guava.physics.uiuc.edu/~
nigel/articles/complexity.html), 1999
A. Gogolin, A. Nersesyan and A. Tsvelik, Theory of strongly correlated systems (http://www.cmth.bnl.go
v/~tsvelik/theory.html) , Cambridge University Press, 1999.
Kelly, K. (1995). Out of Control (http://www.kk.org/outofcontrol/contents.php), Perseus Books Group.
Donald Snooks, Graeme (2008). "A general theory of complex living systems: Exploring the demand
side of dynamics". Complexity. 13 (6): 12–20. doi:10.1002/cplx.20225 (https://doi.org/10.1002%2Fcplx.
20225).
Sorin Solomon and Eran Shir, Complexity; a science at 30 (http://www.europhysicsnews.org/index.php?o
ption=article&access=standard&Itemid=129&url=/articles/epn/abs/2003/02/epn03204/epn03204.html),
2003.
Preiser-Kapeller, Johannes, "Calculating Byzantium. Social Network Analysis and Complexity Sciences
as tools for the exploration of medieval social dynamics". August 2010 (http://www.oeaw.ac.at/byzanz/re
pository/Preiser_WorkingPapers_Calculating_I.pdf)
Walter Clemens, Jr., Complexity Science and World Affairs (https://web.archive.org/web/2015021922163
3/http://www.sunypress.edu/p-5782-complexity-science-and-world-af.aspx), SUNY Press, 2013.

External links
"The Open Agent-Based Modeling Consortium" (http://www.ope
Wikimedia Commons has
nabm.org).
media related to Complex
"Complexity Science Focus" (http://www.complexity.ecs.soton.a systems.
c.uk/).
"Santa Fe Institute" (http://www.santafe.edu/).
Look up complex systems
"The Center for the Study of Complex Systems, Univ. of in Wiktionary, the free
Michigan Ann Arbor" (http://www.lsa.umich.edu/cscs/). dictionary.
"INDECS" (http://indecs.eu/). (Interdisciplinary Description of
Complex Systems)
"Center for Complex Systems Research, Univ. of Illinois" (http://www.ccsr.uiuc.edu/).
"Introduction to complex systems - Short course by Shlomo Havlin" (http://havlin.biu.ac.il/course3.php).
Jessie Henshaw (October 24, 2013). "Complex Systems" (http://www.eoearth.org/view/article/51cbed507
896bb431f69154d/?topic=51cbfc79f702fc2ba8129ed7). Encyclopedia of Earth.

Chaos theory
From Wikipedia, the free encyclopedia

Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. 'Chaos'
is an interdisciplinary theory stating that within the apparent
randomness of chaotic complex systems, there are underlying patterns,
constant feedback loops, repetition, self-similarity, fractals, self-
organization, and reliance on programming at the initial point known as
sensitive dependence on initial conditions. The butterfly effect describes
how a small change in one state of a deterministic nonlinear system can
result in large differences in a later state, e.g. a butterfly flapping its
wings in Brazil can cause a tornado in Texas.[1]

Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems — a response popularly referred to as the butterfly effect — rendering long-term prediction of their behavior impossible in general.[2][3] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[4] In other words, the deterministic nature of these systems does not make them predictable.[5][6] This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:[7]

    Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

[Figure: A plot of the Lorenz attractor for values r = 28, σ = 10, b = 8/3.]

[Figure: A double rod pendulum animation showing chaotic behavior. Starting the pendulum from a slightly different initial condition would result in a completely different trajectory. The double rod pendulum is one of the simplest dynamical systems that has chaotic solutions.]

Chaotic behavior exists in many natural systems, such as weather and climate.[8][9] It also occurs spontaneously in some systems with artificial components, such as road traffic.[10] This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in several disciplines, including meteorology, sociology, physics,[11] environmental science, computer science, engineering, economics, biology, ecology, and philosophy. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes.

Contents
1 Introduction
2 Chaotic dynamics
2.1 Chaos as a spontaneous breakdown of topological supersymmetry
2.2 Sensitivity to initial conditions
2.3 Topological mixing
2.4 Density of periodic orbits
2.5 Strange attractors
2.6 Minimum complexity of a chaotic system
2.7 Jerk systems

3 Spontaneous order
4 History
5 Applications
5.1 Cryptography
5.2 Robotics
5.3 Biology
5.4 Other areas
6 See also
7 References
8 Scientific literature
8.1 Articles
8.2 Textbooks
8.3 Semitechnical and popular works
9 External links

Introduction
Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are
predictable for a while and then 'appear' to become random.[3] The amount of time that the behavior of a
chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about
1 millisecond; weather systems, a few days (unproven); the solar system, 50 million years. In chaotic systems,
the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the
forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a
meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time.
When meaningful predictions cannot be made, the system appears random.[12]
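To see numerically why doubling the forecast horizon more than squares the proportional uncertainty, here is a minimal sketch (added for illustration; the initial uncertainty and Lyapunov exponent values below are arbitrary choices, not values from the article):

```python
import math

# Illustrative (made-up) numbers: initial relative uncertainty and Lyapunov exponent.
u0 = 1e-6    # relative uncertainty in the measured initial state
lam = 1.0    # Lyapunov exponent, in units of 1/time (arbitrary choice)

def relative_uncertainty(t):
    """Exponential growth of forecast uncertainty: u(t) = u0 * exp(lam * t)."""
    return u0 * math.exp(lam * t)

t = 7.0  # some forecast horizon (arbitrary units)
u_t = relative_uncertainty(t)
u_2t = relative_uncertainty(2 * t)

print(f"u(t)  = {u_t:.3e}")
print(f"u(2t) = {u_2t:.3e}")
# Doubling the horizon more than squares the proportional uncertainty
# whenever u0 < 1, since u(2t) = u(t)**2 / u0.
print(f"u(t)**2 = {u_t**2:.3e}  <=  u(2t) = {u_2t:.3e}")
```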

Chaotic dynamics
In common usage, "chaos" means "a state of disorder".[13] However, in chaos theory, the term is defined more
precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used
definition originally formulated by Robert L. Devaney says that, to classify a dynamical system as chaotic, it
must have these properties:[14]

1. it must be sensitive to initial conditions


2. it must be topologically mixing
3. it must have dense periodic orbits

In some cases, the last two properties in the above have been shown to actually imply sensitivity to initial
conditions.[15][16] In these cases, while it is often the most practically significant property, "sensitivity to initial
conditions" need not be stated in the definition.

If attention is restricted to intervals, the second property implies the other two.[17] An alternative, and in
general weaker, definition of chaos uses only the first two properties in the above list.[18]

Chaos as a spontaneous breakdown of topological supersymmetry

In continuous time dynamical systems, chaos is the phenomenon of the spontaneous breakdown of topological
supersymmetry which is an intrinsic property of evolution operators of all stochastic and deterministic (partial)
differential equations.[19][20] This picture of dynamical chaos works not only for deterministic models but also
for models with external noise, which is an important generalization from the physical point of view because in
reality all dynamical systems experience influence from their stochastic environments. Within this picture, the long-range dynamical behavior associated with chaotic dynamics, e.g., the butterfly effect, is a consequence of Goldstone's theorem applied to the spontaneous breaking of topological supersymmetry.

Sensitivity to initial conditions

Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with
significantly different future paths, or trajectories. Thus, an arbitrarily
small change, or perturbation, of the current trajectory may lead to
significantly different future behavior.
[Figure: The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 displays sensitivity to initial x positions. Here, two series of x and y values diverge markedly over time from a tiny initial difference. Note, however, that the y coordinate is defined modulo one at each step, so the square region is actually depicting a cylinder, and the two points are closer than they look.]

Sensitivity to initial conditions is popularly known as the "butterfly effect", so called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different.
A consequence of sensitivity to initial conditions is that if we start with
a limited amount of information about the system (as is usually the case
in practice), then beyond a certain time the system is no longer
predictable. This is most familiar in the case of weather, which is
generally predictable only about a week ahead.[21] Of course, this does
not mean that we cannot say anything about events far in the future;
some restrictions on the system are present. With weather, we know that
the temperature will not naturally reach 100 °C or fall to -130 °C on
earth (during the current geologic era), but we can't say exactly what
day will have the hottest temperature of the year.
[Figure: Lorenz equations used to generate plots for the y variable. The initial conditions for x and z were kept the same but those for y were changed between 1.001, 1.0001 and 1.00001. The values of the three parameters were 45.92, 16 and 4 respectively. As can be seen, even the slightest difference in initial values causes significant changes after about 12 seconds of evolution in the three cases. This is an example of sensitive dependence on initial conditions.]

In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions. Given two starting trajectories in the phase space that are infinitesimally close, with initial separation $\delta\mathbf{Z}_0$, the two trajectories end up diverging at a rate given by

$$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$$

where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents exists. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
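As a concrete illustration of the formula above, the following sketch (an addition for illustration, not part of the original article) estimates the maximal Lyapunov exponent of the logistic map x → 4x(1 − x) in two ways: by averaging ln|f′(x)| along an orbit, and by repeatedly renormalising the separation of two nearby orbits. Both estimates should come out near ln 2 ≈ 0.693, the known value for this map.

```python
import math

def f(x):
    """Logistic map at r = 4."""
    return 4.0 * x * (1.0 - x)

def lyapunov_from_derivative(x0, n=100_000, burn_in=1_000):
    """Average ln|f'(x)| = ln|4(1 - 2x)| along the orbit starting at x0."""
    x = x0
    for _ in range(burn_in):
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(max(abs(4.0 * (1.0 - 2.0 * x)), 1e-300))
        x = f(x)
    return total / n

def lyapunov_from_divergence(x0, delta0=1e-9, n=100_000):
    """Track a nearby orbit and renormalise its separation to delta0 each step."""
    x, y = x0, x0 + delta0
    total = 0.0
    for _ in range(n):
        x, y = f(x), f(y)
        d = max(abs(y - x), 1e-300)      # guard against an exact-zero separation
        total += math.log(d / delta0)
        y = x + delta0 if y >= x else x - delta0   # reset the separation length
    return total / n

print("derivative estimate :", lyapunov_from_derivative(0.123))
print("divergence estimate :", lyapunov_from_divergence(0.123))
print("ln 2                =", math.log(2.0))
```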

Also, other properties relate to sensitivity of initial conditions, such as measure-theoretical mixing (as discussed
in ergodic theory) and properties of a K-system.[6]

Topological mixing
Topological mixing (or topological transitivity) means that the system evolves over time so that any given region or open set of its phase space
eventually overlaps with any other given region. This mathematical
concept of "mixing" corresponds to the standard intuition, and the
mixing of colored dyes or fluids is an example of a chaotic system.

Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However,
sensitive dependence on initial conditions alone does not give chaos.
For example, consider the simple dynamical system produced by
repeatedly doubling an initial value. This system has sensitive
dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.

[Figure: Six iterations of a set of states passed through the logistic map. The first iterate (initial condition) essentially forms a circle, and the animation shows the first to the sixth iteration of the circular initial conditions. Mixing occurs as the iterations progress, and by the sixth iteration the points are almost completely scattered in the phase space; had we progressed further, the mixing would have become homogeneous and irreversible. The logistic map has equation $x_{n+1} = 4 x_n (1 - x_n)$. To expand the state space of the logistic map into two dimensions, a second state $y$ was created as $y_{n+1} = (x_n + y_n) \bmod 1$.]

Density of periodic orbits

For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits.[22] The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, $\tfrac{5-\sqrt{5}}{8} \to \tfrac{5+\sqrt{5}}{8} \to \tfrac{5-\sqrt{5}}{8}$ (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).[23]

Sharkovskii's theorem is the basis of the Li and Yorke[24] (1975) proof that any one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
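A quick numerical check of the period-2 orbit quoted above (an illustrative snippet, not from the original article); it also confirms the orbit is unstable by computing its multiplier f′(a)·f′(b):

```python
import math

def f(x):
    """Logistic map at r = 4."""
    return 4.0 * x * (1.0 - x)

a = (5 - math.sqrt(5)) / 8      # ≈ 0.3454915
b = (5 + math.sqrt(5)) / 8      # ≈ 0.9045085

print(f(a))            # ≈ 0.9045085: f sends a to b ...
print(f(b))            # ≈ 0.3454915: ... and b back to a, so {a, b} has period 2
print(f(f(a)) - a)     # ~0 up to floating-point error

# Multiplier of the 2-cycle: f'(a) * f'(b), with f'(x) = 4(1 - 2x).
multiplier = 4 * (1 - 2 * a) * 4 * (1 - 2 * b)
print(multiplier)      # -4.0; |multiplier| > 1, so the orbit is unstable
```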

Strange attractors

Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic
behavior is found only in a subset of phase space. The cases of most
interest arise when the chaotic behavior takes place on an attractor,
since then a large set of initial conditions leads to orbits that converge to
this chaotic region.[25]

An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it was not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.

[Figure: The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 also displays topological mixing. Here, the blue region is transformed by the dynamics first to the purple region, then to the pink and red regions, and eventually to a cloud of vertical lines scattered across the space.]

Unlike fixed-point attractors and limit cycles, the attractors that arise
from chaotic systems, known as strange attractors, have great detail and
complexity. Strange attractors occur in both continuous dynamical
systems (such as the Lorenz system) and in some discrete systems (such
as the Hénon map). Other discrete dynamical systems have a repelling
structure called a Julia set, which forms at the boundary between basins
of attraction of fixed points. Julia sets can be thought of as strange
repellers. Both strange attractors and Julia sets typically have a fractal
structure, and the fractal dimension can be calculated for them.
[Figure: The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.]

Minimum complexity of a chaotic system

Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.

[Figure: Bifurcation diagram of the logistic map x → r x (1 – x). Each vertical slice shows the attractor for a specific value of r. The diagram displays period-doubling as r increases, eventually producing chaos.]

The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:

$$\frac{dx}{dt} = \sigma (y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z,$$

where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott[26] found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel[27][28] showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
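The sketch below (added for illustration) integrates these three equations with a classical fourth-order Runge-Kutta step and tracks two initial conditions that differ by 1e-8; the parameter values (σ, ρ, β) = (10, 28, 8/3) match the classic values quoted in the Lorenz-attractor figure caption at the top of the article (there written r = 28, σ = 10, b = 8/3), while the step size and time horizon are arbitrary choices.

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations (dx/dt, dy/dt, dz/dt)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    def shifted(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shifted(state, k1, dt / 2))
    k3 = lorenz(shifted(state, k2, dt / 2))
    k4 = lorenz(shifted(state, k3, dt))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

dt = 0.01
p = (1.0, 1.0, 1.0)            # one initial condition
q = (1.0, 1.0, 1.0 + 1e-8)     # the same, perturbed by 1e-8 in z

for step in range(1, 3001):    # integrate to t = 30
    p, q = rk4_step(p, dt), rk4_step(q, dt)
    if step % 500 == 0:
        sep = sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
        print(f"t = {step * dt:5.1f}   separation = {sep:.3e}")
```

The printed separation grows roughly exponentially from 1e-8 until it saturates at the size of the attractor, which is the sensitive dependence described above.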

While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane
cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can exhibit chaotic
behavior.[29] Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite
dimensional.[30] A theory of linear chaos is being developed in a branch of mathematical analysis known as
functional analysis.

Jerk systems

In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form

$$J\left(\dddot{x}, \ddot{x}, \dot{x}, x\right) = 0$$

are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of
three first order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for
solutions showing chaotic behaviour. This motivates mathematical interest in jerk systems. Systems involving a
fourth or higher derivative are called accordingly hyperjerk systems.[31]

A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic
circuits can model solutions. These circuits are known as jerk circuits.

One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-
known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a
system of three first-order differential equations that can combine into a single (although rather complicated)
jerk equation. Nonlinear jerk systems are in a sense minimally complex systems to show chaotic behaviour;
there is no chaotic system involving only two first-order, ordinary differential equations (the system resulting in
an equation of second order only).

An example of a jerk equation with nonlinearity in the magnitude of x is:

$$\frac{d^3 x}{dt^3} + A\frac{d^2 x}{dt^2} + \frac{dx}{dt} - |x| + 1 = 0.$$

Here, A is an adjustable parameter. This equation has a chaotic solution for A = 3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:

[Figure: jerk circuit diagram (not reproduced). The original caption states that all resistors are of equal value except one, and all capacitors are of equal size, and gives the circuit's dominant frequency; the specific values are not legible in this copy. The output of op amp 0 corresponds to the x variable, the output of 1 corresponds to the first derivative of x, and the output of 2 corresponds to the second derivative.]
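To illustrate the equivalence mentioned above between a jerk equation and three first-order ODEs, here is a minimal sketch that integrates d3x/dt3 = J(d2x/dt2, dx/dt, x) with crude Euler steps. The particular J and the value A = 0.6 follow the |x| example as reconstructed above and should be read as an assumption, not a verified reproduction of the original circuit's equation.

```python
A = 0.6  # adjustable parameter from the example above (assumed form of J)

def J(x2, x1, x0):
    """Assumed jerk function: d3x/dt3 = -A*x'' - x' + |x| - 1."""
    return -A * x2 - x1 + abs(x0) - 1.0

def euler_step(u, dt):
    """One explicit Euler step of the equivalent first-order system
    u = (x, dx/dt, d2x/dt2)."""
    x0, x1, x2 = u
    return (x0 + dt * x1,
            x1 + dt * x2,
            x2 + dt * J(x2, x1, x0))

u = (0.0, 0.0, 0.0)     # arbitrary initial condition
dt = 0.001
for n in range(1, 200_001):     # integrate to t = 200 with a crude integrator
    u = euler_step(u, dt)
    if n % 40_000 == 0:
        print(f"t = {n * dt:6.1f}   x = {u[0]: .4f}")
```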

Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four
conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of
Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays
of Josephson junctions.[32]
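A minimal sketch of the Kuramoto model mentioned above: N phase oscillators evolve as dθi/dt = ωi + (K/N) Σj sin(θj − θi), and for sufficiently strong coupling K the order parameter r = |mean of exp(iθ)| rises toward 1, signalling synchronization. The oscillator count, coupling strength and frequency spread below are arbitrary illustrative choices.

```python
import cmath
import math
import random

random.seed(0)
N, K, dt = 50, 2.0, 0.05      # oscillators, coupling strength, time step
omega = [random.gauss(0.0, 0.5) for _ in range(N)]            # natural frequencies
theta = [random.uniform(0.0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    """r = |mean of exp(i*theta)|: near 0 when incoherent, near 1 when synchronized."""
    return abs(sum(cmath.exp(1j * t) for t in phases) / len(phases))

for step in range(1001):
    if step % 200 == 0:
        print(f"t = {step * dt:5.1f}   r = {order_parameter(theta):.3f}")
    drift = [K / N * sum(math.sin(tj - ti) for tj in theta) for ti in theta]
    theta = [(ti + dt * (wi + di)) % (2 * math.pi)
             for ti, wi, di in zip(theta, omega, drift)]
```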

History
An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem,
he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed
point.[33][34] In 1898 Jacques Hadamard published an influential study of the chaotic motion of a free particle
gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards".[35] Hadamard
was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one
another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the
topic of nonlinear differential equations, were carried out by George David
Birkhoff,[36] Andrey Nikolaevich Kolmogorov,[37][38][39] Mary Lucy
Cartwright and John Edensor Littlewood,[40] and Stephen Smale.[41] Except for
Smale, these studies were all directly inspired by physics: the three-body
problem in the case of Birkhoff, turbulence and astronomical problems in the
case of Kolmogorov, and radio engineering in the case of Cartwright and
Littlewood. Although chaotic planetary motion had not been observed,
experimentalists had encountered turbulence in fluid motion and nonperiodic
oscillation in radio circuits without the benefit of a theory to explain what they Barnsley fern created using
were seeing. the chaos game. Natural
forms (ferns, clouds,
Despite initial insights in the first half of the twentieth century, chaos theory mountains, etc.) may be
became formalized as such only after mid-century, when it first became evident recreated through an iterated
to some scientists that linear theory, the prevailing system theory at that time, function system (IFS).
simply could not explain the observed behavior of certain experiments like that
of the logistic map. What had been attributed to measure imprecision and
simple "noise" was considered by chaos theorists as a full component of the studied systems.

The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics
of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to
do by hand. Electronic computers made these repeated calculations practical, while figures and images made it
possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University,
Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called
"randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not
allow him to report his findings until 1970.[42][43]

Edward Lorenz was an early pioneer of the theory. His interest in chaos
came about accidentally through his work on weather prediction in
1961.[8] Lorenz was using a simple digital computer, a Royal McBee
LGP-30, to run his weather simulation. He wanted to see a sequence of
data again, and to save time he started the simulation in the middle of its
course. He did this by entering a printout of the data that corresponded
to conditions in the middle of the original simulation. To his surprise,
the weather the machine began to predict was completely different from
the previous calculation. Lorenz tracked this down to the computer
printout. The computer worked with 6-digit precision, but the printout
rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome.[44] Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modelling cannot, in general, make precise long-term weather predictions.

[Figure: Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.]

In 1963, Benoit Mandelbrot found recurring patterns at every scale in data on cotton prices.[45] Beforehand he had studied information theory and concluded noise was patterned like a Cantor set: on any scale the
proportion of noise-containing periods to error-free periods was a
constant – thus errors were inevitable and must be planned for by incorporating redundancy.[46] Mandelbrot
described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in
which persistence of a value can occur for a while, yet suddenly change afterwards).[47][48] This challenged the
idea that changes in price were normally distributed. In 1967, he published "How long is the coast of Britain?
Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the
measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small
measuring device.[49] Arguing that a ball of twine appears as a point when viewed from far away (0-
dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he
argued that the dimensions of an object are relative to the observer and may be fractional. An object whose
irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge,
the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and
has a fractal dimension of circa 1.2619). In 1982 Mandelbrot published The Fractal Geometry of Nature, which
became a classic of chaos theory.[50] Biological systems such as the branching of the circulatory and bronchial
systems proved to fit a fractal model.[51]

In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by
David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw,
and the meteorologist Edward Lorenz. The following year, independently Pierre Coullet and Charles Tresser
with the article "Iterations d'endomorphismes et groupe de renormalisation" and Mitchell Feigenbaum with the
article "Quantitative Universality for a Class of Nonlinear Transformations" described logistic maps.[52][53]
They notably discovered the universality in chaos, permitting the application of chaos theory to many different
phenomena.

In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his
experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard
convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for
their inspiring achievements.[54]

In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the
Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo
Huberman presented a mathematical model of the eye tracking disorder among schizophrenics.[55] This led to a
renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of
pathological cardiac cycles.

In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters[56] describing
for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity
arises in nature.

Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations
have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant
behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects
examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural
phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-
invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes,
and the Omori law[57] describing the frequency of aftershocks), solar flares, fluctuations in economic systems
such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires,
landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical
mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay
Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that
another phenomenon that should be considered an example of SOC is the occurrence of wars. These
investigations of SOC have included both attempts at modelling (either developing new models or adapting
existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence
and/or characteristics of natural scaling laws.

In the same year, James Gleick published Chaos: Making a New Science, which became a best-seller and
introduced the general principles of chaos theory as well as its history to the broad public, though his history
under-emphasized important Soviet contributions.[58] Initially the domain of a few, isolated individuals, chaos
theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of
nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of
Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory
was an example of such a shift, a thesis upheld by Gleick.

The availability of cheaper, more powerful computers broadens the applicability of chaos theory. Currently,
chaos theory remains an active area of research,[59] involving many different disciplines (mathematics,
topology, physics,[60] social systems, population modeling, biology, meteorology, astrophysics, information
theory, computational neuroscience, etc.).

Applications
Chaos theory was born from observing weather patterns, but it has
become applicable to a variety of other situations. Some areas
benefiting from chaos theory today are geology, mathematics,
microbiology, biology, computer science, economics,[62][63][64]
engineering,[65] finance,[66][67] algorithmic trading,[68][69][70]
meteorology, philosophy, physics,[71][72][73] politics, population
dynamics,[74] psychology,[10] and robotics. A few categories are listed
below with examples, but this is by no means a comprehensive list as
new applications are appearing.
[Figure: A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.[61]]

Cryptography
Chaos theory has been used for many years in cryptography. In the past
few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic
primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random
number generators, stream ciphers, watermarking and steganography.[75] The majority of these algorithms are
based on uni-modal chaotic maps and a big portion of these algorithms use the control parameters and the
initial condition of the chaotic maps as their keys.[76] From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms.[77] One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory.[78] Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information.[79] Many of the DNA-chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.[80][81][82]
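As a toy illustration of the idea that a chaotic map's control parameter and initial condition can play the role of a key (a deliberately simplified sketch for intuition only; it is not one of the published schemes cited above and offers no real security):

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n pseudo-random bytes by iterating the logistic map.
    The pair (x0, r) acts as the secret key in this toy scheme."""
    x = x0
    for _ in range(burn_in):             # discard transient iterations
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)   # crude quantization of the state
    return bytes(out)

def toy_encrypt(message: bytes, x0: float, r: float) -> bytes:
    ks = logistic_keystream(x0, r, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

plaintext = b"chaos-based toy cipher"
ciphertext = toy_encrypt(plaintext, x0=0.3141592, r=3.99)
recovered = toy_encrypt(ciphertext, x0=0.3141592, r=3.99)  # XOR is its own inverse
print(ciphertext.hex())
print(recovered == plaintext)   # True: the same key recovers the message
```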

Robotics

Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-
error type of refinement to interact with their environment, chaos theory has been used to build a predictive
model.[83] Chaotic dynamics have been exhibited by passive walking biped robots.[84]

Biology

For over a hundred years, biologists have been keeping track of populations of different species with population
models. Most models are continuous, but recently scientists have been able to implement chaotic models in
certain populations.[85] For example, a study on models of Canadian lynx showed there was chaotic behavior in
the population growth.[86] Chaos can also be found in ecological systems, such as hydrology. While a chaotic
model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens

of chaos theory.[87] Another biological application is found in cardiotocography. Fetal surveillance is a delicate
balance of obtaining accurate information while being as noninvasive as possible. Better models of warning
signs of fetal hypoxia can be obtained through chaotic modeling.[88]
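For intuition about chaotic population models, here is a sketch of the Ricker map N(t+1) = N(t)·exp(r(1 − N(t)/K)), a standard discrete population model; it is used here purely as an illustration and is not the specific model from the lynx study cited above. Raising the growth parameter r moves the dynamics from a steady state through cycles into chaos.

```python
import math

def ricker(n, r, k=1.0):
    """Ricker population model: next population from current one."""
    return n * math.exp(r * (1.0 - n / k))

def run(r, n0=0.1, steps=60, keep=8):
    """Iterate the model and return the last few generations, after transients."""
    n = n0
    traj = []
    for _ in range(steps):
        n = ricker(n, r)
        traj.append(n)
    return traj[-keep:]

print("r = 1.5 (steady):  ", [round(v, 3) for v in run(1.5)])
print("r = 2.3 (cycling): ", [round(v, 3) for v in run(2.3)])
print("r = 3.0 (chaotic): ", [round(v, 3) for v in run(3.0)])
```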

Other areas

In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm
optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by
introducing chaos, which keeps the simulations from getting stuck.[89] In celestial mechanics, especially when
observing asteroids, applying chaos theory leads to better predictions about when these objects will approach
Earth and other planets.[90] Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical
engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory.[91] Closer to
home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths.
Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic
tendencies that, when properly modeled, can be predicted fairly accurately.[92]

Chaos theory can be applied outside of the natural sciences. By adapting a model of career counseling to
include a chaotic interpretation of the relationship between employees and the job market, better suggestions
can be made to people struggling with career decisions.[93] Modern organizations are increasingly seen as open
complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces
that may contribute chaos. The chaos metaphor—used in verbal theories—grounded on mathematical models
and psychological aspects of human behavior provides helpful insights to describing the complexity of small
work groups, that go beyond the metaphor itself.[94]

It is possible that economic models can also be improved through an application of chaos
theory, but predicting the health of an
economic system and what factors influence
it most is an extremely complex task.[95]
Economic and financial systems are
fundamentally different from those in the
classical natural sciences since the former are
inherently stochastic in nature, as they result
from the interactions of people, and thus pure
deterministic models are unlikely to provide
accurate representations of the data. The
empirical literature that tests for chaos in
economics and finance presents very mixed
results, in part due to confusion between
specific tests for chaos and more general tests
for non-linear relationships.[96]

Traffic forecasting also benefits from applications of chaos theory. Better predictions of when traffic will occur
lets measures be taken to disperse it before it would have occurred. Combining chaos theory principles with a
few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic
model at right).[97]

Chaos theory can be applied in psychology. For example, in modeling group behavior in which heterogeneous
members may behave as if sharing to different degrees what in Wilfred Bion's theory is a basic assumption, the
group dynamics is the result of the individual dynamics of the members: each individual reproduces the group
dynamics in a different scale, and the chaotic behavior of the group is reflected in each member.[98]

Chaos theory has been applied to environmental water cycle data (aka hydrological data), such as rainfall and
streamflow[99]. These studies have yielded controversial results, because the methods for detecting a chaotic
signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent
studies and meta-analyses called those studies into question and provided explanations for why these datasets
are not likely to have low-dimension chaotic dynamics[100].

See also
Examples of chaotic systems

Advected contours
Arnold's cat map
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Logistic map
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt A Whirl
Other related topics

Amplitude death
Anosov diffeomorphism
Bifurcation theory
Butterfly effect
Catastrophe theory
Chaos theory in organizational development
Chaos machine
Chaotic mixing
Chaotic scattering
Complexity
Control of chaos
Edge of chaos
Emergence
Fractal
Julia set
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Synchronization of chaos
Unintended consequence
People

Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Mitchell Feigenbaum
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Andrey Nikolaevich Kolmogorov
Edward Lorenz
Aleksandr Lyapunov
Ian Malcolm (Jurassic Park character)
Benoit Mandelbrot
Norman Packard
Henri Poincaré
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky
References
1. Boeing (2015). "Chaos Theory and the Logistic Map" (http://geoffboeing.com/2015/03/chaos-theory-logistic-map/).

Retrieved 2015-07-16.
2. Kellert, Stephen H. (1993). In the Wake of Chaos: Unpredictable Order in Dynamical Systems. University of
Chicago Press. p. 32. ISBN 0-226-42976-8.
3. Boeing, G. (2016). "Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the
Limits of Prediction" (http://geoffboeing.com/publications/nonlinear-chaos-fractals-prediction/). Systems. 4 (4): 37.
doi:10.3390/systems4040037 (https://doi.org/10.3390%2Fsystems4040037). Retrieved 2016-12-02.
4. Kellert 1993, p. 56
5. Kellert 1993, p. 62
6. Werndl, Charlotte (2009). "What are the New Implications of Chaos for Unpredictability?" (http://bjps.oxfordjournal
s.org/cgi/content/abstract/60/1/195). The British Journal for the Philosophy of Science. 60 (1): 195–220.
doi:10.1093/bjps/axn053 (https://doi.org/10.1093%2Fbjps%2Faxn053).
7. Danforth, Christopher M. (April 2013). "Chaos in an Atmosphere Hanging on a Wall" (http://mpe2013.org/2013/03/
17/chaos-in-an-atmosphere-hanging-on-a-wall). Mathematics of Planet Earth 2013. Retrieved 4 April 2013.
8. Lorenz, Edward N. (1963). "Deterministic non-periodic flow". Journal of the Atmospheric Sciences. 20 (2): 130–
141. Bibcode:1963JAtS...20..130L (http://adsabs.harvard.edu/abs/1963JAtS...20..130L). doi:10.1175/1520-
0469(1963)020<0130:DNF>2.0.CO;2 (https://doi.org/10.1175%2F1520-0469%281963%29020%3C0130%3ADN
F%3E2.0.CO%3B2).
9. Ivancevic, Vladimir G.; Tijana T. Ivancevic (2008). Complex nonlinearity: chaos, phase transitions, topology
change, and path integrals. Springer. ISBN 978-3-540-79356-4.
10. Safonov, Leonid A.; Tomer, Elad; Strygin, Vadim V.; Ashkenazy, Yosef; Havlin, Shlomo (2002). "Multifractal
chaotic attractors in a system of delay-differential equations modeling road traffic". Chaos: An Interdisciplinary
Journal of Nonlinear Science. 12 (4): 1006. Bibcode:2002Chaos..12.1006S (http://adsabs.harvard.edu/abs/2002Chao
s..12.1006S). ISSN 1054-1500 (https://www.worldcat.org/issn/1054-1500). doi:10.1063/1.1507903 (https://doi.org/1
0.1063%2F1.1507903).
11. Hubler, A (1989). "Adaptive control of chaotic systems". Swiss Physical Society. Helvetica Physica Acta 62: 339–
342.
12. Sync: The Emerging Science of Spontaneous Order, Steven Strogatz, Hyperion, New York, 2003, pages 189-190.
13. Definition of chaos at Wiktionary;
14. Hasselblatt, Boris; Anatole Katok (2003). A First Course in Dynamics: With a Panorama of Recent Developments.
Cambridge University Press. ISBN 0-521-58750-6.
15. Elaydi, Saber N. (1999). Discrete Chaos. Chapman & Hall/CRC. p. 117. ISBN 1-58488-002-3.
16. Basener, William F. (2006). Topology and its applications. Wiley. p. 42. ISBN 0-471-68755-3.
17. Vellekoop, Michel; Berglund, Raoul (April 1994). "On Intervals, Transitivity = Chaos". The American Mathematical
Monthly. 101 (4): 353–5. JSTOR 2975629 (https://www.jstor.org/stable/2975629). doi:10.2307/2975629 (https://doi.
org/10.2307%2F2975629).
18. Medio, Alfredo; Lines, Marji (2001). Nonlinear Dynamics: A Primer. Cambridge University Press. p. 165. ISBN 0-
521-55874-3.
19. Ovchinnikov, I.V. (March 2016). "Introduction to Supersymmetric Theory of Stochastics". Entropy. 18 (4): 108.
doi:10.3390/e18040108 (https://doi.org/10.3390%2Fe18040108).
20. Ovchinnikov, I.V.; Schwartz, R. N.; Wang, K. L. (2016). "Topological supersymmetry breaking: Definition and
stochastic generalization of chaos and the limit of applicability of statistics". Modern Physics Letters B. 30: 1650086.
doi:10.1142/S021798491650086X (https://doi.org/10.1142%2FS021798491650086X).
21. Watts, Robert G. (2007). Global Warming and the Future of the Earth. Morgan & Claypool. p. 17.
22. Devaney 2003
23. Alligood, Sauer & Yorke 1997
24. Li, T.Y.; Yorke, J.A. (1975). "Period Three Implies Chaos" (https://web.archive.org/web/20091229042210/http://pb.
math.univ.gda.pl/chaos/pdf/li-yorke.pdf) (PDF). American Mathematical Monthly. 82 (10): 985–92.
Bibcode:1975AmMM...82..985L (http://adsabs.harvard.edu/abs/1975AmMM...82..985L). doi:10.2307/2318254 (http
s://doi.org/10.2307%2F2318254). Archived from the original (http://pb.math.univ.gda.pl/chaos/pdf/li-yorke.pdf)
(PDF) on 2009-12-29.
25. Strelioff, Christopher; et., al. (2006). "Medium-Term Prediction of Chaos". Phys.Rev.Lett. 96.
doi:10.1103/PhysRevLett.96.044101 (https://doi.org/10.1103%2FPhysRevLett.96.044101).
26. Sprott, J.C. (1997). "Simplest dissipative chaotic flow". Physics Letters A. 228 (4–5): 271–274.
Bibcode:1997PhLA..228..271S (http://adsabs.harvard.edu/abs/1997PhLA..228..271S). doi:10.1016/S0375-
9601(97)00088-1 (https://doi.org/10.1016%2FS0375-9601%2897%2900088-1).
27. Fu, Z.; Heidel, J. (1997). "Non-chaotic behaviour in three-dimensional quadratic systems". Nonlinearity. 10 (5):
1289–1303. Bibcode:1997Nonli..10.1289F (http://adsabs.harvard.edu/abs/1997Nonli..10.1289F). doi:10.1088/0951-
7715/10/5/014 (https://doi.org/10.1088%2F0951-7715%2F10%2F5%2F014).
28. Heidel, J.; Fu, Z. (1999). "Nonchaotic behaviour in three-dimensional quadratic systems II. The conservative case".
Nonlinearity. 12 (3): 617–633. Bibcode:1999Nonli..12..617H (http://adsabs.harvard.edu/abs/1999Nonli..12..617H).
doi:10.1088/0951-7715/12/3/012 (https://doi.org/10.1088%2F0951-7715%2F12%2F3%2F012).
29. Rosario, Pedro (2006). Underdetermination of Science: Part I. Lulu.com. ISBN 1411693914.
30. Bonet, J.; Martínez-Giménez, F.; Peris, A. (2001). "A Banach space which admits no chaotic operator". Bulletin of
the London Mathematical Society. 33 (2): 196–8. doi:10.1112/blms/33.2.196 (https://doi.org/10.1112%2Fblms%2F3
3.2.196).
31. K. E. Chlouverakis and J. C. Sprott, Chaos Solitons & Fractals 28, 739-746 (2005), Chaotic Hyperjerk Systems,
http://sprott.physics.wisc.edu/pubs/paper297.htm
32. Steven Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion, 2003.
33. Poincaré, Jules Henri (1890). "Sur le problème des trois corps et les équations de la dynamique. Divergence des
séries de M. Lindstedt". Acta Mathematica. 13: 1–270. doi:10.1007/BF02392506 (https://doi.org/10.1007%2FBF023
92506).
34. Diacu, Florin; Holmes, Philip (1996). Celestial Encounters: The Origins of Chaos and Stability. Princeton University
Press.
35. Hadamard, Jacques (1898). "Les surfaces à courbures opposées et leurs lignes géodesiques". Journal de
Mathématiques Pures et Appliquées. 4: 27–73.
36. George D. Birkhoff, Dynamical Systems, vol. 9 of the American Mathematical Society Colloquium Publications
(Providence, Rhode Island: American Mathematical Society, 1927)
37. Kolmogorov, Andrey Nikolaevich (1941). "Local structure of turbulence in an incompressible fluid for very large
Reynolds numbers". Doklady Akademii Nauk SSSR. 30 (4): 301–5. Bibcode:1941DoSSR..30..301K (http://adsabs.har
vard.edu/abs/1941DoSSR..30..301K). Reprinted in: Kolmogorov, A. N. (1991). "The Local Structure of Turbulence
in Incompressible Viscous Fluid for Very Large Reynolds Numbers". Proceedings of the Royal Society A. 434
(1890): 9–13. Bibcode:1991RSPSA.434....9K (http://adsabs.harvard.edu/abs/1991RSPSA.434....9K).
doi:10.1098/rspa.1991.0075 (https://doi.org/10.1098%2Frspa.1991.0075).
38. Kolmogorov, A. N. (1941). "On degeneration of isotropic turbulence in an incompressible viscous liquid". Doklady
Akademii Nauk SSSR. 31 (6): 538–540. Reprinted in: Kolmogorov, A. N. (1991). "Dissipation of Energy in the
Locally Isotropic Turbulence". Proceedings of the Royal Society A. 434 (1890): 15–17.
Bibcode:1991RSPSA.434...15K (http://adsabs.harvard.edu/abs/1991RSPSA.434...15K). doi:10.1098/rspa.1991.0076
(https://doi.org/10.1098%2Frspa.1991.0076).
39. Kolmogorov, A. N. (1954). "Preservation of conditionally periodic movements with small change in the Hamiltonian
function". Doklady Akademii Nauk SSSR. Lecture Notes in Physics. 98: 527–530. Bibcode:1979LNP....93...51K (htt
p://adsabs.harvard.edu/abs/1979LNP....93...51K). ISBN 3-540-09120-3. doi:10.1007/BFb0021737 (https://doi.org/1
0.1007%2FBFb0021737). See also Kolmogorov–Arnold–Moser theorem
40. Cartwright, Mary L.; Littlewood, John E. (1945). "On non-linear differential equations of the second order, I: The
equation y" + k(1−y2)y' + y = bλkcos(λt + a), k large". Journal of the London Mathematical Society. 20 (3): 180–9.
doi:10.1112/jlms/s1-20.3.180 (https://doi.org/10.1112%2Fjlms%2Fs1-20.3.180). See also: Van der Pol oscillator
41. Smale, Stephen (January 1960). "Morse inequalities for a dynamical system". Bulletin of the American Mathematical
Society. 66: 43–49. doi:10.1090/S0002-9904-1960-10386-2 (https://doi.org/10.1090%2FS0002-9904-1960-10386-2).
42. Abraham & Ueda 2001, See Chapters 3 and 4
43. Sprott 2003, p. 89 (https://books.google.com/books?id=SEDjdjPZ158C&pg=PA89)
44. Gleick, James (1987). Chaos: Making a New Science. London: Cardinal. p. 17. ISBN 0-434-29554-X.
45. Mandelbrot, Benoît (1963). "The variation of certain speculative prices". Journal of Business. 36 (4): 394–419.
JSTOR 2350970 (https://www.jstor.org/stable/2350970). doi:10.1086/294632 (https://doi.org/10.1086%2F294632).
46. Berger J.M.; Mandelbrot B. (1963). "A new model for error clustering in telephone circuits". IBM Journal of
Research and Development. 7: 224–236. doi:10.1147/rd.73.0224 (https://doi.org/10.1147%2Frd.73.0224).
47. Mandelbrot, B. (1977). The Fractal Geometry of Nature. New York: Freeman. p. 248.
48. See also: Mandelbrot, Benoît B.; Hudson, Richard L. (2004). The (Mis)behavior of Markets: A Fractal View of Risk,
Ruin, and Reward. New York: Basic Books. p. 201.
49. Mandelbrot, Benoît (5 May 1967). "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional
Dimension". Science. 156 (3775): 636–8. Bibcode:1967Sci...156..636M (http://adsabs.harvard.edu/abs/1967Sci...15
6..636M). PMID 17837158 (https://www.ncbi.nlm.nih.gov/pubmed/17837158). doi:10.1126/science.156.3775.636 (h
ttps://doi.org/10.1126%2Fscience.156.3775.636).
50. Mandelbrot, B. (1982). The Fractal Geometry of Nature. New York: Macmillan. ISBN 0716711869.
51. Buldyrev, S.V.; Goldberger, A.L.; Havlin, S.; Peng, C.K.; Stanley, H.E. (1994). "Fractals in Biology and Medicine:
From DNA to the Heartbeat". In Bunde, Armin; Havlin, Shlomo. Fractals in Science. Springer. pp. 49–89. ISBN 3-
540-56220-6.
52. Feigenbaum, Mitchell (July 1978). "Quantitative universality for a class of nonlinear transformations". Journal of
Statistical Physics. 19 (1): 25–52. Bibcode:1978JSP....19...25F (http://adsabs.harvard.edu/abs/1978JSP....19...25F).
doi:10.1007/BF01020332 (https://doi.org/10.1007%2FBF01020332).
53. Coullet, Pierre, and Charles Tresser. "Iterations d'endomorphismes et groupe de renormalisation." Le Journal de
Physique Colloques 39.C5 (1978): C5-25
54. "The Wolf Prize in Physics in 1986." (http://www.wolffund.org.il/cat.asp?id=25&cat_title=PHYSICS).

55. Huberman, B.A. (July 1987). "A Model for Dysfunctions in Smooth Pursuit Eye Movement". Annals of the New
York Academy of Sciences. 504 Perspectives in Biological Dynamics and Theoretical Medicine: 260–273.
Bibcode:1987NYASA.504..260H (http://adsabs.harvard.edu/abs/1987NYASA.504..260H). doi:10.1111/j.1749-
6632.1987.tb48737.x (https://doi.org/10.1111%2Fj.1749-6632.1987.tb48737.x).
56. Bak, Per; Tang, Chao; Wiesenfeld, Kurt; Tang; Wiesenfeld (27 July 1987). "Self-organized criticality: An
explanation of the 1/f noise". Physical Review Letters. 59 (4): 381–4. Bibcode:1987PhRvL..59..381B (http://adsabs.h
arvard.edu/abs/1987PhRvL..59..381B). doi:10.1103/PhysRevLett.59.381 (https://doi.org/10.1103%2FPhysRevLett.5
9.381). However, the conclusions of this article have been subject to dispute. "?" (https://web.archive.org/web/20071
214033929/https://www.nslij-genetics.org/wli/1fnoise/1fnoise_square.html). Archived from the original (http://www.
nslij-genetics.org/wli/1fnoise/1fnoise_square.html) on 2007-12-14.. See especially: Laurson, Lasse; Alava, Mikko J.;
Zapperi, Stefano (15 September 2005). "Letter: Power spectra of self-organized critical sand piles". Journal of
Statistical Mechanics: Theory and Experiment. 0511. L001.
57. Omori, F. (1894). "On the aftershocks of earthquakes". Journal of the College of Science, Imperial University of
Tokyo. 7: 111–200.
58. Gleick, James (August 26, 2008). Chaos: Making a New Science. Penguin Books. ISBN 0143113453.
59. Motter, A. E.; Campbell, D. K. (2013). "Chaos at fifty" (http://www.physicstoday.org/resource/1/phtoad/v66/i5/p27_
s1?bypassSSO=1). Phys. Today. 66 (5): 27–33. doi:10.1063/pt.3.1977 (https://doi.org/10.1063%2Fpt.3.1977).
60. Hubler, A.; Foster, G.; Phelps, K. (2007). "Managing chaos: Thinking out of the box". Complexity.
doi:10.1002/cplx.20159 (https://doi.org/10.1002%2Fcplx.20159).
61. Stephen Coombes (February 2009). "The Geometry and Pigmentation of Seashells" (https://www.maths.nottingham.
ac.uk/personal/sc/pdfs/Seashells09.pdf) (PDF). www.maths.nottingham.ac.uk. University of Nottingham. Retrieved
2013-04-10.
62. Kyrtsou C.; Labys W. (2006). "Evidence for chaotic dependence between US inflation and commodity prices".
Journal of Macroeconomics. 28 (1): 256–266. doi:10.1016/j.jmacro.2005.10.019 (https://doi.org/10.1016%2Fj.jmacr
o.2005.10.019).
63. Kyrtsou C., Labys W.; Labys (2007). "Detecting positive feedback in multivariate time series: the case of metal
prices and US inflation". Physica A. 377 (1): 227–229. Bibcode:2007PhyA..377..227K (http://adsabs.harvard.edu/ab
s/2007PhyA..377..227K). doi:10.1016/j.physa.2006.11.002 (https://doi.org/10.1016%2Fj.physa.2006.11.002).
64. Kyrtsou, C.; Vorlow, C. (2005). "Complex dynamics in macroeconomics: A novel approach". In Diebolt, C.;
Kyrtsou, C. New Trends in Macroeconomics. Springer Verlag.
65. Applying Chaos Theory to Embedded Applications (http://www.dspdesignline.com/218101444;jsessionid=Y0BSVT
QJJTBACQSNDLOSKH0CJUNN2JVN?pgno=1)
66. Hristu-Varsakelis, D.; Kyrtsou, C. (2008). "Evidence for nonlinear asymmetric causality in US inflation, metal and
stock returns". Discrete Dynamics in Nature and Society. 2008: 1–7. doi:10.1155/2008/138547 (https://doi.org/10.11
55%2F2008%2F138547). 138547.
67. Kyrtsou, C.; M. Terraza, (2003). "Is it possible to study chaotic and ARCH behaviour jointly? Application of a noisy
Mackey-Glass equation with heteroskedastic errors to the Paris Stock Exchange returns series". Computational
Economics. 21 (3): 257–276. doi:10.1023/A:1023939610962 (https://doi.org/10.1023%2FA%3A1023939610962).
68. Williams, Bill Williams, Justine (2004). Trading chaos : maximize profits with proven technical techniques (2nd ed.).
New York: Wiley. ISBN 9780471463085.
69. Peters, Edgar E. (1994). Fractal market analysis : applying chaos theory to investment and economics (2. print. ed.).
New York u.a.: Wiley. ISBN 978-0471585244.
70. Peters, / Edgar E. (1996). Chaos and order in the capital markets : a new view of cycles, prices, and market volatility
(2nd ed.). New York: John Wiley & Sons. ISBN 978-0471139386.
71. Hubler, A.; Phelps, K. (2007). "Guiding a self-adjusting system through chaos". Complexity. doi:10.1002/cplx.20204
(https://doi.org/10.1002%2Fcplx.20204).
72. Gerig, A. (2007). "Chaos in a one-dimensional compressible flow". Physical Review E. arXiv:nlin/0701050 (https://a
rxiv.org/abs/nlin/0701050)  . doi:10.1103/PhysRevE.75.045202 (https://doi.org/10.1103%2FPhysRevE.75.045202).
73. Wotherspoon, T.; Hubler, A. (2009). "Adaptation to the Edge of Chaos in the Self-Adjusting Logistic Map". The
Journal of Physical Chemistry A. doi:10.1021/jp804420g (https://doi.org/10.1021%2Fjp804420g).
74. Dilão, R.; Domingos, T. (2001). "Periodic and Quasi-Periodic Behavior in Resource Dependent Age Structured
Population Models". Bulletin of Mathematical Biology. 63 (2): 207–230. PMID 11276524 (https://www.ncbi.nlm.ni
h.gov/pubmed/11276524). doi:10.1006/bulm.2000.0213 (https://doi.org/10.1006%2Fbulm.2000.0213).
75. Akhavan, A.; Samsudin, A.; Akhshani, A. (2011-10-01). "A symmetric image encryption scheme based on
combination of nonlinear chaotic maps" (http://www.sciencedirect.com/science/article/pii/S0016003211001104).
Journal of the Franklin Institute. 348 (8): 1797–1813. doi:10.1016/j.jfranklin.2011.05.001 (https://doi.org/10.1016%
2Fj.jfranklin.2011.05.001).
76. Behnia, S.; Akhshani, A.; Mahmodi, H.; Akhavan, A. (2008-01-01). "A novel algorithm for image encryption based
on mixture of chaotic maps" (http://www.sciencedirect.com/science/article/pii/S0960077906004681). Chaos,
Solitons & Fractals. 35 (2): 408–419. doi:10.1016/j.chaos.2006.05.011 (https://doi.org/10.1016%2Fj.chaos.2006.05.
011).
77. Akhavan, A.; Samsudin, A.; Akhshani, A. (2011-10-01). "A symmetric image encryption scheme based on
combination of nonlinear chaotic maps" (http://www.sciencedirect.com/science/article/pii/S0016003211001104).
Journal of the Franklin Institute. 348 (8): 1797–1813. doi:10.1016/j.jfranklin.2011.05.001 (https://doi.org/10.1016%
2Fj.jfranklin.2011.05.001).
78. Wang, Xingyuan; Zhao, Jianfeng (2012). "An improved key agreement protocol based on chaos". Commun.
Nonlinear Sci. Numer. Simul. 15 (12): 4052–4057. Bibcode:2010CNSNS..15.4052W (http://adsabs.harvard.edu/abs/2
010CNSNS..15.4052W). doi:10.1016/j.cnsns.2010.02.014 (https://doi.org/10.1016%2Fj.cnsns.2010.02.014).
79. Babaei, Majid (2013). "A novel text and image encryption method based on chaos theory and DNA computing".
Natural Computing. an International Journal. 12 (1): 101–107. doi:10.1007/s11047-012-9334-9 (https://doi.org/10.1
007%2Fs11047-012-9334-9).
80. Akhavan, A.; Samsudin, A.; Akhshani, A. (2017-10-01). "Cryptanalysis of an image encryption algorithm based on
DNA encoding" (http://www.sciencedirect.com/science/article/pii/S0030399216308635). Optics & Laser
Technology. 95: 94–99. doi:10.1016/j.optlastec.2017.04.022 (https://doi.org/10.1016%2Fj.optlastec.2017.04.022).
81. Xu, Ming (2017-06-01). "Cryptanalysis of an Image Encryption Algorithm Based on DNA Sequence Operation and
Hyper-chaotic System" (https://link.springer.com/article/10.1007/s13319-017-0126-y). 3D Research. 8 (2): 15.
ISSN 2092-6731 (https://www.worldcat.org/issn/2092-6731). doi:10.1007/s13319-017-0126-y (https://doi.org/10.100
7%2Fs13319-017-0126-y).
82. Liu, Yuansheng; Tang, Jie; Xie, Tao (2014-08-01). "Cryptanalyzing a RGB image encryption algorithm based on
DNA encoding and chaos map" (http://www.sciencedirect.com/science/article/pii/S0030399214000255). Optics &
Laser Technology. 60: 111–115. arXiv:1307.4279 (https://arxiv.org/abs/1307.4279)  .
doi:10.1016/j.optlastec.2014.01.015 (https://doi.org/10.1016%2Fj.optlastec.2014.01.015).
83. Nehmzow, Ulrich; Keith Walker (Dec 2005). "Quantitative description of robot–environment interaction using chaos
theory". Robotics and Autonomous Systems. 53 (3–4): 177–193. doi:10.1016/j.robot.2005.09.009 (https://doi.org/10.
1016%2Fj.robot.2005.09.009).
84. Goswami, Ambarish; Thuilot, Benoit; Espiau, Bernard (1998). "A Study of the Passive Gait of a Compass-Like
Biped Robot: Symmetry and Chaos". The International Journal of Robotics Research. 17 (12): 1282–1301.
doi:10.1177/027836499801701202 (https://doi.org/10.1177%2F027836499801701202).
85. Eduardo, Liz; Ruiz-Herrera, Alfonso (2012). "Chaos in discrete structured population models". SIAM Journal on
Applied Dynamical Systems. 11 (4): 1200–1214. doi:10.1137/120868980 (https://doi.org/10.1137%2F120868980).
86. Lai, Dejian (1996). "Comparison study of AR models on the Canadian lynx data: a close look at BDS statistic".
Computational Statistics \& Data Analysis. 22 (4): 409–423. doi:10.1016/0167-9473(95)00056-9 (https://doi.org/10.
1016%2F0167-9473%2895%2900056-9).
87. Sivakumar, B (31 January 2000). "Chaos theory in hydrology: important issues and interpretations". Journal of
Hydrology. 227 (1–4): 1–20. Bibcode:2000JHyd..227....1S (http://adsabs.harvard.edu/abs/2000JHyd..227....1S).
doi:10.1016/S0022-1694(99)00186-9 (https://doi.org/10.1016%2FS0022-1694%2899%2900186-9).
88. Bozóki, Zsolt (February 1997). "Chaos theory and power spectrum analysis in computerized cardiotocography".
European Journal of Obstetrics & Gynecology and Reproductive Biology. 71 (2): 163–168. doi:10.1016/s0301-
2115(96)02628-0 (https://doi.org/10.1016%2Fs0301-2115%2896%2902628-0).
89. Li, Mengshan; Xingyuan Huanga; Hesheng Liua; Bingxiang Liub; Yan Wub; Aihua Xiongc; Tianwen Dong (25
October 2013). "Prediction of gas solubility in polymers by back propagation artificial neural network based on self-
adaptive particle swarm optimization algorithm and chaos theory". Fluid Phase Equilibria. 356: 11–17.
doi:10.1016/j.fluid.2013.07.017 (https://doi.org/10.1016%2Fj.fluid.2013.07.017).
90. Morbidelli, A. (2001). "Chaotic diffusion in celestial mechanics". Regular & Chaotic Dynamics. 6 (4): 339–353.
doi:10.1070/rd2001v006n04abeh000182 (https://doi.org/10.1070%2Frd2001v006n04abeh000182).
91. Steven Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion, 2003
92. Dingqi, Li; Yuanping Chenga; Lei Wanga; Haifeng Wanga; Liang Wanga; Hongxing Zhou (May 2011). "Prediction
method for risks of coal and gas outbursts based on spatial chaos theory using gas desorption index of drill cuttings".
Mining Science and Technology. 21 (3): 439–443.
93. Pryor, Robert G. L.; Norman E. Aniundson; Jim E. H. Bright (June 2008). "Probabilities and Possibilities: The
Strategic Counseling Implications of the Chaos Theory of Careers". The Career Development Quarterly. 56: 309–
318. doi:10.1002/j.2161-0045.2008.tb00096.x (https://doi.org/10.1002%2Fj.2161-0045.2008.tb00096.x).
94. Dal Forno, Arianna; Merlone, Ugo (2013). "Chaotic Dynamics in Organization Theory". In Bischi, Gian Italo;
Chiarella, Carl; Shusko, Irina. Global Analysis of Dynamic Models in Economics and Finance. Springer-Verlag.
pp. 185–204. ISBN 978-3-642-29503-4.
95. Juárez, Fernando (2011). "Applying the theory of chaos and a complex model of health to establish relations among
financial indicators". Procedia Computer Science. 3: 982–986. doi:10.1016/j.procs.2010.12.161 (https://doi.org/10.1
016%2Fj.procs.2010.12.161).
96. Brooks, Chris (1998). "Chaos in foreign exchange markets: a sceptical view". Computational Economics. 11: 265–
281. ISSN 1572-9974 (https://www.worldcat.org/issn/1572-9974). doi:10.1023/A:1008650024944 (https://doi.org/1
0.1023%2FA%3A1008650024944).

97. Wang, Jin; Qixin Shi (February 2013). "Short-term traffic speed forecasting hybrid model based on Chaos–Wavelet
Analysis-Support Vector Machine theory". Transportation Research Part C: Emerging Technologies. 27: 219–232.
doi:10.1016/j.trc.2012.08.004 (https://doi.org/10.1016%2Fj.trc.2012.08.004).
98. Dal Forno, Arianna; Merlone, Ugo (2013). "Nonlinear dynamics in work groups with Bion's basic assumptions".
Nonlinear Dynamics, Psychology, and Life Sciences. 17 (2): 295–315. ISSN 1090-0578 (https://www.worldcat.org/is
sn/1090-0578).
99. "Dr. Gregory B. Pasternack - Watershed Hydrology, Geomorphology, and Ecohydraulics :: Chaos in Hydrology" (htt
p://pasternack.ucdavis.edu/research/projects/chaos-hydrology/). pasternack.ucdavis.edu. Retrieved 2017-06-12.
100. Pasternack, Gregory B. (1999-11-01). "Does the river run wild? Assessing chaos in hydrological systems" (http://ww
w.sciencedirect.com/science/article/pii/S0309170899000081). Advances in Water Resources. 23 (3): 253–260.
doi:10.1016/s0309-1708(99)00008-1 (https://doi.org/10.1016%2Fs0309-1708%2899%2900008-1).

Scientific literature
Articles

Sharkovskii, A.N. (1964). "Co-existence of cycles of a continuous mapping of the line into itself".
Ukrainian Math. J. 16: 61–71.
Li, T.Y.; Yorke, J.A. (1975). "Period Three Implies Chaos". American Mathematical Monthly. 82 (10):
985–92. Bibcode:1975AmMM...82..985L (http://adsabs.harvard.edu/abs/1975AmMM...82..985L).
doi:10.2307/2318254 (https://doi.org/10.2307%2F2318254).
Crutchfield; Tucker; Morrison; J.D.; Packard; N.H.; Shaw; R.S (December 1986). "Chaos". Scientific
American. 255 (6): 38–49 (bibliography p.136). Bibcode:1986SciAm.255...38T (http://adsabs.harvard.ed
u/abs/1986SciAm.255...38T). Online version (http://cse.ucdavis.edu/~chaos/courses/ncaso/Readings/Cha
os_SciAm1986/Chaos_SciAm1986.html) (Note: the volume and page citation cited for the online text
differ from that cited here. The citation here is from a photocopy, which is consistent with other citations
found online that don't provide article views. The online content is identical to the hardcopy text. Citation
variations are related to country of publication).
Kolyada, S.F. (2004). "Li-Yorke sensitivity and other concepts of chaos" (http://www.springerlink.com/c
ontent/q00627510552020g/?p=93e1f3daf93549d1850365a8800afb30&pi=3). Ukrainian Math. J. 56 (8):
1242–57. doi:10.1007/s11253-005-0055-4 (https://doi.org/10.1007%2Fs11253-005-0055-4).
Day, R.H.; Pavlov, O.V. (2004). "Computing Economic Chaos". Computational Economics. 23 (4): 289–
301. SSRN 806124 (https://ssrn.com/abstract=806124)  . doi:10.1023/B:CSEM.0000026787.81469.1f (h
ttps://doi.org/10.1023%2FB%3ACSEM.0000026787.81469.1f).
Strelioff, C.; Hübler, A. (2006). "Medium-Term Prediction of Chaos" (https://web.archive.org/web/20130
426201635/http://www.ccsr.illinois.edu/web/Techreports/2005-08/CCSR-05-4.pdf) (PDF). Phys. Rev. Lett.
96 (4): 044101. Bibcode:2006PhRvL..96d4101S (http://adsabs.harvard.edu/abs/2006PhRvL..96d4101S).
PMID 16486826 (https://www.ncbi.nlm.nih.gov/pubmed/16486826).
doi:10.1103/PhysRevLett.96.044101 (https://doi.org/10.1103%2FPhysRevLett.96.044101). 044101.
Archived from the original (http://www.ccsr.illinois.edu/web/Techreports/2005-08/CCSR-05-4.pdf) (PDF)
on 2013-04-26.
Hübler, A.; Foster, G.; Phelps, K. (2007). "Managing Chaos: Thinking out of the Box" (http://server17.ho
w-why.com/blog/ManagingChaos.pdf) (PDF). Complexity. 12 (3): 10–13. doi:10.1002/cplx.20159 (https://
doi.org/10.1002%2Fcplx.20159).
Motter, Adilson E.; Campbell, David K. (2013). "Chaos at 50" (http://scitation.aip.org/content/aip/magazi
ne/physicstoday/article/66/5/10.1063/PT.3.1977). Physics Today. 66: 27. doi:10.1063/PT.3.1977 (https://d
oi.org/10.1063%2FPT.3.1977).
Boeing, G. (2016). "Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity
and the Limits of Prediction" (http://dx.doi.org/10.3390/systems4040037). Systems. 4 (4): 37.
doi:10.3390/systems4040037 (https://doi.org/10.3390%2Fsystems4040037).

Textbooks

Alligood, K.T.; Sauer, T.; Yorke, J.A. (1997). Chaos: an introduction to dynamical systems (https://book
s.google.com/books?id=48YHnbHGZAgC). Springer-Verlag. ISBN 0-387-94677-2.
Baker, G. L. (1996). Chaos, Scattering and Statistical Mechanics. Cambridge University Press. ISBN 0-
521-39511-9.
Badii, R.; Politi A. (1997). Complexity: hierarchical structures and scaling in physics (http://www.cambr
idge.org/gb/academic/subjects/physics/statistical-physics/complexity-hierarchical-structures-and-scaling-
physics). Cambridge University Press. ISBN 0-521-66385-7.
Bunde; Havlin, Shlomo, eds. (1996). Fractals and Disordered Systems. Springer. ISBN 3642848702. and
Bunde; Havlin, Shlomo, eds. (1994). Fractals in Science. Springer. ISBN 3-540-56220-6.
Collet, Pierre, and Eckmann, Jean-Pierre (1980). Iterated Maps on the Interval as Dynamical Systems.
Birkhauser. ISBN 0-8176-4926-3.
Devaney, Robert L. (2003). An Introduction to Chaotic Dynamical Systems (https://books.google.com/bo
oks?id=CjAnY99LwTgC) (2nd ed.). Westview Press. ISBN 0-8133-4085-3.
Feldman, D. P. (2012). Chaos and Fractals: An Elementary Introduction (http://chaos.coa.edu/index.htm
l). Oxford University Press. ISBN 978-0-19-956644-0.
Gollub, J. P.; Baker, G. L. (1996). Chaotic dynamics (https://books.google.com/books?id=n1qnekRPKto
C). Cambridge University Press. ISBN 0-521-47685-2.
Guckenheimer, John; Holmes, Philip (1983). Nonlinear Oscillations, Dynamical Systems, and
Bifurcations of Vector Fields. Springer-Verlag. ISBN 0-387-90819-6.
Gulick, Denny (1992). Encounters with Chaos. McGraw-Hill. ISBN 0-07-025203-3.
Gutzwiller, Martin (1990). Chaos in Classical and Quantum Mechanics (https://books.google.com/book
s?id=fnO3XYYpU54C). Springer-Verlag. ISBN 0-387-97173-4.
Hoover, William Graham (2001) [1999]. Time Reversibility, Computer Simulation, and Chaos (https://bo
oks.google.com/books?id=24kEKsdl0psC). World Scientific. ISBN 981-02-4073-2.
Kautz, Richard (2011). Chaos: The Science of Predictable Random Motion (https://books.google.com/bo
oks?id=x5YbNZjulN0C). Oxford University Press. ISBN 978-0-19-959458-0.
Kiel, L. Douglas; Elliott, Euel W. (1997). Chaos Theory in the Social Sciences (https://books.google.co
m/books?id=K46kkMXnKfcC). Perseus Publishing. ISBN 0-472-08472-0.
Moon, Francis (1990). Chaotic and Fractal Dynamics (https://books.google.com/books?id=Ddz-CI-nSK
YC). Springer-Verlag. ISBN 0-471-54571-6.
Ott, Edward (2002). Chaos in Dynamical Systems (https://books.google.com/books?id=nOLx--zzHSgC).
Cambridge University Press. ISBN 0-521-01084-5.
Strogatz, Steven (2000). Nonlinear Dynamics and Chaos. Perseus Publishing. ISBN 0-7382-0453-6.
Sprott, Julien Clinton (2003). Chaos and Time-Series Analysis (https://books.google.com/books?id=SEDj
djPZ158C). Oxford University Press. ISBN 0-19-850840-9.
Tél, Tamás; Gruiz, Márton (2006). Chaotic dynamics: An introduction based on classical mechanics (htt
ps://books.google.com/books?id=P2JL7s2IvakC). Cambridge University Press. ISBN 0-521-83912-2.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems (http://www.mat.univie.a
c.at/~gerald/ftp/book-ode/). Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Thompson JM, Stewart HB (2001). Nonlinear Dynamics And Chaos. John Wiley and Sons Ltd. ISBN 0-
471-87645-3.
Tufillaro; Reilly (1992). An experimental approach to nonlinear dynamics and chaos. Addison-Wesley.
ISBN 0-201-55441-0.
Wiggins, Stephen (2003). Introduction to Applied Dynamical Systems and Chaos. Springer. ISBN 0-387-
00177-8.
Zaslavsky, George M. (2005). Hamiltonian Chaos and Fractional Dynamics. Oxford University Press.
ISBN 0-19-852604-0.

Semitechnical and popular works


Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012, ISBN 978-981-4374-
42-2.
Abraham, Ralph H.; Ueda, Yoshisuke, eds. (2000). The Chaos Avant-Garde: Memoirs of the Early Days
of Chaos Theory (https://books.google.com/books?id=0E667XpBq1UC). World Scientific. ISBN 978-
981-238-647-2.
Barnsley, Michael F. (2000). Fractals Everywhere (https://books.google.com/books?
id=oh7NoePgmOIC). Morgan Kaufmann. ISBN 978-0-12-079069-2.
Bird, Richard J. (2003). Chaos and Life: Complexity and Order in Evolution and Thought (https://books.g
oogle.com/books?id=fv3sltQBS54C). Columbia University Press. ISBN 978-0-231-12662-5.
John Briggs and David Peat, Turbulent Mirror: : An Illustrated Guide to Chaos Theory and the Science
of Wholeness, Harper Perennial 1990, 224 pp.

John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change,
Harper Perennial 2000, 224 pp.
Cunningham, Lawrence A. (1994). "From Random Walks to Chaotic Crashes: The Linear Genealogy of
the Efficient Capital Market Hypothesis". George Washington Law Review. 62: 546.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University
Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
John Gribbin. Deep Simplicity. Penguin Press Science. Penguin Books.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and
Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation; New Perspectives on Complexity in Nature ,
National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
Alan Marshall (2002) The Unity of Nature: Wholeness and Disintegration in Ecology and Science,
Imperial College Press: London
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World , St
Martins Pr 1991.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
Heinz-Otto Peitgen and P. H. Richter, The Beauty of Fractals : Images of Complex Dynamical Systems,
Springer 1986, 211 pp.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Ian Roulstone; John Norbury (2013). Invisible in the Storm: the role of mathematics in understanding
weather (https://books.google.com/?id=qnMrFEHMrWwC). Princeton University Press.
ISBN 0691152721.
David Ruelle, Chaotic Evolution and Strange Attractors, Cambridge University Press, 1989.
Peter Smith, Explaining Chaos, Cambridge University Press, 1998.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos , Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993.
M. Mitchell Waldrop, Complexity : The Emerging Science at the Edge of Order and Chaos, Simon &
Schuster, 1992.
Sawaya, Antonio (2010). Financial time series analysis : Chaos and neurodynamics approach.

External links
Hazewinkel, Michiel, ed. (2001), "Chaos" (https://www.encyclopediaofmath.org/index.php?title=p/c0214
80), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Nonlinear Dynamics Research Group (http://lagrange.physics.drexel.edu/) with Animations in Flash
The Chaos group at the University of Maryland (http://www.chaos.umd.edu/)
The Chaos Hypertextbook (http://hypertextbook.com/chaos/). An introductory primer on chaos and
fractals
ChaosBook.org (http://chaosbook.org/) An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences (http://www.societyforchaostheory.org/)
Nonlinear Dynamics Research Group at CSDC (http://www.csdc.unifi.it/mdswitch.html?newlang=eng),
Florence Italy
Interactive live chaotic pendulum experiment (http://physics.mercer.edu/pendulum/), allows users to
interact and sample data from a real working damped driven chaotic pendulum
Nonlinear dynamics: how science comprehends chaos (http://www.creatingtechnology.org/papers/chaos.h
tm), talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics (http://www.egwald.ca/nonlineardynamics/index.php). Models of bifurcation and
chaos by Elmer G. Wiens
Gleick's Chaos (excerpt) (http://www.around.com/chaos.html)

Systems Analysis, Modelling and Prediction Group (https://web.archive.org/web/20070428110552/http://


www.eng.ox.ac.uk/samp/) at the University of Oxford
A page about the Mackey-Glass equation (http://www.mgix.com/snippets/?MackeyGlass)
High Anxieties — The Mathematics of Chaos (https://www.youtube.com/user/thedebtgeneration?feature
=mhum) (2008) BBC documentary directed by David Malone
The chaos theory of evolution (http://www.newscientist.com/article/mg20827821.000-the-chaos-theory-o
f-evolution.html) - article published in Newscientist featuring similarities of evolution and non-linear
systems including fractal nature of life and chaos.
Jos Leys, Étienne Ghys et Aurélien Alvarez, Chaos, A Mathematical Adventure (http://www.chaos-math.
org/en). Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide
audience.

Fractal
From Wikipedia, the free encyclopedia

In mathematics a fractal is an abstract object used to describe and simulate naturally occurring objects. Artificially created fractals commonly exhibit similar patterns at increasingly small scales.[1] It is also known as expanding symmetry or evolving symmetry. If the replication is exactly the same at every scale, it is called a self-similar pattern. An example of this is the Menger Sponge.[2] Fractals can also be nearly the same at different levels. This latter pattern is illustrated in small magnifications of the Mandelbrot set.[3][4][5][6] Fractals also include the idea of a detailed pattern that repeats itself.[3]:166; 18[4][7]

[Image: Mandelbrot set: Self-similarity illustrated by image enlargements. This panel, no magnification.]
Fractals are different from other geometric figures because of the way in which they scale. Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). Likewise, if the radius of a sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the dimension that the sphere resides in). But if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer.[3] This power is called the fractal dimension of the fractal, and it usually exceeds the fractal's topological dimension.[8]

[Image: The same fractal as above, magnified 6-fold. Same patterns reappear, making the exact scale being examined difficult to determine.]
determine.
As mathematical equations, fractals are usually nowhere
differentiable.[3][6][9] An infinite fractal curve can be conceived of as
winding through space differently from an ordinary line, still being a 1-
dimensional line yet having a fractal dimension indicating it also
resembles a surface.[3]:15[8]:48

The mathematical roots of the idea of fractals have been traced throughout the years as a formal path of published works, starting in the 17th century with notions of recursion, then moving through increasingly rigorous mathematical treatment of the concept to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass,[10] and on to the coining of the word fractal in the 20th century with a subsequent burgeoning of interest in fractals and computer-based modelling in the 20th century.[11][12] The term "fractal" was first used by mathematician Benoit Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus meaning "broken" or "fractured", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature.[3]:405[7]

[Image: The same fractal as above, magnified 100-fold.]
[Image: The same fractal as above, magnified 2000-fold, where the Mandelbrot set fine detail resembles the detail at low magnification.]

There is some disagreement amongst authorities about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals."[13] The general
consensus is that theoretical fractals are infinitely self-similar, iterated, and detailed mathematical constructs
having fractal dimensions, of which many examples have been formulated and studied in great depth.[3][4][5]
Fractals are not limited to geometric patterns, but can also describe processes in time.[2][6][14][15][16][17] Fractal
patterns with various degrees of self-similarity have been rendered or studied in images, structures and

sounds[18] and found in nature,[19][20][21][22][23] technology,[24][25][26][27] art,[28][29] and law.[30] Fractals are of particular relevance in the field of chaos theory, since the graphs of most chaotic processes are fractal.[31]

Contents
1 Introduction
2 History
3 Characteristics
4 Brownian motion
5 Common techniques for generating fractals
6 Simulated fractals
7 Natural phenomena with fractal features
8 In creative works
9 Physiological responses
10 Ion Production Capabilities
11 Applications in technology
12 See also
13 Notes
14 References
15 Further reading
16 External links

[Image: Sierpinski carpet (to level 6), a two-dimensional fractal]

Introduction
The word "fractal" often has different connotations for laypeople than for mathematicians, where the layperson
is more likely to be familiar with fractal art than a mathematical conception. The mathematical concept is
difficult to define formally even for mathematicians, but key features can be understood with little
mathematical background.

The feature of "self-similarity", for instance, is easily understood by analogy to zooming in with a lens or other
device that zooms in on digital images to uncover finer, previously invisible, new structure. If this is done on
fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over, or for
some fractals, nearly the same pattern reappears over and over.[1] Self-similarity itself is not necessarily
counter-intuitive (e.g., people have pondered self-similarity informally such as in the infinite regress in parallel
mirrors or the homunculus, the little man inside the head of the little man inside the head...). The difference for
fractals is that the pattern reproduced must be detailed.[3]:166; 18[4][7]

This idea of being detailed relates to another feature that can be understood without mathematical background:
Having a fractional or fractal dimension greater than its topological dimension, for instance, refers to how a
fractal scales compared to how geometric shapes are usually perceived. A regular line, for instance, is
conventionally understood to be 1-dimensional; if such a curve is divided into pieces each 1/3 the length of the
original, there are always 3 equal pieces. In contrast, consider the Koch snowflake. It is also 1-dimensional for
the same reason as the ordinary line, but it has, in addition, a fractal dimension greater than 1 because of how
its detail can be measured. The fractal curve divided into parts 1/3 the length of the original line becomes 4
pieces rearranged to repeat the original detail, and this unusual relationship is the basis of its fractal dimension.

This also leads to understanding a third feature, that fractals as mathematical equations are "nowhere
differentiable". In a concrete sense, this means fractals cannot be measured in traditional ways.[3][6][9] To
elaborate, in trying to find the length of a wavy non-fractal curve, one could find straight segments of some

measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be
considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring a
wavy fractal curve such as the Koch snowflake, one would never find a small enough straight segment to
conform to the curve, because the wavy pattern would always re-appear, albeit at a smaller size, essentially
pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter
and tighter to the curve.[3]
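
A short numerical sketch of this tape-measure argument (illustrative only, not from the article): each refinement of the Koch curve replaces every straight segment with four segments one third as long, so the measured length grows as (4/3)^n while the ruler shrinks toward zero.

```python
# Sketch of the tape-measure argument above (illustrative, not from the article).
# At refinement n the Koch curve consists of 4**n segments of length (1/3)**n,
# so measuring with a ruler of that size yields a total length of (4/3)**n,
# which grows without bound as the ruler shrinks.
for n in range(9):
    ruler = (1 / 3) ** n      # length of the smallest straight piece used
    segments = 4 ** n         # number of such pieces needed to follow the curve
    print(f"ruler = {ruler:.6f}   measured length = {ruler * segments:.3f}")
```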

History
The history of fractals traces a path from chiefly theoretical studies to modern
applications in computer graphics, with several notable people contributing
canonical fractal forms along the way.[11][12] According to Pickover, the
mathematics behind fractals began to take shape in the 17th century when the
mathematician and philosopher Gottfried Leibniz pondered recursive self-
similarity (although he made the mistake of thinking that only the straight line
was self-similar in this sense).[32] In his writings, Leibniz used the term
"fractional exponents", but lamented that "Geometry" did not yet know of
them.[3]:405 Indeed, according to various historical accounts, after that point few
mathematicians tackled the issues, and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters".[9][11][12] Thus, it was not until two centuries had passed that on July 18, 1872 Karl Weierstrass presented the first definition of a function with a graph that would today be considered a fractal, having the non-intuitive property of being everywhere continuous but nowhere differentiable at the Royal Prussian Academy of Sciences.[11]:7[12] In addition, the quotient difference becomes arbitrarily large as the summation index increases.[33] Not long after that, in 1883, Georg Cantor, who attended lectures by Weierstrass,[12] published examples of subsets of the real line known as Cantor sets, which had unusual properties and are now recognized as fractals.[11]:11–24 Also in the last part of that century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called "self-inverse" fractals.[3]:166

[Image: A Koch snowflake is a fractal that begins with an equilateral triangle and then replaces the middle third of every line segment with a pair of line segments that form an equilateral bump.]

One of the next milestones came in 1904, when Helge von Koch, extending
ideas of Poincaré and dissatisfied with Weierstrass's abstract and analytic
definition, gave a more geometric definition including hand drawn images of a
similar function, which is now called the Koch snowflake.[11]:25[12] Another
milestone came a decade later in 1915, when Wacław Sierpiński constructed his
famous triangle then, one year later, his carpet. By 1918, two French
mathematicians, Pierre Fatou and Gaston Julia, though working independently,
arrived essentially simultaneously at results describing what are now seen as fractal behaviour associated with mapping complex numbers and iterative functions and leading to further ideas about attractors and repellors (i.e., points that attract or repel other points), which have become very important in the study of fractals.[6][11][12] Very shortly after that work was submitted, by March 1918, Felix Hausdorff expanded the definition of "dimension", significantly for the evolution of the definition of fractals, to allow for sets to have noninteger dimensions.[12] The idea of self-similar curves was taken further by Paul Lévy, who, in his 1938 paper Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole described a new fractal curve, the Lévy C curve.[notes 1]

[Image: A Julia set, a fractal related to the Mandelbrot set]

Different researchers have postulated that without the aid of modern computer
graphics, early investigators were limited to what they could depict in manual
drawings, so lacked the means to visualize the beauty and appreciate some of
the implications of many of the patterns they had discovered (the Julia set, for
instance, could only be visualized through a few iterations as very simple
drawings).[3]:179[9][12] That changed, however, in the 1960s, when Benoit Mandelbrot started writing about self-similarity in papers such as How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension,[34][35] which built on earlier work by Lewis Fry Richardson. In 1975[7] Mandelbrot solidified hundreds of years of thought and mathematical development in coining the word "fractal" and illustrated his mathematical definition with striking computer-constructed visualizations. These images, such as of his canonical Mandelbrot set, captured the popular imagination; many of them were based on recursion, leading to the popular meaning of the term "fractal".[36] Currently, fractal studies are essentially exclusively computer-based.[9][11][32]

[Image: A strange attractor that exhibits multifractal scaling]

In 1980, Loren Carpenter gave a presentation at the SIGGRAPH where he introduced his software for generating and rendering fractally generated landscapes.[37]

Characteristics

[Image: Uniform Mass Center Triangle Fractal]
One often cited description that Mandelbrot published to describe geometric
fractals is "a rough or fragmented geometric shape that can be split into parts,
each of which is (at least approximately) a reduced-size copy of the whole";[3] this is generally helpful but
limited. Authors disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of
self-similarity and an unusual relationship with the space a fractal is embedded in.[2][3][4][6][38] One point
agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify
complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how
to construct particular fractal patterns.[39] In 1975 when Mandelbrot coined the word "fractal", he did so to
denote an object whose Hausdorff–Besicovitch dimension is greater than its topological dimension.[7] It has
been noted that this dimensional requirement is not met by fractal space-filling curves such as the Hilbert
curve.[notes 2]

According to Falconer, rather than being strictly defined, fractals should, in addition to being nowhere
differentiable and able to have a fractal dimension, be generally characterized by a gestalt of the following
features:[4]

Self-similarity, which may be manifested as:

Exact self-similarity: identical at all scales; e.g. Koch snowflake
Quasi self-similarity: approximates the same pattern at different scales; may contain small copies
of the entire fractal in distorted and degenerate forms; e.g., the Mandelbrot set's satellites are
approximations of the entire set, but not exact copies.
Statistical self-similarity: repeats a pattern stochastically so numerical or statistical measures are
preserved across scales; e.g., randomly generated fractals; the well-known example of the coastline
of Britain, for which one would not expect to find a segment scaled and repeated as neatly as the
repeated unit that defines, for example, the Koch snowflake[6]
Qualitative self-similarity: as in a time series[14]
Multifractal scaling: characterized by more than one fractal dimension or scaling rule

Fine or detailed structure at arbitrarily small scales. A consequence of this structure is fractals may have
emergent properties[40] (related to the next criterion in this list).
Irregularity locally and globally that is not easily described in traditional Euclidean geometric language.
For images of fractal patterns, this has been expressed by phrases such as "smoothly piling up surfaces"
and "swirls upon swirls".[8]
Simple and "perhaps recursive" definitions see Common techniques for generating fractals

As a group, these criteria form guidelines for excluding certain cases, such as those that may be self-similar
without having other typically fractal features. A straight line, for instance, is self-similar but not fractal
because it lacks detail, is easily described in Euclidean language, has the same Hausdorff dimension as
topological dimension, and is fully defined without a need for recursion.[3][6]

Brownian motion
A path generated by a one dimensional Wiener process is a fractal curve of dimension 1.5, and Brownian
motion is a finite version of this.[41]
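
A minimal sketch of such a path (illustrative, not from the article), approximating a one-dimensional Wiener process as a cumulative sum of independent Gaussian increments:

```python
# Illustrative sketch (not from the article): a discrete approximation of a
# one-dimensional Wiener process, built by summing independent Gaussian steps.
# Its graph is the kind of statistically self-similar curve described above.
import random

def brownian_path(n_steps: int = 1000, dt: float = 0.001, seed: int = 0):
    random.seed(seed)
    position, path = 0.0, [0.0]
    for _ in range(n_steps):
        position += random.gauss(0.0, dt ** 0.5)  # increment ~ N(0, dt)
        path.append(position)
    return path

print(brownian_path(10))
```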

Common techniques for generating fractals


Images of fractals can be created by fractal generating programs. Because of the butterfly effect, a small change in a single variable can have an unpredictable outcome.

[Image: Self-similar branching pattern modeled in silico using L-systems principles[23]]

Iterated function systems – use fixed geometric replacement rules; may be stochastic or deterministic;[42] e.g., Koch snowflake, Cantor set, Haferman carpet,[43] Sierpinski carpet, Sierpinski gasket, Peano curve, Harter-Heighway dragon curve, T-Square, Menger sponge
Strange attractors – use iterations of a map or solutions of a system of initial-value differential or difference equations that exhibit chaos (e.g., see multifractal image, or the logistic map)
L-systems – use string rewriting; may resemble branching patterns, such as in plants, biological cells (e.g., neurons and immune system cells[23]), blood vessels, pulmonary structure,[44] etc., or turtle graphics patterns such as space-filling curves and tilings
Escape-time fractals – use a formula or recurrence relation at each point in a space (such as the complex plane); usually quasi-self-similar; also known as "orbit" fractals; e.g., the Mandelbrot set, Julia set, Burning Ship fractal, Nova fractal and Lyapunov fractal. The 2d vector fields that are generated by one or two iterations of escape-time formulae also give rise to a fractal form when points (or pixel data) are passed through this field repeatedly (a minimal escape-time sketch follows this list).
Random fractals – use stochastic rules; e.g., Lévy flight, percolation clusters, self avoiding walks, fractal
landscapes, trajectories of Brownian motion and the Brownian tree (i.e., dendritic fractals generated by
modeling diffusion-limited aggregation or reaction-limited aggregation clusters).[6]

Finite subdivision rules use a recursive topological algorithm for refining tilings[45] and they are similar
to the process of cell division.[46] The iterative processes used in creating the Cantor set and the
Sierpinski carpet are examples of finite subdivision rules, as is barycentric subdivision.
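
As a minimal sketch of the escape-time approach listed above (illustrative only; it is not code from the article or from any particular fractal program), the following iterates z → z*z + c over points c of the complex plane and treats points whose orbit never escapes as members of the Mandelbrot set:

```python
# Minimal escape-time sketch: iterate z -> z*z + c and count how many steps it
# takes |z| to exceed 2 (points that never escape are treated as in the set).
def escape_time(c: complex, max_iter: int = 100) -> int:
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n          # escaped: c is outside the Mandelbrot set
        z = z * z + c
    return max_iter           # did not escape within max_iter: treat c as inside

# Crude ASCII rendering of the region [-2, 1] x [-1.2, 1.2]
for row in range(24):
    y = 1.2 - 2.4 * row / 23
    line = ""
    for col in range(72):
        x = -2.0 + 3.0 * col / 71
        line += "#" if escape_time(complex(x, y)) == 100 else " "
    print(line)
```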

Simulated fractals
Fractal patterns have been modeled extensively, albeit within a range of scales rather than infinitely, owing to
the practical limits of physical time and space. Models may simulate theoretical fractals or natural phenomena
with fractal features. The outputs of the modelling process may be highly artistic renderings, outputs for
investigation, or benchmarks for fractal analysis. Some specific applications of fractals to technology are listed elsewhere. Images and other outputs of modelling are normally referred to as being "fractals" even if they do not have strictly fractal characteristics, such as when it is possible to zoom into a region of the fractal image that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals.

Modeled fractals may be sounds,[18] digital images, electrochemical patterns, circadian rhythms,[47] etc. Fractal patterns have been reconstructed in physical 3-dimensional space[26]:10 and virtually, often called "in silico" modeling.[44] Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above.[6][14][26] As one illustration, trees, ferns, cells of the nervous system,[23] blood and lung vasculature,[44] and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques.[23] The recursive nature of some patterns is obvious in certain examples: a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.

[Image: A fractal generated by a finite subdivision rule for an alternating link]
[Image: A fractal flame]
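
The L-systems technique mentioned above can be sketched in a few lines of string rewriting. The axiom and rules below are a common textbook "fractal plant" example, used here purely for illustration and not taken from the article; the resulting string would normally be rendered with turtle graphics.

```python
# Illustrative L-system sketch: repeatedly rewrite a string using production rules.
# 'F' = draw forward, '+'/'-' = turn, '[' / ']' = push / pop the turtle state.
def l_system(axiom: str, rules: dict, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)  # apply all rules in parallel
    return s

rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}  # classic branching-plant rules
print(l_system("X", rules, 2))  # a turtle interpreter would draw this as a branching shape
```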

Natural phenomena with fractal features


Approximate fractals found in nature display self-similarity over extended, but finite, scale ranges. The
connection between fractals and leaves, for instance, is currently being used to determine how much carbon is
contained in trees.[48] Phenomena known to have fractal features include:

River networks
Fault lines
Mountain ranges
Craters
Lightning bolts
Coastlines
Mountain goat horns
Trees
Algae
Geometrical optics[49]
Animal coloration patterns
Romanesco broccoli
Pineapple
Heart rates[19]
Heart sounds[20]
Earthquakes[27][50]
Snowflakes[51]
Psychological subjective perception[52]
Crystals[53]
Blood vessels and pulmonary vessels[44]
Ocean waves[54]
DNA
Soil pores[55]
Rings of Saturn[56][57]
Proteins[58]
Surfaces in turbulent flows[59][60]

[Image: Frost crystals occurring naturally on cold glass form fractal patterns]
[Image: Fractal basin boundary in a geometrical optical system[49]]
[Image: A fractal is formed when pulling apart two glue-covered acrylic sheets]
[Image: High voltage breakdown within a 4 in (100 mm) block of acrylic creates a fractal Lichtenberg figure]
[Image: Romanesco broccoli, showing self-similar form approximating a natural fractal]
[Image: Fractal defrosting patterns, polar Mars. The patterns are formed by sublimation of frozen CO2. Width of image is about a kilometer.]
[Image: Slime mold Brefeldia maxima growing fractally on wood]

In creative works
Since 1999, more than 10 scientific groups have performed fractal analysis on over 50 of Jackson Pollock's (1912–1956) paintings, which were created by pouring paint directly onto his horizontal canvases.[61][62][63][64][65][66][67][68][69][70][71][72][73] Recently, fractal analysis has been used to achieve a 93% success rate in distinguishing real from imitation Pollocks.[74] Cognitive neuroscientists have shown that Pollock's
fractals induce the same stress-reduction in observers as computer-generated fractals and Nature's fractals.[75]

Decalcomania, a technique used by artists such as Max Ernst, can produce fractal-like patterns.[76] It involves
pressing paint between two surfaces and pulling them apart.

Cyberneticist Ron Eglash has suggested that fractal geometry and mathematics are prevalent in African art,
games, divination, trade, and architecture. Circular houses appear in circles of circles, rectangular houses in
rectangles of rectangles, and so on. Such scaling patterns can also be found in African textiles, sculpture, and
even cornrow hairstyles.[29][77] Hokky Situngkir also suggested the similar properties in Indonesian traditional
art, batik, and ornaments found in traditional houses.[78][79]

In a 1996 interview with Michael Silverblatt, David Foster Wallace admitted that the structure of the first draft
of Infinite Jest he gave to his editor Michael Pietsch was inspired by fractals, specifically the Sierpinski triangle
(a.k.a. Sierpinski gasket), but that the edited novel is "more like a lopsided Sierpinsky Gasket".[28]

[Image: A fractal that models the surface of a mountain (animation)]
[Image: 3d recursive image]
[Image: recursive fractal image butterfly]

Physiological responses
Humans appear to be especially well-adapted to processing fractal patterns with D values between 1.3 and 1.5.[80] When humans view fractal patterns with D values in this range, this tends to reduce physiological stress.[81][82]

Ion Production Capabilities


If a circle boundary is drawn around the two-dimensional view of a fractal, the fractal will never cross the boundary, because the scaling of each successive iteration of the fractal is smaller. When fractals are iterated many times, the perimeter of the fractal increases, while the area will never exceed a certain value. A fractal in three-dimensional space is similar; the difference is that a three-dimensional fractal increases in surface area but never exceeds a certain volume.[83] This can be utilized to maximize the efficiency of ion propulsion when choosing electron emitter construction and material. If done correctly, the efficiency of the emission process can be maximized.[84]
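
A brief numerical sketch of this bounded-area, growing-perimeter behaviour (illustrative, not from the article), using the Koch snowflake: each iteration multiplies the perimeter by 4/3, while the total area converges (to 8/5 of the starting triangle).

```python
# Illustrative sketch: perimeter and area of the Koch snowflake by iteration.
# Start from an equilateral triangle with unit sides; each step replaces every
# edge with 4 edges of 1/3 the length, so the perimeter grows by 4/3 per step
# while the added area shrinks fast enough for the total area to stay bounded.
from math import sqrt

side, n_edges = 1.0, 3
area = sqrt(3) / 4  # area of the unit equilateral triangle
for step in range(10):
    print(f"step {step}: perimeter = {n_edges * side:8.3f}, area = {area:.6f}")
    new_triangles = n_edges                     # one bump is added on every edge
    side /= 3
    area += new_triangles * sqrt(3) / 4 * side ** 2
    n_edges *= 4
```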

Applications in technology
Fractal Antennas[85]
Fractal transistor[86]
Fractal heat exchangers[87]
Digital imaging
Urban growth[88][89]
Classification of histopathology slides
Fractal landscape or Coastline complexity
Detecting ‘life as we don't know it’ by fractal analysis[90]
Enzymes (Michaelis-Menten kinetics)
Generation of new music
Signal and image compression
Creation of digital photographic enlargements
Fractal in soil mechanics
Computer and video game design
Computer Graphics
Organic environments
Procedural generation
Fractography and fracture mechanics
Small angle scattering theory of fractally rough systems
T-shirts and other fashion
Generation of patterns for camouflage, such as MARPAT
Digital sundial
Technical analysis of price series
Fractals in networks
Medicine[26]
Neuroscience[21][22]
Diagnostic Imaging[25]
Pathology[91][92]
Geology[93]
Geography[94]
Archaeology[95][96]
Soil mechanics[24]
Seismology[27]
Search and rescue[97]
Technical analysis[98]
Morton order space filling curves for GPU cache coherency in texture mapping[99][100][101], rasterisation
[102][103] and indexing of turbulence data (http://turbulence.pha.jhu.edu).[104]

See also
Banach fixed point theorem
Bifurcation theory
Box counting
Constructal theory
Cymatics
Diamond-square algorithm
Droste effect
Feigenbaum function
Form constant
Fractal cosmology
Fractal derivative
Fractalgrid
Fractal string
Fracton
Graftal
Greeble
Lacunarity
List of fractals by Hausdorff dimension
Mandelbulb
Mandelbox
Macrocosm and microcosm
Multifractal system
Newton fractal
Percolation
Power law
Publications in fractal geometry
Random walk
Self-reference
Strange loop
Turbulence
Wiener process
Notes
1. The original paper, Lévy, Paul (1938). "Les Courbes planes ou gauches et les surfaces composées de parties
semblables au tout". Journal de l'École Polytechnique: 227–247, 249–291., is translated in Edgar, pages 181–239.
2. The Hilbert curve map is not a homeomorphism, so it does not preserve topological dimension. The topological
dimension and Hausdorff dimension of the image of the Hilbert map in R2 are both 2. Note, however, that the
topological dimension of the graph of the Hilbert map (a set in R3) is 1.

References
1. Boeing, G. (2016). "Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the
Limits of Prediction" (http://geoffboeing.com/publications/nonlinear-chaos-fractals-prediction/). Systems. 4 (4): 37.
doi:10.3390/systems4040037 (https://doi.org/10.3390%2Fsystems4040037). Retrieved 2016-12-02.
2. Gouyet, Jean-François (1996). Physics and fractal structures. Paris/New York: Masson Springer. ISBN 978-0-387-
94153-0.
3. Mandelbrot, Benoît B. (1983). The fractal geometry of nature (https://books.google.com/books?id=0R2LkE3N7-oC).
Macmillan. ISBN 978-0-7167-1186-5.
4. Falconer, Kenneth (2003). Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons. xxv.
ISBN 0-470-84862-6.
5. Briggs, John (1992). Fractals:The Patterns of Chaos. London: Thames and Hudson. p. 148. ISBN 0-500-27693-5.
6. Vicsek, Tamás (1992). Fractal growth phenomena. Singapore/New Jersey: World Scientific. pp. 31; 139–146.
ISBN 978-981-02-0668-0.
7. Albers, Donald J.; Alexanderson, Gerald L. (2008). "Benoît Mandelbrot: In his own words". Mathematical people :
profiles and interviews. Wellesley, MA: AK Peters. p. 214. ISBN 978-1-56881-340-0.

8. Mandelbrot, Benoît B. (2004). Fractals and Chaos. Berlin: Springer. p. 38. ISBN 978-0-387-20158-0. "A fractal set
is one for which the fractal (Hausdorff-Besicovitch) dimension strictly exceeds the topological dimension"
9. Gordon, Nigel (2000). Introducing fractal geometry. Duxford: Icon. p. 71. ISBN 978-1-84046-123-7.
10. Segal, S.L. (June 1978). "Riemann’s example of a continuous "nondifferentiable" function continued". The
Mathematical Intelligencer. 1 (2).
11. Edgar, Gerald (2004). Classics on Fractals. Boulder, CO: Westview Press. ISBN 978-0-8133-4153-8.
12. Trochet, Holly (2009). "A History of Fractal Geometry" (http://www.webcitation.org/65DCT2znx?url=http://www-g
roups.dcs.st-and.ac.uk/~history/HistTopics/fractals.html). MacTutor History of Mathematics. Archived from the
original (http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/fractals.html) on 4 February 2012.
13. Mandelbrot, Benoit. "24/7 Lecture on Fractals" (https://www.youtube.com/watch?feature=player_embedded&v=5e7
HB5Oze4g#t=70). 2006 Ig Nobel Awards. Improbable Research.
14. Peters, Edgar (1996). Chaos and order in the capital markets : a new view of cycles, prices, and market volatility.
New York: Wiley. ISBN 0-471-13938-6.
15. Krapivsky, P. L.; Ben-Naim, E. (1994). "Multiscaling in Stochastic Fractals". Phys. Lett. A. 196 (3–4): 168.
Bibcode:1994PhLA..196..168K (http://adsabs.harvard.edu/abs/1994PhLA..196..168K). doi:10.1016/0375-
9601(94)91220-3 (https://doi.org/10.1016%2F0375-9601%2894%2991220-3).
16. Hassan, M. K.; Rodgers, G. J. (1995). "Models of fragmentation and stochastic fractals". Physics Letters A. 208: 95.
Bibcode:1995PhLA..208...95H (http://adsabs.harvard.edu/abs/1995PhLA..208...95H). doi:10.1016/0375-
9601(95)00727-k (https://doi.org/10.1016%2F0375-9601%2895%2900727-k).
17. Hassan, M. K.; Pavel, N. I.; Pandit, R. K.; Kurths, J. (2014). "Dyadic Cantor set and its kinetic and stochastic
counterpart". Chaos, Solitons & Fractals. 60: 31–39. doi:10.1016/j.chaos.2013.12.010 (https://doi.org/10.1016%2Fj.
chaos.2013.12.010).
18. Brothers, Harlan J. (2007). "Structural Scaling in Bach's Cello Suite No. 3". Fractals. 15: 89–95.
doi:10.1142/S0218348X0700337X (https://doi.org/10.1142%2FS0218348X0700337X).
19. Tan, Can Ozan; Cohen, Michael A.; Eckberg, Dwain L.; Taylor, J. Andrew (2009). "Fractal properties of human
heart period variability: Physiological and methodological implications" (https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC2746620). The Journal of Physiology. 587 (15): 3929. PMC 2746620 (https://www.ncbi.nlm.nih.gov/pmc/articl
es/PMC2746620)  . PMID 19528254 (https://www.ncbi.nlm.nih.gov/pubmed/19528254).
doi:10.1113/jphysiol.2009.169219 (https://doi.org/10.1113%2Fjphysiol.2009.169219).
20. Buldyrev, Sergey V.; Goldberger, Ary L.; Havlin, Shlomo; Peng, Chung-Kang; Stanley, H. Eugene (1995). Bunde,
Armin; Havlin, Shlomo, eds. "Fractals in Science" (http://havlin.biu.ac.il/Shlomo%20Havlin%20books_f_in_s.php).
Springer.
21. Liu, Jing Z.; Zhang, Lu D.; Yue, Guang H. (2003). "Fractal Dimension in Human Cerebellum Measured by Magnetic
Resonance Imaging" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1303704). Biophysical Journal. 85 (6): 4041–
4046. Bibcode:2003BpJ....85.4041L (http://adsabs.harvard.edu/abs/2003BpJ....85.4041L). PMC 1303704 (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC1303704)  . PMID 14645092 (https://www.ncbi.nlm.nih.gov/pubmed/1464509
2). doi:10.1016/S0006-3495(03)74817-6 (https://doi.org/10.1016%2FS0006-3495%2803%2974817-6).
22. Karperien, Audrey L.; Jelinek, Herbert F.; Buchan, Alastair M. (2008). "Box-Counting Analysis of Microglia Form
in Schizophrenia, Alzheimer's Disease and Affective Disorder". Fractals. 16 (2): 103.
doi:10.1142/S0218348X08003880 (https://doi.org/10.1142%2FS0218348X08003880).
23. Jelinek, Herbert F.; Karperien, Audrey; Cornforth, David; Cesar, Roberto; Leandro, Jorge de Jesus Gomes (2002).
"MicroMod-an L-systems approach to neural modelling". In Sarker, Ruhul. Workshop proceedings: the Sixth
Australia-Japan Joint Workshop on Intelligent and Evolutionary Systems, University House, ANU, (https://books.goo
gle.com/books?id=FFSUGQAACAAJ). University of New South Wales. ISBN 9780731705054. OCLC 224846454
(https://www.worldcat.org/oclc/224846454). http://researchoutput.csu.edu.au/R/-?func=dbin-jump-
full&object_id=6595&local_base=GEN01-CSU01. Retrieved 3 February 2012. "Event location: Canberra,
Australia"
24. Hu, Shougeng; Cheng, Qiuming; Wang, Le; Xie, Shuyun (2012). "Multifractal characterization of urban residential
land price in space and time". Applied Geography. 34: 161. doi:10.1016/j.apgeog.2011.10.016 (https://doi.org/10.101
6%2Fj.apgeog.2011.10.016).
25. Karperien, Audrey; Jelinek, Herbert F.; Leandro, Jorge de Jesus Gomes; Soares, João V. B.; Cesar Jr, Roberto M.;
Luckie, Alan (2008). "Automated detection of proliferative retinopathy in clinical practice" (https://www.ncbi.nlm.ni
h.gov/pmc/articles/PMC2698675). Clinical ophthalmology (Auckland, N.Z.). 2 (1): 109–122. PMC 2698675 (https://
www.ncbi.nlm.nih.gov/pmc/articles/PMC2698675)  . PMID 19668394 (https://www.ncbi.nlm.nih.gov/pubmed/1966
8394). doi:10.2147/OPTH.S1579 (https://doi.org/10.2147%2FOPTH.S1579).
26. Losa, Gabriele A.; Nonnenmacher, Theo F. (2005). Fractals in biology and medicine (https://books.google.com/book
s?id=t9l9GdAt95gC). Springer. ISBN 978-3-7643-7172-2.
27. Vannucchi, Paola; Leoni, Lorenzo (2007). "Structural characterization of the Costa Rica décollement: Evidence for
seismically-induced fluid pulsing". Earth and Planetary Science Letters. 262 (3–4): 413.
Bibcode:2007E&PSL.262..413V (http://adsabs.harvard.edu/abs/2007E&PSL.262..413V).
doi:10.1016/j.epsl.2007.07.056 (https://doi.org/10.1016%2Fj.epsl.2007.07.056).
28. Wallace, David Foster. "Bookworm on KCRW" (http://www.kcrw.com/etc/programs/bw/bw960411david_foster_wall


ace). Kcrw.com. Retrieved 2010-10-17.
29. Eglash, Ron (1999). "African Fractals: Modern Computing and Indigenous Design" (http://www.rpi.edu/~eglash/egla
sh.dir/afractal/afractal.htm). New Brunswick: Rutgers University Press. Retrieved 2010-10-17.
30. Stumpff, Andrew (2013). "The Law is a Fractal: The Attempt to Anticipate Everything". Loyola University Chicago
Law Journal. p. 649. SSRN 2157804 (https://ssrn.com/abstract=2157804)  .
31. Baranger, Michael. "Chaos, Complexity, and Entropy: A physics talk for non-physicists" (http://necsi.edu/projects/ba
ranger/cce.pdf) (PDF).
32. Pickover, Clifford A. (2009). The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the
History of Mathematics (https://books.google.com/?id=JrslMKTgSZwC&pg=PA310&dq=fractal+koch+curve+book
#v=onepage&q=fractal%20koch%20curve%20book&f=false). Sterling. p. 310. ISBN 978-1-4027-5796-9.
33. "Fractal Geometry" (http://www-history.mcs.st-and.ac.uk/HistTopics/fractals.html). www-history.mcs.st-and.ac.uk.
Retrieved 2017-04-11.
34. Mandelbrot, B. (1967). "How Long Is the Coast of Britain?". Science. 156 (3775): 636–638.
Bibcode:1967Sci...156..636M (http://adsabs.harvard.edu/abs/1967Sci...156..636M). PMID 17837158 (https://www.n
cbi.nlm.nih.gov/pubmed/17837158). doi:10.1126/science.156.3775.636 (https://doi.org/10.1126%2Fscience.156.377
5.636).
35. Batty, Michael (1985-04-04). "Fractals – Geometry Between Dimensions" (https://books.google.com/?id=sz6GAq_P
smMC&pg=PA31). New Scientist. Holborn. 105 (1450): 31.
36. Russ, John C. (1994). Fractal surfaces (https://books.google.com/?id=qDQjyuuDRxUC&pg=PA1&lpg=PA1). 1.
Springer. p. 1. ISBN 978-0-306-44702-0. Retrieved 2011-02-05.
37. kottke.org. 2009. Vol Libre, an amazing CG film from 1980. [online] Available at: http://kottke.org/09/07/vol-libre-
an-amazing-cg-film-from-1980
38. Edgar, Gerald (2008). Measure, topology, and fractal geometry. New York: Springer-Verlag. p. 1. ISBN 978-0-387-
74748-4.
39. Karperien, Audrey (2004). Defining microglial morphology: Form, Function, and Fractal Dimension (http://www.webcitation.org/65DyLbmF1). Charles Sturt University. Retrieved 2012-02-05.
40. Spencer, John; Thomas, Michael S. C.; McClelland, James L. (2009). Toward a unified theory of development :
connectionism and dynamic systems theory re-considered. Oxford/New York: Oxford University Press. ISBN 978-0-
19-530059-8.
41. Falconer, Kenneth (2013). Fractals, A Very Short Introduction. Oxford University Press.
42. Frame, Angus (3 August 1998). "Iterated Function Systems". In Pickover, Clifford A. Chaos and fractals: a
computer graphical journey : ten year compilation of advanced research (https://books.google.com/books?id=A51A
RsapVuUC). Elsevier. pp. 349–351. ISBN 978-0-444-50002-1. Retrieved 4 February 2012.
43. "Haferman Carpet" (http://www.wolframalpha.com/input/?i=Haferman+carpet). WolframAlpha. Retrieved
18 October 2012.
44. Hahn, Horst K.; Georg, Manfred; Peitgen, Heinz-Otto (2005). "Fractal aspects of three-dimensional vascular
constructive optimization". In Losa, Gabriele A.; Nonnenmacher, Theo F. Fractals in biology and medicine (https://b
ooks.google.com/books?id=t9l9GdAt95gC). Springer. pp. 55–66. ISBN 978-3-7643-7172-2.
45. J. W. Cannon, W. J. Floyd, W. R. Parry. Finite subdivision rules. Conformal Geometry and Dynamics, vol. 5 (2001),
pp. 153–196.
46. J. W. Cannon, W. Floyd and W. Parry. Crystal growth, biological cell growth and geometry. (https://books.google.co
m/books?id=qZHyqUli9y8C&pg=PA65&lpg=PA65&dq=%22james+w.+cannon%22+maths&source=web&ots=RP1
svsBqga&sig=kxyEXFBqOG5NnJncng9HHniHyrc&hl=en&sa=X&oi=book_result&resnum=1&ct=result#PPA65,M
1) Pattern Formation in Biology, Vision and Dynamics, pp. 65–82. World Scientific, 2000. ISBN 981-02-3792-
8,ISBN 978-981-02-3792-9.
47. Fathallah-Shaykh, Hassan M. (2011). "Fractal Dimension of the Drosophila Circadian Clock". Fractals. 19 (4): 423–
430. doi:10.1142/S0218348X11005476 (https://doi.org/10.1142%2FS0218348X11005476).
48. "Hunting the Hidden Dimension." Nova. PBS. WPMB-Maryland. 28 October 2008.
49. Sweet, D.; Ott, E.; Yorke, J. A. (1999), "Complex topology in Chaotic scattering: A Laboratory Observation",
Nature, 399 (6734): 315, Bibcode:1999Natur.399..315S (http://adsabs.harvard.edu/abs/1999Natur.399..315S),
doi:10.1038/20573 (https://doi.org/10.1038%2F20573)
50. Sornette, Didier (2004). Critical phenomena in natural sciences: chaos, fractals, selforganization, and disorder :
concepts and tools. Springer. pp. 128–140. ISBN 978-3-540-40754-6.

51. Meyer, Yves; Roques, Sylvie (1993). Progress in wavelet analysis and applications: proceedings of the International
Conference "Wavelets and Applications," Toulouse, France – June 1992 (https://books.google.com/?id=aHux78oQbb
kC&pg=PA25&dq=snowflake+fractals+book#v=onepage&q=snowflake%20fractals%20book&f=false). Atlantica
Séguier Frontières. p. 25. ISBN 978-2-86332-130-0. Retrieved 2011-02-05.
52. Pincus, David (September 2009). "The Chaotic Life: Fractal Brains Fractal Thoughts" (http://www.psychologytoday.
com/blog/the-chaotic-life/200909/fractal-brains-fractal-thoughts). psychologytoday.com.
53. Carbone, Alessandra; Gromov, Mikhael; Prusinkiewicz, Przemyslaw (2000). Pattern formation in biology, vision and
dynamics (https://books.google.com/?id=qZHyqUli9y8C&pg=PA78&dq=crystal+fractals+book#v=onepage&q=crys
tal%20fractals%20book&f=false). World Scientific. p. 78. ISBN 978-981-02-3792-9.
54. Addison, Paul S. (1997). Fractals and chaos: an illustrated course (https://books.google.com/?id=l2E4ciBQ9qEC&
pg=PA45&dq=lightning+fractals+book#v=onepage&q=lightning%20fractals%20book&f=false). CRC Press. pp. 44–
46. ISBN 978-0-7503-0400-9. Retrieved 2011-02-05.
55. Ozhovan M.I., Dmitriev I.E., Batyukhnova O.G. Fractal structure of pores of clay soil. Atomic Energy, 74, 241–243
(1993)
56. Takayasu, H. (1990). Fractals in the physical sciences. Manchester: Manchester University Press. p. 36.
ISBN 9780719034343.
57. Jun, Li; Ostoja-Starzewski, Martin (1 April 2015). "Edges of Saturn's Rings are Fractal". SpringerPlus. 4,158.
doi:10.1186/s40064-015-0926-6 (https://doi.org/10.1186%2Fs40064-015-0926-6).
58. Enright, Matthew B.; Leitner, David M. (27 January 2005). "Mass fractal dimension and the compactness of
proteins". Physical Review E. 71 (1): 011912. Bibcode:2005PhRvE..71a1912E (http://adsabs.harvard.edu/abs/2005P
hRvE..71a1912E). doi:10.1103/PhysRevE.71.011912 (https://doi.org/10.1103%2FPhysRevE.71.011912).
59. Sreenivasan, K.R.; Meneveau, C. (1986). "The Fractal Facets of Turbulence". Journal of Fluid Mechanics. 173: 357–
386. doi:10.1017/S0022112086001209 (https://doi.org/10.1017%2FS0022112086001209).
60. de Silva, C.M.; Philip, J.; Chauhan, K.; Meneveau, C.; Marusic, I. (2013). "Multiscale Geometry and Scaling of the
Turbulent-Nonturbulent Interface in High Reynolds Number Boundary Layers". Phys. Rev. Lett. 111: 044501.
doi:10.1126/science.1203223 (https://doi.org/10.1126%2Fscience.1203223).
61. R.P. Taylor et al, “Fractal Analysis of Pollock’s Drip Paintings”, Nature, vol. 399, 422 (1999).
62. J.R. Mureika, C.C. Dyer, G.C. Cupchik, “Multifractal Structure in Nonrepresentational Art”, Physical Review E, vol.
72, 046101-1-15 (2005).
63. C. Redies, J. Hasenstein and J. Denzler, “Fractal-Like Image Statistics in Visual Art: Similar to Natural Scenes”,
Spatial Vision, vol. 21, 137–148 (2007).
64. S. Lee, S. Olsen and B. Gooch, “Simulating and Analyzing Jackson Pollock’s Paintings” Journal of Mathematics and
the Arts, vol.1, 73–83 (2007).
65. J. Alvarez-Ramirez, C. Ibarra-Valdez, E. Rodriguez and L. Dagdug, “1/f-Noise Structure in Pollock’s Drip
Paintings”, Physica A, vol. 387, 281–295 (2008).
66. D.J. Graham and D.J. Field, “Variations in Intensity for Representative and Abstract Art, and for Art from Eastern
and Western Hemispheres” Perception, vol. 37, 1341–1352 (2008).
67. J. Alvarez-Ramirez, J. C. Echeverria, E. Rodriguez “Performance of a High-Dimensional R/S Analyis Method for
Hurst Exponent Estimation” Physica A, vol. 387, 6452–6462 (2008).
68. J. Coddington, J. Elton, D. Rockmore and Y. Wang, “Multi-fractal Analysis and Authentication of Jackson Pollock
Paintings”, Proceedings SPIE, vol. 6810, 68100F 1–12 (2008).
69. M. Al-Ayyoub, M. T. Irfan and D.G. Stork, “Boosting Multi-Feature Visual Texture Classifiers for the
Authentification of Jackson Pollock’s Drip Paintings”, SPIE proceedings on Computer Vision and Image Analysis of
Art II, vol. 7869, 78690H (2009)
70. J.R. Mureika and R.P. Taylor, “The Abstract Expressionists and Les Automatistes: multi-fractal depth”, Signal
Processing, vol. 93 573 (2013).
71. R.P. Taylor et al, “Authenticating Pollock Paintings Using Fractal Geometry”, Pattern Recognition Letters, vol. 28,
695–702 (2005).
72. K. Jones-Smith et al, “Fractal Analysis: Revisiting Pollock’s Paintings” Nature, Brief Communication Arising, vol.
444, E9-10, (2006).
73. R.P. Taylor et al, “Fractal Analysis: Revisiting Pollock’s Paintings” Nature, Brief Communication Arising, vol. 444,
E10-11, (2006).
74. L. Shamar, “What Makes a Pollock Pollock: A Machine Vision Approach”, International Journal of Arts and
Technology, vol. 8, 1–10, (2015).
75. R.P. Taylor, B. Spehar, P. Van Donkelaar and C.M. Hagerhall, “Perceptual and Physiological Responses to Jackson
Pollock’s Fractals,” Frontiers in Human Neuroscience, vol. 5 1- 13 (2011).
76. Frame, Michael; and Mandelbrot, Benoît B.; A Panorama of Fractals and Their Uses (http://classes.yale.edu/Fractal
s/Panorama/)
77. Nelson, Bryn; Sophisticated Mathematics Behind African Village Designs Fractal patterns use repetition on large,
small scale (http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2000/02/23/MN36684.DTL), San
Francisco Chronicle, Wednesday, February 23, 2009
78. Situngkir, Hokky; Dahlan, Rolan (2009). Fisika batik: implementasi kreatif melalui sifat fraktal pada batik secara
komputasional. Jakarta: Gramedia Pustaka Utama. ISBN 978-979-22-4484-7
79. Rulistia, Novia D. (2015-10-06). "Application maps out nation's batik story" (http://www.thejakartapost.com/news/2
015/10/06/application-maps-out-nation-s-batik-story.html). The Jakarta Post. Retrieved 2016-09-25.
80. Taylor, Richard P. (2016). "Fractal Fluency: An Intimate Relationship Between the Brain and Processing of Fractal
Stimuli". In Di Ieva, Antonio. The Fractal Geometry of the Brain. Springer Series in Computational Neuroscience.
Springer. pp. 485–496. ISBN 978-1-4939-3995-4.
81. Taylor, Richard P. (2006). "Reduction of Physiological Stress Using Fractal Art and Architecture" (https://doi.org/10.
1162%2Fleon.2006.39.3.245). Leonardo. MIT Press – Journals. 39 (3): 245–251. doi:10.1162/leon.2006.39.3.245 (ht
tps://doi.org/10.1162%2Fleon.2006.39.3.245). Retrieved 25 March 2017.
82. For further discussion of this effect, see Taylor, Richard P.; Spehar, Branka; Donkelaar, Paul Van; Hagerhall,
Caroline M. (2011). "Perceptual and Physiological Responses to Jackson Pollock's Fractals" (https://doi.org/10.338
9%2Ffnhum.2011.00060). Frontiers in Human Neuroscience. Frontiers Media SA. 5.
doi:10.3389/fnhum.2011.00060 (https://doi.org/10.3389%2Ffnhum.2011.00060). Retrieved 25 March 2017.
83. "Introduction to Fractal Geometry" (http://www.fractal.org/Bewustzijns-Besturings-Model/Fractals-Useful-Beauty.ht
m). www.fractal.org. Retrieved 2017-04-11.
84. DeFelice, David (2015-08-18). "NASA – Ion Propulsion" (https://www.nasa.gov/centers/glenn/about/fs21grc.html).
NASA. Retrieved 2017-04-11.
85. Hohlfeld, Robert G.; Cohen, Nathan (1999). "Self-similarity and the geometric requirements for frequency
independence in Antennae". Fractals. 7 (1): 79–84. doi:10.1142/S0218348X99000098 (https://doi.org/10.1142%2FS
0218348X99000098).
86. Reiner, Richard; Waltereit, Patrick; Benkhelifa, Fouad; Müller, Stefan; Walcher, Herbert; Wagner, Sandrine; Quay,
Rüdiger; Schlechtweg, Michael; Ambacher, Oliver; Ambacher, O. (2012). "Fractal structures for low-resistance large
area AlGaN/GaN power transistors" (http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6229091&url=http%3
A%2F%2Fieeexplore.ieee.org%2Fiel5%2F6222389%2F6228999%2F06229091.pdf%3Farnumber%3D6229091)
(PDF). Proceedings of ISPSD: 341. ISBN 978-1-4577-1596-9. doi:10.1109/ISPSD.2012.6229091 (https://doi.org/10.1
109%2FISPSD.2012.6229091).
87. Zhiwei Huang; Yunho Hwang; Vikrant Aute; Reinhard Radermacher (2016). "Review of Fractal Heat Exchangers"
(http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2724&context=iracc) (PDF) International Refrigeration and
Air Conditioning Conference. Paper 1725
88. Chen, Yanguang (2011). "Modeling Fractal Structure of City-Size Distributions Using Correlation Functions" (http
s://www.ncbi.nlm.nih.gov/pmc/articles/PMC3176775). PLoS ONE. 6 (9): e24791. Bibcode:2011PLoSO...624791C
(http://adsabs.harvard.edu/abs/2011PLoSO...624791C). PMC 3176775 (https://www.ncbi.nlm.nih.gov/pmc/articles/P
MC3176775)  . PMID 21949753 (https://www.ncbi.nlm.nih.gov/pubmed/21949753). arXiv:1104.4682 (https://arxiv.
org/abs/1104.4682)  . doi:10.1371/journal.pone.0024791 (https://doi.org/10.1371%2Fjournal.pone.0024791).
89. "Applications" (http://library.thinkquest.org/26242/full/ap/ap.html). Retrieved 2007-10-21.
90. Detecting ‘life as we don't know it’ by fractal analysis (http://journals.cambridge.org/action/displayAbstract?fromPa
ge=online&aid=9012687&fileId=S1473550413000177)
91. Smith, Robert F.; Mohr, David N.; Torres, Vicente E.; Offord, Kenneth P.; Melton III, L. Joseph (1989). "Renal
insufficiency in community patients with mild asymptomatic microhematuria". Mayo Clinic proceedings. Mayo
Clinic. 64 (4): 409–414. PMID 2716356 (https://www.ncbi.nlm.nih.gov/pubmed/2716356). doi:10.1016/s0025-
6196(12)65730-9 (https://doi.org/10.1016%2Fs0025-6196%2812%2965730-9).
92. Landini, Gabriel (2011). "Fractals in microscopy". Journal of Microscopy. 241 (1): 1–8. PMID 21118245 (https://ww
w.ncbi.nlm.nih.gov/pubmed/21118245). doi:10.1111/j.1365-2818.2010.03454.x (https://doi.org/10.1111%2Fj.1365-2
818.2010.03454.x).
93. Cheng, Qiuming (1997). "Multifractal Modeling and Lacunarity Analysis". Mathematical Geology. 29 (7): 919–932.
doi:10.1023/A:1022355723781 (https://doi.org/10.1023%2FA%3A1022355723781).
94. Chen, Yanguang (2011). "Modeling Fractal Structure of City-Size Distributions Using Correlation Functions" (http
s://www.ncbi.nlm.nih.gov/pmc/articles/PMC3176775). PLoS ONE. 6 (9): e24791. Bibcode:2011PLoSO...624791C
(http://adsabs.harvard.edu/abs/2011PLoSO...624791C). PMC 3176775 (https://www.ncbi.nlm.nih.gov/pmc/articles/P
MC3176775)  . PMID 21949753 (https://www.ncbi.nlm.nih.gov/pubmed/21949753). arXiv:1104.4682 (https://arxiv.
org/abs/1104.4682)  . doi:10.1371/journal.pone.0024791 (https://doi.org/10.1371%2Fjournal.pone.0024791).
95. Burkle-Elizondo, Gerardo; Valdéz-Cepeda, Ricardo David (2006). "Fractal analysis of Mesoamerican pyramids".
Nonlinear dynamics, psychology, and life sciences. 10 (1): 105–122. PMID 16393505 (https://www.ncbi.nlm.nih.go
v/pubmed/16393505).
96. Brown, Clifford T.; Witschey, Walter R. T.; Liebovitch, Larry S. (2005). "The Broken Past: Fractals in Archaeology".
Journal of Archaeological Method and Theory. 12: 37. doi:10.1007/s10816-005-2396-6 (https://doi.org/10.1007%2F
s10816-005-2396-6).
97. Saeedi, Panteha; Sorensen, Soren A. "An Algorithmic Approach to Generate After-disaster Test Fields for Search
and Rescue Agents" (http://www.iaeng.org/publication/WCE2009/WCE2009_pp93-98.pdf) (PDF). Proceedings of the
World Congress on Engineering 2009: 93–98. ISBN 978-988-17-0125-1.
98. Bunde, A.; Havlin, S. (2009). "Fractal Geometry, A Brief Introduction to". Encyclopedia of Complexity and Systems
Science. p. 3700. ISBN 978-0-387-75888-6. doi:10.1007/978-0-387-30440-3_218 (https://doi.org/10.1007%2F978-0
-387-30440-3_218).
99. "gpu internals" (http://fileadmin.cs.lth.se/cs/Personal/Michael_Doggett/pubs/doggett12-tc.pdf) (PDF).
100. "sony patents" (https://www.google.ch/patents/US20150287166?dq=morton+order+texture+swizzling&hl=de&sa=X
&ved=0ahUKEwjd8dT35PnMAhUnKcAKHTLCBvcQ6AEILjAC).
101. "description of swizzled and hybrid tiled swizzled textures" (https://news.ycombinator.com/item?id=2239173).
102. "nvidia patent" (http://www.google.com/patents/US8773422).
103. "nvidia patent" (http://www.google.ch/patents/US20110227921).
104. Li, Y.; Perlman, E.; Wang, M.; Yang, Y.; Meneveau, C.; Burns, R.; Chen, S.; Szalay, A.; Eyink, G. (2008). "A Public
Turbulence Database Cluster and Applications to Study Lagrangian Evolution of Velocity Increments in
Turbulence". Journal of Turbulence. 9: N31. doi:10.1080/14685240802376389 (https://doi.org/10.1080%2F1468524
0802376389).

Further reading
Barnsley, Michael F.; and Rising, Hawley; Fractals Everywhere. Boston: Academic Press Professional,
1993. ISBN 0-12-079061-0
Duarte, German A.; Fractal Narrative. About the Relationship Between Geometries and Technology and
Its Impact on Narrative Spaces. Bielefeld: Transcript, 2014. ISBN 978-3-8376-2829-6
Falconer, Kenneth; Techniques in Fractal Geometry. John Wiley and Sons, 1997. ISBN 0-471-92287-0
Jürgens, Hartmut; Peitgen, Heinz-Otto; and Saupe, Dietmar; Chaos and Fractals: New Frontiers of
Science. New York: Springer-Verlag, 1992. ISBN 0-387-97903-4
Mandelbrot, Benoit B.; The Fractal Geometry of Nature. New York: W. H. Freeman and Co., 1982.
ISBN 0-7167-1186-9
Peitgen, Heinz-Otto; and Saupe, Dietmar; eds.; The Science of Fractal Images. New York: Springer-
Verlag, 1988. ISBN 0-387-96608-0
Pickover, Clifford A.; ed.; Chaos and Fractals: A Computer Graphical Journey – A 10 Year Compilation
of Advanced Research. Elsevier, 1998. ISBN 0-444-50002-2
Jones, Jesse; Fractals for the Macintosh, Waite Group Press, Corte Madera, CA, 1993. ISBN 1-878739-
46-8.
Lauwerier, Hans; Fractals: Endlessly Repeated Geometrical Figures, Translated by Sophia Gill-
Hoffstadt, Princeton University Press, Princeton NJ, 1991. ISBN 0-691-08551-X, cloth. ISBN 0-691-
02445-6 paperback. "This book has been written for a wide audience..." Includes sample BASIC
programs in an appendix.
Sprott, Julien Clinton (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 978-0-19-
850839-7.
Wahl, Bernt; Van Roy, Peter; Larsen, Michael; and Kampman, Eric; Exploring Fractals on the Macintosh
(http://www.fractalexplorer.com), Addison Wesley, 1995. ISBN 0-201-62630-6
Lesmoir-Gordon, Nigel; "The Colours of Infinity: The Beauty, The Power and the Sense of Fractals."
ISBN 1-904555-05-5 (The book comes with a related DVD of the Arthur C. Clarke documentary
introduction to the fractal concept and the Mandelbrot set).
Liu, Huajie; Fractal Art, Changsha: Hunan Science and Technology Press, 1997, ISBN 9787535722348.
Gouyet, Jean-François; Physics and Fractal Structures (Foreword by B. Mandelbrot); Masson, 1996.
ISBN 2-225-85130-1, and New York: Springer-Verlag, 1996. ISBN 978-0-387-94153-0. Out of print.
Available in PDF version at "Physics and Fractal Structures" (http://www.jfgouyet.fr/fractal/fractauk.html)
(in French). Jfgouyet.fr. Retrieved 2010-10-17.
Bunde, Armin; Havlin, Shlomo (1996). Fractals and Disordered Systems (http://havlin.biu.ac.il/Shlom
o%20Havlin%20books_fds.php). Springer.
Bunde, Armin; Havlin, Shlomo (1995). Fractals in Science (http://havlin.biu.ac.il/Shlomo%20Havlin%2
0books_f_in_s.php). Springer.
ben-Avraham, Daniel; Havlin, Shlomo (2000). Diffusion and Reactions in Fractals and Disordered
Systems (http://havlin.biu.ac.il/Shlomo%20Havlin%20books_d_r.php). Cambridge University Press.
Falconer, Kenneth (2013). Fractals, A Very Short Introduction. Oxford University Press.

External links
Fractals (https://www.dmoz.org/Science/Math/Chaos_and_Fractals/)
Scaling and Fractals (http://havlin.biu.ac.il/nas1/index.html) presented by Shlomo Havlin, Bar-Ilan
University
Hunting the Hidden Dimension (http://www.pbs.org/wgbh/nova/physics/hunting-hidden-
dimension.html), PBS NOVA, first aired August 24, 2011
Benoit Mandelbrot: Fractals and the Art of Roughness (http://www.ted.com/talks/benoit_mandelbrot_fra
ctals_the_art_of_roughness.html), TED (conference), February 2010
Technical Library on Fractals for controlling fluid (http://www.arifractal.com/technical-library/fractals)
Equations of self-similar fractal measure based on the fractional-order calculus (http://adsabs.harvard.ed
u/abs/2007PrGeo..22..451Y)(2007)

Retrieved from "https://en.wikipedia.org/w/index.php?title=Fractal&oldid=791136746"

Categories: Fractals Mathematical structures Topology Computational fields of study

This page was last edited on 18 July 2017, at 11:05.


Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may
apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered
trademark of the Wikimedia Foundation, Inc., a non-profit organization.

https://en.wikipedia.org/wiki/Fractal 15/15
7/25/2017 Econophysics - Wikipedia

Econophysics
From Wikipedia, the free encyclopedia

Econophysics is an interdisciplinary research field, applying theories and methods originally developed by
physicists in order to solve problems in economics, usually those including uncertainty or stochastic processes
and nonlinear dynamics. Some of its applications to the study of financial markets have also been termed
statistical finance, referring to the field's roots in statistical physics. For an accessible introduction and a toy
manufacturing model requiring only first-semester calculus, see [1].

Contents
1 History
2 Basic tools
3 Influence
4 Main results
5 See also
6 References
7 Further reading
8 Lectures
9 External links

History
Physicists’ interest in the social sciences is not new; Daniel Bernoulli, as an example, was the originator of
utility-based preferences. One of the founders of neoclassical economic theory, former Yale University
Professor of Economics Irving Fisher, was originally trained under the renowned Yale physicist, Josiah Willard
Gibbs.[2] Likewise, Jan Tinbergen, who won the first Nobel Prize in economics in 1969 for having developed
and applied dynamic models for the analysis of economic processes, studied physics with Paul Ehrenfest at
Leiden University. Most importantly, Jan Tinbergen developed the Gravity Model of International Trade,
which has become the workhorse of international economics.

Econophysics was started in the mid-1990s by several physicists working in the subfield of statistical
mechanics. Unsatisfied with the traditional explanations and approaches of economists - which usually
prioritized simplified approaches for the sake of soluble theoretical models over agreement with empirical data
- they applied tools and methods from physics, first to try to match financial data sets, and then to explain more
general economic phenomena.

One driving force behind econophysics arising at this time was the sudden availability of large amounts of
financial data, starting in the 1980s. It became apparent that traditional methods of analysis were insufficient -
standard economic methods dealt with homogeneous agents and equilibrium, while many of the more
interesting phenomena in financial markets fundamentally depended on heterogeneous agents and far-from-
equilibrium situations.

The term "econophysics" was coined by H. Eugene Stanley, to describe the large number of papers written by
physicists in the problems of (stock and other) markets, in a conference on statistical physics in Kolkata
(erstwhile Calcutta) in 1995 and first appeared in its proceedings publication in Physica A 1996.[3][4] The
inaugural meeting on Econophysics was organised 1998 in Budapest by János Kertész and Imre Kondor.

Currently, the recurring meeting series on the topic include APFA, ECONOPHYS-KOLKATA,[5] the
Econophysics Colloquium, and ESHIA/WEHIA.

In recent years network science, heavily reliant on analogies from statistical mechanics, has been applied to the
study of productive systems. That is the case with work done at the Santa Fe Institute, in European-funded
research projects such as Forecasting Financial Crises, and at the Harvard-MIT Observatory of Economic Complexity.

If "econophysics" is taken to denote the principle of applying statistical mechanics to economic analysis, as
opposed to a particular literature or network, priority of innovation is probably due to Emmanuel Farjoun and
Moshé Machover (1983). Their book Laws of Chaos: A Probabilistic Approach to Political Economy proposes
dissolving (their words) the transformation problem in Marx's political economy by re-conceptualising the
relevant quantities as random variables.[6]

If, on the other side, "econophysics" is taken to denote the application of physics to economics, one can already
consider the works of Léon Walras and Vilfredo Pareto as part of it. Indeed, as shown by Bruna Ingrao and
Giorgio Israel, general equilibrium theory in economics is based on the physical concept of mechanical
equilibrium.

Econophysics has nothing to do with the "physical quantities approach" to economics, advocated by Ian
Steedman and others associated with Neo-Ricardianism. Notable econophysicists are Jean-Philippe Bouchaud,
Bikas K Chakrabarti, J. Doyne Farmer, Dirk Helbing, János Kertész, Francis Longstaff, Rosario N. Mantegna,
Matteo Marsili, Joseph L. McCauley, Enrico Scalas, Didier Sornette, H. Eugene Stanley, Victor Yakovenko and
Yi-Cheng Zhang. Particularly noteworthy among the formal courses on econophysics is the one offered by the
Physics Department of Leiden University,[7][8][9] the university from which Jan Tinbergen, the first Nobel
laureate in economics, came. In September 2014, King's College established the first position of Full Professor in
Econophysics.

Basic tools
The basic tools of econophysics are probabilistic and statistical methods, often taken from statistical physics.

Physics models that have been applied in economics include the kinetic theory of gases (the so-called kinetic
exchange models of markets[10]), percolation models, chaotic models originally developed to study cardiac
arrest, and models with self-organizing criticality, as well as other models developed for earthquake
prediction.[11] Moreover, there have been attempts to use the mathematical theory of complexity and information
theory, as developed by scientists such as Murray Gell-Mann and Claude E. Shannon, respectively.
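
As a concrete illustration of the kinetic-exchange idea, the Python sketch below simulates random pairwise
wealth exchanges that conserve total wealth, in analogy with elastic collisions between gas molecules. It is a
minimal sketch with illustrative names and parameters (including the saving propensity lam), not a reproduction
of any specific model from the works cited above; with lam = 0 the stationary wealth distribution is
approximately exponential (Boltzmann-Gibbs-like), while lam > 0 shifts its mode away from zero.

import numpy as np

def kinetic_exchange(n_agents=1000, n_steps=200000, lam=0.0, seed=0):
    """Minimal kinetic wealth-exchange sketch: at each step two randomly chosen
    agents keep a fraction lam of their own wealth, pool the rest and split it
    at random, so total wealth is conserved (as in an elastic collision)."""
    rng = np.random.default_rng(seed)
    wealth = np.ones(n_agents)                   # everyone starts with one unit
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        eps = rng.random()                       # random split of the pooled amount
        pool = (1.0 - lam) * (wealth[i] + wealth[j])
        wealth[i], wealth[j] = lam * wealth[i] + eps * pool, lam * wealth[j] + (1.0 - eps) * pool
    return wealth

if __name__ == "__main__":
    w = kinetic_exchange()
    # For lam = 0 the histogram of w is close to an exponential (Boltzmann-Gibbs)
    # distribution; lam > 0 pushes the most probable wealth away from zero.
    print("mean wealth:", round(float(w.mean()), 3), " max wealth:", round(float(w.max()), 3))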

Since economic phenomena are the result of the interaction among many heterogeneous agents, there is an
analogy with statistical mechanics, where many particles interact; but it must be taken into account that the
properties of human beings and particles differ significantly. However, statistical mechanics has been shown to
arise from well-established tools used by economists in potential games, rather than being imposed a priori to
model economic phenomena.[12] In fact, the quantal response equilibrium was shown to be a mean-field version
of the Gibbs measure in the context of bounded-rational potential games.

For potential games, it has been shown that an emergence-producing equilibrium based on information via
Shannon information entropy produces the same equilibrium measure (the Gibbs measure from statistical
mechanics) as a stochastic dynamical equation, both of which are based on bounded-rationality models used by
economists. The fluctuation-dissipation theorem connects the two and establishes a concrete correspondence of
"temperature", "entropy", "free potential/energy", and other physics notions to an economic system. The
statistical mechanics model is not constructed a priori - it is the result of a bounded-rationality assumption and
of modeling on existing neoclassical models. It has been used to prove the "inevitability of collusion" result of
Huw Dixon in a case for which the neoclassical version of the model does not predict collusion.[13][14] Here
the demand is increasing, as with Veblen goods or stock buyers with the "hot hand" fallacy who prefer to buy
more successful stocks and sell those that are less successful.[15] It also yields a phase transition in a model of
two interdependent markets, giving a different perspective to the Sonnenschein-Mantel-Debreu theorem.[16][17]

Quantifiers derived from information theory have been used in several papers by the econophysicist Aurelio F. Bariviera
(http://www.aureliofernandez.net/) and coauthors to assess the degree of informational efficiency of
stock markets. In a paper published in Physica A,[18] Zunino et al. use a statistical tool that was novel in the
financial literature: the complexity-entropy causality plane. This Cartesian representation establishes an
efficiency ranking of different markets and distinguishes different bond market dynamics. Moreover, the authors
conclude that the classification derived from the complexity-entropy causality plane is consistent with the
ratings assigned by major rating companies to the sovereign instruments. A similar study by Bariviera et al.[19]
explores the relationship between credit ratings and the informational efficiency of a sample of corporate bonds
of US oil and energy companies, also using the complexity-entropy causality plane. They find that this
classification agrees with the credit ratings assigned by Moody's.

Another good example is random matrix theory, which can be used to identify the noise in financial correlation
matrices. One paper has argued that this technique can improve the performance of portfolios, e.g., when applied
to portfolio optimization.[20]
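
A minimal sketch, assuming one common random-matrix-theory cleaning recipe rather than the exact procedure
of the cited paper: eigenvalues of the empirical correlation matrix that fall below the Marchenko-Pastur upper
edge expected for pure noise are flattened to their average, and the matrix is rebuilt. All names and the toy data
are illustrative.

import numpy as np

def mp_filter_correlation(returns):
    """Illustrative RMT 'noise filtering' of a correlation matrix: eigenvalues
    below the Marchenko-Pastur upper edge for pure noise are replaced by their
    average (preserving their sum), and the matrix is rebuilt."""
    T, N = returns.shape                        # T observations of N assets
    corr = np.corrcoef(returns, rowvar=False)   # N x N empirical correlation matrix
    evals, evecs = np.linalg.eigh(corr)
    q = T / N
    lambda_max = (1.0 + 1.0 / np.sqrt(q)) ** 2  # Marchenko-Pastur upper edge
    noise = evals < lambda_max
    filtered = evals.copy()
    if noise.any():
        filtered[noise] = evals[noise].mean()   # flatten the noise band
    cleaned = evecs @ np.diag(filtered) @ evecs.T
    d = np.sqrt(np.diag(cleaned))
    return cleaned / np.outer(d, d)             # rescale back to unit diagonal

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    toy_returns = rng.standard_normal((500, 50))   # uncorrelated toy data: almost pure noise
    cleaned = mp_filter_correlation(toy_returns)
    print(cleaned.shape, round(float(cleaned[0, 0]), 3))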

There are, however, various other tools from physics that have so far been used, such as fluid dynamics,
classical mechanics and quantum mechanics (including so-called classical economy, quantum economy and
quantum finance),[21] and the path integral formulation of statistical mechanics.[22]

The concept of economic complexity index, introduced by the MIT physicist Cesar A. Hidalgo and the Harvard
economist Ricardo Hausmann and made available at MIT's Observatory of Economic Complexity, has been
devised as a predictive tool for economic growth. According to the estimates of Hausmann and Hidalgo, the
ECI is far more accurate in predicting GDP growth than the traditional governance measures of the World
Bank.[23]

There are also analogies between finance theory and diffusion theory. For instance, the Black–Scholes equation
for option pricing is a diffusion-advection equation (see however [24][25] for a critique of the Black-Scholes
methodology). The Black-Scholes theory can be extended to provide an analytical theory of the main factors in
economic activities.[22]
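
To make the diffusion analogy explicit, in standard notation (V the option value, S the price of the underlying,
t time, r the risk-free rate, σ the volatility), the Black-Scholes equation and its form after the substitution
x = ln S read:

    \frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^{2}S^{2}\,\frac{\partial^{2}V}{\partial S^{2}} + rS\,\frac{\partial V}{\partial S} - rV = 0,

    \frac{\partial V}{\partial t} + \frac{\sigma^{2}}{2}\,\frac{\partial^{2}V}{\partial x^{2}} + \Bigl(r - \frac{\sigma^{2}}{2}\Bigr)\frac{\partial V}{\partial x} - rV = 0,

i.e. a diffusion term (the second derivative in x), an advection term with drift velocity r - σ²/2, and a
discounting term -rV; further standard changes of variables reduce the equation to the heat equation.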

Influence
Papers on econophysics have been published primarily in journals devoted to physics and statistical mechanics,
rather than in leading economics journals. Mainstream economists have generally been unimpressed by this
work.[26] Some heterodox economists, including Mauro Gallegati, Steve Keen, Paul Ormerod, and Alan
Kirman, have shown more interest, but have also criticized trends in econophysics.

In contrast, econophysics is having some impact on the more applied field of quantitative finance, whose scope
and aims significantly differ from those of economic theory. Various econophysicists have introduced models
for price fluctuations in financial markets or original points of view on established models.[24][27][28] Several
scaling laws have also been found in various economic data.[29][30][31]

Main results
Presently, one of the main results of econophysics is the explanation of the "fat tails" in the distribution of many
kinds of financial data as a universal self-similar scaling property (i.e. scale invariant over many orders of
magnitude in the data),[32] arising from the tendency of individual market competitors, or of aggregates of
them, to exploit systematically and optimally the prevailing "microtrends" (e.g., rising or falling prices). These
"fat tails" are important not only mathematically: they quantify risks that may, on the one hand, seem small
enough to neglect but that, on the other hand, are not negligible at all, i.e. they can never be made exponentially
tiny. Instead, they follow a measurable, algebraically decreasing power law, with a failure probability that falls
off only as a power of x, where x is an increasingly large variable in the tail region of the distribution considered
(i.e. price statistics with far more than 10^8 data points). That is, the events considered are not simply "outliers"
but must really be taken into account and cannot be "insured away".[33] It also appears to play a role that near a
change of tendency (e.g. from falling to rising prices) there are typical "panic reactions" of the selling or buying
agents, with algebraically increasing trading speeds and volumes.[33] The "fat tails" are also observed in
commodity markets.

As in quantum field theory the "fat tails" can be obtained by complicated "nonperturbative" methods, mainly by
numerical ones, since they contain the deviations from the usual Gaussian approximations, e.g. the Black-
Scholes theory. Fat tails can, however, also be due to other phenomena, such as a random number of terms in
the central-limit theorem, or any number of other, non-econophysics models. Due to the difficulty in testing
such models, they have received less attention in traditional economic analysis.

Another result[17] shows the existence of a phase transition in a bounded-rational interdependent two-market
model with a longer-term investing agent and speculators who equilibrate the markets more quickly. The
emergence of market preference is shown to occur. When the longer-term investor is absent, speculators
spontaneously break a market exchange symmetry as they become more rational (below a critical temperature),
with diverging spatial-volatility/susceptibility, and crowd their preferences into one of the two markets. This
spontaneous symmetry breaking shows how one of two equilibria is chosen for a non-concave potential, which
gives a statistical mechanics perspective to the existence of multiple equilibria in the spirit of the
Sonnenschein-Mantel-Debreu theorem. Spatial volatility does not diverge when the longer-term investor is
present, showing the longer-term investor creates a more stable situation.

See also
Bose–Einstein condensation (network theory)
Potential Game
Complexity economics
Complex network
Detrended fluctuation analysis
Kinetic exchange models of markets
Long-range dependency
Network theory
Network science
Thermoeconomics

References
1. Campbell, Michael J.; Hanna, Joseph (2016). "The Optimal Can: An Uncanny Approach" (https://www.researchgate.
net/publication/307569165_The_Optimal_Can_An_Uncanny_Approach). The UMAP Journal. COMAP. 37 (1): 43–
63.
2. Yale Economic Review, Retrieved October-25-09 (http://www.yaleeconomicreview.com/issues/2006_spring/financep
hysical.html)
3. Interview of H. E. Stanley on Econophysics (Published in "IIM Kozhikode Society & Management Review", Sage
publication (USA), Vol. 2 Issue 2 (July), pp. 73-78 (2013)) (http://www.saha.ac.in/cmp/camcs/Stanley-interview.pdf)
4. Econophysics Research in India in the last two Decades (1993-2013) (Published in "IIM Kozhikode Society &
Management Review", Sage publication (USA), Vol. 2 Issue 2 (July), pp. 135-146 (2013)) (http://arxiv.org/pdf/1308.
2191.pdf)

5. ECONOPHYS-KOLKATA VIII: Econophysics and data driven modelling of market dynamics, March 14–17, 2014
(Proc. Vol.: Econophysics and data driven modelling of market dynamics, Eds. F. Abergel, H. Aoyama, B.K.
Chakrabarti, A. Chakraborti, A. Ghosh, New Economic Windows, Springer Int. Publ., Switzerland, 2015);
ECONOPHYS-KOLKATA VII : Econophysics of Agent Based Models, 8–12 November 2012 (Proc. Vol.:
Econophysics of Agent Based Models, Eds. F. Abergel, H. Aoyama, B.K. Chakrabarti, A. Chakraborti, A. Ghosh,
New Economic Windows, Springer Int. Publ., Switzerland, 2013); ECONOPHYS-KOLKATA VI : Econophysics of
Systemic Risk and Network Dynamics, 21–25 October 2011 (Proc. Vol.: Econophysics of Systemic Risk and
Network Dynamics, Eds. F. Abergel, B.K. Chakrabarti, A. Chakraborti, A. Ghosh, New Economic Windows,
Springer-Verlag, Milan, 2012); ECONOPHYS-KOLKATA V : Econophysics of Order-Driven Markets, 9–13 March
2010 (Proc. Vol.: Econophysics of Order-driven Markets, Eds. F. Abergel, B.K. Chakrabarti, A. Chakraborti, M.
Mitra, New Economic Windows, Springer-Verlag, Milan, 2011); ECONOPHYS-KOLKATA IV : Econophysics of
Games and Social Choices, 9–13 March 2009 (Proc. Vol.: Econophysics & Economics of Games, Social Choices and
Quantitative Techniques, Eds. B. Basu, B. K. Chakrabarti, S. R. Chakravarty, K. Gangopadhyay, New Economic
Windows, Springer-Verlag, Milan, 2010); ECONOPHYS-KOLKATA III: Econophysics & Sociophysics of Markets
and Networks, 12–15 March 2007 (Proc. Vol.: Econophysics of Markets and Business Networks, Eds. A. Chatterjee,
B.K. Chakrabarti, New Economic Windows, Springer-Verlag, Milan, 2007); ECONOPHYS-KOLKATA II:
Econophysics of Stock Markets and Minority Games, 14–17 February 2006 (Proc. Vol.: Econophysics of Stock and
other Markets, Eds. A. Chatterjee, B.K. Chakrabarti, New Economic Windows, Springer-Verlag, Milan, 2006);
ECONOPHYS-KOLKATA I: Econophysics of Wealth Distributions, 15–19 March 2005 (Proc. Vol.: Econophysics
of Wealth Distributions, Eds. A. Chatterjee, S. Yarlagadda, B.K. Chakrabarti, New Economic Windows, Springer-
Verlag, Milan, 2005).
6. Farjoun and Machover disclaim complete originality: their book is dedicated to the late Robert H. Langston, who
they cite for direct inspiration (page 12), and they also note an independent suggestion in a discussion paper by E.T.
Jaynes (page 239)
7. "Physics - Education" (https://studiegids.leidenuniv.nl/en/courses/show/34804/econofysica). Physics.leidenuniv.nl.
2011–2013. Retrieved 2013. Check date values in: |access-date= (help)
8. "Physics - Education" (https://studiegids.leidenuniv.nl/en/courses/show/41126/econofysica). Physics.leidenuniv.nl.
2011–2014. Retrieved 2014. Check date values in: |access-date= (help)
9. "Physics - Education" (https://studiegids.leidenuniv.nl/en/courses/show/41126/econofysica). Physics.leidenuniv.nl.
2011–2015. Retrieved 2015. Check date values in: |access-date= (help)
10. Bikas K Chakrabarti, Anirban Chakraborti, Satya R Chakravarty, Arnab Chatterjee (2012). Econophysics of Income
& Wealth Distributions. Cambridge University Press, Cambridge.
11. Didier Sornette (2003). Why Stock Markets Crash?. Princeton University Press.
12. Campbell, Michael J. (2005). "A Gibbsian approach to potential game theory (draft)". arXiv:cond-mat/0502112v2 (h
ttps://arxiv.org/abs/cond-mat/0502112v2)  .
13. Dixon, Huw (2000). "keeping up with the Joneses: competition and the evolution of collusion" (http://dx.doi.org/10.1
016/S0167-2681(00)00117-7). Journal of Economic Behavior and Organization. 43: 223–238.
14. Campbell, Michael J. (2016). "Inevitability of Collusion in a Coopetitive Bounded Rational Cournot Model with
Increasing Demand" (https://www.researchgate.net/publication/307578691_Inevitability_of_Collusion_in_a_Coopeti
tive_Bounded_Rational_Cournot_Model_with_Increasing_Demand). Journal of Mathematical Economics and
Finance. ASERS. 2 (1): 7–19.
15. Johnson, Joseph; Tellis, G.J.; Macinnis, D.J. (2005). "Losers, Winners, and Biased Trades". Journal of Consumer
Research. 2 (32): 324–329. doi:10.1086/432241 (https://doi.org/10.1086%2F432241).
16. Carfi, David; Campbell, Michael J. (2015). "Bounded Rational Speculative and Hedging Interaction Model in Oil
and U.S. Dollar Markets" (https://www.researchgate.net/publication/286930258_Bounded_Rational_Speculative_an
d_Hedging_Interaction_Model_in_Oil_and_US_Dollar_Markets). Journal of Mathematical Economics and Finance.
ASERS. 1 (1): 4–23. doi:10.14505/jmef.01 (https://doi.org/10.14505%2Fjmef.01).
17. Campbell, Michael J.; Carfi, David (2017). "Bounded Rational Speculative and Hedging Interaction Model in Oil
and U.S. Dollar Markets III - Phase Transition" (https://www.researchgate.net/publication/312193761_Speculative_a
nd_Hedging_Interaction_Model_in_Oil_and_US_Dollar_Markets_III_-_Phase_Transition). preprint.
18. Zunino, L., Bariviera, A.F., Guercio, M.B., Martinez, L.B. and Rosso, O.A. (2012). "On the efficiency of sovereign
bond markets". Physica A: Statistical Mechanics and its Applications. 391 (18): 4342–4349.
Bibcode:2012PhyA..391.4342Z (http://adsabs.harvard.edu/abs/2012PhyA..391.4342Z).
doi:10.1016/j.physa.2012.04.009 (https://doi.org/10.1016%2Fj.physa.2012.04.009).
19. Bariviera, A.F.,Zunino, L., Guercio, M.B., Martinez, L.B. and Rosso, O.A. (2013). "Efficiency and credit ratings: a
permutation-information-theory analysis". Journal of Statistical Mechanics: Theory and Experiment. 2013 (08):
P08007. Bibcode:2013JSMTE..08..007F (http://adsabs.harvard.edu/abs/2013JSMTE..08..007F). arXiv:1509.01839
(https://arxiv.org/abs/1509.01839)  . doi:10.1088/1742-5468/2013/08/P08007 (https://doi.org/10.1088%2F1742-546
8%2F2013%2F08%2FP08007).

20. Vasiliki Plerou; Parameswaran Gopikrishnan; Bernd Rosenow; Luis Amaral; Thomas Guhr; H. Eugene Stanley
(2002). "Random matrix approach to cross correlations in financial data". Physical Review E. 65 (6): 066126.
Bibcode:2002PhRvE..65f6126P (http://adsabs.harvard.edu/abs/2002PhRvE..65f6126P). arXiv:cond-mat/0108023 (ht
tps://arxiv.org/abs/cond-mat/0108023)  . doi:10.1103/PhysRevE.65.066126 (https://doi.org/10.1103%2FPhysRevE.6
5.066126).
21. Anatoly V. Kondratenko (2015). Probabilistic Economic Theory. LAP LAMBERT Academic Publishing. ISBN 978-
3-659-89232-5.
22. Chen, Jing (2015). The Unity of Science and Economics: A New Foundation of Economic Theory.
http://www.springer.com/us/book/9781493934645: Springer.
23. Ricardo Hausmann; Cesar Hidalgo; et al. "The Atlas of Economic Complexity" (http://atlas.media.mit.edu/atlas/).
The Observatory of Economic Complexity (MIT Media Lab). Retrieved 26 April 2012.
24. Jean-Philippe Bouchaud; Marc Potters (2003). Theory of Financial Risk and Derivative Pricing. Cambridge
University Press.
25. "Welcome to a non Black-Scholes world" (http://www.tandfonline.com/doi/abs/10.1080/713665871?journalCode=rq
uf20)
26. Philip Ball (2006). "Econophysics: Culture Crash". Nature. 441 (7094): 686–688. Bibcode:2006Natur.441..686B (htt
p://adsabs.harvard.edu/abs/2006Natur.441..686B). PMID 16760949 (https://www.ncbi.nlm.nih.gov/pubmed/1676094
9). doi:10.1038/441686a (https://doi.org/10.1038%2F441686a).
27. Enrico Scalas (2006). "The application of continuous-time random walks in finance and economics". Physica A. 362
(2): 225–239. Bibcode:2006PhyA..362..225S (http://adsabs.harvard.edu/abs/2006PhyA..362..225S).
doi:10.1016/j.physa.2005.11.024 (https://doi.org/10.1016%2Fj.physa.2005.11.024).
28. Y. Shapira; Y. Berman; E. Ben-Jacob (2014). "Modelling the short term herding behaviour of stock markets". New
Journal of Physics. 16. Bibcode:2014NJPh...16e3040S (http://adsabs.harvard.edu/abs/2014NJPh...16e3040S).
doi:10.1088/1367-2630/16/5/053040 (https://doi.org/10.1088%2F1367-2630%2F16%2F5%2F053040).
29. Y. Liu; P. Gopikrishnan; P. Cizeau; M. Meyer; C.-K. Peng; H. E. Stanley (1999). "Statistical properties of the
volatility of price fluctuations". Physical Review E. 60 (2): 1390. Bibcode:1999PhRvE..60.1390L (http://adsabs.harv
ard.edu/abs/1999PhRvE..60.1390L). arXiv:cond-mat/9903369 (https://arxiv.org/abs/cond-mat/9903369)  .
doi:10.1103/PhysRevE.60.1390 (https://doi.org/10.1103%2FPhysRevE.60.1390).
30. M. H. R. Stanley; L. A. N. Amaral; S. V. Buldyrev; S. Havlin; H. Leschhorn; P. Maass; M. A. Salinger; H. E. Stanley
(1996). "Scaling behaviour in the growth of companies" (http://havlin.biu.ac.il/Publications.php?keyword=Scaling+b
ehaviour+in+the+growth+of+companies&year=*&match=all). Nature. 379 (6568): 804.
Bibcode:1996Natur.379..804S (http://adsabs.harvard.edu/abs/1996Natur.379..804S). doi:10.1038/379804a0 (https://d
oi.org/10.1038%2F379804a0).
31. K. Yamasaki; L. Muchnik; S. Havlin; A. Bunde; H.E. Stanley (2005). "Scaling and memory in volatility return
intervals in financial markets" (http://havlin.biu.ac.il/Publications.php?keyword=Scaling+and+memory+in+volatility
+return+intervals+in+financial+markets&year=*&match=all). PNAS. 102 (26): 9424–8.
Bibcode:2005PNAS..102.9424Y (http://adsabs.harvard.edu/abs/2005PNAS..102.9424Y). PMC 1166612 (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC1166612)  . PMID 15980152 (https://www.ncbi.nlm.nih.gov/pubmed/1598015
2). doi:10.1073/pnas.0502613102 (https://doi.org/10.1073%2Fpnas.0502613102).
32. The physicists noted the scaling behaviour of "fat tails" through a letter to the scientific journal Nature by Rosario N.
Mantegna and H. Eugene Stanley: Scaling behavior in the dynamics of an economic index, Nature Vol. 376, pages
46-49 (1995), however the "fat tails"- phenomenon itself was discovered already earlier by economists.
33. See for example Preis, Mantegna, 2003.

Further reading
Rosario N. Mantegna, H. Eugene Stanley, An Introduction to Econophysics: Correlations and
Complexity in Finance, Cambridge University Press (Cambridge, UK, 1999) (http://www.cambridge.org/
uk/catalogue/catalogue.asp?isbn=0521620082)
Sitabhra Sinha, Arnab Chatterjee, Anirban Chakraborti, Bikas K Chakrabarti. Econophysics: An
Introduction, Wiley-VCH (2010) (http://www.wiley.com/WileyCDA/WileyTitle/productCd-3527408150,
descCd-authorInfo.html)
Bikas K Chakrabarti, Anirban Chakraborti, Arnab Chatterjee, Econophysics and Sociophysics : Trends
and Perspectives, Wiley-VCH, Berlin (2006) (http://www.wiley-vch.de/publish/en/books/bySubjectPH0
0/bySubSubjectPH95/3-527-40670-0/?sID=d05b)
Joseph McCauley, Dynamics of Markets, Econophysics and Finance, Cambridge University Press
(Cambridge, UK, 2004) (http://www.cambridge.org/catalogue/catalogue.asp?isbn=0521824478)
Bertrand Roehner, Patterns of Speculation - A Study in Observational Econophysics, Cambridge
University Press (Cambridge, UK, 2002) (http://www.cambridge.org/catalogue/catalogue.asp?isbn=0521675731)
Surya Y., Situngkir, H., Dahlan, R. M., Hariadi, Y., Suroso, R. (2004). Aplikasi Fisika dalam Analisis
Keuangan (Physics Applications in Financial Analysis). Bina Sumber Daya MIPA. ISBN 9793073527
Arnab Chatterjee, Sudhakar Yarlagadda, Bikas K Chakrabarti, Econophysics of Wealth Distributions,
Springer-Verlag Italia (Milan, 2005) (http://www.springer.com/sgw/cda/frontpage/0,,5-165-72-52121089-
0,00.html)
Philip Mirowski, More Heat than Light - Economics as Social Physics, Physics as Nature's Economics,
Cambridge University Press (Cambridge, UK, 1989) (http://www.cambridge.org/catalogue/catalogue.as
p?isbn=0521426898)
Ubaldo Garibaldi and Enrico Scalas, Finitary Probabilistic Methods in Econophysics, Cambridge
University Press (Cambridge, UK, 2010) (http://www.cambridge.org/gb/knowledge/isbn/item2708018/?s
ite_locale=en_GB).
Emmanuel Farjoun and Moshé Machover, Laws of Chaos: a probabilistic approach to political economy
(http://staffnet.kingston.ac.uk/~ku32530/PPE/PPEindex.html), Verso (London, 1983) ISBN 0 86091 768
1
Nature Physics Focus issue: Complex networks in finance March 2013 Volume 9 No 3 pp 119–128
Mark Buchanan, What has econophysics ever done for us?, Nature 2013 (http://www.nature.com/nphys/j
ournal/v9/n6/full/nphys2648.html)
An Analytical treatment of Gibbs-Pareto behaviour in wealth distribution by Arnab Das and Sudhakar
Yarlagadda [1] (http://arxiv.org/pdf/cond-mat/0409329.pdf)
A distribution function analysis of wealth distribution by Arnab Das and Sudhakar Yarlagadda [2] (http://
arxiv.org/pdf/cond-mat/0310343.pdf)
Analytical treatment of a trading market model by Arnab Das [3] (http://arxiv.org/pdf/cond-mat/0304685.
pdf)
Martin Shubik and Eric Smith, The Guidance of an Enterprise Economy, MIT Press, [4] (https://mitpress.
mit.edu/books/guidance-enterprise-economy) MIT Press (2016)
Abergel, F., Aoyama, H., Chakrabarti, B.K., Chakraborti, A., Deo, N., Raina, D., Vodenska, I. (Eds.),
Econophysics and Sociophysics: Recent Progress and Future Directions, [5] (http://www.springer.com/i
n/book/9783319477046), New Economic Windows Series, Springer (2017)

Lectures
Economic Fluctuations and Statistical Physics: Quantifying Extremely Rare and Much Less Rare Events,
Eugene Stanley, Videolectures.net (http://videolectures.net/ccss09_stanley_efasp/)
Applications of Statistical Physics to Understanding Complex Systems, Eugene Stanley,
Videolectures.net (http://videolectures.net/eccs08_stanley_aosptucs/)
Financial Bubbles, Real Estate Bubbles, Derivative Bubbles, and the Financial and Economic Crisis,
Didier Sornette, Videolectures.net (http://videolectures.net/ccss09_sornette_fbreb/)
Financial crises and risk management, Didier Sornette, Videolectures.net (http://videolectures.net/risc08_
sornette_fcrm/)
Bubble trouble: how physics can quantify stock-market crashes, Tobias Preis, Physics World Online
Lecture Series (http://physicsworld.com/cws/article/multimedia/45725)

External links
Econophysics Colloquium 2017 (http://www.ec2017.org)
Econophysics Ph.D. Program at University of Houston, Houston, TX. (http://phys.uh.edu/research/econo
physics/index.php)
Finance Gets Physical (http://www.yaleeconomicreview.org/issues/2006_spring/financephysical.html) -
Yale Economic Review
Econophysics Forum (http://www.unifr.ch/econophysics/)
Conference to mark 25th anniversary of Farjoun and Machover's book (http://staffnet.kingston.ac.uk/~ku
32530/PPE/PPEindex.html)
Chair of International Economics, University of Bamberg (Germany) (http://www.uni-bamberg.de/en/vwl
-iwf/)
Econophysics Colloquium (http://www.econophysics-colloquium.org/)

Life
From Wikipedia, the free encyclopedia

Life is a characteristic distinguishing physical entities having biological processes, such as signaling and self-sustaining processes, from those that do not, either because such functions have ceased, or because they never had such functions and are classified as inanimate. Various forms of life exist, such as plants, animals, fungi, protists, archaea, and bacteria. The criteria can at times be ambiguous and may or may not define viruses, viroids, or potential artificial life as "living". Biology is the primary science concerned with the study of life, although many other sciences are involved.

The definition of life is controversial. The current definition is that organisms maintain homeostasis, are composed of cells, undergo metabolism, can grow, adapt to their environment, respond to stimuli, and reproduce. However, many other biological definitions have been proposed, and there are some borderline cases of life, such as viruses. Throughout history, there have been many attempts to define what is meant by "life" and many theories on the properties and emergence of living things, such as materialism, the belief that everything is made out of matter and that life is merely a complex form of it; hylomorphism, the belief that all things are a combination of matter and form, and the form of a living thing is its soul; spontaneous generation, the belief that life repeatedly emerges from non-life; and vitalism, a now largely discredited hypothesis that living organisms possess a "life force" or "vital spark". Modern definitions are more complex, with input from a diversity of scientific disciplines. Biophysicists have proposed many definitions based on chemical systems; there are also some living systems theories, such as the Gaia hypothesis, the idea that the Earth itself is alive. Another theory is that life is the property of ecological systems, and yet another is elaborated in complex systems biology, a branch or subfield of mathematical biology. Abiogenesis describes the natural process of life arising from non-living matter, such as simple organic compounds. Properties common to all organisms include the need for certain core chemical elements to sustain biochemical functions.

(Infobox "Life". Image: Plants in the Rwenzori Mountains, Uganda. Scientific classification, domains and supergroups of life on Earth: Non-cellular life[note 1][note 2]: Viruses,[note 3] Viroids; Cellular life: Domain Bacteria, Domain Archaea, Domain Eukarya (Archaeplastida, SAR, Excavata, Amoebozoa, Opisthokonta).)

Life on Earth first appeared as early as 4.28 billion years ago, soon after ocean formation 4.41 billion years ago,
and not long after the formation of the Earth 4.54 billion years ago.[1][2][3][4] Earth's current life may have
descended from an RNA world, although RNA-based life may not have been the first. The mechanism by
which life began on Earth is unknown, though many hypotheses have been formulated and are often based on
the Miller–Urey experiment. The earliest known life forms are microfossils of bacteria. In July 2016, scientists
reported identifying a set of 355 genes believed to be present in the last universal common ancestor (LUCA) of
all living organisms.[5]

Since its primordial beginnings, life on Earth has changed its environment on a geologic time scale. To survive
in most ecosystems, life must often adapt to a wide range of conditions. Some microorganisms, called
extremophiles, thrive in physically or geochemically extreme environments that are detrimental to most other
life on Earth. Aristotle was the first person to classify organisms. Later, Carl Linnaeus introduced his system of
binomial nomenclature for the classification of species. Eventually new groups and categories of life were
discovered, such as cells and microorganisms, forcing dramatic revisions of the structure of relationships
between living organisms. Cells are sometimes considered the smallest units and "building blocks" of life.
There are two kinds of cells, prokaryotic and eukaryotic, both of which consist of cytoplasm enclosed within a
membrane and contain many biomolecules such as proteins and nucleic acids. Cells reproduce through a
process of cell division, in which the parent cell divides into two or more daughter cells.

Though currently only known on Earth, life need not be restricted to it, and many scientists believe in the
existence of extraterrestrial life. Artificial life is a computer simulation or man-made reconstruction of any
aspect of life, which is often used to examine systems related to natural life. Death is the permanent termination
of all biological functions which sustain an organism, and as such, is the end of its life. Extinction is the process
by which an entire group or taxon, normally a species, dies out. Fossils are the preserved remains or traces of
organisms.

Contents
1 Definitions
1.1 Biology
1.1.1 Alternative definitions
1.1.2 Viruses
1.2 Biophysics
1.3 Living systems theories
1.3.1 Gaia hypothesis
1.3.2 Nonfractionability
1.3.3 Life as a property of ecosystems
1.3.4 Complex systems biology
1.3.5 Darwinian dynamic
1.3.6 Operator theory
2 History of study
2.1 Materialism
2.2 Hylomorphism
2.3 Spontaneous generation
2.4 Vitalism
3 Origin
4 Environmental conditions
4.1 Biosphere
4.2 Range of tolerance
4.3 Extremophiles
4.4 Chemical elements
4.4.1 DNA
5 Classification
5.1 Biota (taxonomy)
6 Cells
7 Extraterrestrial
8 Artificial
9 Death
9.1 Extinction
9.2 Fossils
10 See also
11 Notes
12 References
13 Further reading
14 External links

Definitions
It is a challenge for scientists and philosophers to define life.[6][7][8][9][10] This is partially because life is a
process, not a substance.[11][12][13] Any definition must be general enough to both encompass all known life
and any unknown life that may be different from life on Earth.[14][15][16]

Biology

Since there is no unequivocal definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that exhibits all or most of the following traits:[15][17][18][19][20][21][22]

(Figure: The characteristics of life)

1. Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature.
2. Organization: being structurally composed of one or more cells – the basic units of life.
3. Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
4. Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in
size in all of its parts, rather than simply accumulating matter.
5. Adaptation: the ability to change over time in response to the environment. This ability is fundamental
to the process of evolution and is determined by the organism's heredity, diet, and external factors.
6. Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to
external chemicals, to complex reactions involving all the senses of multicellular organisms. A response
is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism),
and chemotaxis.
7. Reproduction: the ability to produce new individual organisms, either asexually from a single parent
organism or sexually from two parent organisms.

These complex processes, called physiological functions, have underlying physical and chemical bases, as well
as signaling and control mechanisms that are essential to maintaining life.

Alternative definitions

From a physics perspective, living beings are thermodynamic systems with an organized molecular structure
that can reproduce itself and evolve as survival dictates.[23][24] Thermodynamically, life has been described as
an open system which makes use of gradients in its surroundings to create imperfect copies of itself.[25] Hence,
life is a self-sustained chemical system capable of undergoing Darwinian evolution.[26][27] A major strength of
this definition is that it distinguishes life by the evolutionary process rather than its chemical composition.[28]

Others take a systemic viewpoint that does not necessarily depend on molecular chemistry. One systemic
definition of life is that living things are self-organizing and autopoietic (self-producing). Variations of this
definition include Stuart Kauffman's definition as an autonomous agent or a multi-agent system capable of
reproducing itself or themselves, and of completing at least one thermodynamic work cycle.[29] This definition
is extended by the apparition of novel functions over time.[30]

Viruses


Whether or not viruses should be considered as alive is controversial. They are most often considered as just replicators rather than forms of life.[31] They have been described as "organisms at the edge of life"[32] because they possess genes, evolve by natural selection,[33][34] and replicate by creating multiple copies of themselves through self-assembly. However, viruses do not metabolize and they require a host cell to make new products. Virus self-assembly within host cells has implications for the study of the origin of life, as it may support the hypothesis that life could have started as self-assembling organic molecules.[35][36][37]

[Image: Adenovirus as seen under an electron microscope]

Biophysics

To reflect the minimum phenomena required, other biological definitions of life have been proposed,[38] with
many of these being based upon chemical systems. Biophysicists have commented that living things function
on negative entropy.[39][40] In other words, living processes can be viewed as a delay of the spontaneous
diffusion or dispersion of the internal energy of biological molecules towards more potential microstates.[6] In
more detail, according to physicists such as John Bernal, Erwin Schrödinger, Eugene Wigner, and John Avery,
life is a member of the class of phenomena that are open or continuous systems able to decrease their internal
entropy at the expense of substances or free energy taken in from the environment and subsequently rejected in
a degraded form.[41][42]
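
A minimal way to state this entropy bookkeeping, sketched here in standard thermodynamic notation rather than taken directly from the works cited above, is

    \Delta S_{\mathrm{universe}} \;=\; \Delta S_{\mathrm{organism}} + \Delta S_{\mathrm{surroundings}} \;\ge\; 0,
    \qquad\text{so } \Delta S_{\mathrm{organism}} < 0 \text{ is permitted whenever }
    \Delta S_{\mathrm{surroundings}} \ge \lvert \Delta S_{\mathrm{organism}} \rvert .

In words, an organism can hold down or lower its own entropy only by exporting at least as much entropy to its surroundings, for example by degrading low-entropy food or sunlight into heat and waste.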

Living systems theories

Living systems are open self-organizing living things that interact with their environment. These systems are
maintained by flows of information, energy, and matter.

Some scientists have proposed in the last few decades that a general living systems theory is required to explain
the nature of life.[43] Such a general theory would arise out of the ecological and biological sciences and
attempt to map general principles for how all living systems work. Instead of examining phenomena by
attempting to break things down into components, a general living systems theory explores phenomena in terms
of dynamic patterns of the relationships of organisms with their environment.[44]

Gaia hypothesis

The idea that the Earth is alive is found in philosophy and religion, but the first scientific discussion of it was
by the Scottish scientist James Hutton. In 1785, he stated that the Earth was a superorganism and that its proper
study should be physiology. Hutton is considered the father of geology, but his idea of a living Earth was
forgotten in the intense reductionism of the 19th century.[45]:10 The Gaia hypothesis, proposed in the 1960s by
scientist James Lovelock,[46][47] suggests that life on Earth functions as a single organism that defines and
maintains environmental conditions necessary for its survival.[45] This hypothesis served as one of the
foundations of the modern Earth system science.

Nonfractionability

The first attempt at a general living systems theory for explaining the nature of life was in 1978, by American
biologist James Grier Miller.[48] Robert Rosen (1991) built on this by defining a system component as "a unit
of organization; a part with a function, i.e., a definite relation between part and whole." From this and other
starting concepts, he developed a "relational theory of systems" that attempts to explain the special properties
of life. Specifically, he identified the "nonfractionability of components in an organism" as the fundamental
difference between living systems and "biological machines."[49]

Life as a property of ecosystems

A systems view of life treats environmental fluxes and biological fluxes together as a "reciprocity of
influence,"[50] and a reciprocal relation with environment is arguably as important for understanding life as it is
for understanding ecosystems. As Harold J. Morowitz (1992) explains it, life is a property of an ecological
system rather than a single organism or species.[51] He argues that an ecosystemic definition of life is preferable
to a strictly biochemical or physical one. Robert Ulanowicz (2009) highlights mutualism as the key to understanding the systemic, order-generating behavior of life and ecosystems.[52]

Complex systems biology

Complex systems biology (CSB) is a field of science that studies the emergence of complexity in functional
organisms from the viewpoint of dynamic systems theory.[53] The latter is also often called systems biology and
aims to understand the most fundamental aspects of life. A closely related approach to CSB and systems
biology called relational biology is concerned mainly with understanding life processes in terms of the most
important relations, and categories of such relations among the essential functional components of organisms;
for multicellular organisms, this has been defined as "categorical biology", or a model representation of
organisms as a category theory of biological relations, as well as an algebraic topology of the functional
organization of living organisms in terms of their dynamic, complex networks of metabolic, genetic, and
epigenetic processes and signaling pathways.[54][55] Alternative but closely related approaches focus on the interdependence of constraints, where constraints can be either molecular, such as enzymes, or macroscopic, such as the geometry of a bone or of the vascular system.[56]

Darwinian dynamic

It has also been argued that the evolution of order in living systems and certain physical systems obeys a
common fundamental principle termed the Darwinian dynamic.[57][58] The Darwinian dynamic was formulated
by first considering how macroscopic order is generated in a simple non-biological system far from
thermodynamic equilibrium, and then extending consideration to short, replicating RNA molecules. The
underlying order-generating process was concluded to be basically similar for both types of systems.[57]

Operator theory

Another systemic definition called the operator theory proposes that "life is a general term for the presence of
the typical closures found in organisms; the typical closures are a membrane and an autocatalytic set in the
cell"[59] and that an organism is any system with an organisation that complies with an operator type that is at
least as complex as the cell.[60][61][62][63] Life can also be modeled as a network of inferior negative feedbacks
of regulatory mechanisms subordinated to a superior positive feedback formed by the potential of expansion
and reproduction.[64]

History of study
Materialism

Some of the earliest theories of life were materialist, holding that all that exists is matter, and that life is merely
a complex form or arrangement of matter. Empedocles (430 BC) argued that everything in the universe is made
up of a combination of four eternal "elements" or "roots of all": earth, water, air, and fire. All change is
explained by the arrangement and rearrangement of these four elements. The various forms of life are caused
by an appropriate mixture of elements.[65]


Democritus (460 BC) thought that the essential characteristic of life is having a soul (psyche). Like other ancient writers, he was attempting to explain what makes something a living thing. His explanation was that fiery atoms make a soul in exactly the same way atoms and void account for any other thing. He elaborates on fire because of the apparent connection between life and heat, and because fire moves.[66]

[Image: Plant growth in the Hoh Rainforest]

[Image: Herds of zebra and impala gathering on the Maasai Mara plain]

Plato's world of eternal and unchanging Forms, imperfectly represented in matter by a divine Artisan, contrasts sharply with the various mechanistic Weltanschauungen, of which atomism was, by the fourth century at least, the most prominent ... This debate persisted throughout the ancient world. Atomistic mechanism got a shot in the arm from Epicurus ... while the Stoics adopted a divine teleology ... The choice seems simple: either show how a structured, regular world could arise out of undirected processes, or inject intelligence into the system.[67]

— R. J. Hankinson, Cause and Explanation in Ancient Greek Thought

The mechanistic materialism that originated in ancient Greece was revived and revised by the French philosopher René Descartes, who held that animals and humans were assemblages of parts that together functioned as a machine. In the 19th century, the advances in cell theory in biological science encouraged this view. The evolutionary theory of Charles Darwin (1859) is a mechanistic explanation for the origin of species by means of natural selection.[68]

[Image: An aerial photo of microbial mats around the Grand Prismatic Spring of Yellowstone National Park]

Hylomorphism

Hylomorphism is a theory first expressed by the Greek philosopher Aristotle (322 BC). The application of hylomorphism to biology was important to Aristotle, and biology is extensively covered in his extant writings. In this view, everything in the material universe has both matter and form, and the form of a living thing is its soul (Greek psyche, Latin anima). There are three kinds of souls: the vegetative soul of plants, which causes them to grow and decay and nourish themselves, but does not cause motion and sensation; the animal soul, which causes animals to move and feel; and the rational soul, which is the source of consciousness and reasoning, which (Aristotle believed) is found only in man.[69] Each higher soul has all of the attributes of the lower ones. Aristotle believed that while matter can exist without form, form cannot exist without matter, and that therefore the soul cannot exist without the body.[70]

[Image: The structure of the souls of plants, animals, and humans, according to Aristotle]

This account is consistent with teleological explanations of life, which account for phenomena in terms of
purpose or goal-directedness. Thus, the whiteness of the polar bear's coat is explained by its purpose of
camouflage. The direction of causality (from the future to the past) is in contradiction with the scientific


evidence for natural selection, which explains the consequence in terms of a prior cause. Biological features are
explained not by looking at future optimal results, but by looking at the past evolutionary history of a species,
which led to the natural selection of the features in question.[71]

Spontaneous generation

Spontaneous generation was the belief that living organisms routinely form without descent from similar organisms. Typically, the idea was that certain forms, such as fleas, could arise from inanimate matter such as dust, or that mice and insects could be generated seasonally from mud or garbage.[72]

The theory of spontaneous generation was proposed by Aristotle,[73] who compiled and expanded the work of
prior natural philosophers and the various ancient explanations of the appearance of organisms; it held sway for
two millennia. It was decisively dispelled by the experiments of Louis Pasteur in 1859, who expanded upon the
investigations of predecessors such as Francesco Redi.[74][75] Disproof of the traditional ideas of spontaneous
generation is no longer controversial among biologists.[76][77][78]

Vitalism

Vitalism is the belief that the life-principle is non-material. This originated with Georg Ernst Stahl (17th
century), and remained popular until the middle of the 19th century. It appealed to philosophers such as Henri
Bergson, Friedrich Nietzsche, and Wilhelm Dilthey,[79] anatomists like Marie François Xavier Bichat, and
chemists like Justus von Liebig.[80] Vitalism included the idea that there was a fundamental difference between
organic and inorganic material, and the belief that organic material can only be derived from living things. This
was disproved in 1828, when Friedrich Wöhler prepared urea from inorganic materials.[81] This Wöhler
synthesis is considered the starting point of modern organic chemistry. It is of historical significance because
for the first time an organic compound was produced in inorganic reactions.[80]

During the 1850s, Hermann von Helmholtz, anticipated by Julius Robert von Mayer, demonstrated that no
energy is lost in muscle movement, suggesting that there were no "vital forces" necessary to move a muscle.[82]
These results led to the abandonment of scientific interest in vitalistic theories, although the belief lingered on
in pseudoscientific theories such as homeopathy, which interprets diseases and sickness as caused by
disturbances in a hypothetical vital force or life force.[83]

Origin
The age of the Earth is about 4.54 billion years.[84][85][86] Evidence suggests that life on Earth has existed for at least 3.5 billion years,[87][88][89][90][91][92][93][94][95] with the oldest physical traces of life dating back 3.7 billion years;[96][97][98] however, some theories, such as the Late Heavy Bombardment theory, suggest that life on Earth may have started even earlier, as early as 4.1–4.4 billion years ago,[87][88][89][90][91] and the chemistry leading to life may have begun shortly after the Big Bang, 13.8 billion years ago, during an epoch when the universe was only 10–17 million years old.[99][100][101]

[Figure: Life timeline, from the earliest Earth (about 4,540 million years ago) to the present; axis scale in millions of years, with known ice ages marked in orange. See also: Human timeline and Nature timeline.]

More than 99% of all species of life forms, amounting to over five billion species,[102] that ever lived on Earth are estimated to be extinct.[103][104]

Although the number of Earth's catalogued species of lifeforms is between 1.2 million and 2 million,[105][106] the total number of species on the planet is uncertain. Estimates range from 8 million to 100 million,[105][106] with a narrower range between 10 and 14 million,[105] but it may be as high as 1 trillion (with only one-thousandth of one percent of the species described), according to studies reported in May 2016.[107][108] The total amount of related DNA base pairs on Earth is estimated at 5.0 x 10^37 and weighs 50 billion tonnes.[109] In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).[110] In July 2016, scientists reported identifying a set of 355 genes from the Last Universal Common Ancestor (LUCA) of all organisms living on Earth.[5]

All known life forms share fundamental molecular mechanisms, reflecting their common descent; based on
these observations, hypotheses on the origin of life attempt to find a mechanism explaining the formation of a
universal common ancestor, from simple organic molecules via pre-cellular life to protocells and metabolism.
Models have been divided into "genes-first" and "metabolism-first" categories, but a recent trend is the
emergence of hybrid models that combine both categories.[111]

There is no current scientific consensus as to how life originated. However, most accepted scientific models
build on the Miller–Urey experiment and the work of Sidney Fox, which show that conditions on the primitive
Earth favored chemical reactions that synthesize amino acids and other organic compounds from inorganic
precursors,[112] and phospholipids spontaneously form lipid bilayers, the basic structure of a cell membrane.

Living organisms synthesize proteins, which are polymers of amino acids using instructions encoded by
deoxyribonucleic acid (DNA). Protein synthesis entails intermediary ribonucleic acid (RNA) polymers. One
possibility for how life began is that genes originated first, followed by proteins;[113] the alternative being that
proteins came first and then genes.[114]

However, because genes and proteins are both required to produce the other, the problem of considering which
came first is like that of the chicken or the egg. Most scientists have adopted the hypothesis that because of this,
it is unlikely that genes and proteins arose independently.[115]

Therefore, a possibility, first suggested by Francis Crick,[116] is that the first life was based on RNA,[115] which
has the DNA-like properties of information storage and the catalytic properties of some proteins. This is called
the RNA world hypothesis, and it is supported by the observation that many of the most critical components of
cells (those that evolve the slowest) are composed mostly or entirely of RNA. Also, many critical cofactors
(ATP, Acetyl-CoA, NADH, etc.) are either nucleotides or substances clearly related to them. The catalytic
properties of RNA had not yet been demonstrated when the hypothesis was first proposed,[117] but they were
confirmed by Thomas Cech in 1986.[118]


One issue with the RNA world hypothesis is that synthesis of RNA from simple inorganic precursors is more
difficult than for other organic molecules. One reason for this is that RNA precursors are very stable and react
with each other very slowly under ambient conditions, and it has also been proposed that living organisms
consisted of other molecules before RNA.[119] However, the successful synthesis of certain RNA molecules
under the conditions that existed prior to life on Earth has been achieved by adding alternative precursors in a
specified order with the precursor phosphate present throughout the reaction.[120] This study makes the RNA
world hypothesis more plausible.[121]

Geological findings in 2013 showed that reactive phosphorus species (like phosphite) were in abundance in the
ocean before 3.5 Ga, and that Schreibersite easily reacts with aqueous glycerol to generate phosphite and
glycerol 3-phosphate.[122] It is hypothesized that Schreibersite-containing meteorites from the Late Heavy
Bombardment could have provided early reduced phosphorus, which could react with prebiotic organic
molecules to form phosphorylated biomolecules, like RNA.[122]

In 2009, experiments demonstrated Darwinian evolution of a two-component system of RNA enzymes


(ribozymes) in vitro.[123] The work was performed in the laboratory of Gerald Joyce, who stated "This is the
first example, outside of biology, of evolutionary adaptation in a molecular genetic system."[124]

Prebiotic compounds may have originated extraterrestrially. NASA findings in 2011, based on studies with
meteorites found on Earth, suggest DNA and RNA components (adenine, guanine and related organic
molecules) may be formed in outer space.[125][126][127][128]

In March 2015, NASA scientists reported that, for the first time, complex DNA and RNA organic compounds
of life, including uracil, cytosine and thymine, have been formed in the laboratory under outer space conditions,
using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic
hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red
giants or in interstellar dust and gas clouds, according to the scientists.[129]

According to the panspermia hypothesis, microscopic life—distributed by meteoroids, asteroids and other small
Solar System bodies—may exist throughout the universe.[130]

Environmental conditions
The diversity of life on Earth is a result of the dynamic interplay
between genetic opportunity, metabolic capability, environmental
challenges,[131] and symbiosis.[132][133][134] For most of its existence,
Earth's habitable environment has been dominated by microorganisms
and subjected to their metabolism and evolution. As a consequence of
these microbial activities, the physical-chemical environment on Earth
has been changing on a geologic time scale, thereby affecting the path
of evolution of subsequent life.[131] For example, the release of
molecular oxygen by cyanobacteria as a by-product of photosynthesis
induced global changes in the Earth's environment. Because oxygen
was toxic to most life on Earth at the time, this posed novel
evolutionary challenges, and ultimately resulted in the formation of
Earth's major animal and plant species. This interplay between organisms and their environment is an inherent feature of living systems.[131]

[Image: Cyanobacteria dramatically changed the composition of life forms on Earth by leading to the near-extinction of oxygen-intolerant organisms.]

Biosphere


The biosphere is the global sum of all ecosystems. It can also be termed the zone of life on Earth, a closed
system (apart from solar and cosmic radiation and heat from the interior of the Earth), and largely self-
regulating.[135] By the most general biophysiological definition, the biosphere is the global ecological system
integrating all living beings and their relationships, including their interaction with the elements of the
lithosphere, geosphere, hydrosphere, and atmosphere.

Life forms live in every part of the Earth's biosphere, including soil, hot springs, inside rocks at least 19 km
(12 mi) deep underground, the deepest parts of the ocean, and at least 64 km (40 mi) high in the
atmosphere.[136][137][138] Under certain test conditions, life forms have been observed to thrive in the vacuum
of outer space.[139][140] Life forms appear to thrive in the Mariana Trench, the deepest spot in the Earth's
oceans.[141][142] Other researchers reported related studies that life forms thrive inside rocks up to 580 m
(1,900 ft; 0.36 mi) below the sea floor under 2,590 m (8,500 ft; 1.61 mi) of ocean off the coast of the
northwestern United States,[141][143] as well as 2,400 m (7,900 ft; 1.5 mi) beneath the seabed off Japan.[144] In
August 2014, scientists confirmed the existence of life forms living 800 m (2,600 ft; 0.50 mi) below the ice of
Antarctica.[145][146]

The biosphere is postulated to have evolved, beginning with a process of biopoesis (life created naturally from
non-living matter, such as simple organic compounds) or biogenesis (life created from living matter), at least
some 3.5 billion years ago.[147][148] The earliest evidence for life on Earth includes biogenic graphite found in
3.7 billion-year-old metasedimentary rocks from Western Greenland[96] and microbial mat fossils found in 3.48
billion-year-old sandstone from Western Australia.[97][98] More recently, in 2015, "remains of biotic life" were
found in 4.1 billion-year-old rocks in Western Australia.[88][89] In 2017, putative fossilized microorganisms (or
microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq
Belt of Quebec, Canada, that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an
almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the
formation of the Earth 4.54 billion years ago.[1][2][3][4] According to one of the researchers, "If life arose
relatively quickly on Earth ... then it could be common in the universe."[88]

In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes
artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons.[149]

Range of tolerance

The inert components of an ecosystem are the physical and chemical factors
necessary for life—energy (sunlight or chemical energy), water, temperature,
atmosphere, gravity, nutrients, and ultraviolet solar radiation protection.[150] In
most ecosystems, the conditions vary during the day and from one season to the
next. To live in most ecosystems, then, organisms must be able to survive a range
of conditions, called the "range of tolerance."[151] Outside that are the "zones of
physiological stress," where the survival and reproduction are possible but not
optimal. Beyond these zones are the "zones of intolerance," where survival and
reproduction of that organism is unlikely or impossible. Organisms that have a
wide range of tolerance are more widely distributed than organisms with a narrow
range of tolerance.[151]

[Image: Deinococcus radiodurans is an extremophile that can resist extremes of cold, dehydration, vacuum, acid, and radiation exposure.]

Extremophiles

To survive, selected microorganisms can assume forms that enable them to withstand freezing, complete desiccation, starvation, high levels of radiation exposure, and other physical or chemical challenges. These microorganisms may survive exposure to such conditions for weeks, months, years, or even
centuries.[131] Extremophiles are microbial life forms that thrive outside the ranges where life is commonly
found.[152] They excel at exploiting uncommon sources of energy. While all organisms are composed of nearly
identical molecules, evolution has enabled such microbes to cope with this wide range of physical and chemical
conditions. Characterization of the structure and metabolic diversity of microbial communities in such extreme
environments is ongoing.[153]

Microbial life forms thrive even in the Mariana Trench, the deepest spot on the Earth.[141][142] Microbes also thrive inside rocks up to 580 m (1,900 ft) below the sea floor under 2,590 m (8,500 ft) of ocean.[141][143]

Investigation of the tenacity and versatility of life on Earth,[152] as well as an understanding of the molecular
systems that some organisms utilize to survive such extremes, is important for the search for life beyond
Earth.[131] For example, lichen could survive for a month in a simulated Martian environment.[154][155]

Chemical elements

All life forms require certain core chemical elements needed for biochemical functioning. These include
carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur—the elemental macronutrients for all
organisms[156]—often represented by the acronym CHNOPS. Together these make up nucleic acids, proteins
and lipids, the bulk of living matter. Five of these six elements comprise the chemical components of DNA, the
exception being sulfur. The latter is a component of the amino acids cysteine and methionine. The most
biologically abundant of these elements is carbon, which has the desirable attribute of forming multiple, stable
covalent bonds. This allows carbon-based (organic) molecules to form an immense variety of chemical
arrangements.[157] Alternative hypothetical types of biochemistry have been proposed that eliminate one or
more of these elements, swap out an element for one not on the list, or change required chiralities or other
chemical properties.[158][159]

DNA

Deoxyribonucleic acid is a molecule that carries most of the genetic instructions used in the growth,
development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA
are nucleic acids; alongside proteins and complex carbohydrates, they are one of the three major types of
macromolecule that are essential for all known forms of life. Most DNA molecules consist of two biopolymer
strands coiled around each other to form a double helix. The two DNA strands are known as polynucleotides
since they are composed of simpler units called nucleotides.[160] Each nucleotide is composed of a nitrogen-
containing nucleobase—either cytosine (C), guanine (G), adenine (A), or thymine (T)—as well as a sugar
called deoxyribose and a phosphate group. The nucleotides are joined to one another in a chain by covalent
bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-
phosphate backbone. According to base pairing rules (A with T, and C with G), hydrogen bonds bind the
nitrogenous bases of the two separate polynucleotide strands to make double-stranded DNA. The total amount
of related DNA base pairs on Earth is estimated at 5.0 x 10^37, and weighs 50 billion tonnes.[109] In comparison,
the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).[110]

DNA stores biological information. The DNA backbone is resistant to cleavage, and both strands of the double-
stranded structure store the same biological information. Biological information is replicated as the two strands
are separated. A significant portion of DNA (more than 98% for humans) is non-coding, meaning that these
sections do not serve as patterns for protein sequences.

The two strands of DNA run in opposite directions to each other and are therefore anti-parallel. Attached to
each sugar is one of four types of nucleobases (informally, bases). It is the sequence of these four nucleobases
along the backbone that encodes biological information. Under the genetic code, RNA strands are translated to
specify the sequence of amino acids within proteins. These RNA strands are initially created using DNA
strands as a template in a process called transcription.
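
As an illustration of the base-pairing and antiparallel-strand rules just described, the short Python sketch below builds the reverse complement of one DNA strand and the messenger RNA corresponding to a coding strand; the sequence used is an arbitrary example, not one drawn from the sources above.

    # Sketch: Watson-Crick base pairing (A-T, C-G) and transcription (T -> U).
    # The example sequence is hypothetical; any string over A, C, G, T works.
    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def reverse_complement(strand):
        """Return the complementary strand, read 5'->3' (hence reversed)."""
        return "".join(COMPLEMENT[base] for base in reversed(strand))

    def transcribe(coding_strand):
        """mRNA carries the same sequence as the coding strand, with U in place of T."""
        return coding_strand.replace("T", "U")

    coding_strand = "ATGGCGTTA"                  # arbitrary example
    print(reverse_complement(coding_strand))     # TAACGCCAT
    print(transcribe(coding_strand))             # AUGGCGUUA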

Within cells, DNA is organized into long structures called chromosomes. During cell division these
chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of
chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the
cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts.[161] In contrast,
prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin
proteins such as histones compact and organize DNA. These compact structures guide the interactions between
DNA and other proteins, helping control which parts of the DNA are transcribed.

DNA was first isolated by Friedrich Miescher in 1869.[162] Its molecular structure was identified by James
Watson and Francis Crick in 1953, whose model-building efforts were guided by X-ray diffraction data
acquired by Rosalind Franklin.[163]

Classification
Life is usually classified by eight levels of taxa—domains, kingdoms, phyla, class,
order, family, genus, and species. In May 2016, scientists reported that 1 trillion
species are estimated to be on Earth currently with only one-thousandth of one
percent described.[107]
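
As a concrete example of these eight ranks, the small sketch below lists the conventional classification of humans from the most inclusive rank to the least; it is only an illustration of the hierarchy as a data structure, not part of the surveys cited above.

    # The eight major taxonomic ranks, illustrated with Homo sapiens.
    # Ordered from most inclusive (domain) to least inclusive (species).
    HUMAN_TAXONOMY = [
        ("Domain",  "Eukarya"),
        ("Kingdom", "Animalia"),
        ("Phylum",  "Chordata"),
        ("Class",   "Mammalia"),
        ("Order",   "Primates"),
        ("Family",  "Hominidae"),
        ("Genus",   "Homo"),
        ("Species", "Homo sapiens"),
    ]

    for rank, taxon in HUMAN_TAXONOMY:
        print(f"{rank:>8}: {taxon}")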

The first known attempt to classify organisms was conducted by the Greek
philosopher Aristotle (384–322 BC), who classified all living organisms known at
that time as either a plant or an animal, based mainly on their ability to move. He
also distinguished animals with blood from animals without blood (or at least
without red blood), which can be compared with the concepts of vertebrates and
invertebrates respectively, and divided the blooded animals into five groups:
viviparous quadrupeds (mammals), oviparous quadrupeds (reptiles and
amphibians), birds, fishes and whales. The bloodless animals were also divided
into five groups: cephalopods, crustaceans, insects (which included the spiders,
scorpions, and centipedes, in addition to what we define as insects today), shelled
animals (such as most molluscs and echinoderms), and "zoophytes" (animals that
resemble plants). Though Aristotle's work in zoology was not without errors, it
was the grandest biological synthesis of the time and remained the ultimate
authority for many centuries after his death.[164]

The exploration of the Americas revealed large numbers of new plants and animals that needed descriptions and classification. In the latter part of the 16th century and the beginning of the 17th, careful study of animals commenced and was gradually extended until it formed a sufficient body of knowledge to serve as an anatomical basis for classification. In the late 1740s, Carl Linnaeus introduced his system of binomial nomenclature for the classification of species. Linnaeus attempted to improve the composition and reduce the length of the previously used many-worded names by abolishing unnecessary rhetoric, introducing new descriptive terms and precisely defining their meaning.[165]

[Image: The hierarchy of biological classification's eight major taxonomic ranks. Life is divided into domains, which are subdivided into further groups. Intermediate minor rankings are not shown.]
The fungi were originally treated as plants. For a short period Linnaeus had
classified them in the taxon Vermes in Animalia, but later placed them back in
Plantae. Copeland classified the Fungi in his Protoctista, thus partially avoiding the problem but acknowledging
their special status.[166] The problem was eventually solved by Whittaker, when he gave them their own
kingdom in his five-kingdom system. Evolutionary history shows that the fungi are more closely related to
animals than to plants.[167]


As new discoveries enabled detailed study of cells and microorganisms, new groups of life were revealed, and
the fields of cell biology and microbiology were created. These new organisms were originally described
separately in protozoa as animals and protophyta/thallophyta as plants, but were united by Haeckel in the
kingdom Protista; later, the prokaryotes were split off in the kingdom Monera, which would eventually be
divided into two separate groups, the Bacteria and the Archaea. This led to the six-kingdom system and
eventually to the current three-domain system, which is based on evolutionary relationships.[168] However, the
classification of eukaryotes, especially of protists, is still controversial.[169]

As microbiology, molecular biology and virology developed, non-cellular reproducing agents were discovered,
such as viruses and viroids. Whether these are considered alive has been a matter of debate; viruses lack
characteristics of life such as cell membranes, metabolism and the ability to grow or respond to their
environments. Viruses can still be classed into "species" based on their biology and genetics, but many aspects
of such a classification remain controversial.[170]

In the 1960s a trend called cladistics emerged, arranging taxa based on clades in an evolutionary or
phylogenetic tree.[171]

The major classification schemes proposed over time:

Linnaeus 1735[172] (2 kingdoms): (microorganisms not treated); Vegetabilia; Animalia
Haeckel 1866[173] (3 kingdoms): Protista; Plantae; Animalia
Chatton 1925[174] (2 empires): Prokaryota; Eukaryota
Copeland 1938[175] (4 kingdoms): Monera; Protoctista; Plantae; Animalia
Whittaker 1969[176] (5 kingdoms): Monera; Protista; Fungi; Plantae; Animalia
Woese et al. 1990[168] (3 domains): Bacteria; Archaea; Eucarya
Cavalier-Smith 1998[177] (6 kingdoms): Bacteria; Protozoa; Chromista; Fungi; Plantae; Animalia

Biota (taxonomy)

In systems of scientific classification, Biota[178] is the superdomain that classifies all life.[179]

Cells
Cells are the basic unit of structure in every living thing, and all cells arise from pre-existing cells by division.
Cell theory was formulated by Henri Dutrochet, Theodor Schwann, Rudolf Virchow and others during the early
nineteenth century, and subsequently became widely accepted.[180] The activity of an organism depends on the
total activity of its cells, with energy flow occurring within and between them.[181] Cells contain hereditary
information that is carried forward as a genetic code during cell division.[182]

There are two primary types of cells. Prokaryotes lack a nucleus and other membrane-bound organelles,
although they have circular DNA and ribosomes. Bacteria and Archaea are two domains of prokaryotes. The
other primary type of cells are the eukaryotes, which have distinct nuclei bound by a nuclear membrane and
membrane-bound organelles, including mitochondria, chloroplasts, lysosomes, rough and smooth endoplasmic
reticulum, and vacuoles. In addition, they possess organized chromosomes that store genetic material. All
species of large complex organisms are eukaryotes, including animals, plants and fungi, though most species of
eukaryote are protist microorganisms.[183] The conventional model is that eukaryotes evolved from
prokaryotes, with the main organelles of the eukaryotes forming through endosymbiosis between bacteria and
the progenitor eukaryotic cell.[184]


The molecular mechanisms of cell biology are based on proteins. Most of these are synthesized by the
ribosomes through an enzyme-catalyzed process called protein biosynthesis. A sequence of amino acids is
assembled and joined together based upon gene expression of the cell's nucleic acid.[185] In eukaryotic cells,
these proteins may then be transported and processed through the Golgi apparatus in preparation for dispatch to
their destination.[186]
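
To make the step from gene expression to an amino-acid sequence concrete, the sketch below translates a short messenger-RNA string using a deliberately tiny subset of the standard genetic code; a ribosome uses the full 64-codon table, and both the subset and the example sequence are chosen here only for illustration.

    # Sketch of translation: read an mRNA three bases (one codon) at a time and
    # map each codon to an amino acid. Only a tiny subset of the standard
    # genetic code is included, enough for the toy sequence below.
    CODON_TABLE = {
        "AUG": "Met",   # also the usual start codon
        "GCC": "Ala",
        "UUU": "Phe",
        "UAA": "STOP",
    }

    def translate(mrna):
        """Return the amino acids encoded by mrna, stopping at a stop codon."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE[mrna[i:i + 3]]
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return protein

    print(translate("AUGGCCUUUUAA"))   # ['Met', 'Ala', 'Phe']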

Cells reproduce through a process of cell division in which the parent cell divides into two or more daughter
cells. For prokaryotes, cell division occurs through a process of fission in which the DNA is replicated, then the
two copies are attached to parts of the cell membrane. In eukaryotes, a more complex process of mitosis is
followed. However, the end result is the same; the resulting cell copies are identical to each other and to the
original cell (except for mutations), and both are capable of further division following an interphase period.[187]

Multicellular organisms may have first evolved through the formation of colonies of like cells. These cells can
form group organisms through cell adhesion. The individual members of a colony are capable of surviving on
their own, whereas the members of a true multi-cellular organism have developed specializations, making them
dependent on the remainder of the organism for survival. Such organisms are formed clonally or from a single
germ cell that is capable of forming the various specialized cells that form the adult organism. This
specialization allows multicellular organisms to exploit resources more efficiently than single cells.[188] In
January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single
molecule, called GK-PID, may have allowed organisms to go from a single cell organism to one of many
cells.[189]

Cells have evolved methods to perceive and respond to their microenvironment, thereby enhancing their
adaptability. Cell signaling coordinates cellular activities, and hence governs the basic functions of
multicellular organisms. Signaling between cells can occur through direct cell contact using juxtacrine
signalling, or indirectly through the exchange of agents as in the endocrine system. In more complex
organisms, coordination of activities can occur through a dedicated nervous system.[190]

Extraterrestrial
Though life is confirmed only on Earth, many think that extraterrestrial life is not only plausible, but probable
or inevitable.[191][192] Other planets and moons in the Solar System and other planetary systems are being
examined for evidence of having once supported simple life, and projects such as SETI are trying to detect
radio transmissions from possible alien civilizations. Other locations within the Solar System that may host
microbial life include the subsurface of Mars, the upper atmosphere of Venus,[193] and subsurface oceans on
some of the moons of the giant planets.[194][195] Beyond the Solar System, the region around another main-
sequence star that could support Earth-like life on an Earth-like planet is known as the habitable zone. The
inner and outer radii of this zone vary with the luminosity of the star, as does the time interval during which the
zone survives. Stars more massive than the Sun have a larger habitable zone, but remain on the main sequence
for a shorter time interval. Small red dwarfs have the opposite problem, with a smaller habitable zone that is
subject to higher levels of magnetic activity and the effects of tidal locking from close orbits. Hence, stars in
the intermediate mass range such as the Sun may have a greater likelihood for Earth-like life to develop.[196]
The location of the star within a galaxy may also affect the likelihood of life forming. Stars in regions with a
greater abundance of heavier elements that can form planets, in combination with a low rate of potentially
habitat-damaging supernova events, are predicted to have a higher probability of hosting planets with complex
life.[197] The variables of the Drake equation are used to discuss the conditions in planetary systems where
civilization is most likely to exist.[198] Use of the equation to predict the amount of extraterrestrial life,
however, is difficult; because many of the variables are unknown, the equation functions as more of a mirror to
what its user already thinks. As a result, the number of civilizations in the galaxy can be estimated as low as 9.1
x 10^-11 or as high as 156 million; for the calculations, see Drake equation.
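
The Drake equation referred to above is simply a product of seven factors, N = R* · fp · ne · fl · fi · fc · L. The sketch below evaluates it for one optimistic and one pessimistic set of parameter values; the values are assumptions chosen only to show how strongly the result depends on the unknown factors, not estimates taken from the text.

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    # All parameter values below are illustrative assumptions, not measurements.
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        """Estimated number of detectable civilizations in the galaxy."""
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    # One optimistic and one pessimistic guess, to show the enormous spread.
    optimistic  = drake(r_star=7, f_p=0.5, n_e=2, f_l=1.0, f_i=0.5, f_c=0.5, lifetime=10_000)
    pessimistic = drake(r_star=1, f_p=0.1, n_e=0.1, f_l=1e-3, f_i=1e-3, f_c=1e-2, lifetime=100)

    print(optimistic)    # 17500.0 civilizations
    print(pessimistic)   # 1e-08 civilizations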

Artificial

Artificial life is the simulation of any aspect of life, as through computers, robotics, or biochemistry.[199] The
study of artificial life imitates traditional biology by recreating some aspects of biological phenomena.
Scientists study the logic of living systems by creating artificial environments—seeking to understand the
complex information processing that defines such systems.[181] While life is, by definition, alive, artificial life is generally understood as data confined to a digital environment and existence.

Synthetic biology is a new area of biotechnology that combines science and biological engineering. The
common goal is the design and construction of new biological functions and systems not found in nature.
Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of
being able to design and build engineered biological systems that process information, manipulate chemicals,
fabricate materials and structures, produce energy, provide food, and maintain and enhance human health and
the environment.[200]

Death
Death is the permanent termination of all vital functions or life
processes in an organism or cell.[201][202] It can occur as a result of an
accident, medical conditions, biological interaction, malnutrition,
poisoning, senescence, or suicide. After death, the remains of an
organism re-enter the biogeochemical cycle. Organisms may be
consumed by a predator or a scavenger and leftover organic material
may then be further decomposed by detritivores, organisms that recycle
detritus, returning it to the environment for reuse in the food chain.
[Image: Animal corpses, like this African buffalo, are recycled by the ecosystem, providing energy and nutrients for living creatures.]

One of the challenges in defining death is in distinguishing it from life. Death would seem to refer to either the moment life ends, or when the state that follows life begins.[202] However, determining when death has occurred requires drawing precise conceptual boundaries between life and death. This is problematic, however, because there is little
consensus over how to define life. The nature of death has for millennia been a central concern of the world's
religious traditions and of philosophical inquiry. Many religions maintain faith in either a kind of afterlife or
reincarnation for the soul, or resurrection of the body at a later date.

Extinction

Extinction is the process by which a group of taxa or species dies out, reducing biodiversity.[203] The moment
of extinction is generally considered the death of the last individual of that species. Because a species' potential
range may be very large, determining this moment is difficult, and is usually done retrospectively after a period
of apparent absence. Species become extinct when they are no longer able to survive in changing habitat or
against superior competition. In Earth's history, over 99% of all the species that have ever lived are
extinct;[204][102][103][104] however, mass extinctions may have accelerated evolution by providing opportunities
for new groups of organisms to diversify.[205]

Fossils

Fossils are the preserved remains or traces of animals, plants, and other organisms from the remote past. The
totality of fossils, both discovered and undiscovered, and their placement in fossil-containing rock formations
and sedimentary layers (strata) is known as the fossil record. A preserved specimen is called a fossil if it is
older than the arbitrary date of 10,000 years ago.[206] Hence, fossils range in age from the youngest at the start
of the Holocene Epoch to the oldest from the Archaean Eon, up to 3.4 billion years old.[207][208]

See also

Biology, the study of life
Astrobiology
Biosignature
Evolutionary history of life
Lists of organisms by population
Phylogenetics
Viable System Theory

Notes
1. The "evolution" of viruses and other similar forms is still uncertain. Therefore, this classification may be
paraphyletic because cellular life might have evolved from non-cellular life, or polyphyletic because the most recent
common ancestor might not be included.
2. Infectious protein molecules called prions are not considered living organisms, but can be described as "organism-
comparable organic structures".
3. Certain specific virus-dependent organic structures may be considered subviral agents, including satellites and
defective interfering particles, both of which require another virus for their replication.

References
1. Dodd, Matthew S.; Papineau, Dominic; Grenne, Tor; Slack, John F.; Rittner, Martin; Pirajno, Franco; O'Neil,
Jonathan; Little, Crispin T. S. (1 March 2017). "Evidence for early life in Earth’s oldest hydrothermal vent
precipitates" (http://www.nature.com/nature/journal/v543/n7643/full/nature21377.html). Nature. 543: 60–64.
doi:10.1038/nature21377 (https://doi.org/10.1038%2Fnature21377). Retrieved 2 March 2017.
2. Zimmer, Carl (1 March 2017). "Scientists Say Canadian Bacteria Fossils May Be Earth’s Oldest" (https://www.nytim
es.com/2017/03/01/science/earths-oldest-bacteria-fossils.html). New York Times. Retrieved 2 March 2017.
3. Ghosh, Pallab (1 March 2017). "Earliest evidence of life on Earth 'found" (http://www.bbc.co.uk/news/science-envir
onment-39117523). BBC News. Retrieved 2 March 2017.
4. Dunham, Will (1 March 2017). "Canadian bacteria-like fossils called oldest evidence of life" (http://ca.reuters.com/ar
ticle/topNews/idCAKBN16858B?sp=true). Reuters. Retrieved 1 March 2017.
5. Wade, Nicholas (25 July 2016). "Meet Luca, the Ancestor of All Living Things" (https://www.nytimes.com/2016/07/
26/science/last-universal-ancestor.html). New York Times. Retrieved 25 July 2016.
6. A. Tsokolov, Serhiy A. (May 2009). "Why Is the Definition of Life So Elusive? Epistemological Considerations" (htt
p://online.liebertpub.com/doi/pdfplus/10.1089/ast.2007.0201) (PDF). Astrobiology. 9 (4): 401–12.
Bibcode:2009AsBio...9..401T (http://adsabs.harvard.edu/abs/2009AsBio...9..401T). PMID 19519215 (https://www.n
cbi.nlm.nih.gov/pubmed/19519215). doi:10.1089/ast.2007.0201 (https://doi.org/10.1089%2Fast.2007.0201).
Retrieved 11 April 2015.
7. Mullen, Leslie (19 June 2002). "Defining Life" (https://web.archive.org/web/20120421211210/http://www.astrobio.n
et/exclusive/226/defining-life). Astrobiology Magazine. NASA. Archived from the original (http://www.astrobio.net/
exclusive/226/defining-life) on 21 April 2012. Retrieved 12 November 2016.
8. Emmeche, Claus (1997). "Defining Life, Explaining Emergence" (http://www.nbi.dk/~emmeche/cePubl/97e.defLife.
v3f.html). Niels Bohr Institute. Retrieved 25 May 2012.
9. "Can We Define Life" (https://web.archive.org/web/20100610005441/http://artsandsciences.colorado.edu/magazine/
2009/03/can-we-define-life/). Colorado Arts & Sciences. Archived from the original (http://artsandsciences.colorado.
edu/magazine/2009/03/can-we-define-life/) on 10 June 2010. Retrieved 22 June 2009.
10. Strother, Paul K. (22 January 2010). "What is life?" (https://web.archive.org/web/20161220145116/https://www2.bc.
edu/~strother/GE_146/lectures/7.html). Origin and Evolution of Life on Earth. Boston College. Archived from the
original (https://www2.bc.edu/~strother/GE_146/lectures/7.html) on 20 December 2016. Retrieved 12 November
2016.
11. Mautner, Michael N. (1997). "Directed panspermia. 3. Strategies and motivation for seeding star-forming clouds" (ht
tp://www.astro-ecology.com/PDFDirectedPanspermia3JBIS1997Paper.pdf) (PDF). Journal of the British
Interplanetary Society. 50: 93–102. Bibcode:1997JBIS...50...93M (http://adsabs.harvard.edu/abs/1997JBIS...50...93
M).
12. Mautner, Michael N. (2000). Seeding the Universe with Life: Securing Our Cosmological Future (http://www.astro-e
cology.com/PDFSeedingtheUniverse2005Book.pdf) (PDF). Washington D. C.: Legacy Books (www.amazon.com).
ISBN 978-0-476-00330-9.
13. McKay, Chris (18 September 2014). "What is life? It's a Tricky, Often Confusing Question". Astrobiology Magazine.
14. Nealson, K. H.; Conrad, P. G. (December 1999). "Life: past, present and future" (http://journals.royalsociety.org/cont
ent/7r10hqn3rp1g1vag/fulltext.pdf) (PDF). Philosophical Transactions of the Royal Society of London B. 354 (1392):
1923–39. PMC 1692713 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1692713)  . PMID 10670014 (https://ww
w.ncbi.nlm.nih.gov/pubmed/10670014). doi:10.1098/rstb.1999.0532 (https://doi.org/10.1098%2Frstb.1999.0532).


15. McKay, Chris P. (14 September 2004). "What Is Life—and How Do We Search for It in Other Worlds?" (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC516796). PLoS Biology. 2 (2(9)): 302. PMC 516796 (https://www.ncbi.nlm.ni
h.gov/pmc/articles/PMC516796)  . PMID 15367939 (https://www.ncbi.nlm.nih.gov/pubmed/15367939).
doi:10.1371/journal.pbio.0020302 (https://doi.org/10.1371%2Fjournal.pbio.0020302).
16. Mautner, Michael N. (2009). "Life-centered ethics, and the human future in space" (http://www.astro-ecology.com/P
DFLifeCenteredBioethics2009Paper.pdf) (PDF). Bioethics. 23 (8): 433–40. PMID 19077128 (https://www.ncbi.nlm.n
ih.gov/pubmed/19077128). doi:10.1111/j.1467-8519.2008.00688.x (https://doi.org/10.1111%2Fj.1467-8519.2008.00
688.x).
17. Koshland, Jr., Daniel E. (22 March 2002). "The Seven Pillars of Life" (http://www.sciencemag.org/cgi/content/full/2
95/5563/2215). Science. 295 (5563): 2215–16. PMID 11910092 (https://www.ncbi.nlm.nih.gov/pubmed/11910092).
doi:10.1126/science.1068489 (https://doi.org/10.1126%2Fscience.1068489). Retrieved 25 May 2009.
18. "life". The American Heritage Dictionary of the English Language (4th ed.). Houghton Mifflin. 2006. ISBN 978-0-
618-70173-5.
19. "Life" (http://www.merriam-webster.com/dictionary/life). Merriam-Webster Dictionary. Retrieved 12 November
2016.
20. "Habitability and Biology: What are the Properties of Life?" (http://phoenix.lpl.arizona.edu/mars141.php). Phoenix
Mars Mission. The University of Arizona. Retrieved 6 June 2013.
21. Trifonov, Edward N. (2012). "Definition of Life: Navigation through Uncertainties" (http://www.jbsdonline.com/mc_
images/category/4317/21-trifonov-jbsd_29_4_2012.pdf) (PDF). Journal of Biomolecular Structure & Dynamics.
Adenine Press. 29 (4): 647–50. ISSN 0739-1102 (https://www.worldcat.org/issn/0739-1102).
doi:10.1080/073911012010525017 (https://doi.org/10.1080%2F073911012010525017). Retrieved 12 January 2012.
22. Zimmer, Carl (11 January 2012). "Can scientists define 'life' ... using just three words?" (http://www.nbcnews.com/i
d/45963181/ns/technology_and_science/#.WCcX-snQf0w). NBC News. Retrieved 12 November 2016.
23. Luttermoser, Donald G. "ASTR-1020: Astronomy II Course Lecture Notes Section XII" (http://www.etsu.edu/physic
s/lutter/courses/astr1020/a1020chap12.pdf) (PDF). East Tennessee State University. Retrieved 28 August 2011.
24. Luttermoser, Donald G. (Spring 2008). "Physics 2028: Great Ideas in Science: The Exobiology Module" (http://ww
w.etsu.edu/physics/lutter/courses/phys2028/p2028exobnotes.pdf) (PDF). East Tennessee State University. Retrieved
28 August 2011.
25. Lammer, H.; Bredehöft, J. H.; Coustenis, A.; Khodachenko, M. L.; et al. (2009). "What makes a planet habitable?" (h
ttps://web.archive.org/web/20160602235333/http://veilnebula.jorgejohnson.me/uploads/3/5/8/7/3587678/lammer_et_
al_2009_astron_astro_rev-4.pdf) (PDF). The Astronomy and Astrophysics Review. 17: 181–249. doi:10.1007/s00159-
009-0019-z (https://doi.org/10.1007%2Fs00159-009-0019-z). Archived from the original (http://veilnebula.jorgejohn
son.me/uploads/3/5/8/7/3587678/lammer_et_al_2009_astron_astro_rev-4.pdf) (PDF) on 2 June 2016. Retrieved
2016-05-03. "Life as we know it has been described as a (thermodynamically) open system (Prigogine et al. 1972),
which makes use of gradients in its surroundings to create imperfect copies of itself."
26. Joyce, Gerald F. (1995). The RNA world: life before DNA and protein (http://ebooks.cambridge.org/chapter.jsf?bid=
CBO9780511564970&cid=CBO9780511564970A022). Cambridge University Press. pp. 139–51.
doi:10.1017/CBO9780511564970.017 (https://doi.org/10.1017%2FCBO9780511564970.017). Retrieved 27 May
2012.
27. Overbye, Dennis (28 October 2015). "Cassini Seeks Insights to Life in Plumes of Enceladus, Saturn's Icy Moon" (htt
ps://www.nytimes.com/2015/10/29/science/space/in-icy-breath-of-saturns-moon-enceladus-cassini-hunts-for-life.htm
l). New York Times. Retrieved 28 October 2015.
28. Domagal-Goldman, Shawn D.; Wright, Katherine E. (2016). "The Astrobiology Primer v2.0" (http://online.liebertpu
b.com/doi/pdfplus/10.1089/ast.2015.1460) (PDF). Astrobiology. 16 (8): 561–53. Bibcode:2016AsBio..16..561D (htt
p://adsabs.harvard.edu/abs/2016AsBio..16..561D). PMC 5008114 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5
008114)  . PMID 27532777 (https://www.ncbi.nlm.nih.gov/pubmed/27532777). doi:10.1089/ast.2015.1460 (https://
doi.org/10.1089%2Fast.2015.1460). Retrieved 2016-08-29.
29. Kaufmann, Stuart (2004). Barrow, John D.; Davies, P. C. W.; Harper, Jr., C. L., eds. "Autonomous agents" (https://bo
oks.google.com/books?id=K_OfC0Pte_8C&pg=PA654). Science and Ultimate Reality: Quantum Theory,
Cosmology, and Complexity. Cambridge University Press: 654–66. ISBN 978-0-521-83113-0.
30. Longo, Giuseppe; Montévil, Maël; Kauffman, Stuart (2012-01-01). "No Entailing Laws, but Enablement in the
Evolution of the Biosphere" (https://www.academia.edu/11720588/No_entailing_laws_but_enablement_in_the_evol
ution_of_the_biosphere). Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary
Computation. GECCO '12. New York, NY, USA: ACM: 1379–92. ISBN 978-1-4503-1178-6.
doi:10.1145/2330784.2330946 (https://doi.org/10.1145%2F2330784.2330946).
31. Koonin, E. V.; Starokadomskyy, P. (7 March 2016). "Are viruses alive? The replicator paradigm sheds decisive light
on an old but misguided question.". Stud Hist Philos Biol Biomed Sci. PMID 26965225 (https://www.ncbi.nlm.nih.go
v/pubmed/26965225). doi:10.1016/j.shpsc.2016.02.016 (https://doi.org/10.1016%2Fj.shpsc.2016.02.016).
32. Rybicki, EP (1990). "The classification of organisms at the edge of life, or problems with virus systematics". S Aft J
Sci. 86: 182–86.


33. Holmes, E. C. (October 2007). "Viral evolution in the genomic age" (http://biology.plosjournals.org/perlserv/?request
=get-document&doi=10.1371/journal.pbio.0050278). PLoS Biol. 5 (10): e278. PMC 1994994 (https://www.ncbi.nlm.
nih.gov/pmc/articles/PMC1994994)  . PMID 17914905 (https://www.ncbi.nlm.nih.gov/pubmed/17914905).
doi:10.1371/journal.pbio.0050278 (https://doi.org/10.1371%2Fjournal.pbio.0050278). Retrieved 13 September 2008.
34. Forterre, Patrick (3 March 2010). "Defining Life: The Virus Viewpoint" (https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC2837877). Orig Life Evol Biosph. 40 (2): 151–60. Bibcode:2010OLEB...40..151F (http://adsabs.harvard.edu/ab
s/2010OLEB...40..151F). PMC 2837877 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2837877)  .
PMID 20198436 (https://www.ncbi.nlm.nih.gov/pubmed/20198436). doi:10.1007/s11084-010-9194-1 (https://doi.or
g/10.1007%2Fs11084-010-9194-1).
35. Koonin, E. V.; Senkevich, T. G.; Dolja, V. V. (2006). "The ancient Virus World and evolution of cells" (http://www.bi
ology-direct.com/content/1//29). Biology Direct. 1: 29. PMC 1594570 (https://www.ncbi.nlm.nih.gov/pmc/articles/P
MC1594570)  . PMID 16984643 (https://www.ncbi.nlm.nih.gov/pubmed/16984643). doi:10.1186/1745-6150-1-29
(https://doi.org/10.1186%2F1745-6150-1-29). Retrieved 14 September 2008.
36. Rybicki, Ed (November 1997). "Origins of Viruses" (https://web.archive.org/web/20090509094459/http://www.mcb.
uct.ac.za/tutorial/virorig.html). Archived from the original (http://www.mcb.uct.ac.za/tutorial/virorig.html#Virus%20
Origins) on 9 May 2009. Retrieved 12 April 2009.
37. "Giant Viruses Shake Up Tree of Life" (https://web.archive.org/web/20120917183158/http://www.astrobio.net/pressr
elease/5015/giant-viruses-shake-up-tree-of-life). Astrobiology Magazine. 15 September 2012. Archived from the
original (http://www.astrobio.net/pressrelease/5015/giant-viruses-shake-up-tree-of-life) on 17 September 2012.
Retrieved 13 November 2016.
38. Popa, Radu (March 2004). Between Necessity and Probability: Searching for the Definition and Origin of Life
(Advances in Astrobiology and Biogeophysics). Springer. ISBN 978-3-540-20490-9.
39. Schrödinger, Erwin (1944). What is Life?. Cambridge University Press. ISBN 978-0-521-42708-1.
40. Margulis, Lynn; Sagan, Dorion (1995). What is Life?. University of California Press. ISBN 978-0-520-22021-8.
41. Lovelock, James (2000). Gaia – a New Look at Life on Earth. Oxford University Press. ISBN 978-0-19-286218-1.
42. Avery, John (2003). Information Theory and Evolution. World Scientific. ISBN 978-981-238-399-0.
43. Woodruff, T. Sullivan; John Baross (8 October 2007). Planets and Life: The Emerging Science of Astrobiology.
Cambridge University Press. Cleland and Chyba wrote a chapter in Planets and Life: "In the absence of such a
theory, we are in a position analogous to that of a 16th-century investigator trying to define 'water' in the absence of
molecular theory." [...] "Without access to living things having a different historical origin, it is difficult and perhaps
ultimately impossible to formulate an adequately general theory of the nature of living systems".
44. Brown, Molly Young (2002). "Patterns, Flows, and Interrelationship" (https://web.archive.org/web/2009010812252
6/http://www.mollyyoungbrown.com/systems_article.htm). Archived from the original (http://www.mollyyoungbrow
n.com/systems_article.htm) on 8 January 2009. Retrieved 2009-06-27.
45. Lovelock, James (1979). Gaia: A New Look at Life on Earth. Oxford University Press. ISBN 978-0-19-286030-9.
46. Lovelock, J. E. (1965). "A physical basis for life detection experiments". Nature. 207 (7): 568–70.
Bibcode:1965Natur.207..568L (http://adsabs.harvard.edu/abs/1965Natur.207..568L). PMID 5883628 (https://www.n
cbi.nlm.nih.gov/pubmed/5883628). doi:10.1038/207568a0 (https://doi.org/10.1038%2F207568a0).
47. Lovelock, James. "Geophysiology" (http://www.jameslovelock.org/page4.html#GEO). Papers by James Lovelock.
48. Woodruff, T. Sullivan; John Baross (8 October 2007). Planets and Life: The Emerging Science of Astrobiology.
Cambridge University Press. ISBN 978-0-521-82421-7. Cleland and Chyba wrote a chapter in Planets and Life: "In
the absence of such a theory, we are in a position analogous to that of a 16th-century investigator trying to define
'water' in the absence of molecular theory."... "Without access to living things having a different historical origin, it is
difficult and perhaps ultimately impossible to formulate an adequately general theory of the nature of living
systems".
49. Robert, Rosen (November 1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of
Life. ISBN 978-0-231-07565-7.
50. Fiscus, Daniel A. (April 2002). "The Ecosystemic Life Hypothesis" (https://web.archive.org/web/20090806015449/h
ttp://www.calresco.org/fiscus/esl.htm). Bulletin of the Ecological Society of America. Archived from the original (htt
p://www.calresco.org/fiscus/esl.htm) on 6 August 2009. Retrieved 28 August 2009.
51. Morowitz, Harold J. (1992). Beginnings of cellular life: metabolism recapitulates biogenesis (https://books.google.co
m/books?id=CmQDSHN_UrIC). Yale University Press. ISBN 978-0-300-05483-5.
52. Ulanowicz, Robert W.; Ulanowicz, Robert E. (2009). A third window: natural life beyond Newton and Darwin (http
s://books.google.com/books?id=gAQKAQAAMAAJ). Templeton Foundation Press. ISBN 978-1-59947-154-9.
53. Baianu, I. C. (2006). "Robert Rosen's Work and Complex Systems Biology". Axiomathes. 16 (1–2): 25–34.
doi:10.1007/s10516-005-4204-z (https://doi.org/10.1007%2Fs10516-005-4204-z).
54. Rosen, R. (1958a). "A Relational Theory of Biological Systems". Bulletin of Mathematical Biophysics. 20 (3): 245–60. doi:10.1007/bf02478302 (https://doi.org/10.1007%2Fbf02478302).
55. Rosen, R. (1958b). "The Representation of Biological Systems from the Standpoint of the Theory of Categories". Bulletin of Mathematical Biophysics. 20 (4): 317–41. doi:10.1007/bf02477890 (https://doi.org/10.1007%2Fbf02477890).
56. Montévil, Maël; Mossio, Matteo (2015-05-07). "Biological organisation as closure of constraints" (https://www.acad
emia.edu/11705712/Biological_organisation_as_closure_of_constraints). Journal of Theoretical Biology. 372: 179–
91. doi:10.1016/j.jtbi.2015.02.029 (https://doi.org/10.1016%2Fj.jtbi.2015.02.029).
57. Harris Bernstein; Henry C. Byerly; Frederick A. Hopf; Richard A. Michod; G. Krishna Vemulapalli (June 1983).
"The Darwinian Dynamic". The Quarterly Review of Biology. The University of Chicago Press. 58 (2): 185.
JSTOR 2828805 (https://www.jstor.org/stable/2828805). doi:10.1086/413216 (https://doi.org/10.1086%2F413216).
58. Michod, Richard E. (2000). Darwinian Dynamics: Evolutionary Transitions in Fitness and Individuality. Princeton:
Princeton University Press. ISBN 978-0-691-05011-9.
59. Jagers, Gerard (2012). The Pursuit of Complexity: The Utility of Biodiversity from an Evolutionary Perspective.
KNNV Publishing. ISBN 978-90-5011-443-1.
60. "Towards a Hierarchical Definition of Life, the Organism, and Death". Foundations of Science. 15.
61. "Explaining the Origin of Life is not Enough for a Definition of Life". Foundations of Science. 16.
62. "The role of logic and insight in the search for a definition of life". J. Biomol. Struct. Dyn. 29.
63. Jagers, Gerald (2012). "Contributions of the Operator Hierarchy to the Field of Biologically Driven Mathematics and
Computation". In Ehresmann, Andree C.; Simeonov, Plamen L.; Smith, Leslie S. Integral Biomathics. Springer.
ISBN 978-3-642-28110-5.
64. Korzeniewski, Bernard (7 April 2001). "Cybernetic formulation of the definition of life". Journal of Theoretical
Biology. 209 (3): 275–86. PMID 11312589 (https://www.ncbi.nlm.nih.gov/pubmed/11312589).
doi:10.1006/jtbi.2001.2262 (https://doi.org/10.1006%2Fjtbi.2001.2262).
65. Parry, Richard (4 March 2005). "Empedocles" (http://plato.stanford.edu/entries/empedocles/). Stanford Encyclopedia
of Philosophy. Retrieved 25 May 2012.
66. Parry, Richard (25 August 2010). "Democritus" (http://plato.stanford.edu/entries/democritus/#4). Stanford
Encyclopedia of Philosophy. Retrieved 25 May 2012.
67. Hankinson, R. J. (1997). Cause and Explanation in Ancient Greek Thought (https://books.google.com/books?id=iwfy
-n5IWL8C). Oxford University Press. p. 125. ISBN 978-0-19-924656-4.
68. Thagard, Paul (2012). The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change (https://bo
oks.google.com/books?id=HrJIV19_nZYC&pg=PA204). MIT Press. pp. 204–05. ISBN 978-0-262-01728-2.
69. Aristotle. On the Soul. Book II.
70. Marietta, Don (1998). Introduction to ancient philosophy (https://books.google.com/books/about/Introduction_to_An
cient_Philosophy.html?id=Gz-8PsrT32AC). M. E. Sharpe. p. 104. ISBN 978-0-7656-0216-9.
71. Stewart-Williams, Steve (2010). Darwin, God and the meaning of life: how evolutionary theory undermines
everything you thought you knew of life (https://books.google.com/books?id=KBp69los_-oC&pg=PA193).
Cambridge University Press. pp. 193–94. ISBN 978-0-521-76278-6.
72. Stillingfleet, Edward (1697). Origines Sacrae. Cambridge University Press – via Internet Archive.
73. André Brack (1998). "Introduction" (http://assets.cambridge.org/97805215/64755/excerpt/9780521564755_excerpt.p
df) (PDF). In André Brack. The Molecular Origins of Life. Cambridge University Press. p. 1. ISBN 978-0-521-56475-
5. Retrieved 2009-01-07.
74. Levine, Russell; Evers, Chris. "The Slow Death of Spontaneous Generation (1668–1859)" (http://www.ncsu.edu/proj
ect/bio183de/Black/cellintro/cellintro_reading/Spontaneous_Generation.html). North Carolina State University.
National Health Museum.
75. Tyndall, John (1905). Fragments of Science. 2. New York: P. F. Collier. Chapters IV, XII, and XIII – via Internet
Archive.
76. Bernal, J. D. (1967) [Reprinted work by A. I. Oparin originally published 1924; Moscow: The Moscow Worker]. The
Origin of Life. The Weidenfeld and Nicolson Natural History. Translation of Oparin by Ann Synge. London:
Weidenfeld & Nicolson. LCCN 67098482 (https://lccn.loc.gov/67098482).
77. Zubay, Geoffrey (2000). Origins of Life: On Earth and in the Cosmos (2nd ed.). Academic Press. ISBN 978-0-12-
781910-5.
78. Smith, John Maynard; Szathmary, Eors (1997). The Major Transitions in Evolution. Oxford Oxfordshire: Oxford
University Press. ISBN 978-0-19-850294-4.
79. Schwartz, Sanford (2009). C. S. Lewis on the Final Frontier: Science and the Supernatural in the Space Trilogy (http
s://books.google.com/books?id=4hQLdPtJe9EC&pg=PA56). Oxford University Press. p. 56. ISBN 978-0-19-
988839-9.
80. Wilkinson, Ian (1998). "History of Clinical Chemistry – Wöhler & the Birth of Clinical Chemistry" (http://www.ifcc.
org/ifccfiles/docs/130304003.pdf) (PDF). The Journal of the International Federation of Clinical Chemistry and
Laboratory Medicine. 13 (4). Retrieved 27 December 2015.
81. Friedrich Wöhler (1828). "Ueber künstliche Bildung des Harnstoffs" (http://gallica.bnf.fr/ark:/12148/bpt6k15097k/f2
61.chemindefer). Annalen der Physik und Chemie. 88 (2): 253–56. Bibcode:1828AnP....88..253W (http://adsabs.harv
ard.edu/abs/1828AnP....88..253W). doi:10.1002/andp.18280880206 (https://doi.org/10.1002%2Fandp.18280880206).
82. Rabinbach, Anson (1992). The Human Motor: Energy, Fatigue, and the Origins of Modernity (https://books.google.c
om/books?id=e5ZBNv-zTlQC&pg=PA124&lpg=PA124). University of California Press. pp. 124–25. ISBN 978-0-
520-07827-7.
83. "NCAHF Position Paper on Homeopathy" (http://www.ncahf.org/pp/homeop.html). National Council Against Health
Fraud. February 1994. Retrieved 12 June 2012.
84. "Age of the Earth" (http://pubs.usgs.gov/gip/geotime/age.html). U.S. Geological Survey. 1997. Archived (https://we
b.archive.org/web/20051223072700/http://pubs.usgs.gov/gip/geotime/age.html) from the original on 23 December
2005. Retrieved 10 January 2006.
85. Dalrymple, G. Brent (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Special
Publications, Geological Society of London. 190 (1): 205–21. Bibcode:2001GSLSP.190..205D (http://adsabs.harvar
d.edu/abs/2001GSLSP.190..205D). doi:10.1144/GSL.SP.2001.190.01.14 (https://doi.org/10.1144%2FGSL.SP.2001.1
90.01.14).
86. Manhesa, Gérard; Allègre, Claude J.; Dupréa, Bernard & Hamelin, Bruno (1980). "Lead isotope study of basic-
ultrabasic layered complexes: Speculations about the age of the earth and primitive mantle characteristics". Earth
and Planetary Science Letters. 47 (3): 370–82. Bibcode:1980E&PSL..47..370M (http://adsabs.harvard.edu/abs/1980
E&PSL..47..370M). doi:10.1016/0012-821X(80)90024-2 (https://doi.org/10.1016%2F0012-821X%2880%2990024-
2).
87. Tenenbaum, David (14 October 2002). "When Did Life on Earth Begin? Ask a Rock" (https://web.archive.org/web/2
0130520084101/http://www.astrobio.net/exclusive/293/when-did-life-on-earth-begin-ask-a-rock). Astrobiology
Magazine. Archived from the original (http://www.astrobio.net/exclusive/293/when-did-life-on-earth-begin-ask-a-ro
ck) on 20 May 2013. Retrieved 13 April 2014.
88. Borenstein, Seth (19 October 2015). "Hints of life on what was thought to be desolate early Earth" (http://apnews.exc
ite.com/article/20151019/us-sci--earliest_life-a400435d0d.html). Excite. Yonkers, NY: Mindspark Interactive
Network. Associated Press. Retrieved 2015-10-20.
89. Bell, Elizabeth A.; Boehnike, Patrick; Harrison, T. Mark; et al. (19 October 2015). "Potentially biogenic carbon
preserved in a 4.1 billion-year-old zircon" (http://www.pnas.org/content/early/2015/10/14/1517557112.full.pdf)
(PDF). Proc. Natl. Acad. Sci. U.S.A. Washington, D.C.: National Academy of Sciences. 112: 14518–21. ISSN 1091-
6490 (https://www.worldcat.org/issn/1091-6490). PMC 4664351 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC46
64351)  . PMID 26483481 (https://www.ncbi.nlm.nih.gov/pubmed/26483481). doi:10.1073/pnas.1517557112 (http
s://doi.org/10.1073%2Fpnas.1517557112). Retrieved 2015-10-20. Early edition, published online before print.
90. Courtland, Rachel (2 July 2008). "Did newborn Earth harbour life?" (https://www.newscientist.com/article/dn14245-
did-newborn-earth-harbour-life). New Scientist. Retrieved 14 November 2016.
91. Steenhuysen, Julie (20 May 2009). "Study turns back clock on origins of life on Earth" (https://www.reuters.com/arti
cle/us-asteroids-idUSTRE54J5PX20090520). Reuters. Retrieved 14 November 2016.
92. "Evidence of Archean life: Stromatolites and microfossils". Precambrian Research. 158.
93. "Fossil evidence of Archaean life". Philos. Trans. R. Soc. Lond. B Biol. Sci. 29.
94. Hamilton Raven, Peter; Brooks Johnson, George (2002). Biology (https://books.google.com/books?id=GtlqPwAAC
AAJ). McGraw-Hill Education. p. 68. ISBN 978-0-07-112261-0. Retrieved 7 July 2013.
95. Milsom, Clare; Rigby, Sue (2009). Fossils at a Glance (https://books.google.com/books?id=OdrCdxr7QdgC&pg=PA
134) (2nd ed.). John Wiley & Sons. p. 134. ISBN 1-4051-9336-0.
96. Ohtomo, Yoko; Kakegawa, Takeshi; Ishida, Akizumi; Nagase, Toshiro; Rosing, Minik T. (8 December 2013).
"Evidence for biogenic graphite in early Archaean Isua metasedimentary rocks" (http://www.nature.com/ngeo/journa
l/vaop/ncurrent/full/ngeo2025.html). Nature Geoscience. 7: 25–28. Bibcode:2014NatGe...7...25O (http://adsabs.harv
ard.edu/abs/2014NatGe...7...25O). doi:10.1038/ngeo2025 (https://doi.org/10.1038%2Fngeo2025).
97. Borenstein, Seth (13 November 2013). "Oldest fossil found: Meet your microbial mom" (http://apnews.excite.com/ar
ticle/20131113/DAA1VSC01.html). Associated Press.
98. Noffke, Nora; Christian, Daniel; Wacey, David; Hazen, Robert M. (8 November 2013). "Microbially Induced
Sedimentary Structures Recording an Ancient Ecosystem in the ca. 3.48 Billion-Year-Old Dresser Formation,
Pilbara, Western Australia" (http://online.liebertpub.com/doi/abs/10.1089/ast.2013.1030). Astrobiology (journal). 13
(12): 1103–24. Bibcode:2013AsBio..13.1103N (http://adsabs.harvard.edu/abs/2013AsBio..13.1103N). PMC 3870916
(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3870916)  . PMID 24205812 (https://www.ncbi.nlm.nih.gov/pubm
ed/24205812). doi:10.1089/ast.2013.1030 (https://doi.org/10.1089%2Fast.2013.1030).
99. Loeb, Abraham (October 2014). "The Habitable Epoch of the Early Universe" (http://journals.cambridge.org/action/
displayAbstract?fromPage=online&aid=9371049&fileId=S1473550414000196). International Journal of
Astrobiology. 13 (4): 337–39. Bibcode:2014IJAsB..13..337L (http://adsabs.harvard.edu/abs/2014IJAsB..13..337L).
CiteSeerX 10.1.1.680.4009 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.680.4009)  .
doi:10.1017/S1473550414000196 (https://doi.org/10.1017%2FS1473550414000196). Retrieved 15 December 2014.
100. Loeb, Abraham (2 December 2013). "The Habitable Epoch of the Early Universe". International Journal of
Astrobiology. 13 (4): 337–39. Bibcode:2014IJAsB..13..337L (http://adsabs.harvard.edu/abs/2014IJAsB..13..337L).
arXiv:1312.0613v3 (https://arxiv.org/abs/1312.0613v3)  . doi:10.1017/S1473550414000196 (https://doi.org/10.101
7%2FS1473550414000196).
101. Dreifus, Claudia (2 December 2014). "Much-Discussed Views That Go Way Back – Avi Loeb Ponders the Early
Universe, Nature and Life" (https://www.nytimes.com/2014/12/02/science/avi-loeb-ponders-the-early-universe-natur
e-and-life.html). New York Times. Retrieved 3 December 2014.
102. Kunin, W.E.; Gaston, Kevin, eds. (31 December 1996). The Biology of Rarity: Causes and consequences of rare—
common differences (https://books.google.com/books?id=4LHnCAAAQBAJ&pg=PA110&lpg=PA110&dq#v=onepa
ge&q&f=false). ISBN 978-0-412-63380-5. Retrieved 26 May 2015.
103. Stearns, Beverly Peterson; Stearns, S. C.; Stearns, Stephen C. (2000). Watching, from the Edge of Extinction (https://
books.google.com/books?id=0BHeC-tXIB4C&q=99%20percent#v=onepage&q=99%20percent&f=false). Yale
University Press. p. preface x. ISBN 978-0-300-08469-6. Retrieved 30 May 2017.
104. Novacek, Michael J. (8 November 2014). "Prehistory's Brilliant Future" (https://www.nytimes.com/2014/11/09/opini
on/sunday/prehistorys-brilliant-future.html). New York Times. Retrieved 25 December 2014.
105. G. Miller; Scott Spoolman (2012). Environmental Science - Biodiversity Is a Crucial Part of the Earth's Natural
Capital (https://books.google.com/books?id=NYEJAAAAQBAJ&pg=PA62). Cengage Learning. p. 62. ISBN 1-133-
70787-4. Retrieved 2014-12-27. "We do not know how many species there are on the earth. Estimates range from 8
million to 100 million. The best guess is that there are 10–14 million species. So far, biologists have identified almost
2 million species."
106. Mora, C.; Tittensor, D.P.; Adl, S.; Simpson, A.G.; Worm, B. (23 August 2011). "How many species are there on
Earth and in the ocean?" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3160336). PLOS Biology. 9: e1001127.
PMC 3160336 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3160336)  . PMID 21886479 (https://www.ncbi.nl
m.nih.gov/pubmed/21886479). doi:10.1371/journal.pbio.1001127
(https://doi.org/10.1371%2Fjournal.pbio.1001127). "In spite of 250 years of taxonomic classification and over 1.2
million species already catalogued in a central database, our results suggest that some 86% of existing species on
Earth and 91% of species in the ocean still await description."
107. Staff (2 May 2016). "Researchers find that Earth may be home to 1 trillion species" (http://www.nsf.gov/news/news_
summ.jsp?cntn_id=138446). National Science Foundation. Retrieved 6 May 2016.
108. Pappas, Stephanie (5 May 2016). "There Might Be 1 Trillion Species on Earth" (https://www.livescience.com/54660-
1-trillion-species-on-earth.html). LiveScience. Retrieved 7 June 2017.
109. Nuwer, Rachel (18 July 2015). "Counting All the DNA on Earth" (https://www.nytimes.com/2015/07/21/science/cou
nting-all-the-dna-on-earth.html). The New York Times. New York: The New York Times Company. ISSN 0362-4331
(https://www.worldcat.org/issn/0362-4331). Retrieved 2015-07-18.
110. "The Biosphere: Diversity of Life" (http://www.agci.org/classroom/biosphere/index.php). Aspen Global Change
Institute. Basalt, CO. Retrieved 2015-07-19.
111. Coveney, Peter V.; Fowler, Philip W. (2005). "Modelling biological complexity: a physical scientist's perspective".
Journal of the Royal Society Interface. 2 (4): 267–80. doi:10.1098/rsif.2005.0045 (https://doi.org/10.1098%2Frsif.20
05.0045).
112. "Habitability and Biology: What are the Properties of Life?" (http://phoenix.lpl.arizona.edu/mars145.php). Phoenix
Mars Mission. The University of Arizona. Retrieved 6 June 2013.
113. Senapathy, Periannan (1994). Independent birth of organisms (https://books.google.com/books?
id=3hJFAQAAIAAJ). Madison, Wisconsin: Genome Press. ISBN 0-9641304-0-8.
114. Eigen, Manfred; Winkler, Ruthild (1992). Steps towards life: a perspective on evolution (German edition, 1987) (http
s://books.google.com/books/about/Steps_towards_life.html?id=R7QTAQAAIAAJ). Oxford University Press. p. 31.
ISBN 0-19-854751-X.
115. Barazesh, Solmaz (13 May 2009). "How RNA Got Started: Scientists Look for the Origins of Life" (http://www.usne
ws.com/science/articles/2009/05/13/how-rna-got-started-scientists-examine-the-origins-of-life). U. S. News & World
Report. Retrieved 14 November 2016.
116. Watson, James D. (1993). Gesteland, R. F.; Atkins, J. F., eds. Prologue: early speculations and facts about RNA
templates. The RNA World. Cold Spring Harbor, New York: Cold Spring Harbor Laboratory Press. pp. xv–xxiii.
117. Gilbert, Walter (20 February 1986). "Origin of life: The RNA world". Nature. 319 (618): 618.
Bibcode:1986Natur.319..618G (http://adsabs.harvard.edu/abs/1986Natur.319..618G). doi:10.1038/319618a0 (https://
doi.org/10.1038%2F319618a0).
118. Cech, Thomas R. (1986). "A model for the RNA-catalyzed replication of RNA" (http://www.pnas.org/content/83/12/
4360.abstract). Proceedings of the National Academy of Sciences USA. 83 (12): 4360–63.
Bibcode:1986PNAS...83.4360C (http://adsabs.harvard.edu/abs/1986PNAS...83.4360C).
doi:10.1073/pnas.83.12.4360 (https://doi.org/10.1073%2Fpnas.83.12.4360). Retrieved 25 May 2012.
119. Cech, T.R. (2011). "The RNA Worlds in Context" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3385955). Cold
Spring Harb Perspect Biol. 4 (7): a006742. PMC 3385955 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC338595
5)  . PMID 21441585 (https://www.ncbi.nlm.nih.gov/pubmed/21441585). doi:10.1101/cshperspect.a006742 (https://
doi.org/10.1101%2Fcshperspect.a006742).
120. Powner, Matthew W.; Gerland, Béatrice; Sutherland, John D. (14 May 2009). "Synthesis of activated pyrimidine
ribonucleotides in prebiotically plausible conditions". Nature. 459 (7244): 239–42. Bibcode:2009Natur.459..239P (ht
tp://adsabs.harvard.edu/abs/2009Natur.459..239P). PMID 19444213 (https://www.ncbi.nlm.nih.gov/pubmed/194442
13). doi:10.1038/nature08013 (https://doi.org/10.1038%2Fnature08013).

121. Szostak, Jack W. (14 May 2009). "Origins of life: Systems chemistry on early Earth". Nature. 459 (7244): 171–172.
Bibcode:2009Natur.459..171S (http://adsabs.harvard.edu/abs/2009Natur.459..171S). PMID 19444196 (https://www.n
cbi.nlm.nih.gov/pubmed/19444196). doi:10.1038/459171a (https://doi.org/10.1038%2F459171a).
122. Pasek, Matthew A.; et al.; Buick, R.; Gull, M.; Atlas, Z. (18 June 2013). "Evidence for reactive reduced phosphorus
species in the early Archean ocean" (http://www.pnas.org/content/110/25/10089). PNAS. 110 (25): 10089–94.
Bibcode:2013PNAS..11010089P (http://adsabs.harvard.edu/abs/2013PNAS..11010089P). PMC 3690879 (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC3690879)  . PMID 23733935 (https://www.ncbi.nlm.nih.gov/pubmed/2373393
5). doi:10.1073/pnas.1303904110 (https://doi.org/10.1073%2Fpnas.1303904110). Retrieved 16 July 2013.
123. Lincoln, Tracey A.; Joyce, Gerald F. (27 February 2009). "Self-Sustained Replication of an RNA Enzyme" (https://w
ww.ncbi.nlm.nih.gov/pmc/articles/PMC2652413). Science. 323 (5918): 1229–32. Bibcode:2009Sci...323.1229L (htt
p://adsabs.harvard.edu/abs/2009Sci...323.1229L). PMC 2652413 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2
652413)  . PMID 19131595 (https://www.ncbi.nlm.nih.gov/pubmed/19131595). doi:10.1126/science.1167856 (http
s://doi.org/10.1126%2Fscience.1167856).
124. Joyce, Gerald F. (2009). "Evolution in an RNA world" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2891321).
Cold Spring Harbor Symposium on Quantitative Biology. 74: 17–23. PMC 2891321 (https://www.ncbi.nlm.nih.gov/p
mc/articles/PMC2891321)  . PMID 19667013 (https://www.ncbi.nlm.nih.gov/pubmed/19667013).
doi:10.1101/sqb.2009.74.004 (https://doi.org/10.1101%2Fsqb.2009.74.004).
125. Callahan; Smith, K.E.; Cleaves, H.J.; Ruzica, J.; Stern, J.C.; Glavin, D.P.; House, C.H.; Dworkin, J.P. (11 August
2011). "Carbonaceous meteorites contain a wide range of extraterrestrial nucleobases" (http://www.pnas.org/content/
early/2011/08/10/1106493108). PNAS. 108: 13995–98. PMC 3161613 (https://www.ncbi.nlm.nih.gov/pmc/articles/P
MC3161613)  . PMID 21836052 (https://www.ncbi.nlm.nih.gov/pubmed/21836052). doi:10.1073/pnas.1106493108
(https://doi.org/10.1073%2Fpnas.1106493108). Retrieved 15 August 2011.
126. Steigerwald, John (8 August 2011). "NASA Researchers: DNA Building Blocks Can Be Made in Space" (http://ww
w.nasa.gov/topics/solarsystem/features/dna-meteorites.html). NASA. Retrieved 10 August 2011.
127. "DNA Building Blocks Can Be Made in Space, NASA Evidence Suggests" (http://www.sciencedaily.com/releases/2
011/08/110808220659.htm). ScienceDaily. 9 August 2011. Retrieved 9 August 2011.
128. Gallori, Enzo (November 2010). "Astrochemistry and the origin of genetic material" (http://www.springerlink.com/c
ontent/x332837483630g24/). Rendiconti Lincei. 22 (2): 113–18. doi:10.1007/s12210-011-0118-4 (https://doi.org/10.1
007%2Fs12210-011-0118-4). Retrieved 11 August 2011.
129. Marlaire, Ruth (3 March 2015). "NASA Ames Reproduces the Building Blocks of Life in Laboratory" (http://www.n
asa.gov/content/nasa-ames-reproduces-the-building-blocks-of-life-in-laboratory). NASA. Retrieved 5 March 2015.
130. Rampelotto, P.H. (2010). "Panspermia: A Promising Field Of Research" (http://www.lpi.usra.edu/meetings/abscicon
2010/pdf/5224.pdf) (PDF). Retrieved 3 December 2014.
131. Rothschild, Lynn (September 2003). "Understand the evolutionary mechanisms and environmental limits of life" (htt
p://www.webcitation.org/664nY11q7?url=http://astrobiology.arc.nasa.gov/roadmap/g5.html). NASA. Archived from
the original (http://astrobiology.arc.nasa.gov/roadmap/g5.html) on 11 March 2012. Retrieved 13 July 2009.
132. King, G.A.M. (April 1977). "Symbiosis and the origin of life" (http://www.springerlink.com/content/n10p775113175
l67/). Origins of Life and Evolution of Biospheres. 8 (1): 39–53. Bibcode:1977OrLi....8...39K (http://adsabs.harvard.e
du/abs/1977OrLi....8...39K). doi:10.1007/BF00930938 (https://doi.org/10.1007%2FBF00930938). Retrieved
22 February 2010.
133. Margulis, Lynn (2001). The Symbiotic Planet: A New Look at Evolution. London, England: Orion Books Ltd.
ISBN 0-7538-0785-8.
134. Douglas J. Futuyma; Janis Antonovics (1992). Oxford surveys in evolutionary biology: Symbiosis in evolution. 8.
London, England: Oxford University Press. pp. 347–74. ISBN 0-19-507623-0.
135. The Columbia Encyclopedia, Sixth Edition (http://www.questia.com/library/encyclopedia/biosphere.jsp). Columbia
University Press. 2004. Retrieved 2010-11-12.
136. University of Georgia (25 August 1998). "First-Ever Scientific Estimate Of Total Bacteria On Earth Shows Far
Greater Numbers Than Ever Known Before" (http://www.sciencedaily.com/releases/1998/08/980825080732.htm).
Science Daily. Retrieved 10 November 2014.
137. Hadhazy, Adam (12 January 2015). "Life Might Thrive a Dozen Miles Beneath Earth's Surface" (http://www.astrobi
o.net/extreme-life/life-might-thrive-dozen-miles-beneath-earths-surface/). Astrobiology Magazine. Retrieved
11 March 2017.
138. Fox-Skelly, Jasmin (24 November 2015). "The Strange Beasts That Live In Solid Rock Deep Underground" (http://w
ww.bbc.com/earth/story/20151124-meet-the-strange-creatures-that-live-in-solid-rock-deep-underground). BBC
online. Retrieved 11 March 2017.
139. Dose, K.; Bieger-Dose, A.; Dillmann, R.; Gill, M.; Kerz, O.; Klein, A.; Meinert, H.; Nawroth, T.; Risi, S.; Stridde, C.
(1995). "ERA-experiment "space biochemistry" ". Advances in Space Research. 16 (8): 119–29. PMID 11542696 (htt
ps://www.ncbi.nlm.nih.gov/pubmed/11542696). doi:10.1016/0273-1177(95)00280-R (https://doi.org/10.1016%2F02
73-1177%2895%2900280-R).

140. Vaisberg, Horneck G.; Eschweiler, U.; Reitz, G.; Wehner, J.; Willimek, R.; Strauch, K. (1995). "Biological responses
to space: results of the experiment "Exobiological Unit" of ERA on EURECA I". Adv Space Res. 16 (8): 105–18.
Bibcode:1995AdSpR..16..105V (http://adsabs.harvard.edu/abs/1995AdSpR..16..105V). PMID 11542695 (https://ww
w.ncbi.nlm.nih.gov/pubmed/11542695). doi:10.1016/0273-1177(95)00279-N (https://doi.org/10.1016%2F0273-117
7%2895%2900279-N).
141. Choi, Charles Q. (17 March 2013). "Microbes Thrive in Deepest Spot on Earth" (http://www.livescience.com/27954-
microbes-mariana-trench.html). LiveScience. Retrieved 17 March 2013.
142. Glud, Ronnie; Wenzhöfer, Frank; Middelboe, Mathias; Oguri, Kazumasa; Turnewitsch, Robert; Canfield, Donald E.;
Kitazato, Hiroshi (17 March 2013). "High rates of microbial carbon turnover in sediments in the deepest oceanic
trench on Earth" (http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo1773.html). Nature Geoscience. 6 (4):
284–88. Bibcode:2013NatGe...6..284G (http://adsabs.harvard.edu/abs/2013NatGe...6..284G). doi:10.1038/ngeo1773
(https://doi.org/10.1038%2Fngeo1773). Retrieved 17 March 2013.
143. Oskin, Becky (14 March 2013). "Intraterrestrials: Life Thrives in Ocean Floor" (http://www.livescience.com/27899-o
cean-subsurface-ecosystem-found.html). LiveScience. Retrieved 17 March 2013.
144. Morelle, Rebecca (15 December 2014). "Microbes discovered by deepest marine drill analysed" (http://www.bbc.co
m/news/science-environment-30489814). BBC News. Retrieved 15 December 2014.
145. Fox, Douglas (20 August 2014). "Lakes under the ice: Antarctica's secret garden" (http://www.nature.com/news/lake
s-under-the-ice-antarctica-s-secret-garden-1.15729). Nature. 512 (7514): 244–46. Bibcode:2014Natur.512..244F (htt
p://adsabs.harvard.edu/abs/2014Natur.512..244F). PMID 25143097 (https://www.ncbi.nlm.nih.gov/pubmed/2514309
7). doi:10.1038/512244a (https://doi.org/10.1038%2F512244a). Retrieved 21 August 2014.
146. Mack, Eric (20 August 2014). "Life Confirmed Under Antarctic Ice; Is Space Next?" (http://www.forbes.com/sites/er
icmack/2014/08/20/life-confirmed-under-antarctic-ice-is-space-next/). Forbes. Retrieved 21 August 2014.
147. Campbell, Neil A.; Brad Williamson; Robin J. Heyden (2006). Biology: Exploring Life (http://www.phschool.com/el
_marketing.html). Boston, Massachusetts: Pearson Prentice Hall. ISBN 0-13-250882-6.
148. Zimmer, Carl (3 October 2013). "Earth's Oxygen: A Mystery Easy to Take for Granted" (https://www.nytimes.com/2
013/10/03/science/earths-oxygen-a-mystery-easy-to-take-for-granted.html). New York Times. Retrieved 3 October
2013.
149. "Meaning of biosphere" (http://www.webdictionary.co.uk/definition.php?query=biosphere). WebDictionary.co.uk.
WebDictionary.co.uk. Retrieved 2010-11-12.
150. "Essential requirements for life" (http://cmapsnasacmex.ihmc.us/servlet/SBReadResourceServlet?rid=102520016110
9_2045745605_1714&partName=htmltext). CMEX-NASA. Retrieved 14 July 2009.
151. Chiras, Daniel C. (2001). Environmental Science – Creating a Sustainable Future (6th ed.). ISBN 0-7637-1316-3.
152. Chang, Kenneth (12 September 2016). "Visions of Life on Mars in Earth's Depths" (https://www.nytimes.com/2016/
09/13/science/south-african-mine-life-on-mars.html). New York Times. Retrieved 12 September 2016.
153. Rampelotto, Pabulo Henrique (2010). "Resistance of microorganisms to extreme environmental conditions and its
contribution to astrobiology". Sustainability. 2 (6): 1602–23. Bibcode:2010Sust....2.1602R (http://adsabs.harvard.ed
u/abs/2010Sust....2.1602R). doi:10.3390/su2061602 (https://doi.org/10.3390%2Fsu2061602).
154. Baldwin, Emily (26 April 2012). "Lichen survives harsh Mars environment" (http://www.skymania.com/wp/2012/0
4/lichen-survives-harsh-martian-setting.html). Skymania News. Retrieved 27 April 2012.
155. de Vera, J.-P.; Kohler, Ulrich (26 April 2012). "The adaptation potential of extremophiles to Martian surface
conditions and its implication for the habitability of Mars" (http://www.webcitation.org/68GROCilv?url=http://medi
a.egu2012.eu/media/filer_public/2012/04/05/10_solarsystem_devera.pdf) (PDF). European Geosciences Union.
Archived from the original (http://media.egu2012.eu/media/filer_public/2012/04/05/10_solarsystem_devera.pdf)
(PDF) on 8 June 2012. Retrieved 27 April 2012.
156. Hotz, Robert Lee (3 December 2010). "New link in chain of life" (https://online.wsj.com/article/SB10001424052748
703377504575650840897300342.html?mod=ITP_pageone_1#printMode). Wall Street Journal. Dow Jones &
Company, Inc. "Until now, however, they were all thought to share the same biochemistry, based on the Big Six, to
build proteins, fats and DNA."
157. Neuhaus, Scott (2005). Handbook for the Deep Ecologist: What Everyone Should Know About Self, the Environment,
And the Planet (https://books.google.com/books?id=uzBDQPxe6zsC&pg=PA23). iUniverse. pp. 23–50. ISBN 978-
0-521-83113-0.
158. Committee on the Limits of Organic Life in Planetary Systems; Committee on the Origins and Evolution of Life;
National Research Council (2007). The Limits of Organic Life in Planetary Systems (http://www.nap.edu/catalog.ph
p?record_id=11919). National Academy of Sciences. ISBN 0-309-66906-5. Retrieved 3 June 2012.
159. Benner, Steven A.; Ricardo, Alonso; Carrigan, Matthew A. (December 2004). "Is there a common chemical model
for life in the universe?" (http://www.webcitation.org/68GROD92N?url=http://www.fossildna.com/articles/benner_c
ommonmodelforlife.pdf) (PDF). Current Opinion in Chemical Biology. 8 (6): 672–89. PMID 15556414 (https://www.
ncbi.nlm.nih.gov/pubmed/15556414). doi:10.1016/j.cbpa.2004.10.003 (https://doi.org/10.1016%2Fj.cbpa.2004.10.00
3). Archived from the original (http://www.fossildna.com/articles/benner_commonmodelforlife.pdf) (PDF) on 8 June
2012. Retrieved 3 June 2012.

160. Purcell, Adam (5 February 2016). "DNA" (http://basicbiology.net/micro/genetics/dna). Basic Biology. Retrieved
15 November 2016.
161. Russell, Peter (2001). iGenetics. New York: Benjamin Cummings. ISBN 0-8053-4553-1.
162. Dahm R (2008). "Discovering DNA: Friedrich Miescher and the early years of nucleic acid research". Hum. Genet.
122 (6): 565–81. PMID 17901982 (https://www.ncbi.nlm.nih.gov/pubmed/17901982). doi:10.1007/s00439-007-
0433-0 (https://doi.org/10.1007%2Fs00439-007-0433-0).
163. Portin P (2014). "The birth and development of the DNA theory of inheritance: sixty years since the discovery of the
structure of DNA". Journal of Genetics. 93 (1): 293–302. PMID 24840850 (https://www.ncbi.nlm.nih.gov/pubmed/2
4840850). doi:10.1007/s12041-014-0337-4 (https://doi.org/10.1007%2Fs12041-014-0337-4).
164. "Aristotle" (http://www.ucmp.berkeley.edu/history/aristotle.html). University of California Museum of Paleontology.
Retrieved 15 November 2016.
165. Knapp S, Lamas G, Lughadha EN, Novarino G (April 2004). "Stability or stasis in the names of organisms: the
evolving codes of nomenclature" (http://journals.royalsociety.org/openurl.asp?genre=article&issn=0962-8436&volu
me=359&issue=1444&spage=611). Philosophical Transactions of the Royal Society of London B. 359 (1444): 611–
22. PMC 1693349 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1693349)  . PMID 15253348 (https://www.ncb
i.nlm.nih.gov/pubmed/15253348). doi:10.1098/rstb.2003.1445 (https://doi.org/10.1098%2Frstb.2003.1445).
166. "The Kingdoms of Organisms". Quarterly Review of Biology. 13.
167. Whittaker, R. H. (January 1969). "New concepts of kingdoms or organisms. Evolutionary relations are better
represented by new classifications than by the traditional two kingdoms". Science. 163 (3863): 150–60.
Bibcode:1969Sci...163..150W (http://adsabs.harvard.edu/abs/1969Sci...163..150W). CiteSeerX 10.1.1.403.5430 (http
s://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.403.5430)  . PMID 5762760 (https://www.ncbi.nlm.nih.gov/p
ubmed/5762760). doi:10.1126/science.163.3863.150 (https://doi.org/10.1126%2Fscience.163.3863.150).
168. Woese, C.; Kandler, O.; Wheelis, M. (1990). "Towards a natural system of organisms: proposal for the domains
Archaea, Bacteria, and Eucarya." (http://www.pnas.org/cgi/reprint/87/12/4576). Proceedings of the National
Academy of Sciences of the United States of America. 87 (12): 4576–9. Bibcode:1990PNAS...87.4576W (http://adsab
s.harvard.edu/abs/1990PNAS...87.4576W). PMC 54159 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC54159)  .
PMID 2112744 (https://www.ncbi.nlm.nih.gov/pubmed/2112744). doi:10.1073/pnas.87.12.4576 (https://doi.org/10.1
073%2Fpnas.87.12.4576).
169. Adl SM, Simpson AG, Farmer MA, et al. (2005). "The new higher level classification of eukaryotes with emphasis
on the taxonomy of protists". J. Eukaryot. Microbiol. 52 (5): 399–451. PMID 16248873 (https://www.ncbi.nlm.nih.g
ov/pubmed/16248873). doi:10.1111/j.1550-7408.2005.00053.x (https://doi.org/10.1111%2Fj.1550-7408.2005.00053.
x).
170. Van Regenmortel MH (January 2007). "Virus species and virus identification: past and current controversies".
Infection, Genetics and Evolution. 7 (1): 133–44. PMID 16713373 (https://www.ncbi.nlm.nih.gov/pubmed/1671337
3). doi:10.1016/j.meegid.2006.04.002 (https://doi.org/10.1016%2Fj.meegid.2006.04.002).
171. Pennisi E (March 2001). "Taxonomy. Linnaeus's last stand?". Science. New York, N.Y. 291 (5512): 2304–07.
PMID 11269295 (https://www.ncbi.nlm.nih.gov/pubmed/11269295). doi:10.1126/science.291.5512.2304 (https://doi.
org/10.1126%2Fscience.291.5512.2304).
172. Linnaeus, C. (1735). Systemae Naturae, sive regna tria naturae, systematics proposita per classes, ordines, genera &
species.
173. Haeckel, E. (1866). Generelle Morphologie der Organismen. Reimer, Berlin.
174. Chatton, É. (1925). "Pansporella perplexa. Réflexions sur la biologie et la phylogénie des protozoaires". Annales des
Sciences Naturelles - Zoologie et Biologie Animale. 10-VII: 1–84.
175. Copeland, H. (1938). "The kingdoms of organisms". Quarterly Review of Biology. 13: 383–420. doi:10.1086/394568
(https://doi.org/10.1086%2F394568).
176. Whittaker, R. H. (January 1969). "New concepts of kingdoms of organisms". Science. 163 (3863): 150–60.
Bibcode:1969Sci...163..150W (http://adsabs.harvard.edu/abs/1969Sci...163..150W). PMID 5762760 (https://www.nc
bi.nlm.nih.gov/pubmed/5762760). doi:10.1126/science.163.3863.150 (https://doi.org/10.1126%2Fscience.163.3863.1
50).
177. Cavalier-Smith, T. (1998). "A revised six-kingdom system of life" (http://journals.cambridge.org/action/displayAbstr
act?fromPage=online&aid=685). Biological Reviews. 73 (03): 203–66. PMID 9809012 (https://www.ncbi.nlm.nih.go
v/pubmed/9809012). doi:10.1111/j.1469-185X.1998.tb00030.x (https://doi.org/10.1111%2Fj.1469-185X.1998.tb000
30.x).
178. Systema Naturae 2000 "Biota" (http://sn2000.taxonomy.nl/Main/Classification/1.htm) Archived (https://web.archive.
org/web/20100614111911/http://sn2000.taxonomy.nl/Main/Classification/1.htm) 14 June 2010 at the Wayback
Machine.
179. Taxonomicon "Biota" (http://taxonomicon.taxonomy.nl/TaxonName.aspx) Archived (https://web.archive.org/web/20
140115025557/http://taxonomicon.taxonomy.nl/TaxonName.aspx) 15 January 2014 at the Wayback Machine.
180. Sapp, Jan (2003). Genesis: The Evolution of Biology (https://books.google.com/books?id=f4kXJv1XiFUC&pg=PA7
5). Oxford University Press. pp. 75–78. ISBN 0-19-515619-6.
181. Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media. pp. 170–83, 297–362. ISBN 1-57955-008-8.
182. Lintilhac, P. M. (Jan 1999). "Thinking of biology: toward a theory of cellularity—speculations on the nature of the
living cell" (https://web.archive.org/web/20130406043511/https://www.rz.uni-karlsruhe.de/~db45/Studiendekanat/Le
hre/Master/Module/Botanik_1/M1401/Evolution_Zellbiologie/Lintilhac%202003.pdf) (PDF). BioScience. 49 (1): 59–
68. JSTOR 1313494 (https://www.jstor.org/stable/1313494). PMID 11543344 (https://www.ncbi.nlm.nih.gov/pubme
d/11543344). doi:10.2307/1313494 (https://doi.org/10.2307%2F1313494). Archived from the original (https://www.r
z.uni-karlsruhe.de/~db45/Studiendekanat/Lehre/Master/Module/Botanik_1/M1401/Evolution_Zellbiologie/Lintilha
c%202003.pdf) (PDF) on 6 April 2013. Retrieved 2 June 2012.
183. Whitman, W.; Coleman, D.; Wiebe, W. (1998). "Prokaryotes: The unseen majority" (https://www.ncbi.nlm.nih.gov/p
mc/articles/PMC33863). Proceedings of the National Academy of Sciences of the United States of America. 95 (12):
6578–83. Bibcode:1998PNAS...95.6578W (http://adsabs.harvard.edu/abs/1998PNAS...95.6578W). PMC 33863 (http
s://www.ncbi.nlm.nih.gov/pmc/articles/PMC33863)  . PMID 9618454 (https://www.ncbi.nlm.nih.gov/pubmed/9618
454). doi:10.1073/pnas.95.12.6578 (https://doi.org/10.1073%2Fpnas.95.12.6578).
184. Pace, Norman R. (18 May 2006). "Time for a change" (http://www.webcitation.org/68GROFAbU?url=htt
p://coursesite.uhcl.edu/NAS/Kang/BIOL3231/Week3-Pace_2006.pdf) (PDF). Nature. 441 (7091): 289.
Bibcode:2006Natur.441..289P (http://adsabs.harvard.edu/abs/2006Natur.441..289P). PMID 16710401 (https://www.n
cbi.nlm.nih.gov/pubmed/16710401). doi:10.1038/441289a (https://doi.org/10.1038%2F441289a). Archived from the
original (http://coursesite.uhcl.edu/NAS/Kang/BIOL3231/Week3-Pace_2006.pdf) (PDF) on 8 June 2012. Retrieved
2 June 2012.
185. "Scientific background" (https://www.nobelprize.org/nobel_prizes/chemistry/laureates/2009/advanced.html). The
Nobel Prize in Chemistry 2009. Royal Swedish Academy of Sciences. Retrieved 10 June 2012.
186. Nakano A, Luini A (2010). "Passage through the Golgi.". Curr Opin Cell Biol. 22 (4): 471–78. PMID 20605430 (htt
ps://www.ncbi.nlm.nih.gov/pubmed/20605430). doi:10.1016/j.ceb.2010.05.003 (https://doi.org/10.1016%2Fj.ceb.20
10.05.003).
187. Panno, Joseph (2004). The Cell (https://books.google.com/books?id=sYgKY6zz20YC&pg=PA60). Facts on File
science library. Infobase Publishing. pp. 60–70. ISBN 0-8160-6736-8.
188. Alberts, Bruce; et al. (1994). "From Single Cells to Multicellular Organisms". Molecular Biology of the Cell (http://w
ww.ncbi.nlm.nih.gov/books/NBK28332/) (3rd ed.). New York: Garland Science. ISBN 0-8153-1620-8. Retrieved
12 June 2012.
189. Zimmer, Carl (7 January 2016). "Genetic Flip Helped Organisms Go From One Cell to Many" (https://www.nytimes.
com/2016/01/12/science/genetic-flip-helped-organisms-go-from-one-cell-to-many.html). New York Times. Retrieved
7 January 2016.
190. Alberts, Bruce; et al. (2002). "General Principles of Cell Communication". Molecular Biology of the Cell (http://ww
w.ncbi.nlm.nih.gov/books/NBK26813/). New York: Garland Science. ISBN 0-8153-3218-1. Retrieved 12 June 2012.
191. Race, Margaret S.; Randolph, Richard O. (2002). "The need for operating guidelines and a decision making
framework applicable to the discovery of non-intelligent extraterrestrial life". Advances in Space Research. 30 (6):
1583–91. Bibcode:2002AdSpR..30.1583R (http://adsabs.harvard.edu/abs/2002AdSpR..30.1583R). ISSN 0273-1177
(https://www.worldcat.org/issn/0273-1177). doi:10.1016/S0273-1177(02)00478-7 (https://doi.org/10.1016%2FS0273
-1177%2802%2900478-7). "There is growing scientific confidence that the discovery of extraterrestrial life in some
form is nearly inevitable"
192. Cantor, Matt (15 February 2009). "Alien Life 'Inevitable': Astronomer" (http://www.webcitation.org/query?url=htt
p%3A%2F%2Fwww.newser.com%2Fstory%2F50874%2Falien-life-inevitable-astronomer.html&date=2013-05-03).
Newser. Archived from the original (http://www.newser.com/story/50874/alien-life-inevitable-astronomer.html) on 3
May 2013. Retrieved 3 May 2013. "Scientists now believe there could be as many habitable planets in the cosmos as
there are stars, and that makes life's existence elsewhere "inevitable" over billions of years, says one."
193. Schulze-Makuch, Dirk; Dohm, James M.; Fairén, Alberto G.; Baker, Victor R.; Fink, Wolfgang; Strom, Robert G.
(December 2005). Venus, Mars, and the Ices on Mercury and the Moon: Astrobiological Implications and Proposed
Mission Designs. Astrobiology. 5. pp. 778–95. Bibcode:2005AsBio...5..778S (http://adsabs.harvard.edu/abs/2005As
Bio...5..778S). doi:10.1089/ast.2005.5.778 (https://doi.org/10.1089%2Fast.2005.5.778).
194. Woo, Marcus (27 January 2015). "Why We're Looking for Alien Life on Moons, Not Just Planets" (https://www.wire
d.com/2015/01/looking-alien-life-moons-just-planets/). Wired. Retrieved 27 January 2015.
195. Strain, Daniel (14 December 2009). "Icy moons of Saturn and Jupiter may have conditions needed for life" (http://ne
ws.ucsc.edu/2009/12/3443.html). The University of Santa Cruz. Retrieved 4 July 2012.
196. Selis, Frank (2006). "Habitability: the point of view of an astronomer". In Gargaud, Muriel; Martin, Hervé; Claeys,
Philippe. Lectures in Astrobiology (https://books.google.com/books?id=3uYmP0K5PXEC&pg=PA210). 2. Springer.
pp. 210–14. ISBN 3-540-33692-3.
197. Lineweaver, Charles H.; Fenner, Yeshe; Gibson, Brad K. (January 2004). "The Galactic Habitable Zone and the age
distribution of complex life in the Milky Way". Science. 303 (5654): 59–62. Bibcode:2004Sci...303...59L (http://adsa
bs.harvard.edu/abs/2004Sci...303...59L). PMID 14704421 (https://www.ncbi.nlm.nih.gov/pubmed/14704421).
arXiv:astro-ph/0401024 (https://arxiv.org/abs/astro-ph/0401024)  . doi:10.1126/science.1092322 (https://doi.org/10.
1126%2Fscience.1092322).

198. Vakoch, Douglas A.; Harrison, Albert A. (2011). Civilizations beyond Earth: extraterrestrial life and society (https://
books.google.com/?id=BVJzsvqWip0C&pg=PA37). Berghahn Series. Berghahn Books. pp. 37–41. ISBN 0-85745-
211-8.
199. "Artificial life" (http://www.dictionary.com/browse/artificial--life). Dictionary.com. Retrieved 15 November 2016.
200. Chopra, Paras; Akhil Kamma. "Engineering life through Synthetic Biology" (http://www.bioinfo.de/isb/2006/06/003
8/). In Silico Biology. 6. Retrieved 9 June 2008.
201. Definition of death (http://www.webcitation.org/5kwsdvU8f?url=http://encarta.msn.com/dictionary_1861602899/dea
th.html). Archived from the original (http://encarta.msn.com/dictionary_1861602899/death.html) on 1 November
2009.
202. "Definition of death" (http://www.deathreference.com/Da-Em/Definitions-of-Death.html). Encyclopedia of Death
and Dying. Advameg, Inc. Retrieved 25 May 2012.
203. Extinction – definition (http://www.webcitation.org/5kwseRB80?url=http://encarta.msn.com/dictionary_186160997
4/extinction.html). Archived from the original (http://encarta.msn.com/dictionary_1861609974/extinction.html) on 1
November 2009.
204. "What is an extinction?" (http://palaeo.gly.bris.ac.uk/Palaeofiles/Triassic/extinction.htm). Late Triassic. Bristol
University. Retrieved 27 June 2012.
205. Van Valkenburgh, B. (1999). "Major patterns in the history of carnivorous mammals". Annual Review of Earth and
Planetary Sciences. 27: 463–93. Bibcode:1999AREPS..27..463V (http://adsabs.harvard.edu/abs/1999AREPS..27..46
3V). doi:10.1146/annurev.earth.27.1.463 (https://doi.org/10.1146%2Fannurev.earth.27.1.463).
206. "Frequently Asked Questions" (http://www.sdnhm.org/science/paleontology/resources/frequent/). San Diego Natural
History Museum. Retrieved 25 May 2012.
207. Vastag, Brian (21 August 2011). "Oldest 'microfossils' raise hopes for life on Mars" (http://www.washingtonpost.co
m/national/health-science/oldest-microfossils-hail-from-34-billion-years-ago-raise-hopes-for-life-on-mars/2011/08/1
9/gIQAHK8UUJ_story.html?hpid=z3). The Washington Post. Retrieved 21 August 2011.
208. Wade, Nicholas (21 August 2011). "Geological Team Lays Claim to Oldest Known Fossils" (https://www.nytimes.co
m/2011/08/22/science/earth/22fossil.html?_r=1&partner=rss&emc=rss&src=ig). The New York Times. Retrieved
21 August 2011.

Further reading
Kauffman, Stuart. The Adjacent Possible: A Talk with Stuart Kauffman (http://www.edge.org/3rd_cultur
e/kauffman03/kauffman_index.html)
Seeding the Universe With Life (http://www.astro-ecology.com/PDFSeedingtheUniverse2005Book.pdf)
Legacy Books, Washington D. C., 2000, ISBN 0-476-00330-X
Walker, Martin G. LIFE! Why We Exist ... And What We Must Do to Survive (http://rationalphilosophy.net/index.php/the-book). Dog Ear Publishing, 2006, ISBN 1-59858-243-7

External links
Look up life or living in Wiktionary, the free dictionary.
Life (https://web.archive.org/web/20051222163318/http://sn2000.taxonomy.nl/Main/Classification/1.htm) (Systema Naturae 2000)
Vitae (http://www.biolib.cz/en/taxon/id14772) (BioLib)
Biota (https://web.archive.org/web/20140715055239/http://taxonomicon.taxonomy.nl/TaxonTree.aspx?id=1&src=0) (Taxonomicon)
Wikispecies – a free directory of life
Resources for life in the Solar System and in galaxy, and the potential scope of life in the cosmological future (http://astro-ecology.com/)
"The Adjacent Possible: A Talk with Stuart Kauffman" (http://www.edge.org/3rd_culture/kauffman03/kauffman_index.html)
Stanford Encyclopedia of Philosophy entry (http://plato.stanford.edu/entries/life/)
The Kingdoms of Life (http://logic-law.com/index.php?title=The_Kingdoms_of_Life)

