The brain has always been understood to be vital to life. Neuroscience traces its origins to prehistory, when our ancestors recognized the brain’s importance and even performed skull surgeries such as trepanation. Evidence shows that many of these early patients survived, as their skulls reveal signs of healing. Later, in ancient Egypt, people believed the heart, rather than the brain, was the seat of the soul and memory. During mummification, the lungs, stomach, liver, and intestines were preserved, while the brain was removed and discarded. However, the Egyptians still noted the brain’s role in health, as seen in the Edwin Smith Papyrus, the oldest known surgical document, which describes the treatment of traumatic brain injuries and their consequences.
The nervous system can be divided in two: the central nervous system (tracts and nuclei) and the peripheral nervous system (nerves and ganglia). The peripheral division can be further subdivided into the somatic (voluntary) nervous system and the autonomic nervous system (which itself is further split into the sympathetic, fight or flight, and the parasympathetic, rest and digest, nervous systems).
The nervous system is the body's information system—it gathers input, processes it, and produces output. The spinal and cranial nerves take in sensory information and mediate output in the periphery—that is, outside the central nervous system. Incidentally, the spinal cord and brain stem are what relay information between the periphery and the brain; they also have some processing and output capacity. The brain is the hub of such information; it is a major site of processing and of initiating output.
Recall. The peripheral nervous system (PNS) is divided in two: the somatic and autonomic nervous systems.
The processes of the somatic nervous system are more strongly associated with conscious awareness and control, whereas those of the autonomic nervous system (ANS) are not (e.g., blood pressure, pupil dilation/constriction).
There are anatomical anchor points to determine where you are in the nervous system. These are:
Remark. Dorsal root ganglia contain the sensory neuron cell bodies that send signals from the body to the spinal cord, while ventral roots carry motor commands from the spinal cord to the skeletal muscles, forming the core of the somatic nervous system for voluntary movement and sensation. The somatic nervous system, via these roots, connects the central nervous system to sensory receptors and muscles, allowing conscious control and perception.
The sympathetic chain contains the nerves of the autonomic system that innervate organs.
Cranial nerves are those which emerge directly from the brain. They serve the body parts at or above the neck. The cranial nerves are numbered 1 through 12; three are sensory only, five are motor only, and four contain both sensory and motor neurons.
The rule that cranial nerves serve body parts at or above the neck has an exception: the vagus nerve, a cranial nerve that sends information to and from the organs of the body.
There are two divisions of the ANS (a third arises if one squints):
The sympathetic nervous system is known for the fight, flight or freeze response. Think: "emergency! I must live!" The sympathetic chain is innervated by preganglionic neurons in the spinal cord and runs along each side of the spinal column.
On the other hand, the parasympathetic nervous system is the system of rest and digest and often acts in opposition to sympathetic activity. It is concerned with long-term survival. Consider: a sympathetic response elevates heart rate; parasympathetic activity slows it.
The enteric nervous system consists of a local network of neurons that governs the function of the gut; it is innervated by both the sympathetic and parasympathetic divisions of the nervous system.
The brain and spinal cord live in a controlled, privileged environment encased in bone and a protective covering called meninges. The central nervous system has certain privileges of note:
Definition. Meninges are the three protective soft tissue layers surrounding the entire central nervous system, both the brain and spinal cord.
There are three layers to the meninges (in top-down order, from bone to brain):
Definition. The blood-brain barrier (BBB) is a system of protection involving capillary endothelial cells and astrocytes that form a highly selective semipermeable border between the circulatory and central nervous systems (i.e., from bloodstream to brain). It regulates the transfer of solutes and chemicals.
The brain is dominated by two cerebral hemispheres, a left side and a right side, separated by the median longitudinal fissure (the gap betwixt the two) and connected by the corpus callosum. These hemispheres are lateralised, meaning they have specialised functions. In particular, the left is generally dominant for language and logical tasks and the right for processing spatial information, emotions and creativity.
There is an outermost layer of the cerebral hemispheres, the cerebral cortex. Cortex (n.) is derived from Latin cortex, meaning "bark, rind, outer shell."
The idea of memories, sensation and the brain finds its origins in the propositions of Galen in the second century CE. Galen believed the brain had different parts suited to different tasks. The cerebrum, being relatively soft, seemed designed to receive impressions from the senses, and the cerebellum, being firmer, seemed better suited to commanding movement. Galen also thought that special fluids, "animal spirits," contained in the ventricles flowed through hollow nerves to communicate movement and sensation throughout the body. Taken together, Galen advanced an early understanding of localisation of function, that is: sensation in the cerebrum, motor control in the cerebellum, and fluid-mediated communication via the ventricles.
In Galen's framework, the ventricles were not only empty spaces; rather they were the engine room. He imagined the fluids inside them carried signals. This "ventricular theory" was later expanded by medieval scholars, assigning mental faculties to specific chambers:
In this age of modernity, we understand the ventricles differently, as four connected, fluid-filled spaces that circulate cerebrospinal fluid which cushions the brain, helps maintain chemical stability, and supports waste clearance. Fluid theory is replaced with neural circuits, locating sensation in the cortical sensory areas and memory in the hippocampus (formation) and distributed cortex (storage).
We have the frontal lobe, which is the most anterior region; the parietal lobe, which lies between the frontal and occipital lobes; the occipital lobe, which is posterior; and the temporal lobe, which is lateral.
In addition to knowing the four lobes of the cerebrum, we also must know more about how the brain can be further described. With regard to the surface of the brain, the gyrus is a ridged or raised portion of the convoluted brain surface and the sulcus is a furrow of such surface.
Sulci and gyri can be used to describe the brain, this we know, which means there are certain landmarks of note:
In the 1800s, scientists argued about whether the mind was spread uniformly across the brain or whether specific mental faculties were located in different areas. Franz Joseph Gall's phrenology pushed a strong, perhaps even too strong, perspective of localisation. To him, the brain is an organ of the mind that is itself constituted by multiple "organs" for distinct mental abilities. The argument was that such organs were topographically localised and that, all other things being equal, the size of an organ determines its strength. Furthermore, Gall advanced phrenology, the idea that the external contours of the skull, which can be felt, reveal much about a person's mental traits.
Indeed, Gall's phrenological thesis is flawed—there is no correlation between a skull's shape and the brain's topography. However, the idea of localisation was popularised on Gall's account, which propelled neuroscience further into investigating structure-function relationships in the brain.
And Gall's claim that the brain is localised was not wrong either—just taken too far. Phineas Gage survived an explosion in which a tamping iron impaled his left frontal lobe, rendering him more impulsive, unreliable, and socially inappropriate. This gives strong evidence that the frontal lobes are linked to executive control, decision making and social behaviour.
Another example of localisation is the discovery of Broca's Area. A patient, "Tan," could understand language but could only speak one syllable: "Tan." Following Tan's death, it was found that he had a lesion in the left inferior frontal gyrus, now known as Broca's Area, the area responsible for expressive (motor) communication.
The frontal lobe contains the motor cortex on the precentral gyrus and controls voluntary movement. It houses Broca's Area, where speech production occurs, as well as the prefrontal cortex, where planning, impulse control, and decision-making occur.
The parietal lobe is important for body sensations, attention, perception, and spatial localisation. It contains the primary somatosensory cortex on the postcentral gyrus, which processes skin senses, body position and movement. Parietal association areas integrate information from different sensory modalities.
The temporal lobe is delineated by the sylvian fissure and contains the primary auditory cortex. It also contains Wernicke's Area, where language comprehension occurs. The temporal lobe also has the inferior temporal cortex, which supports visual identification.
The occipital lobe is primarily a visual cortex that contains a map of visual space. It also contains secondary visual areas that process individual components of a scene.
To discuss the orientation of the brain, we must examine its planes.
There are some more vocabulary words to know. Namely, we use medial to mean “toward the middle,” while lateral means “toward the side.” If something is on the ipsilateral side, it is on the same side of the body, whereas contralateral means it is on the opposite side. We say anterior (or rostral) when referring to the head end, and posterior (or caudal) for the tail end. When a structure is proximal, it is closer to the center, and when it is distal, it is farther out toward the periphery. Finally, dorsal means “toward the back,” while ventral means “toward the belly or front.”
White matter consists mostly of axons with white myelin sheaths; hence the "white" in the name. On the other hand, gray matter contains more cell bodies and dendrites, both of which lack myelin.
Recall. The corpus callosum connects the left and right hemispheres of the brain. It is a fatty body: a large tract of myelinated axons.
The thalamus is responsible for sensory processing and arousal. The hypothalamus is the master hormone regulator and is responsible for emotions and motivations. The midbrain plays secondary roles in vision, audition and movement. The hindbrain is subdivided into parts:
There are certain brain structures which are unseen from the outside or midsagittal views. They are:
To understand how the nervous system works, it helps to start with the basic building blocks. We will first look at the parts of a neuron and its overall morphology, which shape how it functions. Next, we will trace the flow of information both within a single neuron and between neurons through their connections. Finally, we will consider the non-neuronal cells of the nervous system, which provide essential support, protection, and regulation for neural activity.
Recall that eukaryotic cells carry a nucleus. It is critical that we also recollect the organelles that are important not only for cellular life but also for neuroscience.
The plasma membrane is the most critical structure for membrane potentials. It contains the ion channels, pumps, and receptors that regulate the flow of sodium (Na+), potassium (K+), chloride (Cl-) and calcium (Ca2+). The Na+/K+ ATPase pump, or more simply the sodium-potassium pump, is embedded in this membrane and maintains the resting potential by transporting sodium out and potassium in.
Remark. At rest, neuronal membranes are more permeable to K+ than to Na+. That is why the resting membrane potential is around -65 to -70 mV, which happens to be close to the Nernst equilibrium potential for K+ (around -90 mV). Put differently, potassium "dominates" the resting potential since it leaks through K+ leak channels. This efflux sets up a negative potential inside relative to the outside. But even at rest, a small amount of Na+ leaks in, since sodium's equilibrium potential is around +60 mV and the plasma membrane, hereafter "the membrane," is not perfectly impermeable. Without correction, sodium would gradually depolarise the cell (i.e., make it less negative) and disrupt the potassium gradient.
Note: The inside of a neuronal cell is low in Na+ and high in K+, and vice versa for the outside. The concentration difference of an ion across the membrane is called its "gradient." The reason sodium keeps wanting to enter the cell is simple: it is positive (and the inside is more negative, at an approximate -65 mV resting potential), and there are fewer sodium ions inside and a surplus of them outside.
Remark (cont'd). This is bad, because as sodium accumulates inside, the concentration gradient that favours potassium leaving—which incidentally is what keeps the resting potential stable—is diminished. Without the corrective actions of the sodium-potassium pump, which constantly pushes sodium out and pulls potassium in, the cell can no longer maintain its resting potential. So, while the sodium-potassium pump does not actually set the -65 mV resting potential directly, it preserves the conditions required for the K+ leak channels to do that work.
Note: The sodium-potassium pump maintains the gradients by pumping out three sodium ions and bringing in two potassium ions per cycle (a net export of one positive charge), powered by adenosine triphosphate (ATP). Remember: the membrane is passively K+ permeable.
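For concreteness, the equilibrium potentials quoted above come from the Nernst equation. Here is a minimal worked example, assuming typical textbook concentrations of roughly 140 mM K+ inside and 5 mM outside at body temperature (where \(RT/F \approx 26.7\) mV):
$$ E_{\text{ion}} \;=\; \frac{RT}{zF}\,\ln\!\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}} \quad\Rightarrow\quad E_{K^+} \approx 26.7\,\text{mV} \times \ln\!\frac{5}{140} \approx -89\ \text{mV} $$where \(z\) is the ion's valence. The same formula with typical sodium concentrations gives \(E_{Na^+} \approx +60\) mV, matching the figures used above.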
Mitochondria are the "powerhouse of the cell." They produce the ATP that powers ion pumps, like the Na+/K+ ATPase pumps and Ca2+ transporters. The crux of the matter here is that without mitochondria there is no ATP production, which means cells cannot maintain their electrical gradients; that is bad because it can lead to cell dysfunction or death.
The endoplasmic reticulum (ER) has two variants:
The Golgi apparatus packages and modifies the proteins made in the ER (specifically the rough one!). In neurons, the Golgi apparatus is essential for sorting ion channels, neurotransmitter receptors, and transport proteins to their correct destinations (e.g., to axon vs. to dendrite).
The cytoskeleton provides the cell with structure and support but, in neurons, also guides the transport of vesicles, receptors, and transport proteins to their correct destinations (e.g., to axon vs. to dendrite).
The nucleus controls gene expression, including the production of proteins needed for ion channels and synaptic function. The nucleolus produces ribosomes, which are vital for protein synthesis (which requires ATP).
Note: Gene expression changes underlie long-term plasticity in neurons.
Neurons share the same basic organelles as a typical eukaryotic cell: a nucleus, nucleolus, rough and smooth ER, Golgi apparatus, mitochondria, ribosomes, cytoskeleton, and plasma membrane.
But neurons are not just the life-support machinery that other eukaryotic cells are—they are highly specialised communication machines optimised for rapid, long-distance electrical and chemical signaling. In particular, some special features of neurons include:
Cajal used Golgi's staining method to visualise neurons and revealed them as distinct cells, the "neuron doctrine," as opposed to the earlier reticular theory, which held that the nervous system is one continuous network. The drawing shows many different neuronal morphologies:
Each morphology is adapted for a specific computational role within the brain. Speaking of form, form supports function. Large, branching dendritic trees (e.g., Purkinje cells) integrate thousands of inputs for complex processing; long axons enable communication spanning distant brain regions; small, compact neurons allow fast, local processing. All of these give rise to the concept of functional specialisation of different brain circuits.
Neurons are electrically active cells that store and transmit information. They are supported by glial cells, which are:
Notice the form of the neuron: it illustrates the direction of information flow. Dendrites receive information; the cell body/soma integrates it; axons and axon terminals conduct and output information.
Information generally flows in a single direction. The pre-synaptic side concerns the axon sending information. The axon generates an AP which arrives at the pre-synaptic terminal, triggering an influx of calcium ions through voltage-gated calcium channels that causes neurotransmitter release.
The post-synaptic side mostly concerns the dendrites, which receive information. The neurotransmitter activates the postsynaptic terminal on the dendrites, which transmit a signal to the cell body.
Definition. A process is said to be primary, also called a neurite, if it originates directly from the cell body.
Neurons may be classified by the number of primary processes (neurites).
Consider the following:
All neurons have the same four functional zones—
—although they are organised in different ways.
Synapses are where two neurons exchange information. They require closely apposed membranes:
Like all structures, synapses have certain key features. Synaptic vesicles contain neurotransmitters, which are released in response to electrical activity in the axon. Receptors in the postsynaptic membrane are specialised proteins that react to a neurotransmitter.
Axons are different from dendrites! The former transmits the information that the latter receives. They have distinct morphology/subcellular structures.
Dendritic spines are specialised protrusions on dendrites that contain the postsynaptic density for receiving signals. They increase the surface area available for synapses. They are said to be plastic, that is, they can change in size, shape, and number; such changes occur in response to neural activity or experience.
We have already discussed some features of the axon—see the aforementioned discussions on dendrites, synapses, cell body, and polarity—but there are some that deserve to be further clarified. The axon hillock is a cone-shaped area of the cell body that gives rise to the axon. The axon collateral is a branch of an axon that also ends in terminals and innervates other cells.
| Axons | Dendrites |
|---|---|
| Usually one per neuron, with many terminal branches | Usually many per neuron |
| Uniform until the start of terminal branching | Tapers progressively towards ending |
| Present | None |
| Has myelin sheathing | None |
| Can be practically nonexistent to several metres long | Oftentimes much shorter than axons |
Recall the two classifications of nervous system cells: neurons and glia—the latter of which can be further subdivided into astrocytes (for metabolic and other support), microglia (the immune cells), and oligodendrocytes/Schwann cells (responsible for the myelination of axons). That lattermost part, the Schwann cells/oligodendrocytes, is the focus of this section. This is the part where information gets transmitted over great distances, the conduction zone, starting from the axon hillock (the integration zone, where the decision to produce a neural signal is made) all the way to the head of the axon terminals, the output zone.
Oligodendrocytes are part of the CNS and Schwann cells of the PNS; they insulate axons so the electric signals can propagate. Along the myelination, the fatty insulation, there are certain gaps called the nodes of Ranvier where the axolemma is exposed to the extracellular space. These domains are high in sodium and potassium ion channels that are complexed with cell adhesion molecules. This results in an exchange of ions that regenerates the action potential (AP).
Note: myelination is not universal among vertebrates, with primitive groups lacking it. The reason for that is there are two ways one can preserve the action potential as it travels along the axon:
Astrocytes provide structural, metabolic and trophic (nutritional) support for neurons. They help form the blood-brain barrier (BBB) and play important roles in plasticity and in modifying/supporting synaptic activity. They are star-shaped and detect neural activity; they also regulate adjacent capillaries for blood flow, supplying neurons with more energy when they are active.
Astrocytes can also aid in neurotransmitter removal from the extracellular space. Consider a case in which a neurotransmitter is released into the synaptic cleft. It will bind to receptors in the postsynaptic neuron and trigger a response. However, the neurotransmitter must be cleared away swiftly, lest it overstimulate the neuron. And so, we remove the neurotransmitter by:
Microglia are the immune cells of the central nervous system; they can be thought of as special immune cells, since the regular ones cannot enter the brain. They are macrophage-like, in that they scavenge the brain for debris.
A critical question of this section is as follows:
How are neurons electrically active?
At rest (i.e., equilibrium), there is an electric potential across a neuron's plasma membrane. The basis for electrical potential across the neuron's membrane lies inside the neuron proper, where we know its interior is negative with respect to the extracellular space outside (and this can be shown by inserting a microelectrode). Neurons actively establish a differential distribution of ions across the plasma membrane. Changes in the membrane potential $V_m$ are the electrical signals neurons produce. The resting membrane potential is $-50$ to $-80$ mV with respect to the extracellular solution. The convention of this class, and perhaps of the industry, is to assume $V_m = -65$ mV.
That said,
How does the neuron establish the resting membrane potential?
To answer this question, let us consider how solutes (i.e., ions) behave in a solution. We begin by recalling the behaviour of solutes: they will "run down" their chemical gradient. This is a formal way of saying, "solutes will diffuse from areas of high concentration to areas of low concentration to achieve equilibrium."
To cross a membrane, one of two things must be true:
This is because of the biological property that membranes are semipermeable.
Note: electrostatic forces also play a role in the distribution of ions. Recall the principle that opposites attract. This means ions will run down their electrical gradient to equalise charge. So, what we are really saying is that ions run down their electrochemical gradient, which reflects the distribution of both concentration AND charge.
Having said all these facts, we can now explain how neurons are electrically active. They are so because a constant voltage difference across their membrane (the membrane potential, Vm) is maintained and can be rapidly changed in response to stimuli. This potential is created by the unequal distribution of ions—mainly Na+, K+, and Cl–—across the plasma membrane, the selective permeability of ion channels, and the continuous action of the sodium–potassium pump.
The sodium–potassium pump is fundamental: it uses ATP to move 3 Na+ ions out of the neuron and 2 K+ ions into the neuron with each cycle. This action not only maintains the steep concentration gradients for sodium and potassium but also directly contributes to the inside of the cell being more negative, since more positive charge leaves than enters.
At rest, the inside of the neuron is negative relative to the extracellular space. However, the resting membrane is not equally permeable to all ions. Instead, it is much more permeable to K+, because of the abundance of open K+ “leak” channels. As a result, the resting membrane potential lies much closer to the equilibrium potential for K+ (≈ −90 mV) than to that of Na+ (+60 mV). The sodium–potassium pump and the small sodium influx through resting channels stabilize this potential near −65 mV.
When ion channels open, ions move according to their electrochemical gradients (the combination of concentration and electrical forces). Potassium, being more concentrated inside the neuron, tends to leave the cell through K+ channels; this efflux is opposed by the negative interior, so net K+ movement is outward but moderated by electrical pull. Sodium, which is more concentrated outside, rushes into the cell through Na+ channels because both its concentration gradient and the negative interior drive it inward. For Cl–, the concentration gradient often favors inward movement while the electrical gradient (negative inside) favors outward movement; its equilibrium potential (ECl) usually lies near the resting potential, so opening Cl– channels tends to stabilize or slightly hyperpolarize the membrane.
Each ion has its own equilibrium potential (Eion), the voltage at which the outward and inward forces on that ion are balanced. For K+, this is around −90 mV; for Na+, about +60 mV; and for Cl–, near −65 mV in many neurons. The actual resting membrane potential represents a weighted average of these equilibrium potentials, determined by the relative permeability of the membrane to each ion, as described by the Goldman–Hodgkin–Katz equation.
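For reference, a standard form of the Goldman–Hodgkin–Katz (voltage) equation mentioned above can be written for the three main permeant ions (the chloride concentrations appear inverted because of its negative charge):
$$ V_m \;=\; \frac{RT}{F}\,\ln\!\left(\frac{P_{K}[K^+]_{\text{out}} + P_{Na}[Na^+]_{\text{out}} + P_{Cl}[Cl^-]_{\text{in}}}{P_{K}[K^+]_{\text{in}} + P_{Na}[Na^+]_{\text{in}} + P_{Cl}[Cl^-]_{\text{out}}}\right) $$Because \(P_{K}\) dominates at rest, \(V_m\) sits near \(E_{K}\); if \(P_{Na}\) dominated instead (as at the peak of an action potential), \(V_m\) would swing toward \(E_{Na}\).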
Importantly, because the resting potential lies much closer to EK than to ENa, there is a large “stored” driving force for Na+ entry. When voltage-gated Na+ channels open, Na+ rushes inward, causing the rapid depolarization that defines an action potential. Action potentials propagate along axons and enable neurons to communicate with one another and with target tissues (e.g., muscles). Thus, the interplay of ion pumps, leak channels, selective permeability, and voltage-gated ion channels explains how neurons are electrically active and why dynamic control of the membrane potential underlies excitability and the brain’s ability to process and transmit information.
Equation cue: Ions drive the membrane toward their own equilibrium potentials.
$$ I_{\text{ion}} \;=\; g_{\text{ion}}\bigl(V_m - E_{\text{ion}}\bigr) $$where \(g_{\text{ion}}\) is the ion’s conductance (proportional to the number of open channels), \(V_m\) is the membrane potential, and \(E_{\text{ion}}\) is the Nernst (equilibrium) potential for that ion.
Convention note: some texts write \(I_{\text{ion}} = g_{\text{ion}}(E_{\text{ion}} - V_m)\). Both are equivalent as long as you keep a consistent sign convention for inward/outward current.
| Ion | Typical Distribution | Net Electrochemical Drive (at ~–65 mV) | Eion (≈, mV) | Effect when Channel Opens |
|---|---|---|---|---|
| Na+ | High outside → low inside | Strongly inward (both chemical and electrical) | ≈ +60 | Depolarizes (Vm moves toward +60 mV) |
| K+ | High inside → low outside | Outward (chemical out, electrical in; chemical dominates at rest) | ≈ −90 | Hyperpolarizes/stabilizes (Vm moves toward −90 mV) |
| Cl− | Higher outside than inside (mature neurons) | Chemical inward, electrical outward (often balance near rest) | ≈ −65 (varies by cell/type) | Stabilizes or slight hyperpolarization (tracks Vm) |
Notes: Values are typical for mammalian neurons at ~37 °C; exact numbers vary with ionic concentrations and temperature. Eion is from the Nernst equation; Vm is set by the weighted contributions of permeant ions (Goldman–Hodgkin–Katz).
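As a sanity check on the table above, here is a minimal Python sketch that computes each Nernst potential and a GHK estimate of the resting potential; the concentrations and relative permeabilities used are illustrative textbook-style assumptions, not measurements:

```python
import math

RT_F = 26.7  # RT/F in mV at ~37 °C

# Illustrative mammalian-style concentrations in mM (assumed textbook values)
conc = {
    "K":  {"out": 5.0,   "in": 140.0, "z": +1},
    "Na": {"out": 145.0, "in": 15.0,  "z": +1},
    "Cl": {"out": 110.0, "in": 10.0,  "z": -1},
}

def nernst(ion):
    """Nernst equilibrium potential (mV) for a single ion species."""
    c = conc[ion]
    return (RT_F / c["z"]) * math.log(c["out"] / c["in"])

# Relative resting permeabilities (K+ dominates); the ratios are assumptions
perm = {"K": 1.0, "Na": 0.05, "Cl": 0.45}

def ghk_vm():
    """Resting potential (mV) from the GHK voltage equation.
    Cl- terms are swapped (intracellular in the numerator) because its charge is -1."""
    num = perm["K"] * conc["K"]["out"] + perm["Na"] * conc["Na"]["out"] + perm["Cl"] * conc["Cl"]["in"]
    den = perm["K"] * conc["K"]["in"] + perm["Na"] * conc["Na"]["in"] + perm["Cl"] * conc["Cl"]["out"]
    return RT_F * math.log(num / den)

for ion in conc:
    print(f"E_{ion} ≈ {nernst(ion):6.1f} mV")   # ≈ -89 (K), +61 (Na), -64 (Cl)
print(f"GHK resting Vm ≈ {ghk_vm():6.1f} mV")   # ≈ -65 mV
```

The printed values land close to the table's entries, which is the point: the resting potential is a permeability-weighted compromise dominated by K+.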
Here's a question:
What are ways a neuron could change its membrane potential?
Well, we talked about that in the previous section! A neuron's membrane potential changes whenever the movement of ions across the membrane is altered. This can happen in several ways:
Definition. An action potential (AP) is a highly stereotyped wave of depolarisation that travels down the axon. It is the electrical signal that propagates down the axon and triggers communication across a synapse.
APs are regenerated along the axon—each adjacent section is depolarised and a new AP occurs. The AP travels unidirectionally because of the refractory state of the membrane after a depolarisation event. It starts at the axon hillock, when it is depolarised to threshold.
Definition. The refractory state of an action potential refers to the period after an AP during which a cell is unable to generate a new AP, or requires a stronger stimulus to do so. It consists of two phases:
We see that sodium channels will inactivate briefly after opening. So, only segments of the axon that have not recently been depolarised can generate an action potential. Hence, unidirectional movement is induced.
APs are an all-or-none phenomenon—a neuron either fires at full, uniform strength or it does not fire at all. The intensity of the stimulus does not change the amplitude, speed, or duration of the AP within a given neuron, but it can increase the frequency at which the neuron fires. APs are also regenerated along the axon, where each adjacent section is depolarised and a new action potential occurs. APs travel unidirectionally (see above).
Normally, the neuron's resting potential is negative. This is on account of the plasma membrane being primarily permeable to $K^+$ at rest (i.e., at rest, $K^+$ can leave the cell). During an AP, the membrane potential briefly becomes positive because the membrane briefly becomes very permeable to $Na^+$ (i.e., $Na^+$ can enter the cell).
Myelination makes the AP faster!
Definition. The conduction velocity is the speed of propagation of APs.
Definition. Saltatory conduction is the "jumping" of the AP from one node of Ranvier to the next.
What happens when the AP arrives at the synapse?
To answer this question, we must define the two types of synapses.
Definition. An electrical synapse is a gap junction which connects two cells physically (i.e., directly). It enables fast communication between neurons, but is in the minority of synapses in the nervous system. Since it establishes a direct link, depolarisation of the presynaptic neuron always results in depolarisation of the postsynaptic neuron.
Otto Loewi discovered chemical neurotransmission in 1921. He did so in the following steps:
Loewi found at step 3 that something happened: the second heart slowed down, showing that something had dissolved into the fluid: a chemical messenger later found to be acetylcholine.
The majority of the synapses of the nervous system are chemical: an electrical signal, the AP, is converted to a chemical one, the release of a neurotransmitter. These chemical synapses can either be excitatory (depolarising) or inhibitory (hyperpolarising).
Synaptic transmission can be seen as a series of steps:
Synaptic potentials are a type of graded potential that varies in amplitude and duration, as opposed to APs which do NOT.
The sum of potentials at the cell body determines whether an action potential will be fired.
When neurotransmitters bind to receptors on a postsynaptic neuron, they produce small voltage changes in the membrane called postsynaptic potentials (PSPs). These can be:
Whether a postsynaptic potential is excitatory (EPSP) or inhibitory (IPSP) depends on the type of neurotransmitter and, crucially, the specific postsynaptic receptor it binds to. This binding process causes ion channels to open or close—the type of ion that can pass through the channel determines if the resulting potential change depolarises (EPSP) or hyperpolarises (IPSP) the cell.
A single PSP is oftentimes too small to trigger an action potential. So, we must take a summation of many PSPs over space and time. There are many factors which affect that summation:
Essentially, the goal is this: if the net depolarisation at the axon hillock reaches threshold (which is approximately $-55$ mV), then an action potential fires.
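As a toy illustration of this summation rule, the following sketch adds the PSPs that arrive within one integration window and checks whether the result reaches threshold; the PSP amplitudes are made-up illustrative values:

```python
# Toy model of summation at the axon hillock; all numbers are illustrative assumptions.
RESTING_MV = -65.0
THRESHOLD_MV = -55.0

def fires(psps_mv):
    """psps_mv: PSP amplitudes (mV) arriving close enough in time/space to summate.
    EPSPs are positive, IPSPs negative. Returns (spike?, summed potential)."""
    net = RESTING_MV + sum(psps_mv)
    return net >= THRESHOLD_MV, net

print(fires([4.0, 4.0]))              # (False, -57.0): two EPSPs fall short
print(fires([4.0, 4.0, 4.0]))         # (True, -53.0): a third EPSP crosses threshold
print(fires([4.0, 4.0, 4.0, -5.0]))   # (False, -58.0): an IPSP cancels the gain
```

Real dendritic integration is richer than this (PSPs decay with distance and time), but the threshold comparison at the hillock is the key decision step.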
| Property | Action Potential | Synaptic Potential |
|---|---|---|
| Amplitude & Duration | All-or-none; always the same amplitude and duration | Graded; vary in amplitude and duration |
| Location | Axons | Dendrites and soma |
| Polarity | Always depolarizing (briefly) | Can be depolarizing (excitatory) or hyperpolarizing (inhibitory) |
| Propagation | Regenerates; self-propagating | Does not regenerate; signal weakens as it spreads |
What determines the response of a postsynaptic cell to neurotransmitter release? The answer lies in two factors: (1) the neurotransmitter released, and (2) the receptor type present on the postsynaptic membrane. In general, any given synapse releases only one neurotransmitter, but the postsynaptic response can differ dramatically depending on which receptor family receives that signal. Thus, neurotransmission is not a “one size fits all” system: the same transmitter can cause excitation in one cell type and inhibition in another.
Otto Loewi’s classic experiment (1921) revealed that nerve signals can be transmitted chemically. Stimulating the vagus nerve of a frog’s heart slowed its contractions. When the surrounding solution was transferred to a second heart, that heart also slowed, despite lacking nerve input. This showed that the vagus nerve released a chemical messenger—later identified as acetylcholine (ACh)—into the fluid, establishing the principle of chemical neurotransmission.
The NMJ was the first synapse to be deeply studied. At the NMJ, motor neurons release acetylcholine (ACh) onto skeletal muscle fibers. Here, ACh binding produces depolarization (excitatory) and ultimately muscle contraction. This contrasted with the vagus nerve’s use of ACh on cardiac muscle, where the effect is inhibitory. These observations illustrate a critical principle: the effect of a neurotransmitter depends on the receptor subtype it activates.
In both cases the neurotransmitter is ACh, but the postsynaptic receptors differ, producing opposite responses.
Neurotransmitters act by binding to specialized receptors. These receptors control ion flow across the membrane in two main ways:
The same neurotransmitter can produce very different effects depending on the receptor type it activates. This versatility underlies the richness of neural signaling. For instance, ACh can either excite skeletal muscle through ionotropic nAChRs or inhibit cardiac muscle through metabotropic mAChRs. The diversity of receptor families allows a limited set of neurotransmitters to generate a wide variety of physiological responses.
| Property | Ionotropic Receptors | Metabotropic Receptors |
|---|---|---|
| Mechanism | Receptor is itself a ligand-gated ion channel | Receptor activates G proteins, which influence channels indirectly |
| Speed | Fast (milliseconds) | Slower (hundreds of ms to seconds) |
| Duration | Short-lived | Long-lasting |
| Example | Nicotinic ACh receptor at NMJ (Na+ influx, muscle contraction) | Muscarinic ACh receptor in heart (K+ efflux, slows heart rate) |
So far, we have focused on how neurons generate and propagate electrical signals and how those signals cross the synapse. We now turn to the “chemical vocabulary” of the nervous system: neurotransmitters. Different neurons speak in slightly different chemical dialects, releasing particular transmitter molecules that act on specific receptors. These combinations of transmitter and receptor endow each neural circuit with its own characteristic mode of signaling, time-course, and function.
Definition. A neurotransmitter is a chemical messenger released by a presynaptic neuron at the synapse that binds to specific receptors on a postsynaptic cell (neuron, muscle fiber, or gland) and changes that cell’s membrane potential or intracellular signaling state.
Classically, a substance is considered a neurotransmitter if it satisfies several criteria:
Not all signaling molecules in the nervous system are classical neurotransmitters. Some, such as hormones and neuromodulators, act at longer distances or over longer time scales. Others, like gaseous transmitters, diffuse freely rather than being packaged into vesicles. Nonetheless, the core idea remains: chemical signals allow neurons to influence one another’s electrical activity and gene expression.
Neurotransmitters can be grouped into several broad families based on their chemistry and synthesis:
Each family has its own style of signaling. Small molecules typically mediate fast, point-to-point synaptic communication; peptides and atypical transmitters tend to modulate activity over wider regions and longer time scales.
One useful way to think about neurotransmitters is to distinguish between those that are nearly ubiquitous—used everywhere in the brain—and those that are more specialized in specific circuits.
Workhorse transmitters are used by enormous numbers of synapses and form the basic excitatory–inhibitory backbone of neural signaling:
Boutique transmitters are associated with particular neural pathways and psychological functions:
The “workhorses” are involved in most synaptic computations in the cortex, hippocampus, and cerebellum, mediating fast EPSPs and IPSPs. The “boutique” transmitters ride on top of this background activity, tuning the overall responsiveness and mode of processing of entire circuits (e.g., “alert and vigilant” vs. “calm and sleepy”).
Glutamate is the principal excitatory neurotransmitter in the CNS. When glutamate binds to its postsynaptic receptors, it typically allows cations (Na+, sometimes Ca2+) to enter the cell, producing EPSPs and bringing the membrane potential closer to threshold.
Major glutamate receptor types include:
GABA (γ-aminobutyric acid) is the primary inhibitory transmitter in the adult brain. Activation of GABA receptors usually leads to Cl− influx or K+ efflux, generating IPSPs and making neurons less likely to fire an action potential.
Major GABA receptor types include:
Many psychoactive drugs (e.g., benzodiazepines, barbiturates, alcohol) act in part by enhancing GABAA-mediated inhibition, tilting the balance of excitation and inhibition toward less overall neural activity.
Acetylcholine (ACh) was the first neurotransmitter to be discovered and remains one of the most important. In the peripheral nervous system, ACh is the transmitter at the neuromuscular junction (causing skeletal muscle contraction) and at many autonomic synapses. In the brain, ACh is produced by several key groups of neurons whose axons project widely.
Major cholinergic pathways include:
Examples of ACh function in the brain.
Degeneration of basal forebrain cholinergic neurons is a hallmark of Alzheimer’s disease, and many current treatments for Alzheimer’s attempt to enhance cholinergic signaling (e.g., by inhibiting acetylcholinesterase).
Dopamine (DA) is a monoamine transmitter with several discrete pathways in the brain. These pathways support diverse functions including motor control, motivation, and reinforcement learning.
The mesostriatal (or nigrostriatal) DA pathway originates in the substantia nigra of the midbrain and projects primarily to the striatum (caudate nucleus and putamen).
Functions.
Loss of dopaminergic neurons in the substantia nigra → striatum pathway leads to Parkinson’s disease, a disorder characterized by bradykinesia (slowness of movement), resting tremor, rigidity, and postural instability. Treatments such as L-DOPA attempt to restore dopamine levels in this circuit.
The mesolimbocortical DA pathway originates in the ventral tegmental area (VTA) of the midbrain and projects to:
Functions.
Abnormalities in dopaminergic signaling in mesolimbocortical circuits have been linked to schizophrenia and other psychotic disorders. Many antipsychotic medications work by blocking or partially blocking D2-family DA receptors.
Norepinephrine (NE), also known as noradrenaline, is a monoamine transmitter used both in the peripheral sympathetic nervous system and in the brain.
Central noradrenergic neurons are concentrated primarily in the locus coeruleus (LC) in the pons, as well as in a few other brainstem nuclei. Axons from the LC project broadly throughout the cortex, hippocampus, cerebellum, and spinal cord.
Functions of NE in the brain.
In the body, NE is a key mediator of the fight-or-flight response, increasing heart rate, blood pressure, and redirecting blood flow to muscles.
Serotonin (5-hydroxytryptamine, 5-HT) is another monoamine transmitter with cell bodies primarily in the raphe nuclei of the brainstem. From there, serotonergic fibers project widely throughout the brain and spinal cord.
Functions of 5-HT.
Many antidepressant drugs, such as selective serotonin reuptake inhibitors (SSRIs; e.g., Prozac), act by increasing 5-HT levels in the synaptic cleft. By blocking the serotonin transporter, they prolong and enhance activation of postsynaptic 5-HT receptors.
Neuropeptides include a large and diverse set of signaling molecules:
Peptide transmitters are often co-released with a classical transmitter (e.g., glutamate, GABA) and act via metabotropic receptors to modulate circuit function rather than to drive fast EPSPs or IPSPs.
Every neurotransmitter must bind to a receptor to exert its effects. Receptors are typically named according to:
In many cases, a receptor subtype is named after a synthetic drug that selectively activates it. For instance:
The table below summarizes several important transmitter systems, their main receptor families, and broad functional roles.
| Transmitter | Representative Receptor Subtypes | Receptor Type | Selected Functions |
|---|---|---|---|
| Glutamate | AMPA, NMDA, kainate; mGluR1–8 | AMPA/NMDA/kainate: ionotropic; mGluRs: metabotropic | Main fast excitation; synaptic plasticity (especially via NMDA receptors); learning and memory. |
| GABA | GABAA, GABAB, GABAC (also called GABAA-ρ) | GABAA/GABAC: ionotropic; GABAB: metabotropic | Main inhibition in the brain; controls excitability; prevents runaway excitation and seizures. |
| Acetylcholine | Nicotinic (nAChR); muscarinic (M1–5) | nAChR: ionotropic; muscarinic: metabotropic | Neuromuscular junction (muscle contraction); cortical arousal; attention and memory; autonomic function. |
| Dopamine (DA) | D1, D2, D3, D4, D5 | All metabotropic | Motor control (basal ganglia); reward and reinforcement; motivation; aspects of cognition and psychosis. |
| Norepinephrine (NE) | α1, α2, β1, β2, β3 | All metabotropic | Sympathetic “fight-or-flight” responses; arousal and vigilance; mood; modulation of attention. |
| Serotonin (5-HT) | 5-HT1–5-HT7 families (multiple subtypes) | Most metabotropic; 5-HT3 is ionotropic | Regulation of mood, anxiety, sleep, appetite, and many cognitive and autonomic functions. |
| Peptide transmitters | Opioid receptors (μ, κ, δ); receptors for CCK, NPY, substance P, etc. | All metabotropic | Pain control; stress responses; feeding; social and emotional behaviors, depending on peptide and site. |
Note: Each transmitter family contains multiple receptor subtypes with distinct distributions and functions. This diversity allows a relatively small set of chemical messengers to support a vast repertoire of neural computations and behaviors.
Finally, because neurotransmitters act through specific receptors and transporters, they provide numerous targets for drugs. Neuropharmacology is the study of how drugs affect the nervous system and behavior. At synapses, drugs can:
Because the same transmitter can have very different effects in different brain regions and receptor contexts, drug actions are both powerful and complex. Understanding the underlying transmitter systems and their receptor families is therefore essential for making sense of how psychoactive drugs—from therapeutic medications to drugs of abuse—alter brain function and behavior.
We have seen that neurons use a rich chemical vocabulary—neurotransmitters—to communicate. These transmitters act through specific receptors to excite, inhibit, or modulate their targets. Neuropharmacology asks a closely related question:
How can exogenous chemicals—drugs—alter these transmitter systems, and what can that tell us about brain function and dysfunction?
Recall that each neuron usually releases one, or mostly one, primary neurotransmitter from all of its axon terminals. This “chemical identity” shapes how it participates in a circuit.
We distinguished between:
Workhorse transmitters carry the bulk of fast synaptic traffic, whereas boutique transmitters act more like “settings” or “modes” for large-scale circuits—tuning arousal, mood, reward, and motivation.
Acetylcholine (ACh) neurons form several key pathways in the brain:
Examples of ACh function in the brain.
Dopamine (DA) neurons are clustered in midbrain nuclei and send their axons to specific targets, forming distinct pathways.
Mesostriatal (nigrostriatal) pathway.
Loss of dopaminergic neurons in the substantia nigra → striatum pathway leads to Parkinson’s disease, characterized by:
Mesolimbocortical pathway.
Functions of DA in this pathway.
Abnormal dopaminergic signaling in this pathway is associated with schizophrenia and related psychotic disorders. Many antipsychotics act as antagonists at dopamine receptors, especially of the D2 family.
Norepinephrine (NE), also called noradrenaline, is produced by neurons in the locus coeruleus and a few other brainstem nuclei. Their fibers project broadly throughout the brain.
Functions of NE.
Serotonin (5-HT) cell bodies are mainly found in the raphe nuclei of the brainstem. Serotonergic fibers project widely to cortex, limbic system, and spinal cord.
Functions of 5-HT.
Antidepressants such as Prozac are selective serotonin reuptake inhibitors (SSRIs). They increase 5-HT in the synaptic cleft by blocking its reuptake, thereby enhancing postsynaptic 5-HT receptor activity.
To understand how drugs act, we must first understand the receptors they bind. For any given receptor, we ask:
Many receptor subtypes are named after synthetic drugs that selectively activate them:
Neuropharmacology is powerful because it provides tools and treatments:
Definition. In neuropharmacology, a drug is an exogenous substance (coming from outside the body) that has biological activity in the nervous system and is not a normal food or nutrient.
Drugs are usually specific chemicals with definable molecular targets, such as:
Drugs differ in their modes of activity:
Definition. A ligand is any substance that binds to a receptor, whether endogenous (e.g., a neurotransmitter) or exogenous (e.g., a drug or toxin).
We can classify ligands by how they influence receptor activity:
More refined categories (useful later) include:
To understand how drugs block receptors, we consider where and how they bind.
Competitive antagonists.
Noncompetitive antagonists.
In all cases, the endogenous ligand is the “reference” that tells us what the receptor is normally supposed to do. Drugs then mimic, enhance, or interfere with this natural signaling.
How do we quantify how strongly a drug acts? A central tool is the dose–response curve.
Definition. A dose–response curve (DRC) plots the magnitude or percentage of response as a function of the drug dose.
Typically:
ED50 (effective dose 50) is the dose at which 50% of subjects show a defined response (or where the effect reaches 50% of its maximum in a given preparation). It is a standard measure of potency: lower ED50 means greater potency.
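The following is a minimal Python sketch of one common way to model such a curve, the Hill equation; the ED50 value and Hill slope are hypothetical placeholders chosen only to illustrate how potency is read off the curve:

```python
# Hypothetical dose-response sketch using the Hill equation.
# The ED50 and Hill coefficient below are made-up illustrative values,
# not data for any real drug.

def hill_response(dose_mg, ed50_mg, hill_n=1.0, e_max=100.0):
    """Percent of the maximal effect produced at a given dose (Hill equation)."""
    return e_max * dose_mg**hill_n / (ed50_mg**hill_n + dose_mg**hill_n)

ED50_MG = 10.0   # assumed dose producing 50% of the maximal effect (a potency measure)

for dose in (1, 10, 100):
    print(f"{dose:>4} mg -> {hill_response(dose, ED50_MG):5.1f}% of max effect")
# 1 mg -> 9.1%, 10 mg -> 50.0%, 100 mg -> 90.9% (sigmoidal on a log-dose axis)
```

A drug with a lower ED50 would shift this curve to the left (greater potency); a steeper Hill slope would make the transition from little effect to maximal effect more abrupt.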
Two key features of a DRC:
Every drug has two faces: benefit and risk. We care about the doses that produce the desired therapeutic effects and the doses that cause toxicity or death.
Definitions.
Therapeutic index is a measure of safety—it reflects the separation between the effective dose range and the toxic/lethal dose range. Intuitively:
A wide gap is good; a narrow gap is dangerous.
Conceptually:
Over time, the same dose of a drug may produce less effect. This phenomenon is called tolerance.
Definition. Drug tolerance is a decrease in the effect of a drug following repeated administration, such that higher doses are required to achieve the same response.
Types of tolerance:
Recall that the effect of a neurotransmitter or drug depends on how many receptors are present and how strongly they signal. Neurons can adapt by changing receptor expression:
These changes shift the dose–response curve over time. A dose that was once effective may become subtherapeutic (tolerance), or a dose that was once safe may become toxic if metabolic capacity is lost.
Putting it all together:
In the subsequent steps of our journey, we will apply these principles to specific drug classes—how they act on particular neurotransmitter systems, what they reveal about brain function, and how they are used (and misused) in clinical and everyday settings.
In the previous section, we built the conceptual toolkit of neuropharmacology: we defined neurotransmitters, receptors, agonists and antagonists, dose–response curves, therapeutic index, and tolerance. We saw that drugs can mimic, enhance, or block the actions of endogenous ligands, and that repeated exposure can drive the nervous system to adapt.
We now put those concepts to work by examining specific classes of psychoactive drugs—substances that alter mood, perception, cognition, and behavior by targeting particular neurotransmitter systems. As we do so, keep asking:
Which transmitter system is being targeted? Which receptors? Is this drug acting as an agonist, antagonist, or something more subtle?
Psychoactive drugs can be grouped (roughly) by their primary psychological effects and main targets:
In every case, there is a tension between therapeutic potential and risk (addiction, toxicity, and long-term harm). Neuropharmacology aims to understand both sides.
Opiates are drugs derived from the opium poppy; opioids more broadly include natural, semi-synthetic, and fully synthetic drugs that act on the same receptors as endogenous opioid peptides (e.g., endorphins, enkephalins).
Examples.
Opioids are generally agonists at opioid receptors (μ, κ, δ), which are metabotropic (GPCRs) widely expressed in the brain and spinal cord. Activation of these receptors:
Clinically, opioids are used as:
However, they also cause:
Tolerance to opioids can develop rapidly. A particularly dangerous scenario is when a person stops using (e.g., during treatment), loses tolerance, and then relapses at their previous high dose—greatly increasing the risk of fatal overdose.
In many countries, especially the United States, there has been a major opioid epidemic:
A key life-saving tool is naloxone (Narcan), a competitive opioid receptor antagonist. It rapidly displaces opioids from μ-receptors and can reverse opioid-induced respiratory depression if given in time. Making naloxone widely available (e.g., nasal sprays) is an important harm-reduction strategy.
Stimulant drugs increase wakefulness, alertness, and often produce a sense of energy and confidence. They act by increasing activity in excitatory or modulatory systems, particularly acetylcholine, glutamate, and monoamines (dopamine, norepinephrine).
Common examples:
All stimulants can produce tolerance and sometimes dependence; some (particularly cocaine and amphetamines) have high addiction potential and serious health risks.
Different stimulants act at different targets:
Several drugs in the amphetamine family (or functionally similar stimulants) are prescribed for attention-deficit/hyperactivity disorder (ADHD) and certain sleep disorders (e.g., narcolepsy). Examples include:
These medications, when used under medical supervision, can improve attention and impulse control by modulating catecholamine signaling in prefrontal and striatal circuits.
However, they are also frequently used without prescription for “academic performance” or recreational purposes, which increases the risk of cardiovascular side effects, dependence, and other harms.
Short-term effects of amphetamines may include:
Long-term or high-dose use, however, can be severely damaging:
Chronic methamphetamine use is notoriously associated with marked physical and psychological deterioration—a vivid reminder of how powerful and destructive monoamine dysregulation can be.
Hallucinogens, often called psychedelics, are drugs that profoundly alter perception, emotion, and sometimes the sense of self. They can produce:
They act on multiple systems, but a major theme is modulation of serotonin (5-HT) receptors (especially 5-HT2A) in cortex. Others affect glutamate, acetylcholine, and norepinephrine as well.
Many hallucinogens are abused (used non-medically or illicitly), but most are not strongly addictive in the sense of causing compulsive daily use with severe withdrawal. Their risks arise more from acute psychological distress, unsafe behavior, and (for some agents) potential neurotoxicity.
Lysergic acid diethylamide (LSD) is a semi-synthetic compound originally derived from ergot (a fungus that infects grains). A key figure in its discovery and early exploration, the chemist Albert Hofmann, is often called the “father of LSD.”
LSD acts primarily as an agonist (or partial agonist) at several serotonin receptors, especially 5-HT2A, which are dense in visual cortex and higher-order association areas. It also has actions at other monoamine receptors.
Subjective effects frequently include:
These experiences are strongly shaped by set (mindset, expectations) and setting (environment and social context).
Recent research has revisited classic and “newer” psychedelics as potential treatments for certain psychiatric conditions, always in carefully controlled clinical settings with psychological support. Some of the leading candidates include:
| Drug | Primary Action in Brain | Typical Subjective Effects (Recreational) | Possible Clinical Applications (Under Study) |
|---|---|---|---|
| Psilocybin / psilocin (“magic mushrooms”) | Partial agonist at 5-HT receptors, especially 5-HT2A; alters activity in frontal and occipital cortex. | Intense visual phenomena, changes in time perception, feelings of transcendence or “mystical” experiences; effects strongly depend on set and setting. | Promising results for treatment-resistant depression, anxiety in serious illness, obsessive–compulsive symptoms, and cluster headache when administered in structured therapeutic contexts. |
| LSD (lysergic acid diethylamide) | Agonist at several monoamine receptors, notably 5-HT and DA; increases activity in frontal, cingulate, and visual cortices. | Pronounced perceptual changes, vivid colors and patterns, novel imagery, altered sense of self and time. | Being explored as an adjunct to psychotherapy for alcohol and other substance use disorders, and for some anxiety conditions. |
| Ketamine | Blocks NMDA-type glutamate receptors; also interacts with opioid and cholinergic systems; produces dissociation. | “Detached” or dreamlike state; distortions of body perception and environment; at higher doses, full anesthesia. | Low doses can produce rapid antidepressant effects in some patients who do not respond to conventional treatments; used in tightly monitored clinical settings. |
| MDMA (“Ecstasy”) | Increases release of serotonin, dopamine, and norepinephrine; enhances oxytocin signaling; acts as a potent monoamine releaser. | Heightened sensory experiences, empathy, strong prosocial feelings, and euphoria. | Studied as an adjunct to psychotherapy for post-traumatic stress disorder (PTSD); safety concerns (e.g., overheating, neurotoxicity) require careful control. |
Important: these potential clinical uses involve carefully controlled doses, medical screening, and structured psychotherapy. This is very different from unsupervised recreational use, which can be dangerous.
Across opioids, stimulants, and hallucinogens, the same core principles reappear:
Neuropharmacology, then, is not just about how drugs change the brain; it is also about how the brain changes in response to drugs—and how we can harness these changes safely and ethically in the service of treating disease.
In the previous section, we explored major classes of psychoactive drugs—opiates, stimulants, and hallucinogens—and began to connect their subjective effects to their molecular actions on neurotransmitter systems. We also saw that some of these compounds, especially psychedelics, may have carefully controlled clinical uses despite their history as recreational substances.
In this final section on neuropharmacology, we will:
Hallucinogens, or psychedelics, are drugs that create profound perceptual distortions—visual, auditory, and somatosensory—and shifts in cognition and emotion. As noted earlier:
Key examples include:
Recent research suggests that, when used in a structured therapeutic context (screened patients, controlled dosing, psychotherapy support), these compounds may:
Important: this promising work does not mean unsupervised use is safe. The clinical setting is tightly controlled, unlike recreational contexts.
Beyond recreational substances, many psychoactive drugs are prescribed daily for psychiatric and neurological conditions. Here we consider four broad categories:
Antipsychotic drugs are among the most frequently prescribed classes of medications in psychiatry. They are primarily used to treat psychotic disorders such as schizophrenia and schizoaffective disorder, and sometimes bipolar disorder with psychotic features.
Two broad groups are often distinguished:
Antidepressants are widely prescribed for major depressive disorder, anxiety disorders, and related conditions. They primarily modulate monoamine neurotransmission (5-HT, NE, DA).
Major classes include:
Anesthetics are drugs that produce unconsciousness (general anesthesia) or localized loss of sensation (local anesthesia).
Local anesthetics:
General anesthetics:
Depressants are drugs that depress action potential firing and neural activity (the term “depressant” refers to neural activity, not necessarily mood). They are often used as:
Most act as agonists (or positive allosteric modulators) at inhibitory receptors:
Examples include:
Marijuana is derived from Cannabis sativa. Its best-known psychoactive component is Δ9-tetrahydrocannabinol (THC), but the plant contains many other biologically active compounds, including cannabinol (CBN), cannabidiol (CBD), and tetrahydrocannabivarin.
Cannabis produces a mixture of effects:
The brain produces its own cannabis-like molecules, called endocannabinoids.
Endocannabinoids are:
There are two major cannabinoid receptor types, both G protein–coupled (metabotropic):
Clinically, cannabinoids (plant-derived or synthetic) may be used for:
As with all psychoactive drugs, benefits must be weighed against risks (e.g., cognitive effects, dependence in some individuals, and possible interactions with other conditions or medications).
Not all psychoactive drugs are equally addictive. A critical organizing idea is:
Drugs that strongly activate the brain’s reward circuits have the highest potential for addiction.
Examples with high addiction liability:
Up to this point, we have focused on the adult nervous system—how neurons are built, how they communicate, and how drugs modulate their activity. But those mature circuits do not simply appear fully formed. They are the end result of a long, tightly orchestrated sequence of developmental events that begins with a single cell.
Neural development asks:
How do we get from a fertilized egg to a complex brain with billions of neurons and trillions of synapses?
Development begins at the moment of fertilization:
Early in development, a key process called gastrulation reorganizes the embryo and establishes three primary lineages known as the germ layers:
It may feel surprising that the brain—buried inside the skull—is derived from the outermost germ layer, the ectoderm. Most ectodermal cells become “outside” structures such as skin. Neural tissue arises because part of this ectodermal sheet folds inward into the interior of the embryo.
The sequence looks like this (simplified):
The neural tube ultimately gives rise to the central nervous system (CNS)— brain and spinal cord. The cells at the crest of the folding ectoderm, the neural crest cells, migrate out and form much of the peripheral nervous system (PNS), including:
Development is polar: as the neural tube forms, it becomes spatially organized along its length. Different regions along the rostrocaudal (head-to-tail) axis will form major divisions of the CNS:
Signals along the neural tube (from adjacent tissues and internal patterning molecules) help establish where spinal cord, hindbrain, midbrain, and forebrain territories will form. This “rough map” is in place very early; the rest of development refines, populates, and connects these regions.
We can conceptually break neural development into six major stages. In reality, many of these overlap in time, but it is useful to treat them separately:
Definition. Neurogenesis is the production of new nerve cells (neurons) from dividing precursor cells.
Inside the neural tube is a region called the ventricular zone (VZ). Here:
Early in development, neurogenesis is extremely active; later, much of it slows or stops, and the brain relies more on refining existing circuits than adding entirely new neurons (with some regional exceptions).
Once neurons are born, they rarely stay where they were generated. In cell migration, neurons move away from the ventricular zone to form distinct layers and nuclei.
Key features:
In the cerebral cortex, this results in an “inside-out” layering: older neurons form deeper layers; younger neurons migrate further to form more superficial layers.
After migration, neurons must acquire their mature identity.
Location matters: the physical position of a cell within the nervous system exposes it to specific cues, which help determine what that cell will become. Once differentiated, neurons must then form functional connections—enter synaptogenesis.
Synaptogenesis is the process by which neurons extend processes and form synapses with their appropriate targets.
Two related processes occur:
At the tips of growing axons and dendrites are specialized structures called growth cones.
Target cells and intermediate guideposts release:
The general strategy is:
Use simple local rules (follow attractants, avoid repellents) to construct complex, large-scale wiring diagrams.
One of the striking features of neural development is that many more neurons are produced than will survive. A significant fraction of neurons undergo programmed cell death.
Definition. Apoptosis is an active, regulated form of cell death—sometimes called “cellular suicide.” It is not an accident; it is part of the normal developmental program.
Why kill off neurons? Because the nervous system adopts a “make more than you need, then keep the best” strategy:
A simplified model:
Apoptosis proceeds through characteristic stages:
Far from being purely destructive, this “sculpting by cell death” helps match the number of neurons to the size and needs of their target tissues.
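The “make more than you need, then keep the best” strategy can be made concrete with a toy simulation. The sketch below is purely conceptual and does not model any specific experiment: a limited pool of target-derived trophic factor is captured at random by developing neurons, and neurons that capture none undergo apoptosis. All numbers are illustrative assumptions.

```python
import random

def simulate_trophic_competition(n_neurons=100, trophic_units=60, seed=0):
    """Toy model: a limited pool of target-derived trophic factor is captured
    at random by developing neurons; neurons that capture none undergo
    apoptosis. All numbers are purely illustrative."""
    rng = random.Random(seed)
    captured = [0] * n_neurons
    for _ in range(trophic_units):               # each unit goes to a random neuron
        captured[rng.randrange(n_neurons)] += 1
    survivors = sum(1 for units in captured if units >= 1)
    return survivors, n_neurons - survivors

n = 100
survivors, lost = simulate_trophic_competition(n_neurons=n, trophic_units=60)
print(f"neurons generated: {n}, surviving: {survivors}, removed by apoptosis: {lost}")
```

Because there are fewer trophic units than neurons, a substantial fraction of cells is always left unsupported, which is the intuition behind matching neuron numbers to the size of their targets.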
Even after neuronal cell death, the nervous system is still over-connected. Many synapses are present that will not be maintained into adulthood. In synapse rearrangement, connections are refined:
This process is heavily influenced by neural activity and experience:
Pruning helps:
Thus, development follows a general blueprint:
Neural development is not a rigid assembly line but a dynamic conversation between genes, cells, and environment:
The end result is a brain whose structure reflects both a genetic plan and a history of experience. This developmental framework sets the stage for all of the physiology and neuropharmacology we have discussed—and for the remarkable capacity of the nervous system to adapt, learn, and change across the lifespan.
Just as neural development shapes where and how circuits are wired, circadian systems shape when those circuits are most active. The Earth rotates once every 24 hours, producing a regular light–dark cycle. Life evolved in this rhythmic environment, and many organisms—from bacteria to humans—have internal timing systems that anticipate these daily changes rather than merely reacting to them.
The field that studies these daily timing processes is chronobiology, and its core concept is the circadian rhythm.
The term circadian comes from Latin:
Definition. A circadian rhythm is any biological process that shows a roughly 24-hour cycle in the absence of external timing cues. These rhythms can be:
Humans show clear circadian rhythms: we tend to sleep at night, have peak alertness during the day, and daily oscillations in temperature and hormone levels—even when external cues are reduced.
In the lab, circadian rhythms are often visualized using an actogram.
Definition. An actogram is a plot of activity (e.g., wheel running) over time, typically:
When animals are kept on a normal 12 h light–12 h dark schedule (LD 12:12), actograms show activity aligned to the light–dark cycle (e.g., nocturnal rodents run mostly during the dark phase). When these external cues are removed (constant darkness, DD), rhythms often “drift,” revealing the properties of the internal clock.
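To make the actogram format concrete, the sketch below generates a synthetic activity record that free-runs with a period slightly longer than 24 hours and stacks each simulated day as one row, so the drift of activity onset across rows is visible. The period, sampling rate, and activity pattern are invented for illustration (a real record would come from wheel-running or movement sensors), and the plot requires matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "wheel-running" record that free-runs with a 24.5 h period
# (an illustrative value), sampled every 6 minutes for 14 days.
days, samples_per_day = 14, 240
period_h = 24.5
t_h = np.arange(days * samples_per_day) * (24.0 / samples_per_day)

# The animal is active during the second half of its *internal* cycle,
# so in constant darkness the activity bout starts ~0.5 h later each day.
phase = (t_h % period_h) / period_h
activity = np.where(phase > 0.5, 0.9, 0.0)

# Actogram: one horizontal row per day, earliest day at the top.
fig, ax = plt.subplots(figsize=(6, 6))
hours = np.arange(samples_per_day) * (24.0 / samples_per_day)
for d in range(days):
    row = activity[d * samples_per_day:(d + 1) * samples_per_day]
    ax.bar(hours, row, width=24.0 / samples_per_day, bottom=days - d - 1, color="black")
ax.set_xlabel("Time of day (h)")
ax.set_ylabel("Day (top row = day 1)")
ax.set_title("Simulated free-running actogram (constant darkness)")
plt.show()
```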
Circadian rhythms are not simply driven by the environment—they are generated by an internal endogenous clock.
Definition. A free-running rhythm is the rhythm expressed by an organism when it is isolated from time cues (e.g., kept in constant darkness). Under these conditions:
This shows that the internal clock runs “about” a day—circa diem—but needs the environment to stay perfectly aligned.
External cues that synchronize the internal clock to the outside world are called zeitgebers (“time-givers”). Common zeitgebers include:
Definition. Entrainment is the process by which the internal circadian clock is reset each day to match environmental zeitgebers. When the timing of the zeitgeber changes, the rhythm gradually shifts to realign.
A phase shift is the change in the timing of a circadian rhythm in response to a synchronizing cue (for example, when the onset of darkness is moved earlier or later).
When the light–dark schedule is abruptly shifted:
This is analogous to human experiences of jet lag or shift work: our internal clock must gradually re-entrain to a new light–dark schedule.
In mammals, the central circadian clock resides in the suprachiasmatic nucleus (SCN), a small bilateral structure in the hypothalamus, located just above the optic chiasm.
Key evidence:
Strikingly, transplant experiments demonstrate that the SCN not only generates rhythms but also determines their period.
Classic experiments in hamsters:
This shows that the SCN contains an intrinsic, transplantable clock that sets the period of the organism’s circadian rhythms.
How does light reset the clock in mammals? Light does not act only through rods and cones for vision; there is a dedicated pathway:
Definition. The retinohypothalamic pathway carries light information from the eye directly to the SCN.
In other vertebrates:
We can think of a circadian system as having three major components:
At the cellular level, the circadian clock in SCN neurons is built from a transcription–translation feedback loop involving specific clock genes and proteins.
In mammals, SCN cells produce at least two key proteins:
These proteins bind to each other to form a Clock/Bmal dimer. This dimer acts as a transcription factor to promote expression of other clock genes, including:
The Per and Cry proteins, once synthesized:
As a result:
This feedback loop takes about 24 hours to complete, generating a molecular oscillation that underlies the circadian rhythm.
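The core logic of the loop (an activator drives production of repressors, which accumulate, shut the activator off, and then decay, releasing the brake) can be caricatured with a delayed negative-feedback model of the Goodwin type. The sketch below is a schematic illustration, not a quantitative model of Per/Cry dynamics: the rate constants are arbitrary values chosen only so that the loop oscillates on a timescale of roughly a day, and the printed period depends entirely on those choices.

```python
import numpy as np

# Goodwin-style negative feedback loop: x ~ activator-driven mRNA,
# y ~ cytoplasmic clock protein, z ~ nuclear repressor (Per/Cry-like).
v, k, d, n = 0.75, 0.15, 0.15, 12   # production, coupling, decay (per hour), Hill coefficient
dt, hours = 0.01, 480               # Euler step (h) and total simulated time (h)

steps = int(hours / dt)
x, y, z = (np.zeros(steps) for _ in range(3))
x[0] = y[0] = z[0] = 0.5

for i in range(1, steps):
    x[i] = x[i-1] + dt * (v / (1 + z[i-1]**n) - d * x[i-1])  # repressed production
    y[i] = y[i-1] + dt * (k * x[i-1] - d * y[i-1])
    z[i] = z[i-1] + dt * (k * y[i-1] - d * z[i-1])

# Estimate the period from peak-to-peak spacing of the repressor level,
# skipping the initial transient.
t = np.arange(steps) * dt
peaks = [i for i in range(1, steps - 1)
         if z[i] > z[i-1] and z[i] > z[i+1] and t[i] > 100]
periods = np.diff(t[peaks])
print(f"mean period of the simulated feedback loop: {periods.mean():.1f} h")
```

In this toy model, faster degradation of the clock components shortens the cycle, which gives one intuition for how single-gene changes in clock proteins could alter the period of behavioral rhythms.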
Light can reset the phase of this molecular clock:
One of the most obvious circadian outputs in humans is the sleep–wake cycle.
When external cues are removed, humans show a free-running period of about 25 hours on average (with some individual variation and age dependence). This highlights the importance of daily entrainment by light and social schedules.
Human sleep patterns change dramatically with age.
Infancy. Newborns and infants:
REM sleep is characterized by:
The high amount of REM sleep in infancy may provide essential internal stimulation for the developing nervous system (when external sensory experience is limited).
Some infants are slow to entrain their sleep to the day–night cycle, leading to highly irregular sleep for the baby—and for the caregiver. Classic case studies highlight just how variable early circadian development can be.
Adulthood and aging. As we age:
During puberty, many individuals experience a noticeable shift in their circadian rhythms:
Unfortunately, many high schools require very early start times, forcing adolescents to wake up long before their internal clock says “morning.” This mismatch:
Studies of districts that have delayed school start times show:
Some individuals show extreme deviations from typical sleep timing that have a genetic basis.
These syndromes illustrate that our preferred sleep–wake timing is not purely a matter of “willpower”—underlying clock mechanisms and genetics play a major role.
Circadian rhythms demonstrate how deeply time is embedded in neurobiology:
Understanding circadian rhythms is therefore essential not only for basic neuroscience, but also for practical issues like shift work, jet lag, school schedules, and the treatment of sleep and mood disorders.
We have covered how neurons are built, how they communicate, and how their activity is timed across the day. The next question is: how does all of this circuitry allow us to sense the world? Sensory systems are the brain’s interfaces with the external (and internal) environment—they convert physical events into neural signals that the CNS can interpret and act upon.
In this section we will:
The body has specialized organs and receptor cells that detect particular kinds of stimuli. These stimuli can be:
Definition. Sensation is the detection of a stimulus by specialized receptor cells and its conversion into an electrical signal that the nervous system can use.
These receptor cells carry out sensory transduction—they translate physical events into graded electrical changes, and ultimately into action potentials in sensory neurons. The brain then integrates information across multiple sensory modalities (vision, touch, hearing, etc.) to construct a coherent representation of the world.
Sensory systems can be organized by the type of stimulus energy they detect. Examples include:
Regardless of the stimulus category, the brain faces the same problem:
How can a single “language” of action potentials represent so many different kinds of signals?
There are several recurring principles across sensory systems:
All sensory systems ultimately use the same basic currency—action potentials. An AP on a given axon looks essentially like an AP on any other axon. So how does the brain distinguish heat from a pinprick, or sound from light?
The answer lies in specialization of receptor neurons and their wiring.
This leads to a “labeled lines” organization:
Definition. In labeled line coding, each sensory neuron and its pathway carry information about one type of stimulus. The brain can infer the stimulus type simply from which line is active, even though the action potentials themselves look the same.
For example, if a particular set of touch receptors is active, the brain interprets the signal as “vibration on the fingertip,” not “flash of light,” because those axons originate in skin receptors and terminate in somatosensory pathways.
The skin is a rich example of a complex sensory organ. It contains multiple receptor types, each tuned to a particular form of mechanical or thermal stimulus (light touch, deep pressure, stretch, vibration, pain, temperature).
Sensory transduction begins when a stimulus causes a change in membrane potential in the receptor cell. In many somatosensory neurons, the receptor is a specialized ending of the neuron itself.
When a stimulus is applied:
Compare this to synaptic potentials:
The logic is similar—small, graded potentials are integrated, and if they are large enough, an “all-or-none” action potential is produced. The difference lies in the source of the graded potential: synaptic input vs. sensory receptor activation.
The Pacinian corpuscle (or lamellated corpuscle) is a mechanoreceptor in the skin specialized for detecting vibration and rapid changes in pressure.
At the molecular level:
If the receptor potential is large enough, it triggers action potentials in the associated sensory axon. Stronger vibration produces larger receptor potentials and thus higher frequencies of action potentials.
Action potentials are binary events—each one either occurs or it does not. So how does the nervous system encode the intensity of a stimulus (e.g., light brush vs. firm pressure)?
There are two main strategies:
Definition. Range fractionation is the strategy in which different cells in a sensory system have different thresholds, allowing the system as a whole to encode a wide range of stimulus intensities.
By reading out:
the CNS can infer both the quality and the intensity of the stimulus.
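Both ideas can be illustrated with a toy encoder: a single receptor maps intensity to firing rate up to a saturation point (a rate code), while a small population with staggered thresholds covers a wider range together (range fractionation). The thresholds, gains, and rates below are invented, illustrative numbers.

```python
def firing_rate(intensity, threshold, gain=2.0, max_rate=100.0):
    """Toy rate code: silent below threshold, then a linear rise in firing
    rate that saturates at max_rate (all numbers are illustrative)."""
    if intensity <= threshold:
        return 0.0
    return min(max_rate, gain * (intensity - threshold))

# Range fractionation: three receptors with staggered thresholds together
# cover a wider intensity range than any one of them alone.
population = {"low-threshold": 0, "mid-threshold": 40, "high-threshold": 80}

for stimulus in (10, 50, 90, 120):
    rates = {name: firing_rate(stimulus, thr) for name, thr in population.items()}
    print(f"intensity {stimulus:>3}: " +
          ", ".join(f"{name} {rate:5.1f} Hz" for name, rate in rates.items()))
```

Once the low-threshold receptor saturates, the higher-threshold cells still distinguish stronger stimuli, so the population as a whole encodes a much wider range than any single cell.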
Sensory systems are particularly good at detecting changes rather than steady-state conditions. When a new stimulus appears:
This decline in responsiveness during a constant stimulus is called sensory adaptation.
Definition. Sensory adaptation is the progressive decrease in receptor response (and thus firing rate) during a sustained stimulus.
Adaptation helps the nervous system:
Not all sensory neurons adapt to the same degree.
Adaptation is one mechanism by which the nervous system implicitly says, “this stimulus is no longer new—shift attention elsewhere.”
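A simple way to picture adaptation is a firing rate that jumps at stimulus onset and then decays exponentially toward a lower sustained level while the stimulus remains constant; "rapidly adapting" and "slowly adapting" receptors then differ mainly in the decay time constant. The sketch below uses arbitrary illustrative values.

```python
import math

def adapted_rate(t, peak=100.0, sustained=10.0, tau=0.5):
    """Firing rate (Hz) t seconds after stimulus onset: an initial burst at
    `peak` decays exponentially (time constant `tau`, in seconds) toward a
    lower `sustained` rate while the stimulus stays on. Values are illustrative."""
    return sustained + (peak - sustained) * math.exp(-t / tau)

times = (0.0, 0.5, 1.0, 2.0)
for label, tau in (("rapidly adapting (phasic)", 0.2), ("slowly adapting (tonic)", 2.0)):
    profile = ", ".join(f"{adapted_rate(t, tau=tau):5.1f} Hz at {t:.1f} s" for t in times)
    print(f"{label}: {profile}")
```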
In addition to adaptation, the nervous system can regulate sensory input by:
To know where a stimulus came from, the brain relies on the concept of receptive fields.
Definition. The receptive field of a somatosensory neuron is the specific region of the body surface in which a stimulus will alter the firing of that neuron.
Important consequences:
A simple demonstration of receptive fields is the two-point discrimination test.
If the two points fall within the same neuron’s receptive field, they are perceived as one stimulus. If they fall in separate receptive fields, they are perceived as two.
Results:
This difference reflects:
The somatosensory system detects body sensations, including touch, temperature, proprioception, and pain. Information from the body surface ultimately reaches the primary somatosensory cortex (S1).
This cortical “map” (often illustrated as a sensory homunculus) reflects how central the skin and body senses are for guiding our behavior and interaction with the world.
Together, concepts like labeled lines, receptor potentials, adaptation, range fractionation, and receptive fields form the foundation for understanding sensory processing. In subsequent sections we will delve more deeply into specific sensory modalities and how their detailed circuitry supports perception.
In the previous section we focused on how sensory receptors in the skin translate physical stimuli into receptor potentials and action potentials, and how concepts like labeled lines, range fractionation, and receptive fields allow the nervous system to encode what and how strong a stimulus is. We now follow these signals into the central nervous system and examine one of the most complex and important sensory experiences: pain.
The somatosensory system detects body sensations, including touch, proprioception, temperature, and pain. Several organizing principles carry over from the periphery into the brain:
Within cortex, we can distinguish between:
Thus, somatosensory processing progresses from basic detection in S1 to higher-order interpretation and integration in SII and beyond.
Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage. Despite its negative quality, pain is profoundly adaptive:
There are large individual differences in sensitivity to pain. At one extreme, complete inability to sense pain is possible—but dangerously so.
Congenital insensitivity to pain (CIP) is a rare inherited disorder in which individuals cannot feel pain.
Paradoxically, the absence of pain is not a blessing—it removes a critical protective system.
Another rare inherited condition involving the same sodium channel gene produces the opposite problem: paroxysmal extreme pain disorder (PEPD).
Together, CIP and PEPD illustrate how small genetic changes in ion channels can drastically alter pain perception.
Phantom limb pain is a form of neuropathic pain that occurs after amputation.
Treatments such as mirror therapy (creating the visual appearance of a controllable limb) can sometimes reduce phantom pain by re-establishing a more coherent link between intended movement and sensory feedback.
Pain is not simply a single sensation; it has multiple components. The McGill Pain Questionnaire distinguishes three major dimensions:
These dimensions help explain why two people with similar injuries can report very different pain experiences, and why psychological factors (attention, expectation, mood) strongly shape pain perception.
Nociception is the neural process of encoding and processing noxious (potentially tissue-damaging) stimuli. The receptors that initiate nociception are called nociceptors.
Some nociceptors express transient receptor potential (TRP) channels that act as molecular thermometers and chemoreceptors.
By combining information from different fiber types (Aδ vs. C) and different receptors (TRPV1, TRP2, CMR1, etc.), the CNS can distinguish sharp vs. dull pain and hot vs. cold stimuli.
Two major classes of nociceptive fibers carry pain information:
Pain and temperature information ascends to the brain via the anterolateral (or spinothalamic) system.
This dual projection pattern—toward both motor/sensory regions and limbic structures—helps explain why pain simultaneously triggers reflexive movement away from the stimulus and a strong emotional reaction.
Analgesia is relief from pain, whereas anesthesia refers to a broader loss of sensation or consciousness. Pain can be modulated at multiple levels:
Some analgesics act at or near the site of injury:
Analgesic or anesthetic agents can be delivered near the spinal cord:
The brain can actively modulate pain signals descending to the spinal cord. This involves endogenous opioids and a set of descending control pathways.
A key structure in descending pain modulation is the periaqueductal gray (PAG) in the midbrain:
Raphe neurons release serotonin (5-HT) onto neurons in the spinal dorsal horn:
Thus, top-down signals can “turn down the volume” on pain before it reaches higher centers.
There are multiple classes of pain-relieving strategies, which can be loosely grouped as:
Overall, pain is not a simple “input” but a complex, multidimensional experience shaped by peripheral receptors, spinal processing, ascending pathways, cortical and limbic networks, and powerful descending control systems. Understanding these layers is essential both for basic neuroscience and for developing effective strategies to treat pain.
Up to this point, we have focused on how sensory receptors convert physical stimuli into electrical signals and how the somatosensory system, including pain pathways, carries that information into the brain. Vision follows the same basic logic—receptors detect a stimulus, convert it into action potentials, and send those signals centrally—but with its own elaborate anatomy and specializations.
In this section we will:
The visual system is organized as a series of stages, each transforming the representation of the visual world:
At every stage, the system preserves critical information about where in space light originated and what features it carries.
Each eye sees part of the world, and their views overlap in the middle. The system is organized by visual field, not by individual eye:
This arrangement supports binocular vision, allowing the brain to combine slightly different views from both eyes to compute depth and spatial relationships.
For vision to be sharp, light rays from a point in the world must be brought to a focus on the retina.
Several structures contribute:
Definition. Accommodation is the process by which the lens changes shape to focus light from objects at different distances onto the retina.
Simultaneously, the pupil adjusts its size:
The retina is a thin, layered sheet of neural tissue lining the back of the eye. Interestingly, its organization is inverted:
The retina contains several major cell types arranged in distinct layers:
Horizontal and amacrine cells are particularly important for lateral inhibition and other computations that shape receptive fields and emphasize contrast.
Rods and cones serve overlapping but distinct roles in vision.
The fovea is the point on the retina where light from the center of gaze is focused. It has several specializations:
Because of this, the fovea supports:
Furthermore, a disproportionately large portion of visual cortex is devoted to processing input from the fovea:
Note. This cortical magnification means that the central visual field has far more “brain real estate” than the periphery, reflecting its importance for fine visual tasks.
Rods are almost absent from the fovea but become increasingly dense toward the peripheral retina. This arrangement:
Photoreceptors are extraordinarily sensitive to light. One reason for this is the amplification built into the phototransduction cascade.
In rods, the key photopigment is rhodopsin.
When a photon is absorbed:
The cascade allows one photon to influence many ion channels, producing a detectable change in membrane potential. This GPCR-based amplification is a crucial reason why the visual system can operate over extremely low light levels.
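The amplification argument is, at bottom, multiplication of per-stage gains. The sketch below multiplies rough, illustrative gain figures (not measured constants; real values vary with species and conditions) to show how a single absorbed photon can end up influencing on the order of a hundred thousand downstream cGMP molecules.

```python
# Each stage of the rod phototransduction cascade multiplies the signal.
# The gains below are rough, illustrative orders of magnitude only.
stages = [
    ("photons absorbed", 1),
    ("transducin molecules activated per photoactivated rhodopsin", 100),
    ("cGMP molecules hydrolyzed per activated PDE", 1000),
]

signal = 1
print("cumulative effect of one absorbed photon:")
for name, gain in stages:
    signal *= gain
    print(f"  after '{name}': {signal:,} molecules affected")
```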
The visual system must cope with an enormous range of light intensities—from starlight to bright sunlight. It uses several strategies:
As a result, the visual system maintains useful responsiveness across many orders of magnitude of illumination, while still preserving contrast information.
Like other sensory systems, the visual system is especially tuned to differences and edges rather than absolute illumination levels.
We “see” edges, borders, and changes more vividly than uniform surfaces.
This emphasis begins in the retina itself, where circuits create center–surround receptive fields and employ lateral inhibition to enhance contrast.
The output neurons of the retina are the ganglion cells. Each ganglion cell receives input from a specific set of photoreceptors via bipolar and horizontal cells. The part of visual space that influences a ganglion cell’s firing is its receptive field.
Retinal ganglion cell receptive fields typically have a center–surround organization:
There are two main types:
These receptive fields are created by the pattern of connections from photoreceptors to bipolar cells and the modulatory influences of horizontal cells in the outer retina.
On-center and off-center ganglion cells are not simply “light detectors”; they are tuned to contrast—they respond most strongly when there is a difference between the illumination in center and surround.
Lateral inhibition is a circuit motif in which active cells inhibit their neighbors, sharpening boundaries and enhancing differences.
The result is that at edges, where illumination changes abruptly, ganglion cell responses are enhanced. This leads to perceptual phenomena such as the Mach band illusion, where uniform gradients appear to have exaggerated brightness differences at edges.
Center–surround receptive fields and lateral inhibition thus help the visual system act as a difference detector, prioritizing edges and changes over uniform areas.
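The edge-enhancing effect can be demonstrated with a one-dimensional "retina": filter a luminance profile containing a ramp with a kernel built from a narrow excitatory center minus a broad inhibitory surround (a difference of Gaussians), and the output dips just before the ramp and overshoots just after it, which is the signature of Mach bands. The kernel widths and the stimulus below are arbitrary illustrative choices.

```python
import numpy as np

# 1-D luminance profile: dark region, linear ramp, bright region.
x = np.arange(200)
luminance = np.clip((x - 80) / 40.0, 0.0, 1.0)

# Center-surround "receptive field": narrow excitatory center minus a
# broader inhibitory surround (difference of Gaussians), normalized so
# the kernel sums to zero.
k = np.arange(-20, 21)
center = np.exp(-k**2 / (2 * 2.0**2))
surround = np.exp(-k**2 / (2 * 8.0**2))
kernel = center / center.sum() - surround / surround.sum()

response = np.convolve(luminance, kernel, mode="same")

# The response is near zero over uniform regions and in the middle of the
# ramp, but dips just before the ramp (dark Mach band) and overshoots just
# after it (bright Mach band).
for i in (40, 75, 100, 125, 160):
    print(f"x={i:3d}  luminance={luminance[i]:.2f}  ganglion-like response={response[i]:+.3f}")
```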
Color vision arises because different classes of cones are tuned to different portions of the visible spectrum. The visual system compares activity across these cone types rather than reading any single cone in isolation. This comparison, together with center–surround receptive fields, allows us to detect both hue and color contrast.
Recall. Cones form the basis of the photopic system—they operate in bright light, support high acuity, and allow color vision. Rods dominate scotopic (dim-light) vision and do not provide color information.
In humans there are three main functional cone types:
Any given wavelength of light will stimulate all three cone types to different degrees. The brain interprets the pattern of activity across S, M, and L cones as a particular color. This “trichromatic” coding is only the first step. Retinal circuits then reorganize this information into color-opponent channels.
Definition. Color-opponent ganglion cells are retinal ganglion cells whose receptive fields are built from cones of different spectral sensitivities, arranged in a center–surround organization. Typical examples include:
Just as luminance-sensitive ganglion cells emphasize edges in brightness, color-opponent cells emphasize edges in chromatic content. This organization:
Note. Genetic defects that eliminate or alter one cone type disrupt these comparisons, resulting in color blindness (e.g., red–green dichromacy). The brain’s color circuitry is intact, but it has insufficiently distinct cone signals to compute certain color differences.
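To see why the pattern of activity across cone classes is what carries color, the sketch below approximates the three cone sensitivities with Gaussians centered near their approximate peak wavelengths (a crude stand-in for real cone fundamentals), computes the S/M/L activation pattern for a few monochromatic lights, and forms schematic opponent signals. Removing the L or M entry from this toy model collapses the red–green comparison, which is the intuition behind the Note above.

```python
import math

# Crude Gaussian stand-ins for cone spectral sensitivities. Peak
# wavelengths are approximate textbook values; the shared bandwidth
# (sigma) is an arbitrary simplification.
CONES = {"S": 420.0, "M": 530.0, "L": 560.0}
SIGMA = 40.0  # nm

def cone_responses(wavelength_nm):
    """Relative activation of each cone class by a monochromatic light."""
    return {name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * SIGMA ** 2))
            for name, peak in CONES.items()}

def opponent_signals(resp):
    """Schematic opponent channels built from the cone activation pattern."""
    return {"L - M (red/green)": resp["L"] - resp["M"],
            "S - (L + M)/2 (blue/yellow)": resp["S"] - (resp["L"] + resp["M"]) / 2}

for wl in (450, 520, 580, 620):
    r = cone_responses(wl)
    opp = opponent_signals(r)
    cones = ", ".join(f"{name}={v:.2f}" for name, v in r.items())
    opps = ", ".join(f"{name}={v:+.2f}" for name, v in opp.items())
    print(f"{wl} nm: cones [{cones}]  opponents [{opps}]")
```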
Signals from retinal ganglion cells travel along the optic nerve, pass through the optic chiasm, and continue as the optic tracts to the lateral geniculate nucleus (LGN) of the thalamus. The LGN is the main relay station from eye to cortex.
Several key principles are preserved in the LGN:
From the LGN, axons project via the optic radiations to primary visual cortex (V1) in the occipital lobe. In V1, receptive fields become more complex.
In V1, neurons can be categorized (classically) as simple or complex cells, based on their receptive field properties:
Thus, the cortex builds orientation-selective receptive fields by combining input from many LGN neurons, each with center–surround receptive fields. This is a classic example of hierarchical processing:
Beyond primary visual cortex, information is sent to multiple higher-order areas that specialize in different aspects of visual perception. A prominent organizational scheme distinguishes two major processing streams:
Damage to different parts of these streams leads to characteristic deficits that highlight their functions.
Ventral stream lesions: agnosias
Dorsal stream lesions: spatial and action deficits
Altogether, the ventral and dorsal pathways illustrate how visual processing is divided into specialized streams: one concerned primarily with what things are, and the other with where they are and how to act upon them. Disruptions in these circuits reveal the remarkable complexity underlying what feels like effortless visual perception.
Up to now, we have examined how sensory systems such as touch and vision detect physical stimuli and transform them into patterned neural activity. Hearing and language follow the same basic principles—specialized receptors convert mechanical vibrations of air into electrical signals, which are then interpreted by higher brain areas as sound and, in humans, as speech and language.
How does the nervous system turn tiny air pressure changes into rich experiences like music, speech, and meaning?
Sound is a mechanical wave—repeating changes in air pressure over time. Two key physical properties of sound relate directly to how we perceive it:
Our auditory system can detect an impressive range of both amplitude and frequency, from faint whispers to loud concerts, and from low rumbles to high chirps.
Before sound can be transduced into neural activity, it must be collected and mechanically transformed by the ear.
The middle ear converts large, low-pressure vibrations in air into smaller, higher-pressure vibrations in fluid. This is necessary because fluid is harder to move than air.
Note. By focusing the force from the large area of the eardrum onto the much smaller oval window, the ossicles concentrate sound energy and improve transfer of vibrations into the fluid-filled cochlea.
The inner ear houses the machinery that converts mechanical energy into neural signals. The core auditory structure is the cochlea.
The organ of Corti sits on the basilar membrane and contains three key components:
Different sound frequencies cause different regions of the basilar membrane to vibrate most strongly:
This systematic mapping of frequency onto position along the basilar membrane is called tonotopy. It is the auditory analog of retinotopy in vision and is preserved all the way up into the auditory cortex.
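Tonotopy is often summarized with an empirical position-to-frequency map. One commonly used approximation for the human cochlea is the Greenwood function, sketched below with its commonly quoted (approximate) human constants; treat the output as ballpark characteristic frequencies rather than precise values.

```python
# Greenwood's empirical frequency-position map for the human cochlea:
#   F(x) = A * (10**(a * x) - k)
# where x is the fractional distance along the basilar membrane from the
# apex (x = 0) to the base (x = 1). Constants are approximate human values.
A, a, k = 165.4, 2.1, 0.88

def best_frequency(x):
    """Approximate characteristic frequency (Hz) at fractional position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} from apex -> ~{best_frequency(x):,.0f} Hz")
```

The output runs from roughly 20 Hz near the apex to roughly 20 kHz near the base, matching the low-to-high frequency gradient described above.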
When the basilar membrane vibrates, it causes the hair cell stereocilia to bend. This bending is the key to turning mechanical energy into a receptor potential.
Deflection in the opposite direction slackens the tip links, closes ion channels, and hyperpolarizes the cell, reducing transmitter release. In this way, hair cells encode sound vibrations as graded changes in membrane potential and transmitter output.
Note. Inner hair cells provide most of the auditory information to the brain, while outer hair cells act as a cochlear amplifier, actively fine-tuning and amplifying basilar membrane motion.
The electrical signals generated by hair cells are carried into the brain along the auditory nerve, part of cranial nerve VIII, the vestibulocochlear nerve.
At each stage, tonotopic organization is preserved: neighboring neurons respond to neighboring sound frequencies.
How do we determine where a sound is coming from? The brain uses binaural cues, based on information from both ears. The key computations occur in the superior olivary nucleus.
By comparing inputs from both ears, neurons in the superior olive can compute the horizontal location of a sound source.
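The interaural time difference (ITD) cue can be estimated with simple geometry: a source at azimuth θ adds roughly d·sin θ of extra path to the far ear, where d is the distance between the ears, giving ITD ≈ d·sin θ / c for sound speed c. The sketch below uses this simplest path-length model (it ignores diffraction around the head) with a rough, assumed head width.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
HEAD_WIDTH = 0.20        # m, rough distance between the ears (assumed)

def interaural_time_difference(azimuth_deg):
    """Approximate ITD (microseconds) for a source at the given azimuth
    (0 = straight ahead, 90 = directly to one side), using the simple
    extra-path-length model d * sin(theta) / c."""
    theta = math.radians(azimuth_deg)
    return HEAD_WIDTH * math.sin(theta) / SPEED_OF_SOUND * 1e6

for az in (0, 15, 45, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {interaural_time_difference(az):.0f} microseconds")
```

Even for a source directly to one side, the difference is well under a millisecond, which is why the timing comparison performed in the superior olive must be extremely precise.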
Primary auditory cortex (A1) in the temporal lobe maintains a tonotopic map of sound frequency, similar to how the primary visual cortex maintains a retinotopic map.
Beyond A1, secondary auditory areas process more complex features such as combinations of frequencies, temporal patterns, and, in humans, the acoustic structure of speech and music.
The brain organizes auditory information into partially distinct processing streams, analogous to the dorsal (“where”) and ventral (“what”) streams in vision.
Human language processing relies heavily on these temporal lobe structures, especially in the left hemisphere.
Unlike basic hearing, which is relatively symmetric, language is strongly lateralized.
Definition. Lateralization refers to the specialization of the two cerebral hemispheres for different functions.
The Wada test temporarily anesthetizes one hemisphere at a time (using sodium amytal) to determine which side supports language.
Split-brain patients—individuals whose corpus callosum has been surgically severed or is absent—provide further evidence:
Language is a highly specialized form of communication uniquely elaborated in humans.
Definition. Language is the system in which arbitrary symbols (sounds, signs, written marks) are combined according to rules (grammar) to convey an essentially unlimited range of meanings—objects, actions, abstractions, and relationships.
Remark. This is different from the formal definition of a language used in mathematics (e.g., in formal language theory, where a language is simply a set of strings generated by a grammar).
Key features:
Language is processed primarily in the left hemisphere, but involves a distributed network of regions in temporal, frontal, and parietal lobes.
Damage to these regions results in characteristic language deficits known as aphasias.
Definition. Aphasia is an acquired impairment of language production and/or comprehension, typically following brain injury such as stroke.
Common signs include:
Dyslexia refers to difficulty in learning to read or in processing written language, despite adequate intelligence and educational opportunity.
Although true human language is unique, animal models help us study components of language such as vocal learning and symbolic communication.
In short, studies of auditory processing, language networks, and animal models highlight how the brain transforms simple vibrations into rich perceptual experiences and complex symbolic communication.
So far, we have treated the nervous system primarily as a sensing and signaling device: detecting stimuli, processing them, and transmitting information across synapses. But much of the brain’s purpose is ultimately to act on the world—to move the eyes, shift posture, speak, grasp, walk, or play an instrument.
How does the nervous system convert patterns of neural activity into coordinated, purposeful movement?
Motor control links motoneurons, muscles, spinal circuits, and multiple brain regions in a hierarchical system. At the periphery, the neuromuscular junction produces reliable contractions; centrally, the spinal cord, cortex, cerebellum, and basal ganglia work together to initiate, refine, and adapt movement.
The final common pathway from the nervous system to skeletal muscle is the neuromuscular junction.
Definition. The neuromuscular junction (NMJ) is the chemical synapse where the terminal of a motor neuron contacts a skeletal muscle fiber and controls its contraction.
Definition. A motor unit is a single motor neuron and all of the muscle fibers it innervates. Motor units are the fundamental units of control in the skeletal muscle system.
The neurotransmitter at the skeletal NMJ is acetylcholine (ACh).
Myasthenia gravis is a classic example of how altering ACh signaling at the NMJ disrupts movement.
Skeletal muscles attach to bones via tendons and can only pull (contract) or relax—they cannot push.
By selectively activating different sets of motor units in multiple muscles, the nervous system can generate a vast repertoire of movements.
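A classic idea about how that selection unfolds is the size principle: motor units are recruited from smallest to largest as descending drive increases, so total force grows smoothly as progressively larger units are added. The sketch below is a toy version of that idea; the unit forces and recruitment thresholds are invented for illustration.

```python
# Toy model of the size principle: motor units are recruited from smallest
# to largest as descending drive increases. Forces and thresholds are
# invented, illustrative numbers.
motor_units = [  # (name, recruitment threshold of drive, force contribution in N)
    ("small (slow, fatigue-resistant)", 0.1, 1.0),
    ("medium", 0.4, 5.0),
    ("large (fast, fatigable)", 0.7, 20.0),
]

def total_force(drive):
    """Sum the force of every motor unit whose threshold the drive exceeds."""
    return sum(force for _, threshold, force in motor_units if drive >= threshold)

for drive in (0.05, 0.2, 0.5, 0.9):
    active = [name for name, thr, _ in motor_units if drive >= thr]
    print(f"drive {drive:.2f}: total force {total_force(drive):4.1f} N; "
          f"recruited: {', '.join(active) if active else 'none'}")
```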
To control movement, the brain must continuously monitor the body’s position and tension.
Definition. Proprioception is the sense of the body’s own movements and positions, mediated by specialized sensory receptors in muscles and tendons.
Together, these proprioceptors give the nervous system a continuous “internal map” of limb position and muscle load, allowing for smooth, coordinated adjustments during movement.
Not all movements require deliberation or cortical involvement. Some are mediated by spinal circuits alone.
Definition. A reflex is a simple, unvarying, and unlearned response to a specific sensory stimulus. Reflexes are mediated by small neural circuits within the spinal cord and brainstem.
The classic knee-jerk response illustrates a simple monosynaptic reflex arc:
This circuit provides an automatic, very rapid response that helps maintain posture and muscle tone.
Voluntary movement arises from a hierarchical organization of control systems.
Commands from the brain reach spinal motor neurons via two major pathways.
Definition. The pyramidal system consists of axons that originate primarily in the primary motor cortex (M1) and descend through the brainstem to the spinal cord, forming the pyramidal tract.
As with somatosensory cortex, M1 contains a somatotopic map of the body, sometimes depicted as a motor homunculus.
The extrapyramidal system includes several subcortical structures, chiefly the basal ganglia and cerebellum, that modulate motor commands.
Nonprimary motor areas lie anterior to M1 and contribute to the planning and organization of movements.
Within the ventral premotor cortex, some neurons exhibit a remarkable property.
Definition. Mirror neurons are neurons that fire both when an individual executes a particular movement and when they observe someone else performing the same movement.
Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that affects the pyramidal system, degrading both upper motor neurons and the lower motor neurons they control.
The basal ganglia are a group of interconnected subcortical nuclei that play a crucial role in the control of movement.
The basal ganglia help determine:
They are especially important for movements that are influenced by past experience and learning (habitual or practiced actions). Conceptually, the basal ganglia contain:
Huntington’s disease (HD) is a hereditary neurodegenerative disorder that prominently affects the basal ganglia.
HD is also a classic genetic disease:
The cerebellum (literally “little brain”) is another major component of the extrapyramidal system, essential for smooth, coordinated movement.
Ataxia refers to uncoordinated movements resulting from cerebellar damage.
Many brain regions participate in the selection, planning, execution, and refinement of movement. Their roles can be summarized as follows:
| Structure | Primary Functions in Motor Control |
|---|---|
| Prefrontal cortex | Selects appropriate behaviors and goals using information about the external environment and internal state. |
| Premotor cortex | Programs movements based on target location, body position, and external cues; important for learned stimulus–response associations. |
| Supplementary motor area (SMA) | Plans and sequences complex, internally generated movements; coordinates bimanual actions. |
| Primary motor cortex (M1) | Executes voluntary movements by sending commands via the pyramidal tract; controls force and direction of movement. |
| Basal ganglia | Integrates cortical and sensory information to facilitate desired movements and inhibit competing ones; crucial for movement initiation and habit learning. |
| Cerebellum | Maintains balance, refines ongoing movements, and supports learning of motor skills; contributes to error correction and timing. |
Together, these systems transform neural plans into fluid, adaptive actions—from a simple reflexive knee-jerk to the coordinated motor patterns of speech, music, and skilled movement.