Treating Neurodegenerative Diseases with BCI


If you’d asked me a few short weeks ago whether I thought neurogenesis in humans continued throughout the lifetime (as so often the topic comes up in the most casual of conversations), I’d have, with 100 percent confidence, said “yes.”

That’s right friends, strangers, guy in that chair over there… Today, we’re talking about one of my favorite subjects! Brains.

Recently, I found out that adult hippocampal neurogenesis (AHN) in humans might not, in fact, be a real thing.1 This is shocking! So then I wondered: Could we potentially use a brain-computer interface (BCI) as an artificial neurogenesis therapy for individuals suffering the effects of neurodegenerative diseases (such as Alzheimer’s), psychiatric disorders, and age-related cognitive dysfunctions?

But what is AHN, why is it important, and how does BCI fit in?


The Importance of Adult Hippocampal Neurogenesis in Humans

Neurogenesis is basically what it sounds like—the birth of new neurons. It starts in the womb and may continue until about 13 years of age2 or until death. Adult neurogenesis (what we’re focused on) has been corroborated in mice, songbirds, and non-human primates. While there is considerable evidence of adult neurogenesis in humans, this is where things get dicey. The methodology currently used isn’t ideal. For example:

  • Carbon dating can mislabel cells: dying cells may be counted as dividing cells, giving a false positive for neurogenesis, and protein markers can mislabel cell types (glia for neurons)1
  • Studies often fail to account for cellular degradation in post-mortem samples, or for the cognitive health of the donor before death, which can lead to erroneous findings1

The extreme variation in findings across similar methodologies is another head-scratcher. This is why proving AHN in humans is so difficult. A reliable way to measure potential AHN in real time in living subjects via imaging seems to be the way forward, but no such method has been available thus far.

Anyway, based on both animal and (contentious) human studies, adult neurogenesis is thought to take place in two areas of the brain: the subventricular zone and the dentate gyrus of the hippocampus. AHN is thought to be responsible for things like learning, memory retention, and spatial memory (the ability to navigate your environment and remember how to get to the grocery store).



Now… neurodegenerative diseases, psychiatric disorders, and age-related cognitive dysfunctions all have something in common: in both human studies and in animal-model studies1 where AHN is known to be present, subjects with the abovementioned ailments all showed decreased neurogenesis. Based on this, we could hypothesize that human AHN therapies could alleviate symptoms (or potentially improve the underlying condition) in such conditions as depression, Alzheimer’s, and age-related memory loss. According to ADULT NEUROGENESIS IN HUMANS: A Review of Basic Concepts, History, Current Research, and Clinical Implications:

  • “Consecutive animal model studies have indicated the potential of neurogenesis-based targets in drug development for depression due to the implied role that neurogenesis plays in the mechanisms of actions of many antidepressant drugs.
  • “A neurogenic drug […] was found to reduce severity of the symptoms in patients with major depressive disorder (MDD) compared to placebo, but the robustness of the results was limited by small sample size and skewed test-control distribution of the study…
  • “Metformin—[an FDA-approved] drug for the treatment of Type 2 diabetes—was reported to induce neurogenesis in a rat model and in human neuronal cell cultures, but no clinical trials have been conducted to support these results. Prolonged treatment with this drug in humans with diabetes, however, was found to have an antidepressant effect and appeared to protect patients from cognitive decline.”1

If AHN in humans is eventually proven, endogenous cell replacement or neuronal progenitor/stem cell transplant therapies could be a viable treatment avenue.6 Regardless of whether AHN exists in humans, though, preventing cognitive decline is a worthwhile effort. But what about alternate treatment solutions in the absence of AHN?



BCI as a Treatment for Cognitive Disorders

BCI has been growing in popularity for some time and has seen both clinical and practical use for decades: cochlear implants, the Utah array, deep brain stimulation. But a lot of BCI solutions, and even studies, tend toward mobility rather than cognition. For instance, BCI studies in stroke patients primarily focus on motor rehabilitation; however, one study3 found a link between motor, cognitive, and emotional functions that revealed promising evidence of the benefits of BCI in treating post-stroke cognitive impairment (PSCI). I want to point something important out here: BCI mobility rehabilitation has yielded very good results for patients; however, patients with a certain degree of PSCI can’t participate in this type of rehabilitation. Your brain must be able to send, receive, and decode signals for BCI to work, which is why cognitive rehabilitation is so important.

Part of what led to studying BCI in PSCI is that since the “effects of BCI-based neurofeedback training have been seen to improve certain cognitive functions in neurodevelopmental and neurodegenerative conditions such as [ADHD] and mild cognitive impairment (MCI) in elderly subjects, respectively, it is therefore also likely to generalise to other dysfunctions, including PSCI.” While more research is needed in this area, the foundation has undeniably been set. BCI could potentially act as a treatment in cognitive and some psychological disorders.


A Look at Current BCI Projects

There are multiple companies in the BCI industry, though most seem focused on entertainment and mobility. For example, NextMind’s Dev Kit is a consumer product that allows individuals to interact with the digital world in a hands-free manner. I recommend watching the launch talk—very cool. While the Dev Kit is geared mostly toward entertainment—video games, interacting with the TV, and such—being able to move and communicate through digital space offers a lot of benefits for mobility- and speech-impaired individuals.

Kernel’s Flux, however, is a different beast. According to their website, “Kernel Flux is a turnkey magnetoencephalography (MEG) platform based on optically-pumped magnetometers (OPMs), which provides real-time access to the intricate brain activity underlying functions such as arousal, emotion, attention, memory, and learning.” It’s a tool that’s been used in studies to help determine areas of the brain affected by such conditions as Parkinson’s4 and MCI5 related to dementia of Alzheimer’s type (DAT). The conclusion of the latter study found that “MEG functional connectivity may be an ideal candidate biomarker for early, presymptomatic detection of the neuropathology of DAT, and for identifying MCI-patients at high risk of having DAT.”

If Kernel is providing the means of early detection in neurodegenerative diseases and conditions linked with cognitive decline, is it possible that same tool can be used to detect AHN in humans? And more importantly, if AHN isn’t really real, who is going to step up to the plate with BCI focused on the treatment of neurodegenerative diseases? Elon Musk? Heh. Wait…



Could Neuralink Produce a Synthetic Neurogenesis Therapy?

Neuralink, an Elon Musk company, is working on cutting-edge BCI technology. They’ve created an implant that uses tiny threads inserted into the brain to receive neuronal signals. The implant amplifies the signals, then converts them to digital code, which is sent via Bluetooth to a mobile app. The threads can also send signals to stimulate neurons, and the system can identify some neurons by shape.
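To make that signal path concrete, here’s a minimal sketch of the amplify → digitize → transmit chain. Every function name and number below is my own illustration, not Neuralink’s actual design:

```python
# Hypothetical sketch of the signal path described above: analog spikes are
# amplified, digitized, and packaged for wireless transmission. All names
# and parameter values are illustrative only.

def amplify(samples, gain=1000):
    """Boost weak neuronal signals (microvolt scale) to a workable range."""
    return [s * gain for s in samples]

def digitize(samples, levels=1024, v_range=1.0):
    """Quantize amplified voltages into integer codes (a toy ADC step)."""
    step = (2 * v_range) / levels
    return [max(0, min(levels - 1, int((s + v_range) / step))) for s in samples]

def to_packet(codes):
    """Serialize digital codes into bytes, as if for Bluetooth transport."""
    return b"".join(c.to_bytes(2, "little") for c in codes)

# Pretend microvolt-scale readings from a few electrode threads:
raw = [0.00002, -0.00005, 0.00008]
packet = to_packet(digitize(amplify(raw)))
```

The point isn’t the numbers; it’s that each stage is a simple, well-defined transform, which is what makes the whole chain fast enough to run in real time on an implant.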

While Neuralink’s initial goal is to facilitate digital communication and interaction in paralysis patients, they’re ultimately hoping for potential restoration of motor function in said patients, treatment of cognitive and psychological disorders, restoration of vision, and more. I highly recommend watching the launch of N1 for a look at the science and engineering behind all of it, and I recommend watching the progress update to get a look at the Link and its specs. It. Is. Very cool. But what does it have to do with neurogenesis?

Well, “Progressive degeneration of specific neuronal types and deterioration of local neuronal circuitry are the hallmarks of degenerative neurological diseases, such as [Parkinson’s, Alzheimer’s, Huntington’s, and ALS].”6 Identification of these specific neuronal types is key in any neurogenesis therapy (kinda like gene therapy!), whether transplanting genetically engineered cells into target regions of the brain or using software programmed to mimic specific neuronal signals in place of lost or damaged neurons.

Because Neuralink’s device can send, decode, and receive signals and identify neurons, and because we know which neurons relate to which neurodegenerative diseases (e.g., Huntington’s degrades striatal medium spiny neurons and cortical neurons), I opine that, yes, Neuralink’s device could definitely act as a synthetic type of neurogenesis therapy. There’s obviously an extreme amount of data that would have to be collected, though, given that two neurons of the same type in a person’s brain can fire the same directive (or “action potential”) in two different ways, and this varies from person to person. Neuralink’s data processing ability is remarkable and quite robust, and since it’s already individually tuned (so to speak), it’s essentially made to be a targeted therapy.

Furthermore, with the ability to process so much data simultaneously, the Link could additionally help identify neurons or neural circuitry affected by neurological disorders or damage to provide effective treatment therapy. It could also help with schizophrenia, wherein erroneous information processing due to abnormal dendritic branching and synaptic connections could be corrected or overwritten.1

There’s an exceptional amount of potential in this device and, while it might sound like science fiction, it seems more likely to me that it’ll be reality within the next 10-20 years, given where technology is now and the rate of progress.

Whew! It took a long time, but we got there. Now, enjoy a macaque playing Pong with his brain.


Video (and MindPong) courtesy of Neuralink


1 ADULT NEUROGENESIS IN HUMANS: A Review of Basic Concepts, History, Current Research, and Clinical Implications

2 The controversy of adult hippocampal neurogenesis in humans: suggesting a resolution and way forward

3 BCI for stroke rehabilitation: motor and beyond

4 Hypersynchrony despite pathologically reduced beta oscillations in patients with Parkinson’s disease: a pharmaco-magnetoencephalography study

5 A multicenter study of the early detection of synaptic dysfunction in Mild Cognitive Impairment using Magnetoencephalography-derived functional connectivity

6 Neurogenesis as a potential therapeutic strategy for neurodegenerative diseases

This is Your Brain on Ultrasonic Frequencies



Hallo, you beauties. It’s been a while since I’ve done a post that needed some research, but—just for you—I dusted off Dave’s research boots and put him to work. Today, I’d like to discuss ultrasound! You’ve no doubt heard of ultrasound as it applies to medical imaging—peeping the unborn babes, assessing muscle trauma, generally viewing those soft, delicate inner tissues. Huh … That came off a little weird.

Anyway, I’m not talking about ultrasound as an imaging tool. I’m talking about ultrasound as a treatment. For instance …


Ultrasonic Frequencies Stimulate Intact Brain Cells

In a 2010 study, scientists investigated “the influence of transcranial pulsed ultrasound on neuronal activity in the intact mouse brain [and in deeper subcortical circuits] used targeted transcranial ultrasound to stimulate neuronal activity and synchronous oscillations in the intact hippocampus.” Oh, kids, isn’t science fun?

This study aimed to find a non-invasive brain stimulation method free of the limitations of current methods, which include low spatial resolution, low spatial precision, or the need for genetic manipulation.



Using transcranial pulsed ultrasound on the motor cortex and hippocampus could have several medical applications, specifically regarding Parkinson’s and Alzheimer’s. Speaking of …


Practical Applications

In 2016, the FDA approved a similar treatment—called focused ultrasound thalamotomy—to treat people suffering from essential tremor (ET). According to the Focused Ultrasound Foundation, focused ultrasound offers the following benefits:

  • It is a non-invasive, single treatment that enables patients to recover rapidly and quickly return to activities of normal life (usually the next day).
  • Compared to radiofrequency ablation or deep brain stimulation (DBS), focused ultrasound offers a reduced risk of infection, of damage to the non-targeted area, and of blood clot formation.
  • Focused ultrasound offers rapid resolution of symptoms.
  • In contrast to lesioning performed with stereotactic radiosurgery, focused ultrasound does not use ionizing radiation, thus avoiding the side effects of exposure to radiation.
  • Because it is non-invasive, focused ultrasound could be an option for medically refractory ET patients (those who do not respond well to medication) who do not want to undergo surgery.

So, focused ultrasound thalamotomy uses MRI to aim ultrasonic waves at the thalamus. According to W. Jamie Tyler (who took part in the 2010 study), “We can focus the ultrasound through the skull to a part of the thalamus about the size of a grain of rice.” Ahem: precision. From there, the ultrasound kills ET-causing neurons in the thalamus, according to Esther Landhuis’s article Ultrasound for the Brain.

Landhuis goes on to explain that scientists are branching out to focus on treating psychiatric disorders with an emerging technology called focused ultrasound neuromodulation. This technology can boost or suppress small groups of neurons to “potentially treat other movement disorders, as well as depression, anxiety and a host of intractable neuropsychiatric disorders.”



Minimally Invasive is Still Invasive

When discussing surgery in general, the phrase “minimally invasive” is a good thing. Well, you know … It’s not a terrible thing. However, when it’s your brain you’re talking about, minimally invasive is still pretty fucking invasive. DBS, for example, is a minimally invasive form of neuromodulation. While DBS has been around longer, focused ultrasound offers a more precise, non-invasive means of treatment.

The brain is a pretty special organ and it’s, you know, important to your being able to function.

Mental and Social Woes of Suspicion



Welcome back! Good to see ya, nice to meet’cha, let’s dive right in! Today’s topic is solely focused on suspicion and how it can affect your social and business life and mental processes. Of course, it’s only reasonable that I explain how this topic popped into my head.

You see, kiddies, I get extremely suspicious when certain people ask me questions. Whether it’s a stranger or an acquaintance, there are just some people I feel should not be asking me things—no matter how innocent the question. Take this conversation, for instance:

I’m in a break room, heating up food in a microwave. (OP=other person)
OP: Heating up your lunch?
Me: Yes.
OP: What are you having?
Me: …Soup
OP: What kind of soup?
Me: Homemade soup.
OP: Well, what’s in it?
Me: ::shrugs:: Vegetables and broth.



Yes, I knew what she wanted when asking what kind of soup. And, yes, I knew exactly what was in it. Yes, Dave, seriously. I can put edible things together in a bowl and pour broth over it. Anyway, the problem was this: I didn’t want to answer. Similarly, I don’t want to answer when asked about my prior weekend or my plans for the upcoming weekend. I don’t know why. My only reasoning is: It’s none of this person’s business. The next minute, I’ll turn around and tell the withheld information to a different person. And, I’m not the only one who does this. If some of you have picked this up as unconscious bias, well done. That definitely has an underlying role here.

We are all prone to unconscious bias, and I believe the type of guarded suspicion some of us have when asked questions by certain people is a symptom of this. So, why are some people more prone to suspicion, and why do certain people seem to rub us the wrong way?



Looking at the Science Behind Suspicion

Understanding suspicion through science is an ideal place to start with our conundrum. In order to figure out how people assess the credibility of others, scientists at the Virginia Tech Carilion Research Institute (VTC) investigated the parts of the brain that function in suspicion: the amygdala and the parahippocampal gyrus. The amygdala “plays a central role in processing fear and emotional memories and the parahippocampal gyrus […] is associated with declarative memory and the recognition of scenes,” according to an article featured on VTC’s website. The study went like this:

76 pairs of players, each with a buyer and a seller, competed in 60 rounds of a simple bargaining game while having their brains scanned [using an fMRI]. At the beginning of each round, the buyer would learn the value of a hypothetical widget and suggest a price to the seller. The seller would then set the price. If the seller’s price fell below the widget’s given value, the trade would go through, with the seller receiving the selling price and the buyer receiving any difference between the selling price and the actual value. If the seller’s price exceeded the value, though, the trade would not execute, and neither party would receive cash.
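The round’s payoff rule can be sketched in a few lines. This is a toy model under my own reading of the passage; the tie case (price exactly equal to value) isn’t specified in the description, so I’ve assumed it clears:

```python
# Toy model of one bargaining round from the VTC study description.
# Function and variable names, and the tie-handling rule, are my assumptions,
# not the study's actual task code.

def play_round(widget_value, seller_price):
    """Return (seller_payoff, buyer_payoff) for one round."""
    if seller_price <= widget_value:  # trade executes (tie assumed to clear)
        # Seller pockets the price; buyer keeps the leftover surplus.
        return seller_price, widget_value - seller_price
    return 0, 0  # price exceeded the value: no trade, no cash for anyone
```

Notice the built-in tension: a seller who prices low always trades but leaves surplus on the table, while an over-suspicious, high-pricing seller risks earning nothing at all.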

The outcome? According to Read Montague, director of the Human Neuroimaging Laboratory and the Computational Psychiatry Unit at VTC, and the leader of the study, “The more uncertain a seller was about a buyer’s credibility […] the more active his or her parahippocampal gyrus became.”

Knowing what parts of the brain are most active during a state of suspicion is the first step in understanding the emotion, as well as where the suspicion is based. Heightened activity in the amygdala would, theoretically, signify fear-based suspicion, while heightened activity in the parahippocampal gyrus would signify suspicion based on mistrust. Montague suggests the parahippocampal gyrus acts “like an inborn lie detector.”
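If you wanted to caricature that interpretive rule in code, it might look like this. The thresholds, names, and the whole framing are illustrative only; real fMRI analysis is nothing this tidy:

```python
# Toy decision rule for the two-region reading described above.
# Baseline and activation values are made-up, unitless illustrations.

def suspicion_type(amygdala_activity, parahippocampal_activity, baseline=1.0):
    """Label a (hypothetical) suspicion reading from two activation values."""
    if amygdala_activity <= baseline and parahippocampal_activity <= baseline:
        return "no heightened suspicion"
    if amygdala_activity > parahippocampal_activity:
        return "fear-based suspicion"       # amygdala-dominant reading
    return "mistrust-based suspicion"       # parahippocampal-dominant reading
```

The value of the real finding is exactly this kind of separability: the two flavors of suspicion leave different signatures, so in principle they can be told apart.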

“So, what?” you demand. “How is this actionable information and why should I care?” Good question! It just so happens that…



Suspicion can Cost You Profit… and Worse

First of all, not everything is about you. So, let’s look at the bigger picture. Like most things, suspicion in moderation can be quite good. There is a line, though. Being overly suspicious—either from fear or mistrust—can have negative consequences on financial success. According to Meghana Bhatt, one of the study’s authors:

People [taking part in the study] with a high baseline suspicion were often interacting with fairly trustworthy buyers, so in ignoring the information those buyers provided, they were giving up potential profits. The ability to recognize credible information in a competitive environment can be just as important as detecting untrustworthy behavior.

Not only can individuals with high baseline suspicion have a harder time achieving financial success, they can have a harder time achieving success in their careers. This can lead to a host of new problems, including an increase in stress and anxiety, as well as depression.

Speaking of the mental aspects, studies in suspicion can have implications for psychiatric disorders. “The fact that increased amygdala activation corresponds with an inability to detect trustworthy behavior may provide insight into the social interactions of people with anxiety disorders, who often have increased activity in this area of the brain,” explains Montague.

In short, studies such as these can help pinpoint sources of certain psychiatric disorders, which can better help scientists nail down proper treatments. But, these types of studies could also help to create a treatment or healthy way in which to promote balance for those with high baseline suspicion. Perhaps a better question is: When my internal lie detector goes off, who should I trust?


Improving Quality of Life



I’ve previously written a couple of blog posts on tech advancements aimed at aiding movement inhibited individuals. This is another one of those, focused singularly on Brain-Machine Interface (BMI, also called Brain-Computer Interface, BCI), which we’ve touched on before. The reason I’m putting a good deal of focus on these types of topics—aside from the badassness of it—is because of my own physical issues.

You see kiddies, when I was the tender age of 16, I had a horseback riding accident that left me with a rotated hip. That one injury has since plagued me with low-back movement issues that are painful, sometimes debilitating, and decrease quality of life. On top of that, I have pretty bad knee issues—which also stem from the original rotated hip problem. I’ve had three epidurals, two cortisone shots at the knee joint, and so much physical therapy I count it as my second job. The one thing I want to do, physically, compounds all the wounds.

I just want to run. I love to run. I love the way it makes me feel before, during, and after. But, even jogging ¼ mile kills my knees and stresses my back. So, what must it be like for someone who wants to walk, or even just stand? Life is a lot of things, but movement plays such a significant role in life that it’s something we rarely think about. You know, until we can’t do it anymore.

So, while robotics, neuroscience, and advancements in technology are blow-your-mind-like-a-big-league-hotdog awesome, combining the three to improve quality of life for thousands—millions?—of people is blow-your-mind-in-the-archaic-sense-of-the-word awesome. So, without further ado, let’s dig into the real meat of this blog.



Allowing Locked-In ALS Patients to Communicate

I know what ALS is, but I had never previously heard of the “Locked-In” ALS condition. Thanks to my ignorance, you’ll all get a bit of definition time. You’re welcome. ALS patients suffering from the Locked-In condition are considered to be in the severe stage of ALS, wherein they are conscious and have brain function, but are unable to move. At all.

Science is working toward giving such patients as these a way to communicate again. According to the article Locked-In ALS Patients Answer Yes or No Questions with Wearable fNIRS Device, published earlier this month in Neuroscience News:

Using a wearable system developed by SUNY Downstate Medical Center researcher Dr. Randall Barbour, a team of investigators led by Professor Niels Birbaumer at the Wyss Center for Bio and Neuroengineering in Switzerland and University of Tübingen in Germany were able to measure the brain’s hemodynamic response to a series of ‘yes’ or ‘no’ questions, thus allowing these patients to communicate.

While other tech has been used for this goal—EEG, fMRI, etc.—fNIRS (that’s functional near-infrared spectroscopic) imaging has proved to be the breakthrough tech needed. But, what does this mean? Well, this is potentially the first step in bettering quality of life for Locked-In ALS patients. Communication, like movement, is a substantial part of life. It’s why we have language areas in the brain, Dave! But, this isn’t the only advance being made with BMI. Next up…



BMI Opens Doors to Paralysis Patients

In Bruce Goldman’s article, Brain-Computer Interface Advance Allows Fast, Accurate Typing by People with Paralysis, published by Stanford Medicine, we get another look into BMI advancement. I fully anticipate all the readers here will visit the original article, which means I’m not listing all the scientists involved. Instead, I’m going to refer to them as “The Team.” You’re welcome. Again.

In this study, The Team worked with three paralysis patients, using an intracortical BMI (or, as this study prefers, BCI) to send brain signals to a computer. Goldman explains that:

The investigational system used in the study, an intracortical brain-computer interface called the BrainGate Neural Interface System, represents the newest generation of BCIs […] An intracortical BCI uses a tiny silicon chip, just over one-sixth of an inch square, from which protrude 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

These are the nerve cells that send the signals the brain would give off during specific movement tasks (the right hand moving and clicking a computer mouse, for instance). The signals are decoded and converted in real time by a special algorithm, which then allows the patients to control a cursor on the screen in front of them to type out words at a higher speed and accuracy than seen in previous methods. According to Chethan Pandarinath, one of the lead authors of the research report, “We’re achieving communication rates that many people with arm and hand paralysis would find useful. That’s a critical step for making devices that could be suitable for real-world use.”
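To give a flavor of what “decoded and converted in real time” means, here’s a heavily simplified linear-readout sketch. The names, weights, and 20 ms tick are my own toy choices, not BrainGate’s actual algorithm:

```python
# Highly simplified sketch of a decode step: map a vector of per-electrode
# firing rates to a 2-D cursor velocity via a linear readout, then advance
# the cursor one tick. All values here are illustrative.

def decode_velocity(firing_rates, weights):
    """Linear readout: each electrode's rate contributes to (vx, vy).

    `weights` is a list of (wx, wy) pairs, one per electrode.
    """
    vx = sum(r * wx for r, (wx, _) in zip(firing_rates, weights))
    vy = sum(r * wy for r, (_, wy) in zip(firing_rates, weights))
    return vx, vy

def step_cursor(pos, firing_rates, weights, dt=0.02):
    """Advance the cursor one (assumed) 20 ms tick using decoded velocity."""
    vx, vy = decode_velocity(firing_rates, weights)
    return pos[0] + vx * dt, pos[1] + vy * dt
```

Run in a loop dozens of times per second, a readout like this is what turns intended hand movement into a cursor gliding across a keyboard on screen.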

This is not only exciting, it’s groundbreaking for movement inhibited individuals. Going forward, this tech could help with general household tasks we take for granted—opening doors, changing the thermostat, controlling the TV—and who knows what else?—just by using your mind. This gives movement inhibited individuals access to and a modicum of control over their surroundings. That’s seriously impressive.

Krishna Shenoy, an integral part of The Team—and whose lab pioneered the algorithm for the BMI interface—expects that around five years from now, they may be looking at a “self-calibrating, fully-implanted wireless system [that] can be used without caregiver assistance, has no cosmetic impact, and can be used around the clock.”



Improving on Improvements

Now, we head over to San Diego State University (SDSU) to learn about new, more durable electrodes that last longer and transmit clearer signals than current electrodes. This news comes to us from the article Big Improvement to Brain-Computer Interface, published by ScienceDaily with source material provided by SDSU. Here’s a rundown of the study:

The Center for Sensorimotor Neural Engineering (CSNE)—a collaboration of San Diego State University with the University of Washington and the Massachusetts Institute of Technology—is working on an implantable brain chip that can record neural electrical signals and transmit them to receivers in the limb, bypassing [spinal cord] damage and restoring movement.

The improvement here is the material out of which the chip is made. Current “state-of-the-art” electrodes are made from thin-film platinum, but researchers with CSNE are utilizing glassy carbon. According to the article, “This material is about 10 times smoother than granular thin-film platinum, meaning it corrodes less easily under electrical stimulation and lasts much longer than platinum or other metal electrodes.” These electrodes are being used both along the surface of and inside the brain for more complete—single neuron and cluster—data.

A doctoral student in the lab is even taking things one step further. According to the article:

Mieko Hirabayashi is exploring a slightly different application of this technology. She’s working with rats to find out whether precisely calibrated electrical stimulation can cause new neural growth within the spinal cord. The hope is that this stimulation could encourage new neural cells to grow and replace damaged spinal cord tissue in humans. The new glassy carbon electrodes will allow her to stimulate, read the electrical signals of, and detect the presence of neurotransmitters in the spinal cord better than ever before.


I used this to represent “single” vs “cluster.” Is it working?


Every year, new advances in tech are being made. But, seeing tech advancements geared toward improving quality of life for movement inhibited individuals is… well… awesome.

Misconceptions about Night Terrors



There’s little doubt that illnesses, diseases, disorders, and the like can be scary. Moreover, they can be quite terrifying when little is widely known about them. The parasomnia (sleep disorder) known as Night Terrors (NTs) (sometimes, Sleep Terrors) is one of these misunderstood disorders. I first heard about NTs in a grossly misleading psychology class in college. The class, Motivation and Behavior Psych, was much more closely related to neurobiology or neurochemistry—it’s the class that sparked my deep love of neuroscience.

Right. Back to the topic. NTs are often confused with nightmares. It’s pretty widely known that nightmares suck donkey testicles; they’re vivid, scary, uncomfortable, and usually leave lasting impressions upon waking. In my worst nightmare, I awoke to someone standing over me while I slept. It was so real that, when I actually woke up, I thought the person was there. I couldn’t move, I was scared to open my eyes. It was only when I realized my dogs were calmly sleeping that I knew no one else was in the room.

Vivid? Check. Terrifying? Check! Seared into my memory? Super check. Gargling on the sack of a donkey? You bet! The nightmare, Dave, not me. Seriously. NT? Absolutely not.

So, what’s the difference between NTs and nightmares, and why is it important to know? I’m glad you asked!



Core Differences

Differences between nightmares and NTs range from when during sleep they occur to electroencephalography (EEG) activity. The point here is, the two are fundamentally different. Nightmares, and even nightmare disorder, are “different from NT [and consist of] a lowered motor activity […] the person is not confused on waking up, remembers the nightmares in detail, and the disordered orientation immediately recovers”.² The authors of the article, “Treatment Approach to Sleep Terror: Two Case Reports,” give a robust definition of NT:

NT is classified under parasomnias characterized with sudden attacks of fear associated with the increase in autonomic signs following crying and loud shouting during the first few hours of sleep during the delta stage (associated with the NREM period). Clinically, the person wakes up screaming, scaring, or performing sudden and self-destructive acts (like jumping, running, crashing into something, harming the person beside). The person is non-responsive to the external stimulus during this period […] The person may predominantly experience cognitive impairment signs, such as disordered orientation and memory problems, confusion, and fear on waking up. In addition to these mental symptoms, somatic symptoms associated with the overstimulation of the autonomic system, such as palpitation, sweating, shaking, skin rubor, pupillary response, may appear. While adults generally cannot remember what they experienced the previous night, children can indistinctly remember their fear.²

I think that about sums it up. So, while nightmares generally occur during REM, NTs occur prior to REM, during NREM—or non-rapid eye movement. The result of two independent sleep studies stated that NT episodes “begin exclusively during [NREM] sleep, most frequently during slow-wave sleep (SWS), and should not be considered an acting-out of a dream” and that “consciousness is altered during sleepwalking/sleep terror episodes.”¹ NT is most common in children, with a prevalence of ~3-15 percent, and decreases significantly with age, although, “it seems probable that the notion of sleep terrors is largely unknown to people, therefore different types of nocturnal attacks can be reported as sleep terrors.”¹

The difficulty of obtaining more concrete statistics on NT is itself a strong indication that NT remains a misunderstood parasomnia.



What Triggers NT?

Another great question! Both genetics and environmental stimuli play a role in NT:

It is well known that sleepwalking and night terrors run in families. Based on the study of familial incidence of sleepwalking and sleep terrors, [researchers] proposed that sleepwalking and night terrors share a common genetic predisposition, although the clinical expression of symptoms of these parasomnias may be influenced by environmental factors.¹

The authors of “Treatment Approach…” explain that “the risk of occurrence [of NT] among the first-degree relatives is ten folds more compared with those with no family history of NT.”²

Cases of NT have also been reported after stressful and/or significant life events, including divorce—personal or parental—death of a loved one, changing jobs or getting let go, changing schools, etc.


Why Does This Matter?

Part of why this matters is because additional research in NT could point to treatments aside from “making bedrooms safe” for NT sufferers or being prescribed benzodiazepine, which can cause rebounds or addiction. There is, of course, another reason it would be good to be knowledgeable about NT: “NT is highly associated with schizoid, borderline and dependent personality disorder, post-traumatic stress disorder, [and] generalized anxiety disorder.”²

Which is not to say NT sufferers have those disorders. In fact, when individuals with NT were compared to individuals who only demonstrate somnambulism (sleepwalking), a psychiatric diagnosis was common in both groups, yet no one in either group had been diagnosed as psychotic:

In contrast to sleepwalkers, [individuals with NT] demonstrate higher levels of anxiety, obsessive-compulsive traits, phobias, and depression. The Minnesota Multiphasic Personality Inventory (MMPI) profile suggests an inhibition of outward expression of aggression. A psychiatric diagnosis was established in 85 percent of patients with current night terrors. Although their psychopathology was more severe than in patients with sleepwalking, none of them was diagnosed as psychotic.¹

Knowing the difference between NT, other arousal parasomnias, and regular ole nightmares can make a difference to the individual suffering from NT. Because a significant symptom of NT is sleepwalking, and because NT sufferers have increased mobility, they could harm themselves or others.


¹ Szelenberger, Waldemar, Szymon Niemcewicz, and Anna Justyna Dąbrowska. “Sleepwalking and Night Terrors: Psychopathological and Psychophysiological Correlates.” International Review of Psychiatry 17.4 (2005): 263-70.

² Turan, Hatice Sodan, Nermin Gunduz, Aslihan Polat, and Umit Tural. “Treatment Approach to Sleep Terror: Two Case Reports.” Noro Psikiyatri Arsivi 52.2 (2015): 204-06.

Why Understanding Dreams Matters



What’s the big deal with dreams, and why is it so important we figure it out? Well, because when we dream, our brain is doing something. So, what if what it’s doing is helping or hurting us? The science behind dreaming—especially the physiology and how it relates to health—is a subject we just don’t know a whole lot about.

The topic of dreams has been a hot one for so long that you can trace it back to Ancient Greece, where people believed dreams foretold the future. The beliefs about dreams are numerous and range from ridiculous to plausible, including:

  • Dreams are a manifestation of the unconscious (show of hands, Freudians)
  • Dreams stimulate problem solving
  • Dreams help process negative emotions
  • Dreams are the collecting/discarding of brain trash (that’s very unjustly put, I admit)
  • Dreams consolidate short-term memories into long-term memory
  • Dreams are a byproduct of neural impulses

Etc., etc., etc.

You see where I’m going with this? So, who’s right? Put your hand down, Dave, you don’t know the answer. There is no answer. Part of the reason for that is because it’s brain-stuff. I feel like I shouldn’t have to say more, but I will. Of all the sciences, neuroscience is probably the one with the fewest answers discovered so far. And that’s not a slam on neuroscience—for which I have a deep love—it’s a testament to the human brain.



Why Memory Consolidation is so Appealing

The theory of dreams being a byproduct of memory consolidation/processing makes very good sense to me, despite the nay-sayers. Part of the reason I’m so attached to this theory is because I can see it working. Take the elements in this dream I had, for instance:

  • I was fresh out of college and the only job I could get was as a manager of a local supermarket
  • I had crippling student loans
  • I had just come on shift when there was a zombie outbreak, so I had to lead my employees to safety
  • I had to run to my car to retrieve my revolver

That dream was both awesome and hilarious. It’s one of my favorites. I am also planning to write a book about it, so hands off my dream! Now, compare the dream elements with my reality:

  • When I was fresh out of college, I worked a retail job where I was in management
  • I have slightly less-than-crippling, although no less daunting, student loans
  • I had been marathon-watching Ash vs. The Evil Dead the day/evening before the dream
  • I keep a pistol in my car (this is a judgement-free zone)

This ability to connect dream elements with real-world elements gives me all the (admittedly anecdotal) proof I need. But, you’re not me, so I don’t know if the same holds true for you.



Why All the Hubbub About Dreams?

Many people still believe that dreams mean something, whether it’s the expression of the unconscious mind or symbolism of what one might be stressing over, looking forward to, etc. And, if you fall into that category, that’s fine. Remember, judgement-free zone.

Learning about dreams—both causes and the result of REM sleep deprivation—can also lead to additional information on such mental health issues as depression, migraines, and the development of mental disorders. I want to note here that, in some cases, REM sleep deprivation has been shown to improve the state of depressive patients.



No matter what you believe dreams to be or not be, mean, or not mean, I’d like to think that we can all agree on this: The more we discover about the nature, physiology, and effects of dreaming, the more ammunition we may have against some types of mental health issues. And that, my friends, would be a beautiful thing indeed.

Concerning Reality



In 2005, an article regarding the importance of visual input vs. auditory input in relation to spatial information gathering came out of Stanford University School of Medicine’s Department of Neurobiology. I hope that sentence was as fun to read as it was to write. To get us started on this topic, I want to throw a vocabulary term at you: visual capture. In the aforementioned article, visual capture is described as what happens when “our localization of a stimulus based on nonvisual information is ambiguous or conflicts with visual localization of the same stimulus, [leading] our nonvisual percept of location to sometimes draw to the visually identified location.”



Break it Down

This article that I keep blabbing about is called “Why Seeing Is Believing: Merging Auditory and Visual Worlds.” In it, the authors state that scientists have traditionally attributed the dominance of visual capture to an inherent advantage of the visual system. The article argues this: “Visual capture occurs not because of any inherent advantage of visual circuitry, but because the brain integrates information optimally, and the spatial information provided by the visual system happens to be the most reliable.”

That’s not to say visual input is always dominant. Optimization is the key point the authors are trying to make, so while visual input is optimal for spatial information, auditory input is optimal for temporal processing.

The authors give a simple, explanatory description of visual capture of auditory space. The short paraphrasing of it is this: If you’re sitting in front of the TV watching a poorly dubbed movie, it’s likely that by the end of the movie you will perceive a synchronization of what you’re seeing and what you’re hearing. The longer this auditory misalignment goes on, the greater the chances that your brain will sync the audio to the visuals for you. So, why are we syncing information by what’s on the TV screen instead of what’s coming from the speakers?

“The reason that visual information should dominate space perception,” the authors explain, “can be appreciated intuitively: visual spatial information is exceptionally reliable and precise.” I just want to note that this is likely less true for the visually impaired. Because visual input is generally more reliable where spatial information is concerned, it becomes the favored (and overriding) input format.

The authors of this article conclude that there are two possibilities for visual dominance in spatial information:

The brain could have evolved to depend more heavily on visual stimuli than on auditory stimuli, regardless of the stimulus conditions, or the brain might weigh information in proportion to its reliability and integrate it in a statistically optimal manner. Results from psychophysical studies support the idea that perception, at least, uses the latter strategy.
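The “statistically optimal” strategy the authors describe is usually modeled as inverse-variance weighting: each sense’s estimate counts in proportion to its reliability. Here’s a toy sketch of that idea (the numbers are made up for illustration, not taken from the article):

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Inverse-variance ("statistically optimal") cue combination.

    Each cue is weighted in proportion to its reliability (1 / variance),
    so the more reliable cue dominates the combined estimate.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# Hypothetical numbers: vision localizes a ventriloquist's voice at 0 degrees
# of azimuth with low variance; hearing says 10 degrees with high variance.
location, uncertainty = combine_cues(est_a=0.0, var_a=1.0, est_b=10.0, var_b=9.0)
print(round(location, 2))  # ≈ 1.0: the percept is pulled almost all the way to the visual location
```

Notice that nothing in the formula privileges vision per se; vision wins only because its variance happens to be smaller, which is exactly the authors’ point.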

Speaking of perception…



Our Sensations may Seem Accurate; Our Perceptions are not

In a special edition of Scientific American, Sharon Guynup wrote an introduction about how illusion distorts our sense of perception. After learning how vital visual input is, this seems pretty acceptable. Guynup explains that, “Our brain—not our eyes—is the final arbiter of ‘truth.’ We are wired to analyze the constant flood of information from our senses and organize that input into a rational interpretation of our world.”

Illusions disrupt this process. Two of the contributors to the special Scientific American issue, Susana Martinez-Conde and Stephen L. Macknik, explain: “It is a fact of neuroscience that everything we experience is a figment of our imagination.”



So, if we perceive truth from the optimal stimulus input mechanism, but our perceptions are not accurate, on what do we rely? What is an illusion, and what is simply misaligned information?

Discovering Power



Science is similar to a good book. You latch on to a subject and study it and every time you blink, there’s something new—new research or studies, new medicines, new therapies, new technologies, new, new, new. It’s like opening Ulysses, reading the word “contransmagnificandjewbangtantiality,” and coming up with a new meaning every time. Don’t pretend that’s not your new favorite word.

Over the past decade and a half, there have been scientific breakthroughs in medicine and technology that seem like—or at one point were—science fiction. Isn’t that fantastic? Can the same be said if we move a bit over to the more fantastical side of sci-fi?



Well sure, because…


There is a New Method to Levitate Objects

When I learned this, my first thought was, “There are already levitation methods?” followed closely by, “Jean Grey, here I come.” Right. So, the two means of levitation that physicists were utilizing previously are magnetic levitation and optical levitation. As the names imply, these forms of levitation have their limits—magnetic to magnetized items and optical to objects that can be polarized by light.

Frankie Fung and Mykhaylo Usatyuk, third- and fourth-year UChicago undergrad physics students respectively, must have wanted more. The two led a team of researchers to figure out this new levitation technique, which utilizes a warm plate and cold plate in a vacuum chamber. The way the technique works is:

The bottom copper plate [is] kept at room temperature while a stainless-steel cylinder filled with liquid nitrogen serve[s] as the top plate. The upward flow of heat from the warm to the cold plate [keeps] the particles suspended indefinitely.

As Fung, the study’s lead author, describes it, “The large temperature gradient leads to a force that balances gravity and results in stable levitation. We managed to quantify the thermophoretic force and found reasonable agreement with what is predicted by theory. This will allow us to explore the possibilities of levitating different types of objects.”
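The balance Fung describes can be sketched as a simple equilibrium condition. (This is a generic back-of-the-envelope scaling, not the team’s actual model: in the free-molecule regime, the thermophoretic force on a particle grows with its cross-section and the temperature gradient, while its weight grows with its volume.)

```latex
% Levitation condition: upward thermophoretic force balances gravity
F_{\mathrm{th}} \propto r^{2}\,|\nabla T|,
\qquad
W = \tfrac{4}{3}\pi r^{3}\rho g,
\qquad
\text{stable levitation when } F_{\mathrm{th}} = W .
```

Because the thermophoretic force scales as r² while the weight scales as r³, the balance favors small particles, which is consistent with the team suspending particles rather than macroscopic objects.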

The goal of this research, of course, is not to find out how to mimic telekinetic ability, but to explore its usefulness for space applications and “for the study of particle dynamics and interactions in a pristine, isolated environment,” according to the research team’s paper. But, as with any progress in science, third parties can use the research and technique for different purposes.



In a 2008 article in Discover Magazine explaining the claim that parapsychological phenomena are inconsistent with the known laws of physics, Sean Carroll says that “there are only two long-range forces strong enough to influence macroscopic objects—electromagnetism and gravity.” Electromagnetism is limited and impractical, but gravity? That’s getting much closer, considering Fung and Usatyuk’s research.

And, that’s not the only thing giving us a potential look into X-Men remastered, because….


Scientists are Delving into the Mysteries of Time Perception

Time perception is tricky business, and one to which scientists currently have few answers. It’s a subject being pursued by both journalists and scientists. Maybe one of the most useful pieces of information is that the brain’s clock can be easily swayed by anything from emotion to illness. Take tachypsychia, for example: a perceptual slowing of time during high-stress situations. It afflicts many military personnel, first responders, and pro fighters.

Then there was a series of five experiments done at University College London by Nobuhiro Hagura. Hagura found that our ability to process visual information speeds up as we are preparing to move.

What if we knew what parts of the brain—all signs point to multiple locations—work toward time perception and learned to manipulate our ability to speed up visual information processing? Could we stop time within ourselves long enough to solve problems or figure out a reactionary plan to a bad situation? Could we manufacture a drug we could give to others that would induce in them a stopped-time scenario while we moved about with impunity for the duration the drug was active?

Maybe. The possibilities are endless.


Body, or Brain?


I try to keep on top of trending topics. Short of that, I just shoot for interesting. I think this blog post hits both areas. Let’s get real: When is talking about health (read: diets) not trending? Never? Correct! So, answer these questions:

  1. Would you change your lifestyle to better benefit your brain or your body?
  2. Can you do both?

If your answer was anything other than, “I don’t know, RJ. Tell me more!” then think again! I’m going to tell you more! You see, I’ve always heard, read, and been told by personal trainers that consuming food every three hours or so—whether it’s three meals and three snacks or six small meals, really however you want to break it down—will boost metabolism and is better for your body. From a fitness or weight-loss aspect, anyway. And, for years, I understood this to be basically universally agreed upon. Then, I watched neural stem cell researcher Sandrine Thuret’s presentation in the TED Talks series.




So, for those of you that may not be interested in researching neurogenesis, I’ll give you the short of it. Dr. Ananya Mandal, M.D., breaks down neurogenesis in this way:

The term neurogenesis is made up of the words “neuro” meaning “relating to nerves” and “genesis” meaning the formation of something. The term therefore refers to the growth and development of neurons. This process is most active while a baby is developing in the womb and is responsible for the production of the brain’s neurons.

The development of new neurons continues during adulthood in two regions of the brain. Neurogenesis takes place in the subventricular zone (SVZ) that forms the lining of the lateral ventricles and the subgranular zone that forms part of the dentate gyrus of the hippocampus area. The SVZ is the site where neuroblasts are formed, which migrate via the rostral migratory stream to the olfactory bulb. Many of these neuroblasts die shortly after they are generated. However, some go on to be functional in the tissue of the brain.

Evidence suggests that the process is key to functions such as learning and memory. Studies have shown that new neurons increase memory capacity, reduce the overlap between different memories, and also add information regarding time to memories. Other studies have shown that the learning process itself is also linked to the survival of neurons.

That was written back in 2014, before Thuret’s presentation. Now, we can be fairly confident that spatial recognition could be added to Dr. Mandal’s list of key functions aided by neurogenesis. Neurogenesis is good, is what I’m saying. And it’s something that you can control, to a degree, through diet, aerobic exercise, learning, sex, sleep, etc.



So, where does the body vs brain question come into play, you ask? Well, neurogenesis and fitness have…


Conflicting Views About How and/or When to Restrict Calories

The one thing both neurogenesis and fitness (or weight-loss) tips have in common is cutting calories. But, they differ in the how and when of it. As I’ve mentioned, fitness/weight-loss tips—such as those from Livestrong and other fitness industry mouthpieces—glorify the grazing method. A method, I might add, that has little to no scientific basis, and thus is not the basically universally agreed-upon theory I had thought. Don’t believe me? Ask the NY Times. Don’t believe them? Well, how about…

It is generally the calorie cutting, sometimes paired with grazing, that does the good. The same calorie cutting is desirable as an aid to neurogenesis. In a blog published by Stanford University, the argument for dietary restriction (only eating about 70% of the total daily intake) is made. Here’s where we start getting our conflict:

[Dietary Restriction (DR)] is a drastic strategy: it takes tremendous willpower to limit calories to 70% of the normal diet. Furthermore, DR is difficult to implement properly; there is a risk of starvation if the diet is unbalanced, which can have wide-ranging consequences. Luckily, similar effects to DR have been found in mice by simply increasing the amount of time between meals.
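To make that 70% figure concrete, here’s the arithmetic (the 2,000-calorie baseline is a hypothetical round number of my choosing, not one from the Stanford post):

```python
def dr_target(normal_daily_kcal, fraction=0.7):
    """Dietary restriction target: eat only `fraction` of normal daily intake."""
    return normal_daily_kcal * fraction

print(dr_target(2000))  # a 2,000 kcal/day eater would drop to roughly 1,400 kcal/day
```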

Similar results by increasing time between meals, you say? Ok, cool. Let’s explore that further by looking at an article from the journal Neural Plasticity. This article explores the role of diet on neuroplasticity (also called brain plasticity). What we want, specifically, is the role of spacing out meals and how that affects neurogenesis. According to the article:

Many studies suggest that Intermittent Fasting (IF) results in enhancement of brain plasticity and at cellular and molecular level with concomitant improvements in behavior […] Furthermore, the effects of IF following excitotoxic challenge associated with lower levels of corticosterone, lead not only to decreased hippocampal cell death, but also to increased levels of hippocampal BDNF and pCREB and reversal of learning deficits.

“But RJ,” you might be saying. “What does neuroplasticity have to do with neurogenesis and where have my underpants gone?” Well friend, I can’t help you with that second part, but here’s what I’ll do. I’ll give you a wee bit of an explanation as to why I included the neuroplasticity bit. Neuroplasticity mainly concerns the strengthening of new or different pathways (or connections) in the brain. That’s an extremely unjust way to describe it, but it’s the simplest.



Neuroplasticity and neurogenesis go hand in hand. Phosphorylated cAMP response element-binding protein (that’s pCREB) promotes brain-derived neurotrophic factor (that’s BDNF), “which induces neurogenesis, especially in the hippocampus,” according to Ethan Rosenbaum. “As a result, mice with decreased levels of pCREB or any other promoter of BDNF have decreased spatial navigation skills and decreased memory retention […] due to the neuronal death in the hippocampus.”

Spatial navigation? Memory retention? By God, those are products of neurogenesis! Are you following the cycle? I hope so, because I refuse to hold your sweaty hand. So, which would you change your lifestyle for? Brain, or body?

Well, I sure hope your answer was “both.” Because you can do it.