Treating Neurodegenerative Diseases with BCI

If you’d asked me a few short weeks ago whether I thought neurogenesis in humans continued throughout the lifespan (as so often the topic comes up in the most casual of conversations), I’d have, with 100 percent confidence, said “yes.”

That’s right friends, strangers, guy in that chair over there… Today, we’re talking about one of my favorite subjects! Brains.

Recently, I found out that adult hippocampal neurogenesis (AHN) in humans might not, in fact, be a real thing.1 This is shocking! So then I wondered: Could we potentially use a brain-computer interface (BCI) as an artificial neurogenesis therapy for individuals suffering the effects of neurodegenerative diseases (such as Alzheimer’s), psychiatric disorders, and age-related cognitive dysfunctions?

But what is AHN, why is it important, and how does BCI fit in?

The Importance of Adult Hippocampal Neurogenesis in Humans

Neurogenesis is basically what it sounds like—the birth of new neurons. It starts in the womb and may continue until about 13 years of age2 or until death. Adult neurogenesis (what we’re focused on) has been corroborated in mice, songbirds, and non-human primates. While there is considerable evidence of adult neurogenesis in humans, this is where things get dicey. The methodology currently used isn’t ideal. For example:

  • Carbon dating can mislabel cells, marking dying cells as dividing cells and giving a false positive for neurogenesis; protein markers can likewise mislabel cell types (glia as neurons)1
  • Studies don’t particularly account for cellular degradation in post-mortem samples, nor for the cognitive health of the donor before death, which can lead to erroneous findings1

The extreme variation in findings across studies using similar methodologies is another head-scratcher. This is why proving AHN in humans is so difficult. Finding a reliable way to measure potential AHN in real time in living subjects via imaging seems to be the way to go, but no such method has been available thus far.

Anyway, based on both animal and (contentious) human studies, adult neurogenesis is thought to take place in two areas of the brain: the subventricular zone, and the dentate gyrus of the hippocampus. AHN is thought to be responsible for things like learning, memory retention, and spatial memory (which is the ability to navigate your environment and remember how to get to the grocery store).

Now… neurodegenerative diseases, psychiatric disorders, and age-related cognitive dysfunctions all have something in common: in both human studies and studies1 using animal models in which AHN has been shown to be present, those with the abovementioned ailments all showed decreased neurogenesis. Based on this, we could hypothesize that human AHN therapies could provide symptom alleviation (or potential condition improvement) in conditions such as depression, Alzheimer’s, and age-related memory loss. According to ADULT NEUROGENESIS IN HUMANS: A Review of Basic Concepts, History, Current Research, and Clinical Implications:

  • “Consecutive animal model studies have indicated the potential of neurogenesis-based targets in drug development for depression due to the implied role that neurogenesis plays in the mechanisms of actions of many antidepressant drugs.
  • “A neurogenic drug […] was found to reduce severity of the symptoms in patients with major depressive disorder (MDD) compared to placebo, but the robustness of the results was limited by small sample size and skewed test-control distribution of the study…
  • “Metformin—[an FDA-approved] drug for the treatment of Type 2 diabetes—was reported to induce neurogenesis in a rat model and in human neuronal cell cultures, but no clinical trials have been conducted to support these results. Prolonged treatment with this drug in humans with diabetes, however, was found to have an antidepressant effect and appeared to protect patients from cognitive decline.”1

If AHN in humans eventually is proven, endogenous cell replacement or neuronal progenitor/stem cell transplant therapies could be a viable source of treatment.6 However, regardless of the existence of AHN in humans, prevention of cognitive decline is a noteworthy effort. But what about alternate treatment solutions in the absence of AHN in humans?

BCI as a Treatment for Cognitive Disorders

BCI has been growing in popularity for some time and has seen both clinical and practical use for decades: cochlear implants, the Utah array, deep brain stimulation. But it seems that a lot of BCI solutions, and even studies, tend toward mobility rather than cognition. For instance, BCI studies in stroke patients primarily focus on motor rehabilitation; however, one study3 found a link between motor, cognitive, and emotional functions that revealed promising evidence of the benefits of BCI in treating post-stroke cognitive impairments (PSCI). I want to point something important out here: BCI mobility rehabilitation has yielded very good results for patients; however, patients with a certain degree of PSCI can’t participate in this type of rehabilitation. Your brain must be able to send, receive, and decode signals for BCI to work, which is why cognitive rehabilitation is so important.

Part of what led to studying BCI in PSCI is that since the “effects of BCI-based neurofeedback training have been seen to improve certain cognitive functions in neurodevelopmental and neurodegenerative conditions such as [ADHD] and mild cognitive impairment (MCI) in elderly subjects, respectively, it is therefore also likely to generalise to other dysfunctions, including PSCI.” While more research is needed in this area, the foundation has undeniably been set. BCI could potentially act as a treatment in cognitive and some psychological disorders.

A Look at Current BCI Projects

There are multiple companies in the BCI industry, though most seem focused on entertainment and mobility. For example, NextMind’s Dev Kit is a product available for consumer purchase that allows individuals to interact with the digital world hands-free. I recommend watching the launch talk—very cool. While the Dev Kit is geared mostly toward entertainment—video games, interacting with the TV, and such—being able to move and communicate through digital space offers a lot of benefits for mobility- and speech-impaired individuals.

Kernel’s Flux, however, is a different beast. According to their website, “Kernel Flux is a turnkey magnetoencephalography (MEG) platform based on optically-pumped magnetometers (OPMs), which provides real-time access to the intricate brain activity underlying functions such as arousal, emotion, attention, memory, and learning.” It’s a tool that’s been used in studies to help determine areas of the brain affected by such conditions as Parkinson’s4 and mild MCI5 related to dementia of Alzheimer’s type (DAT). The conclusion of the latter study found that “MEG functional connectivity may be an ideal candidate biomarker for early, presymptomatic detection of the neuropathology of DAT, and for identifying MCI-patients at high risk of having DAT.”

If Kernel is providing the means of early detection in neurodegenerative diseases and conditions linked with cognitive decline, is it possible that the same tool could be used to detect AHN in humans? And more importantly, if AHN isn’t really real, who is going to step up to the plate with BCI focused on the treatment of neurodegenerative diseases? Elon Musk? Heh. Wait…

Could Neuralink Produce a Synthetic Neurogenesis Therapy?

Neuralink, an Elon Musk company, is working on cutting-edge BCI technology. They’ve created an implant that uses tiny threads inserted into the brain to receive neuronal signals. The implant amplifies the signals, then converts them to digital code, which is sent via Bluetooth to a mobile app. The threads can also send signals to stimulate neurons and identify some neurons by shape.
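As a rough mental model of that signal path (amplify, digitize, transmit), here's a toy Python sketch. To be clear: every gain, input range, and bit depth below is made up for illustration; this is not Neuralink's actual firmware or specs.

```python
# Toy sketch of the signal path described above: amplify a weak neural
# signal, quantize it with an ADC, and pack it into bytes for a radio link.
# All gains, ranges, and bit depths are illustrative, not Neuralink's specs.

def amplify(samples_uv, gain=100.0):
    """Scale microvolt-level neural signals up to a measurable range."""
    return [s * gain for s in samples_uv]

def digitize(samples, full_scale=5000.0, bits=10):
    """Quantize amplified samples into unsigned integer ADC codes."""
    levels = (1 << bits) - 1
    codes = []
    for s in samples:
        clamped = max(-full_scale, min(full_scale, s))  # clip to ADC range
        codes.append(round((clamped + full_scale) / (2 * full_scale) * levels))
    return codes

def packetize(codes):
    """Serialize ADC codes into bytes, as wireless transmission requires."""
    return b"".join(code.to_bytes(2, "big") for code in codes)

raw = [12.5, -30.0, 85.0, -5.2]  # microvolts; a made-up waveform snippet
packet = packetize(digitize(amplify(raw)))
print(len(packet))  # 8 bytes: two per sample
```

The real device does this for many electrode channels simultaneously and in real time, which is where the hard engineering lives.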

While Neuralink’s initial goal is to facilitate digital communication and interaction in paralysis patients, they’re ultimately hoping for potential restoration of motor function in said patients, treatment of cognitive and psychological disorders, restoration of vision, and more. I highly recommend watching the launch of N1 for a look at the science and engineering behind all of it, and I recommend watching the progress update to get a look at the Link and its specs. It. Is. Very cool. But what does it have to do with neurogenesis?

Well, “Progressive degeneration of specific neuronal types and deterioration of local neuronal circuitry are the hallmarks of degenerative neurological diseases, such as [Parkinson’s, Alzheimer’s, Huntington’s, and ALS].”6 Identification of these specific neuronal types is key in any neurogenesis therapy (kinda like gene therapy!), whether transplanting genetically engineered cells into target regions of the brain or using software programmed to mimic specific neuronal signals in place of lost or damaged neurons.

Because Neuralink’s device can send, decode, and receive signals and identify neurons, and because we know which specific neurons relate to specific neurodegenerative diseases (e.g., Huntington’s degrades striatal medium spiny neurons and cortical neurons), I opine that, yes, Neuralink’s device could act as a synthetic type of neurogenesis therapy. There’s obviously an extreme amount of data that would have to be collected, though, given that two neurons of the same type in a person’s brain giving the same directive (or “action potential”) can do so in two different ways, and this varies from person to person. Neuralink’s data processing is remarkable and quite robust, and since it’s already individually tuned (so to speak), it’s essentially made to be a targeted therapy.

Furthermore, with the ability to process so much data simultaneously, the Link could additionally help identify neurons or neural circuitry affected by neurological disorders or damage to provide effective treatment therapy. It could also help with schizophrenia, wherein erroneous information processing due to abnormal dendritic branching and synaptic connections could be corrected or overwritten.1

There’s an exceptional amount of potential with this device and, while it might sound like science fiction, it seems more to me like it’ll be reality within the next 10-20 years given where technology is at now and the rate of progress.

Whew! It took a long time, but we got there. Now, enjoy a macaque playing Pong with his brain.

Video (and MindPong) courtesy of Neuralink

Sources

1 ADULT NEUROGENESIS IN HUMANS: A Review of Basic Concepts, History, Current Research, and Clinical Implications

2 The controversy of adult hippocampal neurogenesis in humans: suggesting a resolution and way forward

3 BCI for stroke rehabilitation: motor and beyond

4 Hypersynchrony despite pathologically reduced beta oscillations in patients with Parkinson’s disease: a pharmaco-magnetoencephalography study

5 A multicenter study of the early detection of synaptic dysfunction in Mild Cognitive Impairment using Magnetoencephalography-derived functional connectivity

6 Neurogenesis as a potential therapeutic strategy for neurodegenerative diseases

Genome Editing Advancements

I know, I know. It’s been days. But, you can relax now. I’m here. Oh, and Dave, of course. Dave is always here. Today we (but mostly I) are going to discuss the fun, exciting, and controversial topic of genetic manipulation! I’ll hold for applause. Specifically, we’re talking about genome editing. The first quarter of this year (that’s 2017, in case you’re reading this in the future or are a time traveler) has seen exciting news coming from the genetics field, and with the help of CRISPR (that’s clustered regularly interspaced short palindromic repeats) gene editing, advancements are being made pretty swiftly.

CRISPR-Cas9

So, CRISPR jumped on the scene as a more affordable, more precise, and quicker way to manipulate genes. CRISPR is made up of an enzyme (that’s the Cas9 part, which is often dropped from the acronym) and a bit of guide RNA. I’m going to share with you my favorite description of the CRISPR process, which was written by Sarah Zhang in this article:

Cas9 is an enzyme that snips DNA, and CRISPR is a collection of DNA sequences that tells Cas9 exactly where to snip. All biologists have to do is feed Cas9 the right sequence, called a guide RNA, and boom, you can cut and paste bits of DNA sequence into the genome wherever you want. […] Cas9 can recognize a sequence about 20 bases long, so it can be better tailored to a specific gene. All you have to do is design a target sequence using an online tool and order the guide RNA to match. It takes no longer than a few days for the guide sequence to arrive by mail.
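To make "design a target sequence" a little more concrete, here's a minimal Python sketch of the core search: find every 20-base stretch that sits immediately upstream of an "NGG" motif (the PAM that the standard Cas9 enzyme requires next to its target). The DNA string and function name here are mine, purely for illustration; real design tools also score each candidate for off-target matches across the whole genome.

```python
# Minimal sketch of guide-target selection for Cas9: scan a DNA string for
# 20-base stretches immediately followed by an "NGG" PAM motif.
# The sequence below is made up; this is not a real design tool.

def find_guide_targets(dna, guide_len=20):
    """Return (position, 20-mer target, PAM) tuples for each NGG PAM found."""
    hits = []
    for i in range(guide_len, len(dna) - 2):
        pam = dna[i:i + 3]
        # Cas9 (SpCas9) recognizes NGG: any base followed by two guanines.
        if pam[1:] == "GG":
            hits.append((i - guide_len, dna[i - guide_len:i], pam))
    return hits

dna = "ATGCTAGCTAGGATCCGATCGATTACGGAGCTAGCTAGCTAAGGTT"
for pos, target, pam in find_guide_targets(dna):
    print(pos, target, pam)
```

Each printed line is a candidate guide: order an RNA matching the 20-mer, and Cas9 will cut near the spot where that 20-mer abuts its PAM.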

The benefits of using CRISPR gene editing extend from the agricultural side of things to health and wellness in humans, our pets, and potentially our future children. According to this article in New Scientist, “David Ishee, a dog breeder from Mississippi, told the US Food and Drug Administration that he planned to use CRISPR gene editing to fix a mutation that makes Dalmatians prone to kidney disease.” Want more? In a Wired article by Amy Maxmen, we can see a bigger rundown of the goings-on with CRISPR:

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used CRISPR to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals.

A lot of good could come from the ongoing study and use of CRISPR, but I know the one thing you’re all wondering…

The Question of Designer Babies

Will this breakthrough lead to the ability to produce designer babies? This is the $64,000 question, right? And, also… Is it ethical? When will it be possible? What are the consequences? Let’s start with the ethics aspect.

Previous studies using CRISPR gene editing in human embryos have been done using only abnormal embryos—as in, embryos that couldn’t actually become children. But, this route was ineffective. The embryos’ genetic abnormalities don’t give an accurate look at what might be achievable in healthy embryos. So, when all else fails, there must be a compromise.

At the Third Affiliated Hospital of Guangzhou Medical University, a team has switched from abnormal embryos to “normal embryos derived from immature eggs donated by people undergoing IVF,” according to Michael Le Page. “Immature eggs like these are usually discarded by IVF clinics, as the success rate is much lower than with mature eggs. However, children have been born from such immature eggs.”

Toeing the ethics line? Maybe. But, as is the case with a deceased organ donor’s organs, if one person isn’t using it, someone else can.

While using CRISPR gene editing could lead to designer-baby manufacturing, we’re a long way from that. Which means we’re a ways off from discussing the consequences. For the most part, current embryonic studies are focused on isolating and editing genetic disorders. The aforementioned team at Third Affiliated Hospital, for instance, is focused on the genetic disorders causing favism and beta-thalassemia, both of which affect the blood.

At the current stage, these types of studies are running into their own problems—primarily mosaicism. Mosaicism occurs when, during cell division after editing, some cells end up carrying repaired DNA while others still carry the unrepaired version.

While progress is being made in the genome-editing arena, there is still quite a road that needs to be traveled. Fortunately (or unfortunately, depending on your outlook), science is traveling that road on a high-speed rail instead of a horse-drawn carriage.

Advancements and Mobility in Robotics

We’ve all heard stories or seen movies or TV shows about the government making super-soldiers with the use of genetic splicing or enhancement drugs or exoskeletons. It’s a subject both troubling—ethical and moral implications aside—and mesmerizing. Who wouldn’t want to be super-strong or super-fast or have enhanced senses? Well, guess what? No, not that. Probably not that, either. Goddamnit, Dave, you’ll only ever be able to do half a cock pushup. Come on. I’m sure the rest of you are getting pretty close, so I’ll help you along.

Massive steps in functional robotics have been taken that can improve quality of life for those with limited or minimal mobility. And not everyone knows it.

Roam Robotics will Blow Your Mind

Roam Robotics is recreating the exoskeleton design—as in making it lightweight, affordable, and multifunctional. Their products will be applicable across the board, including industrial assist, mobility assistance, and performance enhancement. Founder and CEO, Tim Swift, breaks everything down in this 8-minute video, which is very interesting and, closer to the end, a little unnerving.

So, maybe the performance-enhancing aspect is exciting. If you love to hike, this exoskeleton can help you hike farther. Like to climb? Then climb higher! Are you a runner? Run better—or at least look less stupid doing it. Yes, this application is pretty cool, but it’s also a bit superficial. For military use, I can see how exoskeleton use gets both more unnerving and potentially more beneficial. But, for mobility? Absolutely, 100-percent yes.

It’s nice to be extra-capable of movement and it’s even nicer to think our soldiers have an advantage, but knowing there is a viable product heading to the market that can better someone’s quality of life—and I mean someone who really needs it—is awesome. In the biblical sense.

Which brings us to…

IHMC and the Cybathlon

IHMC—Institute for Human and Machine Cognition—is a Florida University System not-for-profit research institute pioneering in “technologies aimed at leveraging and extending human capabilities [utilizing a] human-centered approach […] that can be regarded as cognitive, physical, or perceptual orthoses, much as eyeglasses are a kind of ocular orthoses,” according to IHMC’s website.

These systems fit the human and machine components together in ways that exploit their respective strengths and mitigate their respective weaknesses. The design and fit of technological orthoses and prostheses requires a broader interdisciplinary range than is typically found in one organization, thus IHMC staff includes computer scientists, cognitive psychologists, neuroscientists, linguists, physicians, philosophers, engineers, and social scientists of various stripes, as well as some people who resist all attempts to classify them.

IHMC’s research covers any and all things that will eventually become Skynet, including:

  • Artificial intelligence
  • Cognitive science
  • Knowledge modeling and sharing
  • Human interactions with autonomy
  • Humanoid robotics
  • Exoskeletons
  • Advanced interfaces and displays
  • Cybersecurity
  • Communication and collaboration
  • Linguistics and natural language processing
  • Computer-mediated learning systems
  • Intelligent data understanding
  • Software agents
  • Expertise studies
  • Work practice simulation
  • Knowledge representation
  • Big data and machine learning

And more…

Back in November of The-Year-that-Killed-Every-Celebrity-We-Loved, IHMC teamed up with 26-year-old Mark Daniel for the highly-recognized-as-a-thing-that-exists Cybathlon. Last year marked the first ever Cybathlon—but seriously, please recognize it as a thing, and we’ll get to why—which was held in Zurich, Switzerland, “where 70 robot-aided athletes from 25 countries competed against one another,” according to “A Robotic Exoskeleton Powered this Disabled U.S. Athlete to a Prize in the ‘Robot Olympics,’” by Luke Dormehl.

Are you ready for why you should remember the Cybathlon in the future? In the aforementioned article, Daniel explains: “We needed that kind of publicity and exposure in both the robotics and disabled community. I can’t tell you how many people I’ve spoken to who didn’t even know this was being explored. They’re blown away that this technology exists at all.”

Seriously? You mean to tell me it’s 2017 and we’re closing in on tech that can help those suffering from paralysis to be functionally mobile again and these same people know nothing about it?! Crazy, right?

We’ve arrived at a time when scientific and technological advances that can increase quality of life are being explored, researched, and made better, functional, and more affordable. Isn’t it about time the whole world was in on this news? Scientific developments like these need to be widespread knowledge.

Concerning Reality

In 2005, an article regarding the importance of visual input vs auditory input in relation to spatial information gathering came out of Stanford University School of Medicine’s Department of Neurobiology. I hope that sentence was as fun to read as it was to write. To get us started on this topic, I want to throw a vocabulary term at you: visual capture. In the aforementioned article, visual capture is described as what happens when “our localization of a stimulus based on nonvisual information is ambiguous or conflicts with visual localization of the same stimulus, [leading] our nonvisual percept of location to sometimes draw to the visually identified location.”

Break it Down

This article that I keep blabbing about is called “Why Seeing Is Believing: Merging Auditory and Visual Worlds.” In it, the authors state that scientists have traditionally attributed the dominance of visual capture to an inherent advantage of visual circuitry. The article argues otherwise: “Visual capture occurs not because of any inherent advantage of visual circuitry, but because the brain integrates information optimally, and the spatial information provided by the visual system happens to be the most reliable.”

That’s not to say visual input is always dominant. Optimization is the key point the authors are trying to make, so while visual input is optimal for spatial information, auditory input is optimal for temporal processing.

The authors give a simple, explanatory description of visual capture of auditory space. The short paraphrasing of it is this: If you’re sitting in front of the TV watching a poorly dubbed movie, it’s likely that by the end of the movie you will perceive a synchronization of what you’re seeing and what you’re hearing. The longer this auditory misalignment goes on, the greater the chances that your brain will sync the audio to the visuals for you. So, why are we syncing information by what’s on the TV screen instead of what’s coming from the speakers?

“The reason that visual information should dominate space perception,” the authors explain, “can be appreciated intuitively: visual spatial information is exceptionally reliable and precise.” I just want to note that this is likely less true for the visually impaired. Because visual input is generally more reliable where spatial information is concerned, it becomes the favored (and overriding) input format.

The authors of this article conclude that there are two possibilities for visual dominance in spatial information:

The brain could have evolved to depend more heavily on visual stimuli than on auditory stimuli, regardless of the stimulus conditions, or the brain might weigh information in proportion to its reliability and integrate it in a statistically optimal manner. Results from psychophysical studies support the idea that perception, at least, uses the latter strategy.
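That "statistically optimal manner" has a standard textbook form: weight each sense's estimate by its reliability (the inverse of its variance), so the more precise cue dominates without the other being ignored. The numbers in this Python sketch are made up purely to illustrate why vision wins for spatial location:

```python
# Reliability-weighted (inverse-variance) combination of two sensory cues.
# The degree values and variances below are illustrative, not from the paper.

def fuse_estimates(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates, weighting each by 1/variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more precise than either cue
    return fused, fused_var

# Visual cue: 10 degrees, very precise. Auditory cue: 20 degrees, noisy.
location, variance = fuse_estimates(10.0, 1.0, 20.0, 25.0)
print(round(location, 2))  # 10.38 -- pulled almost entirely toward vision
```

Swap the variances (make vision blurry and hearing sharp) and the fused estimate slides toward the auditory cue, which matches the authors' point: it's reliability, not vision per se, that earns the dominance.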

Speaking of perception…

Our Sensations may Seem Accurate; Our Perceptions are not

In a special edition of Scientific American, Sharon Guynup wrote an introduction about how illusion distorts our sense of perception. After learning how vital visual input is, this seems pretty acceptable. Guynup explains that, “Our brain—not our eyes—is the final arbiter of ‘truth.’ We are wired to analyze the constant flood of information from our senses and organize that input into a rational interpretation of our world.”

Illusions disrupt this process. Two of the contributors to the special Scientific American issue, Susana Martinez-Conde and Stephen L. Macknik, explain: “It is a fact of neuroscience that everything we experience is a figment of our imagination.”

So, if we perceive truth from the optimal stimulus input mechanism, but our perceptions are not accurate, on what do we rely? What is an illusion, and what is simply misaligned information?

Body, or Brain?

I try to keep on top of trending topics. Short of that, I just shoot for interesting. I think this blog post hits both areas. Let’s get real: When is talking about health (read: diets) not trending? Never? Correct! So, answer these questions:

  1. Would you change your lifestyle to better benefit your brain or your body?
  2. Can you do both?

If your answer was anything other than, “I don’t know, RJ. Tell me more!” then think again! I’m going to tell you more! You see, I’ve always heard, read, and been told by personal trainers that consuming food every three hours or so—whether it’s three meals and three snacks or six small meals, really however you want to break it down—will boost metabolism and is better for your body. From a fitness or weight loss aspect, anyway. And, for years, I understood this to be basically universally agreed upon. Then, I watched Neural Stem Cell Researcher Sandrine Thuret’s presentation in the TED Talks series.

Neurogenesis

So, for those of you that may not be interested in researching neurogenesis, I’ll give you the short of it. Dr. Ananya Mandal, M.D., breaks down neurogenesis in this way:

The term neurogenesis is made up of the words “neuro” meaning “relating to nerves” and “genesis” meaning the formation of something. The term therefore refers to the growth and development of neurons. This process is most active while a baby is developing in the womb and is responsible for the production of the brain’s neurons.

The development of new neurons continues during adulthood in two regions of the brain. Neurogenesis takes place in the subventricular zone (SVZ) that forms the lining of the lateral ventricles and the subgranular zone that forms part of the dentate gyrus of the hippocampus area. The SVZ is the site where neuroblasts are formed, which migrate via the rostral migratory stream to the olfactory bulb. Many of these neuroblasts die shortly after they are generated. However, some go on to be functional in the tissue of the brain.

Evidence suggests that the process is key to functions such as learning and memory. Studies have shown that new neurons increase memory capacity, reduce the overlap between different memories, and also add information regarding time to memories. Other studies have shown that the learning process itself is also linked to the survival of neurons.

That was written back in 2014, before Thuret’s presentation. Now, we can be fairly confident that spatial recognition could be added to Dr. Mandal’s list of key functions aided by neurogenesis. Neurogenesis is good, is what I’m saying. And it’s something that you can control, to a degree, through diet, aerobic exercise, learning, sex, sleep, etc.

So, where does the body vs brain question come into play, you ask? Well, neurogenesis and fitness have…

Conflicting Views About How and/or When to Restrict Calories

The one thing both neurogenesis and fitness (or weight loss) tips have in common is cutting calories. But, they differ in the how and when of it. As I’ve mentioned, fitness/weight loss tips—such as those from Livestrong and other fitness industry mouthpieces—glorify the grazing method. A method, I might add, that has little to no scientific basis, and thus is not the basically universally agreed upon theory I had thought. Don’t believe me? Ask the NY Times. Don’t believe them? Well, how about Nutrition.org?

It is generally the calorie cutting, sometimes paired with grazing, that is favorable. The same calorie cutting is desirable to aid in neurogenesis. In a blog published by Stanford University, the argument for dietary restriction (eating only about 70% of the total daily intake) is made. Here’s where we start getting our conflict:

[Dietary Restriction (DR)] is a drastic strategy: it takes tremendous willpower to limit calories to 70% of the normal diet. Furthermore, DR is difficult to implement properly; there is a risk of starvation if the diet is unbalanced, which can have wide-ranging consequences. Luckily, similar effects to DR have been found in mice by simply increasing the amount of time between meals.

Similar results by increasing time between meals, you say? Ok, cool. Let’s explore that further by looking at an article from the journal Neural Plasticity. This article explores the role of diet on neuroplasticity (also called brain plasticity). What we want, specifically, is the role of spacing out meals and how that affects neurogenesis. According to the article:

Many studies suggest that Intermittent Fasting (IF) results in enhancement of brain plasticity and at cellular and molecular level with concomitant improvements in behavior […] Furthermore, the effects of IF following excitotoxic challenge associated with lower levels of corticosterone, lead not only to decreased hippocampal cell death, but also to increased levels of hippocampal BDNF and pCREB and reversal of learning deficits.

“But RJ,” you might be saying. “What does neuroplasticity have to do with neurogenesis and where have my underpants gone?” Well friend, I can’t help you with that second part, but here’s what I’ll do. I’ll give you a wee bit of an explanation as to why I included the neuroplasticity bit. Neuroplasticity mainly concerns the strengthening of new or different pathways (or connections) in the brain. That’s an extremely unjust way to describe it, but it’s the simplest.

Neuroplasticity and neurogenesis go hand in hand. Phosphorylated cAMP response element-binding protein (that’s pCREB) promotes brain-derived neurotrophic factor (that’s BDNF), “which induces neurogenesis, especially in the hippocampus,” according to Ethan Rosenbaum. “As a result, mice with decreased levels of pCREB or any other promoter of BDNF have decreased spatial navigation skills and decreased memory retention […] due to the neuronal death in the hippocampus.”

Spatial navigation? Memory retention? By God, those are products of neurogenesis! Are you following the cycle? I hope so, because I refuse to hold your sweaty hand. So, which would you change your lifestyle for? Brain, or body?

Well, I sure hope your answer was “both.” Because you can do it.