The worst fear I remember experiencing when I was younger was not the fear of death.  I’ve always been fine with the notion of dying.

It was the fear of doom: the fear that things will stay the same, forever.

Because a reality that never changes is a reality from which there’s no escape.  As it so happens, that very fear was what motivated me to change everything in my life…

Special thanks goes to…

My brother. Apart from that, there aren’t really many people to thank. It took much longer than I wished, but I pulled this whole thing off – almost entirely – by myself. I came up with every element of the hypothesis on my own. I learned the relevant subjects on my own (most of which was done outside of school). I taught myself the skill of writing (all through trial-and-error, since I had no tutor/mentor). I taught myself the skill of editing, and edited my own work (which is not as easy as one might think). And lastly, I handle all aspects of this book’s “public face” – including the book’s marketing, as well as the design & maintenance of this website.

In the end, there was no writing team, no partner, and no nepotism involved in the making of this terrible document.

If there’s anyone else to thank, it’s you, for taking the time to read this.  So thank you.


Why This Book Is Free & Where To Find Me

In my time writing this, there’s a harsh reality I’ve had to accept: nowadays, most people don’t read books.

Of course I want people to see what I’ve written. But expecting a large number of people to purchase an outdated type of product, made by an unknown author, which covers a highly-technical subject, would be like expecting to win the lottery… twice.  At this point, I recognize how unlikely that outcome is.

So, after a long time contemplating this issue, I’ve decided to make all digital versions of this book totally free.  That’s right: as long as I have a say in it, all non-physical copies of “Value Assignment: The Primary Function Of The Brain” will be COMPLETELY FREE to read, download, and distribute* for the rest of time.  This includes any audio and/or video versions of this book that may be released in the future, as well as all translations of these versions.

Physical copies of this book will be sold, as normal.  The great thing is, if you just wanna read the book, you don’t have to buy a physical copy.

I realize that giving away the entirety of this book’s content for free, especially in the internet world, means I may not profit from this work at all.  And as someone who’s been in financial straits, I can tell you: it’s difficult dedicating your life to something, knowing there may be zero “reward” at the end of the rainbow.

But ultimately, I didn’t see this ridiculous vision all the way to the end – through continual, devastating failures, through ever-shifting environments, through the shattering of my perceptions & beliefs, through entire stages of development & maturity, through the alienation of nearly all my family & friends, through years of isolation, through years of poverty, and through the eventual re-construction of my whole personality – solely for the money.  I did it because I believed in it.

And it’d be hard to forgive myself if those efforts went unseen, simply because I wanted to make a buck. So whatever happens from here, I’ll at least know that I exhausted this book’s potential: that I did everything in my power to realize my vision.

The entire, free version of this book will always be available on primaryfunctionofthebrain.com

Amazon (Paperback) Version: https://www.amazon.com/dp/B0D1R2LC49

Follow me on Twitter/X: Sebastian Rey (@SebReyWriter)

*This work may be distributed & re-distributed as long as a.) no original content is modified, b.) the work (if non-physical) is given away for free, and c.) the original author is credited.  (If the book’s content must be consumed wholly or partially through electronic means, I consider it “non-physical.”)

Copyright © 2024 Sebastian Rey. All rights reserved.


Preface (a.k.a. “The Dramatic Opening Monologue”)

____________________

Throughout the last decade, I pained myself trying to describe the nature of the strange, distant glint I was pulled to long ago.  I wrote every sentence in this manuscript tens, if not hundreds of different ways.  I altered each paragraph so many times, I can’t even tell you the words they first contained.

And when no clear path forward could be seen, I waited.  And I continued waiting.  I waited until the passage of time was indiscernible, and my age all but disintegrated.  Because I knew that in this waiting, a new route would, eventually, be revealed.

These practices, along with each error I’ve made & the bitter lessons I’ve had to learn over these years, have led me here: to publish what you’re reading.

Make no mistake, these words were born from struggle.  And whether or not you agree with me, you can be certain that not a single letter in this text was passed over.

***

Now right off the bat, I should address one big elephant in the room: this book isn’t being handled in a typical, scholarly fashion.  Scientific works on brain function are typically written by people with a catalogue of endorsements.  And those works are usually printed in a scientific journal, or something of the sort, to promote legitimacy.

The point is, folks doing what I’m doing usually either hold degrees or credentials earned through years of rigorous schooling, or have gone through some official channel(s) to have their works distributed.

Well I didn’t.

It’s not that I believe academic accreditation is a bad thing, per se; it’s just that I didn’t take that route, to do what I’ve done here.

So if you’re going to read this book, you’ll have to accept the author/source: I’m not an “expert,” and this book hasn’t gotten any “official stamps of approval.”  The good thing is that one does not require approval, to reach good scientific conclusions. In fact, good scientific conclusions should speak for themselves.

Of course, even if you’re coming into this with an open mind, you still might not like my work.  I’m conscious enough to know that many will simply find this book boring.  And those who are interested might not like how it’s written.

Thankfully, being liked is not my goal.  My concern lies with veracity.

This book, as a whole, may not be perfect.  It may not contain all of psychology’s answers or secure the Nobel Prize.  But I spent enough years developing the arguments within to recognize their potential.

The impact that this hypothesis has on the scientific community is not for me to decide.  It could very well have none.  But when it comes to assembling, refining, taking apart and re-structuring, researching, looking at alternate views, looking FOR alternate views, weighing my thoughts against one another, hunting down contradictions, dwelling on concepts, and being encyclopedically thorough in the examination of my own perspectives; I can say I’ve done my best.

And yes, even after all the trouble I’ve gone through, it’s still possible that my conclusions are wrong.  You’ll just have to judge them for yourself.

Due to the current state of society, it’s common for people to treat opinions, suggestions, and purported “facts” with a high degree of distrust.  If you’re one of those people, I can’t say I totally blame you.  However, while skepticism is often necessary in moving towards truth, engaging in skepticism poorly can result in faulty judgements – and those faults may be hard to catch because they’ve taken on a mask of reason.

For instance, if a person were to question only things they disliked or disagreed with, they’d probably have incomplete (if not totally wrong) views on a great many issues.  You could call this kind of thinking “selective skepticism.”

Ultimately, it’s my hope that the reader of this book isn’t just skilled at selective skepticism, but is familiar with intellectual fairness as well.

I’m not telling you to accept my verdicts blindly.  I’m only asking that you evaluate this text with an impartial mind; and not on its appearance, the preferences you may have, its reputation, or my reputation (whatever that happens to be at the time you’re reading this).

Remember, when it comes down to it, the only reason this book exists is to communicate an idea.  That’s it.  The idea may or may not be correct, and it may or may not be scientifically significant.  But I believe there’s good logic behind it, and even some preexisting evidence to support it.

***

In any case, whether or not my work becomes known, this will be my final attempt at publishing or promoting it.  Believe it or not, I’ve written about this book’s idea before.  More than once.  But up until now, those efforts have been largely unnoticed.

At the age of 18, when I started this project, I had immense hopes about this idea and the effect it could have on science/society.  Now, 15 years later, I feel like a man who’s thrown away his prime years to compose an artpiece no one can see.  When all is said and done, I won’t keep trying to show people something they don’t care about.  It’s wasted energy.

I’ve bled youth onto these pages.  And that I can never get back.

So whether or not anyone reads what I’ve written, I’ve decided that this is the last time I’m going to write about it.

It’s not so bad, though – this abysmal text has been in development for what seems like eons.  The thought of finally being done with it does bring some relief.

At the same time, it’s impossible to deny that stepping away from it will be difficult.  I’ve been working on this book for so long that at this point, it’s simply become woven into my day-to-day life.  And to tell the truth, it’s the only part of my life that’s ever provided a sense of stability.

But that’s not the only reason why letting go of it is hard, I think.  See, during my endeavor to finish this work, youth wasn’t the only thing I lost…

Before I ever wrote a draft of this book… before I cared about psychology at all… there was (what I can only describe as) a strange, overwhelming impulse in the back of my mind.  This impulse compelled me, without restraint, to chase down novel ideas and to seek unconventional ways of thinking, in order to attain new understandings about the universe we find ourselves in.  The impulse affected nearly everything I did, like a fire burning through my body at all times.

I don’t know where exactly this impulse came from.  But it was there for as far back as I can remember: shaping my instincts, informing my thoughts, and leading me to question what others told me.  Even in my lowest moments, I could feel that impulse in the depths of my consciousness, nudging me forward.

Somewhere along the way, though, it changed.

Maybe it was my lack of success: after all, constant failure could affect any person’s ambition.

Perhaps my isolation was the cause: one can only do so much by themselves, and years of solitude, re-hashing the same arguments, might deaden anyone’s instincts.

Maybe a shift in focus was responsible: indeed, you must become good at the skill of writing, to be a successful writer – and due to my condition, that skill was likely always going to require inordinate time & energy (forcing me to give up other things in the process).

Or maybe the change was inevitable.

Whatever the case, the very impulse that led me here – the impulse which made me sprint, unwavering, toward the glint I still fail to describe – had slowly vanished during those years of labor.  Beneath my awareness, that flame had become a flicker.

Only now do I realize what I lost, and how invaluable it was.  Through that fire, I saw endless potential.  Because of it, I rejected any doubt or uncertainty which would’ve otherwise stopped me.  It made me completely unafraid to explore, and from this exploration I saw entirely different worlds: all within arm’s reach.  It did not matter to me what others thought was true or possible.

As far as I’m concerned, this book (and the ideas in it) did not come from me; they came from that strange ember.

But now it’s gone, and I can only assume it will never come back.

It was such an integral part of my experience that I don’t even know if I can call myself the same person: it feels as though the individual who originally wrote this was someone else entirely.  So all that’s left for me to do, it seems, is to realize the plans that individual made & the visions they had, as best I can.

There is one thing I’m certain of: this work will be my life’s greatest feat.  The way I see it, regardless of what I do next, I’ll never surpass this book in terms of achievement.  Not because it was written with extraordinary skill.  Not because it carries some fantastic or unique property.  But because of what I gave for it.

So if you wish, continue forward.

Chapter 0: Introduction (What You Need To Know Before Reading This Book)

____________________

Before we get into this book’s subject matter, there are a few things you’ve gotta know.

1.  First off, the idea promoted in this book is a hypothesis, not a theory.  And, yes, there’s a difference.

Some of you might already be aware, but in the world of science, a theory is not “an unfounded possibility” (although that’s how most people use the term).  Instead, a theory (in science) is something that’s been widely-accepted.

Generally, for an idea to become a scientific theory, that idea must be well-substantiated, must cover a wide range of cases, must have predictive power, and must be broadly used by scientists.

A hypothesis, on the other hand, is something that’s been speculated or suggested, but hasn’t been proven or widely-accepted.

The main idea expressed in this book is, therefore, a hypothesis.  It’s entirely my own work, and as of the time of writing, it hasn’t been scientifically proven (at least as far as I know).  I will be citing studies throughout the chapters to support my arguments.  But these studies will be limited in number, and aren’t meant to be taken as definitive proof.

2.  Secondly, this hypothesis is not meant to be a “unified” or a “catch-all” sort of thesis.  In other words, it’s not meant to answer all the big questions brain scientists currently face.

At first, I thought my hypothesis might be a “unified theory of psychology” – I certainly wanted it to be that – but over time I’ve had to acknowledge that the concepts I’ve developed just don’t meet those criteria.

Even if my ideas are correct, there are still major issues in psychology that need to be solved/explained.

For instance, this hypothesis doesn’t explain the phenomenon of Consciousness (I discuss why in Chapter 6).  My thesis also doesn’t give a solution for The Problem of the Decision-Maker: a problem I’ll go over in Chapter 5.  On top of that, my book isn’t going to chart all the different ways that the brain can vary in both structure and behavior… and there are many, many ways it can vary.

So what does this book do?  As I said in the Preface, this book simply proposes & explains an idea: the idea that nervous systems evolved as a way for organisms to assign values (that “Value-Assignment” is the brain’s first, and most fundamental function).

Why does this idea matter?

Well the truth is, on a purely mechanical level, many aspects of brain function are actually well-understood by scientists and psychologists today.

What isn’t yet well-understood (or well-agreed-on) is how a nervous system’s mechanical activity translates into thoughts/actions/experiences (this is also referred to as “higher-level” or “higher-order” brain function).  There’s much speculation about how that “translation” works, and even some research to back it up… but there isn’t massive consensus about it in the scientific community.

If my thesis is correct, it might aid with (or possibly simplify) certain elements of Translation.  And of course, this thesis could also answer the age-old question: why do brains exist (as in, what’s their evolutionary purpose)?

But this thesis won’t be a “panacea.”  It won’t fully decipher the puzzle that is our brain.  At best, it will only orient a few pieces.

3.  The third point I want to make (in case you didn’t read the Preface) is that I’m not an “expert” on the brain.  I don’t have a doctorate in Psychology, Neuroscience, or any related fields.  I didn’t even finish college.  I’m just somebody who wanted to understand my own experience of the world.

So how did this book come about?

Well obviously I can’t give you the complete autobiographical re-telling, since that would take too long.  But if you want to know the basic story behind it, I can give you an overview:

(If you’re not interested in reading my whole origin story, skip down to point 4.)

I have autism.  No, not the sarcastic kind.  Not the kind that people use as an insult, to belittle others’ intelligence in online “debates,” or as a meme.  The kind which would generally be diagnosed by a licensed psychiatrist.  Only I had the misfortune of not knowing this until I was 28.

Growing up, Psychology wasn’t something that people around me talked about.  As in, ever.  There were no Psychology classes I went to in elementary or high school (possibly because my highly religious school didn’t prioritize it).  There were no internet forums where I could go to voice my troubles (I grew up “pre-internet,” and even as the internet became more widespread, I didn’t really have great access to it until college).  And my parents definitely didn’t help… for a variety of reasons.

As far as I was concerned, before the age of 18, the entire concept of “psychology” didn’t exist; even though others had clearly treated me like I was “different” my whole life.  I just didn’t know why.  But when I took a Psych 101 class during my freshman year of college, something interesting happened.

After attending that introductory class – after realizing there were aspects of our mind & behavior that’d been scientifically researched, dissected, and explained – I thought Psychology could be my way out: my path to “becoming normal.”  I was still unaware I had autism, but I figured, “If no one is gonna help me with whatever it is I’m going through, I’ll just have to fix it myself.”

Little did I know how much this decision would impact my life.

After that 101 class, my interest in Psychology skyrocketed.  I was completely obsessed, and I wanted to know everything I could.

The issue is, Psychology isn’t really one subject.  There are many different areas of research and analysis which go into it… but at the time, I was unaware of that.  It was all new to me, and I was ignorant of the true complexities that went into studying human psychology.

In my naiveté, I thought there might be a way to cut to the heart: a “basic framework” for understanding the mind, which might dramatically simplify the learning process. Essentially, I wanted to get my hands on some kind of model or theory, which explained the most basic elements of Psychology, and which could be applied universally, to all people.

But where do you go to find that?

Well back then, I thought, “If you want to understand any kind of system, including Psychology, you should probably start at the smallest level possible.”  And Psychology (presumably) starts with brain function – neurons being the smallest component.

So I began searching for a “universal theory” of brain function.

I didn’t even think it would be hard to find.  I thought surely, at this point in human history, someone must’ve invented a unified model which explains the basics of what the brain does & how it operates.  If I could just get a hold of that, I’d have a foundation for learning Psychology, and from there, I might be able to get some answers; and “correct” whatever was wrong with me.

However, as I went over the literature, it became clear that there were many more unsolved problems than I was expecting. Instead of a “scientific consensus,” what I found was that the models & frameworks brain theorists used to explain brain function (as well as their general ideas about Psychology) differed from each other; sometimes drastically.

The “simple, universal framework” I was looking for, simply wasn’t there.

Still, I couldn’t shake the feeling that there was some solution to these problems.  It couldn’t just be that there’s no answer – I couldn’t accept that.  I believed (and still believe) that all observable things in the universe must have some explanation, even if we haven’t found it.

So I did something unusual, and something I wouldn’t necessarily advise others to do: I started crafting my own ideas about psychology.  (In my defense, I was still practically a kid – and it wasn’t like I was intending to show anyone these ideas… at least not at the time.)

Initially my ideas were just notes I wrote to myself and worked on when I had a moment.  Even small thoughts, I’d write down and save somewhere.

But eventually (almost inevitably) I wanted to make my ideas better.  So I started weighing my ideas against each other, to find contradictions or make improvements.  I would then compare my updated ideas with relevant science.  Once I felt like they were solid, I wanted to merge my ideas into a single framework.  So I began constructing entire psychological models, and then revising those models, all in my spare time – until after 3 years in college, I had built the basics of this thesis.

At a certain point, it became clear to me that these “models” I’d started (along with various other ideas I’d written down) were what I really cared about, more than anything I was doing in school.  But of course, once I realized this, I had to make a choice: continue down the road of a traditional education, or quit school to hone in on my writing…

I think you can guess which one I chose.

In fact, it was the day I turned 21 when I dropped out of college and drove two states back to my parents’ house, to focus on this.  I didn’t even take my final exams.  What I didn’t realize was that it would take far, far longer to finish this book than I originally thought, even with my undivided attention.

There’s a whole separate story about what I did after college, but the short version is: I spent the next 4 years living with my parents, whose abusive inclinations were becoming unignorable, trying to learn how to write. At 25, when I could no longer stand living in those conditions (with only 400 dollars in my pocket & knowing no one), I moved to another city. I then spent the next 8 years doing manual labor jobs, trying to survive, while working on my thesis/book when I could.

It was during this time that I finally found out I had Autism Spectrum Disorder, formerly known as Asperger’s Syndrome.  Too little, too late, I guess.

If I’d been aware of this earlier, maybe I could’ve accelerated this book’s writing process (in my case, autism had a substantial impact on my ability to understand communication, which made it extremely difficult to learn to write).  But if I’d known, I may also never have come up with this thesis.  It’s all speculation at this point.

In any case, the book is now done… and it’s taken nearly half my life to complete.

But as lovely as this story might be, there’s one thing it doesn’t change: a scientific hypothesis can always be wrong.  My ideas have never been peer-reviewed; I cobbled them together entirely from my own efforts and ingenuity.  As a result, there could be flaws in my perspective that I don’t see, factors which I failed to consider, or scientific evidence out there which goes against my hypothesis.

And of course (like I said in the last point), this thesis is not a “universal theory.”

Ultimately, no matter what I do, I can’t avoid the fact that I took a highly abnormal route to reach these conclusions.  So in good conscience, I must advise you to take this whole thing with a grain of salt.

But despite that, I think you’ll find my arguments are clear, cogent, and follow a simple, logical path.  Which means, even if I’m wrong, the stuff I’m talking about should at least make sense.

4.  Fourth, this book wasn’t just written for people who do science, it was made to be widely-accessible.  In other words, I wrote it so almost anyone, anywhere could read it and easily understand it.

Why did I write it that way?

Well to be honest, I thought no one would actually take the time to read this book unless it was written in a manner that was easily-understandable. (Since I’m not taking a traditional route to publish this thesis, I figure I need some way of getting it to spread.) But also, I think simplifying one’s own ideas for explanation purposes can actually increase the general comprehension of those ideas (as long as those ideas haven’t been over-simplified).

Basically, I want my thesis to be available & useful to the greatest number of people.

So if you’re worried that this book is gonna be overly “science-y” and hard-to-grasp, rest assured that (almost) every part of it has been painstakingly revised so that even an eighth grader can comprehend what I’m talking about.

Of course, the book is still meant to be scientific at the end of the day, so there will be some situations where I use technical definitions & wording, in the main text.  However, anytime I do this, the thing(s) I’m talking about will be explained in clear terms right afterward.

Also, the complexity of the book will increase as the chapters go on.  But as long as you pay attention to the beginning and middle parts, you should have no problem reading the final chapters.  I promise I’m not gonna throw you straight into the deep end – we’ll build up to it slowly, so that nothing is confusing.

Another thing to mention: as you continue reading, you’re gonna come across blue-colored sections like the one above, that are sandwiched between double-carets (“<<” these things “>>”), with a number attached to them (“#0.1”, “#3.5”, etc.).  These sections will be for either a.) adding further commentary to a topic, or b.) discussing definitions and technicalities.

As for what the number means: the first digit indicates the chapter number, and the second digit indicates the caret section number within that chapter.  For instance, the caret section above is #0.1, because it’s the first caret section in Chapter 0 (the Introduction).  If I added a caret section after this paragraph, that caret section would be #0.2… and so on.

These caret sections are the only parts of the book that are NOT necessarily going to be easily-readable.

The good thing is, if you’re only interested in the basic ideas of the hypothesis, these sections aren’t mandatory to read. I’ve written the book in such a way that it works without them. Still, for those who want a fuller picture of the thesis, the caret sections will pretty much be a requirement.

How do you know if you should read the caret sections?

Well if you find yourself going through the main text and constantly asking what terms mean, criticizing arguments, and thinking “there’s gotta be more to this,” then you should definitely read the caret sections; because you’re exactly who they’re written for.

Like I said, I’ve been working on this book for over a decade. If you have a question about some aspect of the thesis, chances are I’ve already addressed it (either in one of the caret sections, or in a later part of the book).

5.  Lastly, to conclude the Intro, there’s one more thing I have to mention: the whole concept that this book revolves around – the concept of Value-Assignment – is not actually new.

Before I ever started this project, the notion of “values” already existed in the world of Psychology.  Although it doesn’t get a whole ton of media attention (Value-Assignment isn’t exactly a “sexy” theory in the world of science, like Gravity or Consciousness), Value-Assignment is a process that psychologists/brain theorists have, in fact, speculated about before.

So if others have already talked about it, what makes my idea(s) unique?

Two things:

A. First off, within the realm of Psychology, the people who discuss “values” are often defining “value” in linear or numeric terms.  In other words, they’re treating value as something negative or positive, or something with a number attached to it (e.g. “a value of 0” or “a value of 1”).

These same guys also usually think there’s some sort of “value-ranking system” in the brain – where “positive” (or “higher number”) values are placed above “negative” (or “lower number”) values.

Unfortunately, I don’t believe value works like this… at all.  I don’t believe value (in the brain) is linear or numeric in nature; and I don’t think brains/nervous systems “rank” values in a universally-predictable or universally-consistent manner.

I think that brains can sense information in a (relatively) consistent manner, across a species.  I also believe that certain psychological predispositions, such as the fear response, can appear consistently within a species (most likely because brain structure & neural circuitry can be affected by genetics).

Furthermore, I think it’s possible for people to describe or model values in linear & numeric ways (which is something a programmer might do, in order to create an artificial intelligence system).
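For illustration, here is a minimal sketch (my own hypothetical example, not taken from any psychology literature) of the linear, numeric treatment of value just described: the kind of model a programmer might build for an artificial intelligence system, and the kind of model this book argues the brain does not use.

```python
# A hypothetical sketch (for illustration only) of the "linear, numeric"
# view of value: each stimulus gets one signed number, and the system
# ranks stimuli from most "positive" to most "negative."

def rank_by_value(stimuli):
    """Order stimuli from highest ("most positive") to lowest value."""
    return sorted(stimuli, key=stimuli.get, reverse=True)

# Hypothetical numeric values a programmer might assign:
stimuli = {"food": +0.8, "neutral object": 0.0, "predator": -0.9}

print(rank_by_value(stimuli))  # ['food', 'neutral object', 'predator']
```

Again, under this book’s hypothesis, no such single-number ranking exists in biological systems; the sketch is only here to make the “linear, numeric” view concrete.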

But I believe that, in biological systems, value is a fundamentally non-linear trait.  And I think there’s a very straightforward argument for why that’s the case: an argument which will be detailed in the upcoming chapter.

B. Secondly, my book asserts that Value-Assignment may be the brain’s primary function.  And no one else, as far as I’m aware, has made that same claim.

The few times I’ve heard psychologists/brain theorists talk about “value-assignment,” they mention it as just one of the functions the brain performs, or may perform.  Some people might view “value-assignment” as having high importance, others might see it as having low importance – but I’ve never heard someone suggest that Value-Assignment is the main function of nervous systems.

(Yes, you could argue that brain cells serve a whole host of “functions;” and that “Value-Assignment” is one of them. But I believe that Value-Assignment is far more significant than most people – including scientists – realize.  I believe it’s so significant that we might be able to think of the brain as a system which evolved, mainly, for the purpose of carrying out Value-Assignment.)

Of course, someone could have already made that claim.  There might be some obscure essay buried deep in the halls of Transylvania outlining my exact idea(s).  I don’t know.  But in all my efforts – in all the sleepless nights I’ve spent studying this subject – I haven’t found a single article, paper, or publication making such a proposal.

So that’s why this book exists: to explain why Value-Assignment may be the primary function of the brain.

And if such a hypothesis has already been laid out by someone else, well then this book can simply serve to support it.

Now, with all that out of the way, we can begin.

*The mobile version of this book will have links to chapters and caret sections listed in the top-left and top-right corners of the screen, respectively.  Buttons look like this:

Value-Assignment: The Primary Function Of The Brain

Sebastian Rey

Chapter 1: The Definition of “Value”

____________________

The title of this book is “Value-Assignment: The Primary Function of the Brain.”  So to start everything off, it would probably make sense to explain what “value” is first.

The word “value” has many definitions in the English language, and the things it can refer to are pretty wide-ranging.  If you tried to sit down and list all of those things, that list might get a bit long.

You have…

  • financial “value”
  • sentimental “value”
  • mathematical “value”
  • binary “values” used in computer programming
  • moral “value”
  • “value,” as in the general worth of something
  • “values,” as in the ideals someone has & the things they prioritize
  • the “values” of a company or organization

… just to name a few.  There also appears to be a psychotic number of fast food advertisements where the term “value” is used as a selling point.

But for the purposes of this book, the word “value” will have only one definition:

the autonomous, relational interpretation of sensory information by a subjective observer.

Now, yes, at first glance this definition probably sounds highly complicated.  But once you get past the flashy terms, it’s actually a pretty simple and easy-to-comprehend idea.

I’ll break the definition down into two parts.  First I’ll explain what a “subjective observer” is, then I’ll explain what “autonomous, relational interpretation of sensory information” means, because that’s the most daunting part of the definition.

In short, a Subjective Observer is “any being, entity, implement, or instrument (alive or not) that can detect things around it.”  Simple.  There are three main characteristics a subjective observer has:

A.) It takes up a limited space,

B.) It relies on some specific mechanism (or mechanisms) to detect information,

C.) And it can only sense a limited amount (as well as a limited type) of info in its environment, using that mechanism (or mechanisms)

(You don’t have to memorize these characteristics, they’re just here for the sake of accuracy.)
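For readers who think in code, the three characteristics above could be sketched as a tiny data structure.  (This is purely illustrative – the book defines no formal model, and every name below is invented.)

```python
# Toy encoding of the three characteristics of a "subjective observer".
# All names (SubjectiveObserver, can_sense, etc.) are illustrative inventions.
from dataclasses import dataclass

@dataclass
class SubjectiveObserver:
    location: tuple   # (A) it occupies a limited region of space
    mechanism: str    # (B) it relies on a specific detection mechanism
    detectable: set   # (C) it can only sense a limited type of info

    def can_sense(self, info_type):
        return info_type in self.detectable

fish = SubjectiveObserver(location=(0, 0), mechanism="eyes", detectable={"light"})
print(fish.can_sense("light"))   # True
print(fish.can_sense("sound"))   # False -- outside its limited sensory range
```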

Technically, there are many, many things in the world that can count as a “subjective observer” (any sensory unit, any whole organism, and any non-living thing which senses information, can be a “subjective observer”)… but there’s a specific word here we need to focus on, which is “subjective.”

Now, in general, I think most people are familiar with what Subjectivity is.  But just so we’re all on the same page, I’ll give a brief explanation:

Imagine if you put two fish together side-by-side, and pointed their eyes at a rock.  Even though they’re both looking at the same thing, technically, their perception of the rock would NOT be the same.  Because there’s a tiny distance between the heads of the two fish, the angle at which they’re viewing the rock will be slightly different – causing their sensory perception of the rock to be slightly different.

No matter where you place the two fish, this will always be true… even if both fish can detect the same types of information.  For the two fish to see the exact same thing, at the exact same moment, they would have to be the exact same fish.

(Of course, in reality, there are more factors involved in sensory perception.  But if anything, those factors will just create even more differences between the perspectives of the two fish; so the example still holds true.)

Basically, all things in the universe are physically separate from each other.  And because of that separation, different things will almost always have different “perspectives” and/or traits.  That’s Subjectivity.

At this point you might be wondering, “Why are you talking about Subjectivity?  We all know that everything in the universe is separate from everything else.  It’s obvious.”

But the reason I’m bringing this up is because Subjectivity has to do with value.

See, every organism has to achieve certain “big” goals if it wants to keep itself and its species alive.  For example, nearly all living creatures spend energy simply by existing, so they need some way of replacing that energy, in order to survive.  (This is commonly done through a process called “eating.”)

Also, organisms can’t usually stay alive forever, so they need to spawn more of themselves, to keep the species alive.  (We call this “reproduction.”)

These “big” goals (survival and reproduction) can also be called biological goals – and in order to achieve biological goals, sometimes, organisms will need to complete small or indirect tasks (such as hunting or farming)…

… the problem is, Subjectivity doesn’t exactly make it easy to accomplish biological goals & tasks.  In fact, because of Subjectivity, there are two difficulties ALL creatures must deal with:

  • Because everything in the universe is separate from everything else, the things that creatures need (to achieve their goals) will always exist in a separate place, at a separate time, or in a separate state, from their current selves.  In other words, organisms can’t just lie there motionless and still get their needs met – at least not in the vast, vast majority of cases.
  • Also, not all information will be (or can be) helpful in achieving an organism’s goals. Basically – because we live in such a gigantic & varying universe with so much stuff – some information is always going to be irrelevant, when it comes to any goal or task.

So since every creature must deal with these two difficulties, it means that every creature (to accomplish goals & tasks) will have to either seek out what it needs, or at least filter out irrelevant information that gets thrown its way.

But most importantly, if an organism happens to come across what it’s seeking, there needs to be something inside the organism that tells it that it’s found what it’s looking for (something that basically says “hey, this information over here isn’t irrelevant”)… so that the organism can then take appropriate action.

Because if you can’t tell which info is important to act on (i.e. which things will help you accomplish your goals), you can do very little with the information you sense.

This is where value comes into play.

Let’s say I’m a little animal that’s just popped into existence, knowing nothing about this universe.  Right now, I just want to figure out where I am, what’s going on, and (ultimately) how I can survive in this universe.  But almost immediately, I run into a strange-looking clump of… stuff.  And it appears to be moving.

I’ve got no clue what this clump of stuff is made of or what it’s doing here, but I now have a predicament: I have to decide what to do about this thing in front of me.

The problem is, I’ve never been to this universe before, and I know absolutely nothing about it – so how can I possibly make a decision that’s advantageous or beneficial to me?

Well, in order to take some action (or actions) that might benefit me, I need something to base my actions on.  (Otherwise, anything I do will be a guess.) Which means, before I make any moves, the first thing I should probably do is figure out how the information in front of me relates to me and my goals.

(Is this “clump of stuff” dangerous?  Is it completely harmless?  Is it actually alive?  Is something else causing it to move?  Will it try to kill me if I flee?  Is it something I can eat?  Does the existence of this clump, in any way, have to do with my survival?  If so, how?)

By determining that relationship – the relationship between a.) the information I’m sensing, and b.) my goal(s) – I’ll have something to base my actions on.

So where does that “relationship” come from?  How do I obtain it?

I could try to extract the relationship from… maybe… the universe (or from the clump of stuff, I guess).

But how would that work?  What tools would I use to perform that kind of “extraction?”  Is there a specific bone in my arm which does that?  Or maybe the relationship could just teleport into my mind (it would make things a lot easier).

This whole “extraction” thing sounds like it should be simple & straightforward – but it actually wouldn’t be.  And the reason why has to do with something I call “neutrality”:

When it comes down to it, all things in the universe are ultimately just collections of data; including this “clump of stuff” in front of me.  And data (on its own, at least) doesn’t really know about, or care about, my goals.  Data can’t know what I want, or what will help me survive.  Data is neutral.

This means – no matter what information/data I’m looking at – the data by itself can’t do anything for me.

So if data is neutral, then unfortunately I won’t be able to extract any “relationships” (meant to help me achieve my goals) from the clump, or from my environment, or from the universe.  Even if a relationship like that does exist somewhere out in the ether, I have no good way to access it.

Okay, fine, fine… maybe I can’t perform an “extraction”… but I’ve still gotta determine that relationship SOMEHOW, or else I’m stuck.  Without it, I can’t do anything that might help me survive, in my current situation.

This leaves me with only one other option: I have to create the relationship myself.

Maybe I could invent a new relationship from scratch (for example, I could try doing different things, seeing how the clump reacts, and then come up with the relationship from there).

Or maybe I could use a relationship that I created in the past (for example, there could somehow be a system or mechanism built onto my body which has already determined the “relationship” beforehand).

Either way, after I independently craft that relationship (or use one which I crafted before), I can finally decide how to deal with this clump of stuff…

Well that “independently-crafted relationship” is essentially what a value is.  “Autonomous, relational interpretation of sensory information by a subjective observer” is just a technical way of defining it.

“Autonomous,” meaning “independent” or “in a self-generated manner”;

“relational,” meaning “having to do with relationships” or “relationship-based”;

and “interpretation,” which is just “defining/understanding something from your own perspective.”

(I told you it wasn’t hard.)

To put it another way: value is the most basic mechanism/apparatus that lets an organism know which data is “important” (i.e. how sensory info relates to its goals), so the organism can then take appropriate action (action meant to help the organism achieve its goals).

However, there are two extremely important facts here that must be noted:

1. If values are made & used by subjective entities, for subjective purposes, then values themselves must also be subjective.

2. If all creatures must determine when and how to act on information, and if those determinations cannot be made through information alone, then all organisms – not just organisms with brains – will need to create & use values, so they can react appropriately to the data they sense.

It might be weird to think of creatures without brains (like viruses & bacteria) as using “values;” but if the premises up to now have been true, then that must be happening.

Therefore, if creatures without brains create & use values, then we should hypothetically be able to observe that process.  And by observing that process, it might actually help us understand the function of the brain.

So that’ll be the topic for Chapter 2: how organisms without brains (hypothetically) use value.


Chapter 2: How Organisms Without Brains (Hypothetically) Use Value

____________________

In the last chapter I explained what value is, that it must be created, and that it’s something all living beings (including ones without brains or nervous systems) should have to use.  And if organisms without brains/nervous systems are creating & using values, then we should be able to observe them doing that (which might ultimately help us understand the brain).

Of course, this all sounds well & good… there’s just one problem:

Millions of species of organisms without brains currently exist in the world; and they each come equipped with their own unique collection of habits, characteristics, and structural features.

These creatures are so diverse that it would probably be immensely difficult & time-consuming to find some characteristic or set of traits that all of them share.

So how exactly do we observe the process of “value-creation” or “value-use” in these organisms?

Einstein claimed that no problem can be solved from the same level of thinking that created it.  Well there’s a similar concept that I’ve found helpful: generally-speaking, when working on a problem, if you want to reduce your work, you must reduce the problem.

Basically, if we want to figure out how organisms without brains create & use value, we need to reduce “value-use” or “value-creation” to its purest/simplest form(s).  In other words, we must predict the most basic types of value-use or value-creation that could hypothetically exist. Then all we need to do is examine real creatures, to see if their traits & behaviors line up with our prediction.

And since organisms must create values before they can use them, we’ll start by looking for the most basic types of “value-creation.”

Now, we know from the last chapter that there are two things organisms would (hypothetically) need to create value: sensory info and a goal.

So let’s start with the “goal” part.

It’s most likely the case that all organisms are born with “goals” (survival/reproduction) as part of their body, behavior, or functioning – even if they’re never aware of it.  We could argue forever about where these “goals” originate from, and the truth is, we may never know.  But one thing we do know is that if organisms didn’t accomplish these goals, life wouldn’t be possible.

I wouldn’t be here today, if most of my ancestors hadn’t tried to keep themselves alive and/or keep humanity going, by having children.  The same goes for every other species.

(Where, exactly, are these “goals” located? Well that’s a more complicated subject – one which I’ll address in Chapter 6, Note 2.)

Therefore, if all creatures (that we know of) are essentially born with their “goals” built-in, then the only other thing needed for value-creation, is sensory info.

However, “sensory info” isn’t really just one thing.  There’s an extremely wide array of data/information in the universe that organisms are capable of sensing.  And oftentimes, different species don’t even detect the same types of information.

“Okay, so if (basically) every organism senses different things – and if values are based on what organisms sense – then every organism is gonna be creating different values, right?  So how are we supposed to figure out ‘the most basic types of value-creation’ when that’s the case?”

It’s a difficult question.  But maybe, in this circumstance, it’s not about the “what”…

If we can’t figure out “the simplest types of value-creation” by looking at what creatures sense, then maybe we need to look at when creatures sense things.

“But organisms could sense information at any point in their lives, couldn’t they?  So how will that help us?”

Well, yes, the exact moment when organisms detect information is gonna change constantly, and will be different for every creature.  But, if we assume that “info-detection” and “value-creation” are separate procedures (that can occur at different times), then it’s possible these two procedures could happen in different orders:

In other words, organisms could

a.) determine values BEFORE sensing info (a.k.a. “Predetermining” value), or

b.) determine values AFTER sensing info (a.k.a. “Assigning” value).

(i.e. [create values → sense info] or [sense info → create values])

So if these two “value-creation points” are different in terms of when they happen, maybe they’re also different in terms of how they happen – or how organisms perform them.  Which means “Predetermination” and “Assignment” could be the two “most basic types of value-creation.”
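To make the two orderings concrete, here’s a minimal sketch in Python.  (This is a cartoon of the idea, not anything the hypothesis prescribes – every function name and value below is an invention for illustration.)

```python
# Toy model: the same datum handled by the two value-creation orderings.
# All names (predetermining_organism, PRESET_VALUE, etc.) are invented.

def predetermining_organism(datum):
    # Order: [create value -> sense info]
    # The value was fixed before anything was sensed.
    PRESET_VALUE = "food"          # decided "at birth"
    sensed = datum                 # sensing happens second
    return (sensed, PRESET_VALUE)  # the value never depended on the datum

def assigning_organism(datum, goal="survive"):
    # Order: [sense info -> create value]
    # The value is built after sensing, from the datum's relation to a goal.
    sensed = datum                 # sensing happens first
    value = "relevant" if goal in sensed else "irrelevant"
    return (sensed, value)         # the value depends on what was sensed

print(predetermining_organism("anything at all"))  # value is always "food"
print(assigning_organism("helps me survive"))      # value computed on the spot
```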

***

Alright, cool… that wasn’t so bad, was it?  Now that we have two contenders for “the simplest types of value-creation,” we need to look at how these two “strategies” might work.

We’ll start with “Predetermination” (since we kinda have to, for this book to make sense).

If Predetermination is a real “strategy” that happens, how would it occur?

Obviously the idea of Predetermination isn’t hard to understand (it just means applying value to something before you sense it), but how would Predetermination actually work, in the real world?  And more importantly, how could organisms “predetermine values” in a way that allows them to survive/reproduce?

This is where we run into a slight obstacle (a predicament, if you will):

Let’s say you’re a Predetermining creature, and you’ve got a tiny bit of data (we’ll call it “Data X” or “X”) that you want to determine the value of.

Well, if you’ve never come across Data X before, how would it be possible to know its “value”? How exactly are you supposed to figure out the relationship between X and your goals, when you’ve never encountered X in your life?

It would be pretty hard.

But let’s just say, for the sake of the example, that you choose some random value for X… in that case, it’ll probably be impossible to know if that value (the one you just created) is accurate.

So basically – if you’re trying to “predetermine” the value of Data X – any value that you pick will essentially be a guess.

To make the problem worse: there’s an insane amount of information in our universe… and therefore, most of the data you run into probably won’t be X (which matters if you’re trying to survive/reproduce).

If Data X is something your body needs for survival, you’ve gotta be sure that when you act, the thing you’re acting on is definitely X, and not some other random bit of data out there. You also need to be sure that you’re acting on X the right way (because if not, you won’t be able to accomplish your goals). And unfortunately, when your values are a guess, any actions you take (based on those values) will also be a guess.

So how can you, as a Predetermining creature, deal with this “predetermining predicament?” (How can you create accurate values for X before sensing it?  And how can you make sure you’re acting on the right info, the right way?)

Well the reality is, as long as Predetermination is your main value-creation strategy, your values will always be somewhat of a guess – which means it’ll always be impossible to know (with total certainty) if you’re acting on the right thing, in the right manner…

However…

If you conveniently happened to have a Unique Sensory Mechanism – so unique, that the only thing this mechanism is capable of sensing (in the entire universe) is Data X – then you might not need total certainty.

With a Unique Sensory Mechanism, it would be extremely easy to at least find the data you’re looking for.  And if you can find the right data, it should also be easy to “predetermine” the right value for that data.

How?

Well remember, to create values, you need sensory info and a goal – and you already have the goal part.  And although you might not have sensory info right now, if you know exactly what info you’re going to sense in the future, you can treat that future info as a stand-in – which gives you the two ingredients you need to create accurate values, and something to base your actions on.

It’s still possible that your values will be inaccurate (because, as a predetermining organism, you’re still relying on “guesses” to some degree) – but a Unique Sensory Mechanism will make it a lot more likely that you’ll find the right stuff & create accurate values.

Of course, you won’t be able to see color, or hear music, or recognize complexity, using a Unique Sensory Mechanism.  (If Unique Sensory Mechanisms are your only way of interacting with the world, then the things you detect will probably have to be things which are essential to your survival/reproduction.)

However, if you’re in a “predetermining predicament” like the one we just talked about, Unique Sensory Mechanisms (or USMs) are one way you could deal with that predicament.

So as long as Data X actually exists in your environment, and as long as you have an appropriate response ready-to-go, you’ll be able to react the moment you detect something. (Basically, by using a USM, you won’t ever have to think or hesitate – you can just take immediate action anytime that USM goes off – because you already know what it is you’re sensing, and you’ve already determined its value.)
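The USM idea above can be sketched in a few lines of toy Python.  (Nothing here is real biology – the class name, the target, and the response string are all invented purely to illustrate the logic.)

```python
# Toy model of a "Unique Sensory Mechanism" (USM): the sensor can detect
# exactly ONE kind of data, so detection itself implies identity -- and the
# response was pre-set before anything was sensed. All names are invented.

class USM:
    def __init__(self, target, preset_response):
        self.target = target                    # the only thing this sensor detects
        self.preset_response = preset_response  # decided before any info arrives

    def encounter(self, datum):
        if datum != self.target:
            return None                         # everything else is invisible to it
        return self.preset_response             # immediate, automatic action

# A cartoon of HIV's CD4-binding sensor (illustrative only):
cd4_binder = USM(target="CD4_receptor", preset_response="attempt_infection")
print(cd4_binder.encounter("rock"))            # None -- not sensed at all
print(cd4_binder.encounter("CD4_receptor"))    # attempt_infection
```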

***

At this point you might be thinking, “This is all way too convenient – there’s no way a creature could just be born with a mechanism which detects the exact right information it needs to survive/reproduce.”

Except, that’s precisely what we can see happening, in certain organisms.

In fact, there’s a whole bunch of organisms that use (what could be considered) USMs to interact with the world…

… and they all just happen to be organisms without brains.

Take, for example, HIV. Basically, the virus has tiny proteins (which function as sensors) covering its body, that are only capable of “detecting” certain types of cells: immune cells with a CD4 receptor on their surface ([1] nih.gov, “The HIV Life Cycle,” 2021).

Because of this, HIV cannot infect many organisms.  It can only bind to, and infect, specific cells with specific characteristics.  When HIV detects one of those cells, it immediately attempts infection.  It doesn’t “think about” whether or not to act, it just carries out an automatic & specialized response whenever it senses something (almost like that response was “pre-set” – before any info was detected).

Or take the Neutrophil: the most common type of white blood cell in the human immune system. Like many other immune cells, Neutrophils are responsible for killing infections and controlling inflammation in the body. But in order to find infection and inflammation, they must use a process called chemotaxis ([2] Nuzzi, Lokuta, & Huttenlocher, “Methods in Molecular Biology,” 2007, p.23-35).

How does chemotaxis work?  Well basically, Neutrophils have receptors on their body (called “chemoreceptors”) which only bind to specific chemicals.  When these receptors bind to a chemical, the Neutrophil starts moving to where the chemical is most concentrated (which allows them to locate their targets with relative ease).

Unlike HIV viruses, Neutrophils can have a variety of different chemoreceptors; and each receptor may “sense” a different substance.  However, when a Neutrophil detects one of these substances, it has a consistent & predictable reaction: it starts migrating toward the source of the chemical.
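The gradient-following part of chemotaxis can be reduced to a very simple sketch.  (This is a cartoon of the real biology – a cell on a one-dimensional line of concentrations, with every name and number invented for illustration.)

```python
# Toy model of chemotaxis: a "cell" on a 1-D line of chemical concentrations
# repeatedly steps toward the higher-concentration neighbor, stopping at a
# local maximum. Purely illustrative -- not a model of any real Neutrophil.

def chemotaxis(concentrations, position):
    path = [position]
    while True:
        here = concentrations[position]
        # Look at both neighbors (treat off-grid positions as empty).
        left = concentrations[position - 1] if position > 0 else -1
        right = concentrations[position + 1] if position < len(concentrations) - 1 else -1
        if left <= here and right <= here:
            return path                       # local maximum: target located
        position += 1 if right > left else -1
        path.append(position)

gradient = [0.1, 0.2, 0.5, 0.9, 0.4]          # chemical most concentrated at index 3
print(chemotaxis(gradient, position=0))       # [0, 1, 2, 3]
```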

If you wanted to, you could label each chemoreceptor as a separate USM – which would mean that Neutrophils carry multiple USMs on their body.  But even though they have multiple chemoreceptors, each chemoreceptor still “senses” highly specific info – and the moment something is detected, the Neutrophil has a highly specific response (suggesting that it might’ve “predetermined” the value of that sensory information).

Of course, HIV and Neutrophils (as well as all the examples in the above karat section) are more complicated when studied in detail; and carry other anatomical, genetic, and molecular systems on/in their body.  However, most of them do seem to use – what could be described as – “Unique Sensory Mechanisms” to detect info in the world.

But the truth is, Unique Sensory Mechanisms are NOT the only kind of mechanisms a creature could use to solve the “predetermining predicament.”

If you read karat #2.3, you may have noticed there was one example that stood out: the carnivorous sponge.

Basically, the carnivorous sponge’s body is covered with hook-like filaments, which the sponge uses to catch & consume prey. These hooks (which also function as sensory mechanisms) are able to “catch” many different things; they don’t exactly discriminate.

In this case, it’s obvious that the sponge’s sensory system doesn’t detect “highly-specific & unique information,” but instead detects a more broad type of data (physical touch/disturbance)… which means we can’t really label this as a “Unique Sensory Mechanism.”

But what is it?

Well there’s one very important thing to note about this particular setup: when an organism (such as the carnivorous sponge) has a sensory system that detects a broad type of data – as long as no nervous system is present – the organism tends to use the same response (or sequence of responses) for all info it detects.

For example, when the sponge catches something with its hooks, the sponge’s cells react by surrounding whatever was caught, then attempting to digest it. ([8] Vacelet & Duport, “Zoomorphology,” 2004, p.179-190.) This happens no matter what the hooks catch: meaning the sponge is using the same response for essentially everything the hooks “detect.”

So while the sponge may not sense unique or specific data, the sponge still basically treats all sensory data as if it has a specific “value” (food).  And since the sponge also acts as soon as it detects something, it looks as if the “value” of that sensory data might’ve been “predetermined.”

Still, we can’t exactly call these hooks “Unique Sensory Mechanisms,” due to how they function.  If we want a better term to describe them, we could call them “Generic Response Mechanisms” (or “GRMs”), because the sponge uses the same, generic response (or set of responses) for all information the hooks sense.

“But why would GRMs work? Why would a creature with a GRM actually be able to survive/reproduce?”

Well, in the case of the sponge, it all comes down to the “response systems” (i.e. digestive systems) the sponge carries.

See, even if an organism detects extremely broad information – as long as the creature has “response systems” which are able to handle most of the things it detects, and as long as the creature is able to meet its basic needs through this process enough of the time – GRMs could hypothetically be just as useful as USMs.
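The GRM logic, including the “works often enough” condition, can be sketched like this.  (Again, a toy: the functions, the creature list, and the numbers are all invented for illustration.)

```python
# Toy model of a "Generic Response Mechanism" (GRM): the sensor detects a
# broad category of data, and the organism runs the SAME response sequence
# for everything it catches -- no identification step. Names are invented.

def grm_response(caught_object):
    # One fixed sequence, regardless of what was caught:
    return ["surround", "attempt_digestion"]

def grm_payoff(environment, digestible):
    # A GRM is viable as long as the generic response succeeds often enough,
    # i.e. the response systems can handle most of what gets detected.
    meals = sum(1 for thing in environment if thing in digestible)
    return meals / len(environment)

for thing in ["shrimp", "larva", "speck of debris"]:
    print(thing, "->", grm_response(thing))    # identical response every time

print(grm_payoff(["shrimp", "larva", "debris", "shrimp"],
                 digestible={"shrimp", "larva"}))  # 0.75 -- pays off often enough
```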

This means both Unique Sensory Mechanisms and Generic Response Mechanisms could be viable ways for an organism to “predetermine” value.

Both USMs and GRMs are built so that the creature using them has a (seemingly) “pre-set” reaction to sensory data.  Also, these sorts of mechanisms appear to be the primary way that organisms without nervous systems detect & respond to info in the world.

Yes, some organisms might have multiple “predetermining” sensory mechanisms on their body… maybe even mechanisms which work in totally different ways. (The Salmonella bacterium, for example, has multiple systems for invading and infecting eukaryotic cells – [9] Boumart, Velge, & Wiedemann, “FEMS Microbiology Letters,” 2014, p.1-7.)

But even still, it appears that Predetermination may indeed be happening in organisms without brains.  And not just that, but it might be the main strategy by which these organisms create & use value.

But why does all this matter?

Well it matters for one, simple reason:

If we know (or have hypothesized) how organisms without brains use value – and if we know that organisms WITH brains behave differently from creatures without brains – then maybe the difference between these systems/creatures stems from a difference in how value is used.

So that’ll be the next chapter’s topic: how organisms with brains (hypothetically) use value.


Chapter 3: How Organisms With Brains (Hypothetically) Use Value

____________________

In Chapter 2, I discussed two basic strategies of creating value: Predetermination and Assignment.  I also explained how organisms without brains might be “predetermining” values, using certain kinds of sensory mechanisms: “Unique Sensory Mechanisms” (“USMs”) and “Generic Response Mechanisms” (“GRMs”).

“But what does all this have to do with the brain?”

To answer that, we need to look at what “predetermining” systems can’t do (i.e. how they’re limited).

See, USMs can only detect small amounts of highly specific info.  This means that USMs will always have a highly limited sensory capacity.

(Of course, this limitation is what allows them to function.  If you’re a creature that can only detect very specific items in the world, then whenever your sensors detect something, the info you’re sensing is statistically likely to be the items that you’re looking for.  As a result, you can basically “assume the identity” of any sensory data you come across.  And thus, you can also “assume” how that data is related to you & your goals – a.k.a. its value.)

Creatures with GRMs can detect non-specific info (a.k.a. “broad types” of info); but the creatures using GRMs don’t really “identify” the things they act on, and will generally treat all sensory data in the same ways (with a few exceptions).  In other words, GRMs will always have a highly limited ability to discriminate (i.e. tell the difference) between the various objects, creatures, & bits of info they sense.

(Again, this limitation is part of how they work.  If you can’t “identify” anything, then one way you can accomplish your goals is to just apply the same values to all sensory data.  So instead of assuming the identity of the stuff you’re sensing, you’re basically assuming the relationship that stuff has to you/your goals… which allows you to use the same set of responses on everything you detect.  And as long as this strategy pays off enough of the time – as long as you have response systems which can handle most things you detect – you won’t really have to identify what you’re acting on.)

But what if you wanted a large sensory capacity and the ability to discriminate (i.e. a sensory system which could detect large amounts of specific info, while being able to tell the difference between all those things)?  In that case, you couldn’t use USMs or GRMs; because neither of them are set up for that.

It’s possible for a creature to carry multiple USMs and/or GRMs – but unless these mechanisms were somehow working together, the creature still won’t achieve that result (high sensory capacity & high discrimination-ability).  Because as long as each mechanism is still “predetermining” values, each mechanism should still have (and rely on) the limitations I just described.

(And even if your mechanisms were tied together – if the information you’re sensing is diverse enough – you’ll need some way to “integrate” or “calculate” the different signals coming from your USMs and GRMs.  And unfortunately USMs and GRMs, by themselves, can’t “calculate” anything.)

This means, in order to have a large sensory capacity & ability to discriminate, you’ll most likely need a new type of system.

The problem is, that new system won’t be able to utilize “Predetermination”… and there are two reasons why:

A. When you can sense large amounts of info and discriminate between things, it becomes impossible to “assume the identity” of what you detect (like USMs do).

The only reason why USMs could get away with “assuming the identity” of things is because they sense such tiny amounts of data overall. But when you can sense wide varieties of info, that “identity-assumption” is no longer viable. And since the values of these Predetermining systems/mechanisms are based on “identity-assumptions,” it means these new systems won’t be able to create value the same way.

B. When you have a high-capacity, discriminatory system, you can’t really apply the same values to everything you detect (like organisms with GRMs).

The whole reason why a creature with a GRM would (hypothetically) apply the same value to most/all sensory info is because it can’t discriminate between the various things it’s sensing.  But if you have the ability to discriminate – and you can sense lots of stuff – you’re inevitably gonna be much more “aware” of your surroundings.  Some things might be necessary for you to act on (like a tasty fish), but other things might not be (like a motionless rock).

When you’re aware of things like this, it simply becomes impossible to apply the same values to all info you sense, because you’re gonna know that different things will have different values/relationships to you.  You could still use the same responses on everything you detect, but there’d be no reason to do that (since different things have different values).

So if your “high capacity, high discrimination” sensory system can’t predetermine values – and if values are required for all organisms to operate – then in order to use this new system, you’ll need to come up with a different value-creation strategy.

And if what I discussed in the last chapter is correct, there’s really only one other strategy you can utilize: Value-Assignment.

***

“Okay, then – let’s just suppose, for a moment, that Value-Assignment is a real thing happening in some organisms.  How would it work?  What are the most basic traits that a Value-Assigning system or creature would have?”

Sure, it’s possible that a Value-Assigning system might have a “high sensory capacity and ability to discriminate,” but that doesn’t actually tell us anything about how that system would function.

There’s one thing we can conclude, though:

Predetermining organisms don’t really have to do much, to create values (because their values are basically created & set-in-stone “at birth”/before any info is sensed)… but Assigning creatures would probably be different.

Assigning organisms determine values after sensing info – which means their values will have to “adapt” to the current situation or environment in some way.  As a result, Assigning creatures will need to take some kind of action, or carry some kind of system, whose purpose is to actively create new values.

But how would that system work?  What structures, behaviors, or characteristics would that system have?  And what traits would we see in the creatures using such a system?

Let’s start with what we already know about value.  My hypothesis states that, fundamentally, a value is a relationship.

Well in order to have a relationship, you need at least two separate things to come together and form some kind of connection.

So the first thing we might see, in a Value-Assigning system, is a way of creating and maintaining “connections” of some sort (maybe even multiple connections at once).

“How would those ‘connections’ take place, though?  What things would be getting connected?”

Potentially, one way these connections could work is by taking separate sensory units, and creating some kind of association between those units.

For instance, let’s say that Data Y is a creature that’s a threat to you (like a predatory animal).  In order to “assign value” to Data Y, you could take the sensory unit(s) which detect Data Y, and associate them with some other sensory signal that indicates “danger” (we can call the “danger signal” Data R).  By associating Y with R, you get a specific value: Value Z.  (Remember, value is not the sensory signal itself, value is the relationship between the two signals.) Therefore, Y + R = Z.

Hypothetically, this “value-association” would tell you, in a crude manner, how Y is related to you & your goals.  And as a result, you’ll have a rough idea of what to do about Y (a basis for your actions).

Depending on the situation, Value Z may even enable you to survive – or at least act in a manner that promotes your survival.
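To make the Y + R = Z idea concrete, here’s a minimal sketch in Python.  (All the names here are hypothetical illustrations of the hypothesis, not claims about how real neurons do it.)  The key point the code captures: the “value” is never either signal by itself – it’s the stored link between them:

```python
# A toy sketch of "value-as-relationship" (hypothetical names throughout).
# The value (Z) is not Data Y or Data R on their own; it's the stored
# association between a detected unit (Y) and an internal signal (R).

class ValueAssigner:
    def __init__(self):
        # Each "value" is a relationship: sensory unit -> internal signal
        self.associations = {}

    def assign(self, sensory_unit, internal_signal):
        """Create a value by linking two separate signals (Y + R = Z)."""
        self.associations[sensory_unit] = internal_signal

    def value_of(self, sensory_unit):
        """Look up the relationship, if one has been formed."""
        return self.associations.get(sensory_unit)


assigner = ValueAssigner()
assigner.assign("Data Y", "Data R")   # associate the predator with "danger"
```

Here, `value_of("Data Y")` returns the associated “danger” signal, while anything never associated returns nothing – a rough basis for deciding what’s worth acting on.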

“Okay, so maybe a Value-Assigning system would create values through ‘connections.’  And maybe those connections would come in the form of ‘associations’ between sensory units.  But that STILL doesn’t tell us a whole lot.  At the very least, there are other questions we could ask, to get a better picture of how this ‘association system’ works.”

For example…

1.  Would these “associations” be permanent?  Because, if so, your Value-Assigning system won’t be very versatile.

If Y is permanently attached to R, then Value Z will be unchangeable.  And if that connection was made by mistake, or if Y’s relationship to you changes, you won’t be able to do anything about that.

So one of the characteristics we might see in a real-life “value-association” system is the ability for associations to be modified.

2.  Would these “associations” be instant?  Because that could also be a problem.

If you’re dealing with enough sensory data that you’re frequently encountering new stuff (or new combinations of stuff), you may not always be able to determine the value of things right away.  In some cases, you might need a bit of time to figure out how info is related to you/your goals (if at all).

However, if your “value-associations” are always made the instant you sense something, there’s a good chance your values will often be inaccurate (see karat #2.1 for the definition of “value-accuracy”).

So, on top of modifying associations, you may also want the ability to increase or decrease the strength of your associations (a.k.a. the ability to “grade” associations), as time passes & more info comes in.

3.  Would you be able to remember your “associations?”  Because if not, you’ll constantly have to create new values… and all the values you do create will basically be “one-time-use” (making it impossible for you to learn anything).

So another quality/trait we might see in a Value-Assigning system, is some method, mechanism, or structure (or maybe even several mechanisms/structures) that enables “memory.” Those methods might vary depending on the species, and depending on the scale you’re looking at… but in any case, if your values aren’t “set-in-stone” (like with Predetermining creatures), then some type of “memory” would probably be good to have.
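The three properties discussed so far – modifiable associations, graded association strengths, and memory – can be sketched together in one toy structure.  Again, this is an illustration of the hypothesis with made-up names, not a model of actual neural machinery:

```python
# Toy sketch of three hypothesized traits of a "value-association" system:
# (1) associations can be modified, (2) their strength can be graded up or
# down over time, and (3) they persist, acting as a crude "memory."

class GradedAssociations:
    def __init__(self):
        # sensory unit -> {"signal": internal signal, "strength": 0.0-1.0}
        self.links = {}

    def associate(self, unit, signal, strength=0.5):
        """Form a new, tentative (not instant-and-final) association."""
        self.links[unit] = {"signal": signal, "strength": strength}

    def modify(self, unit, new_signal):
        """(1) Re-point an association made by mistake, or one whose
        relationship to you has changed."""
        if unit in self.links:
            self.links[unit]["signal"] = new_signal

    def grade(self, unit, delta):
        """(2) Strengthen or weaken the association as more info comes in."""
        if unit in self.links:
            link = self.links[unit]
            link["strength"] = min(1.0, max(0.0, link["strength"] + delta))

    def recall(self, unit):
        """(3) "Memory": a previously created value can be reused later,
        instead of being one-time-use."""
        return self.links.get(unit)
```

For example, a creature could `associate("rustling sound", "danger", 0.3)` on a first encounter, then `grade(...)` that link upward after repeated bad experiences – or `modify(...)` it entirely if the sound turns out to be harmless.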

4. Would all your associations be direct connections?  Because that might not be ideal.

Let’s say you want to create an association between Sensory Unit A and Sensory Unit B, but the only way to create that association is by using some kind of string, which goes directly from A to B.

Maybe the “string” can work in some cases… but what if A’s relationship to you changes? What if you find out that A actually needs to be connected to Sensory Unit C? Now you’ll need to either create a new string between A and C, or cut the first string & reattach it to C.

This sounds like it would be pretty easy to do – except, in an actual organism, this way of doing things would be highly inefficient.

If you have to create a direct connection between every sensory unit you associate, not only will you be using up significant amounts of energy to do this (especially if you need to create lots of values), but this process will most likely be slow as well (especially if your sensory units are located far apart from each other).

In an environment where anything can change at a moment’s notice, it’d be much better to have a system that can form & modify values quickly.  But in order to do that, you’d need a slightly different approach…

Maybe, instead of direct connections, you could have a system where many of the sensory units on your body have a representative (kind of like a friend or family member, who takes actions & makes decisions on your behalf).

Basically, instead of forming associations directly between sensory units, this system could (often) form associations between representatives.  These representatives could even be bundled together, so that associations can be formed/changed more quickly.

(For example, Sensory Unit A might have a buddy called Friend of A, who can take actions in place of Sensory Unit A.  If Sensory Unit B also has a friend, then instead of creating an association between Sensory Unit A and Sensory Unit B, your system could create an association between Friend of A and Friend of B.  And by having all “friends” positioned close together, the system can create connections more quickly & easily.)
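Here’s one way the “representative” idea might be sketched: each sensory unit gets a proxy (a “Friend of A”), and the proxies – kept together in one bundle – are what actually get associated.  The names and structure are purely hypothetical:

```python
# Toy sketch of "representative" units (hypothetical names throughout).
# Instead of a direct string between Sensory Unit A and Sensory Unit B,
# each unit has a proxy, and proxies (bundled close together) form the
# associations on the units' behalf -- making rewiring cheap and fast.

class Representative:
    def __init__(self, unit_name):
        self.unit = unit_name   # the sensory unit this proxy acts for
        self.partners = set()   # associations formed on its behalf

class Bundle:
    """All representatives kept in one place, so links form quickly."""
    def __init__(self):
        self.reps = {}

    def rep_for(self, unit):
        # lazily create the "friend of" a given sensory unit
        return self.reps.setdefault(unit, Representative(unit))

    def associate(self, unit_a, unit_b):
        # connect Friend-of-A to Friend-of-B, not A to B directly
        self.rep_for(unit_a).partners.add(unit_b)
        self.rep_for(unit_b).partners.add(unit_a)

    def reassociate(self, unit_a, old_partner, new_partner):
        # cheap rewiring: no long-distance "string" to cut and reattach
        self.rep_for(unit_a).partners.discard(old_partner)
        self.rep_for(old_partner).partners.discard(unit_a)
        self.associate(unit_a, new_partner)
```

The design point: because the proxies all live in one bundle, swapping an association from B to C is a local bookkeeping change, rather than growing or cutting a physical connection between distant sensory units.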

5. Lastly, how would your motor & response mechanisms work, with a system like this?

Remember: when you have a high sensory capacity & ability to discriminate, it doesn’t make sense to treat everything the same way – because different things are gonna have different values.  But this creates an issue:

Most “predetermining” organisms are only built to perform a limited number of actions.

If you look at any organism using a USM or GRM, the total number of things these creatures can actually do is always small/highly limited.  HIV, for example, only has the ability to attempt infection… that’s it.  It can’t walk, it can’t swim, it can’t bite or grab.  All it can do is detect a specific kind of protein on a specific kind of cell – and once it detects that protein, it can attempt to place its own RNA into that cell (infection).

“Why are Predetermining creatures like this?”

Well presumably, it’s because each Predetermining organism only deals with a small range of values, in general.  And since actions (under this thesis) are usually based on values, any creature utilizing the strategy of Predetermination will only need to take a small number of actions, in general.

Therefore, the physical structures/mechanisms Predetermining creatures use to carry out actions will only need to be built to do a small number of things.

But when you have a high sensory capacity (and can tell the difference between things), the amount of values you may have to create could be very large.  If your “systems-of-response” are only able to perform a small number of actions, those systems won’t be of any help when you run across something they aren’t made for.

So ultimately, if you have a sensory system which is “wide-ranging” in terms of what it can detect (and what values it can create); you may also need motor/response systems which are more wide-ranging in what they can do.

For example, rather than being able to eat one thing, it might be necessary for a Value-Assigning organism to be able to eat a whole myriad of things; and to have digestive systems which allow this.

Or maybe, rather than having motor systems capable of one, single-direction movement (like swimming forward), it might be better for a Value-Assigning creature to have motor systems capable of several types of movement (dodge, duck, dip, dive, etc.).

And rather than having appendages meant for one kind of task (like infection), it might be better to have appendages (like hands or tentacles) capable of performing a variety of tasks.

These “broad-functioning” motor & response systems wouldn’t just allow organisms to handle a larger variety of situations, environments, and values; but they’d also make it easier for creatures to adjust their behavior when environments, situations, and values change.

***

So if Value-Assignment is occurring in real creatures, these are just some of the basic traits we might see.

However, just because the above traits might sometimes be advantageous, it doesn’t necessarily mean that they’d occur in all Value-Assigning systems, at all times.

In fact, there could be situations where Assigning organisms are better off without some of the above characteristics. For example, certain “value-associations” could be more effective if they were (essentially) instantaneous and permanent; because it would allow the organism to have quick, reflexive reactions to specific kinds of sensory information.

Or there could be cases where Value-Assigning creatures use specialized motor & response systems (rather than “broad-functioning” ones) because it might be necessary for dealing with specific entities/info/circumstances the organism frequently encounters.

There may also be cases where the “association” mechanisms are NOT bundled together, but are instead scattered or dispersed (sort of like a “net” of sensory units).

Still, it’s probably no coincidence that the above traits look very much like the traits of nervous systems/brains, and organisms which carry them.

Not only do nervous systems have a higher sensory capacity & ability to discriminate than “predetermining” organisms, but nervous systems also often carry the above traits: the ability to form “associations,” the ability to modify (and change the strength of) associations, “representative” sensory units, and some kind of memory.

It’s also probably no coincidence that organisms with nervous systems typically have more “wide-ranging” motor & response systems (such as legs, arms, or tentacles) on their body – capable of performing a broader set of tasks, and allowing the organism to move/behave in a larger variety of ways, than “predetermined” systems.

“So hypothetically (if nervous systems are in fact Value-Assigning systems), when and where would we see nervous systems show up, in nature?”

Well according to this book’s argument, a Value-Assigning apparatus (like a nervous system) should be required when an organism senses large amounts of information, and must also discriminate between inputs in some way.

But what exactly is a “large amount” of info?

In karat #3.1, we talked about how jellyfish stingers (“cnidocytes”) are a special kind of sensory mechanism which can sense both “broad” and “specific” information at the same time.

However, there are a couple reasons why cnidocytes (on their own) may not necessarily require a nervous system.

First off, we know that sensing “broad”/”non-specific” info does not count as “large amounts of information.”

If you think about it, the term “large amount” (at least in this context) suggests variety or diversity… which suggests different kinds of data being sensed at the same time.  Since GRMs work by sensing a single type of data, like physical touch, you can’t really say that GRMs sense a “variety” or “diverse range” of information.

But what if you added another type of sensory data into the mix?  For example, let’s say you had a special sort of GRM that could not only detect physical touch, but could also detect a specific chemical as well (which is usually how cnidocytes work).

In that case, your sensory mechanism would be able to detect a wider variety of information than a normal GRM.  (Rather than one type of info, you can now sense one type of info + one specific item in the world.)

Would this count as a “large amount of information?”  My guess is, no, it wouldn’t.

Of course, the exact amount of information that has to be sensed before a nervous system appears might never be known.

But maybe – if nervous systems are required when an organism can sense a wide enough variety of info – then the more kinds of information an organism can detect, the more likely it is that organism will be carrying a nervous system.

Well if we were to study the anatomy of cnidocytes, we’d see that not all of them are attached directly to neurons…

However, interestingly enough, the organisms which carry cnidocytes always have a nervous system of some sort ([13] study.com, “Nervous System in Cnidarians | Features & Specialized Cells,” 2023) – a nervous system which often works in conjunction with these specialized cells.

“Okay, so MAYBE value is something all organisms have to create & use… and MAYBE brains perform the function of Value-Assignment… but that doesn’t mean Value-Assignment is the MAIN function of the brain!”

Well yes, that’s true.  So far I’ve only really argued that nervous systems perform Value-Assignment, not that Assignment is the brain’s primary function.  There could be some other operation or function which falls under the category of “main function.”  It’s also possible that the brain has no main function.

But after I make my case, maybe your mind will be changed…

So that’s what the next chapter will deal with (and what this whole thesis has been leading up to): why Value-Assignment (may be) be the primary function of the brain.

Value-Assignment: The Primary Function Of The Brain

Sebastian Rey

Chapter 4: Why Value-Assignment (May Be) The Primary Function of The Brain

____________________

Hi everyone.  Look, I know some of you opened this book, saw the Table of Contents, and went straight to this chapter.  I get it: it can be tempting to skip to the big finale, because attention spans are short and no one wants to waste their time doing crazy things like “reading.”

But skipping over the first 3 chapters of this book would be like starting a movie and immediately jumping to the end.  You CAN technically do that… the problem is: the movie’s end will usually only make sense if you saw the beginning and middle parts.

Well it’s the same with this book.

If you want to start this chapter anyway, I can’t stop you.  But without grasping my previous arguments, it’s gonna be difficult to follow everything I’m talking about.  Alright, that’s it – I’m done ranting.  Now get out of here.

***

In the last 3 chapters, I laid out my argument for why Value-Assignment could be *a* function that nervous systems (including the brain) perform.  But one big question I haven’t answered yet is, “Why would Value-Assignment be the brain’s PRIMARY function?”

Well this is the chapter where I finally address that.

Unfortunately, to answer that question, I need to explain something called the “order-of-operations” (which was mentioned in karat #2.2).

Now I have to warn you, most of Chapter 4 will be spent describing & explaining these “order-of-operations.”  But if we want to see why Value-Assignment (may be) the primary function of the brain, we need to look at the other major functions the brain (may be) performing… which means discussing the order-of-ops.

It’s gonna be a bit of a journey, but just like before, we’ll take things one step at a time; and by the end of this chapter, it’ll all make sense.  So here we go.

Let’s start by talking about what the “operations” are.

Under this book’s hypothesis, there are six fundamental operations which all living creatures either can or will use to accomplish their biological goals.  They’re:

  • Information Detection, a.k.a. Sensation (universal)
  • Signal Confirmation, a.k.a. Identification (non-universal)
  • Determination of Value (universal)
  • Determination of Effective Action (universal)
  • Action (universal)
  • Decision-Making (non-universal)

These operations are what I call “first order functions” – meaning that, in every biological system (whether small or large), these operations should be the first priority.  Biological systems, like the brain, might perform a whole range of different tasks, functions, and activities; but all observable tasks/functions/activities should either exist underneath, or be part of, one of these six operations.

Four of the operations will be necessary for all organisms to carry out (making them “universal” operations), while two operations will only be used by some creatures, some of the time (making them “non-universal” operations).

Also, since these operations can’t occur all at once, every creature will have to perform these operations in a particular order.  However, not all organisms will necessarily perform the operations in the same order.

In fact, if I’m correct, the order-of-operations should be different for Predetermining and Assigning creatures.

For Predetermining creatures, the order-of-operations should be:

[determine the value of some information → determine an effective action for that info → detect that info → (potentially) confirm the signal → action]

For Assigning creatures, the order-of-operations should be:

[detect information → (potentially) confirm the signal → determine the value of that information → determine an effective action → (potential) decision → action]
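The two hypothesized pipelines can also be written out as ordered step lists – the step names are taken straight from the text, with the non-universal operations marked as optional:

```python
# The two hypothesized orders-of-operations, as ordered step lists.
# "(optional)" marks the non-universal operations from the text.

PREDETERMINING_ORDER = [
    "determine value",              # fixed "at birth," before any info
    "determine effective action",
    "detect information",
    "confirm signal (optional)",
    "act",
]

ASSIGNING_ORDER = [
    "detect information",           # sensing comes FIRST here
    "confirm signal (optional)",
    "determine value",              # value is assigned AFTER sensing
    "determine effective action",
    "decide (optional)",            # the extra, Assigning-only step
    "act",
]
```

Laid out this way, the two differences are easy to spot: Sensation & Confirmation move from steps 3–4 up to steps 1–2, and the Assigning order gains the extra “decide” step.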

Assuming the above two order-of-operations are accurate, there are a few things we can notice.

First off, when comparing the two “order-of-ops,” we can see that some of the steps are switched around.  For example, in the Predetermining order-of-ops, Sensation & Confirmation are the 3rd and 4th steps.  In the Assigning order-of-ops, Sensation & Confirmation are the 1st and 2nd steps.

We can also see, in the Assigning order-of-ops, that there’s an extra step labeled “(potential) decision.”

If the brain is indeed a Value-Assigning system, then the Assigning order-of-ops is obviously the one we need to examine.

Now before we move forward, there are a couple important details I need to clear up: when I use the term “Determination of Value,” I’m not necessarily talking about “Value-Assignment.”

Under this thesis, both Predetermining and Assigning organisms create values; which means that “Determination of Value” would be happening in both systems, in some capacity. The difference is that Predetermining creatures can’t determine values actively (in other words, they don’t spend energy on the value determination process, while alive), whereas Value-Assigning creatures can determine values actively.

So since Value-Assignment is an active operation, we could say that Value-Assignment is an actual “function” of the brain.

Simply put, Predetermination and Value-Assignment are just different ways that “Determination of Value” can occur.  But in this chapter, when I talk about the “functions” the brain may be performing, I’m talking about the active operations.

“Which operations are active?”

Predetermining creatures have only three active operations: Sensation, Signal Confirmation, and Action.  In Value-Assigning creatures, all operations will hypothetically be active.

Ok, now let’s move on.

In the previous chapter, we discussed how Value-Assignment (might) work… but what about the other five operations?  What even are “Signal Confirmation” and “Determination of Effective Action” anyway?  And if all these operations are active in the brain, is it possible that one of them might be “the brain’s main function?”

Well let’s go over each one and find out; starting with the first operation on the list: “Information Detection, a.k.a. Sensation.”

***

SECTION 1: Information Detection, a.k.a. Sensation

Sensation is “the detection of information by an entity or sensory unit.” Sensation is what enables any organism or independent being to interact with its surroundings, and is the main task that Subjective Observers accomplish.  Also, to achieve Sensation, you generally need some kind of device or apparatus which is capable of sensing things (a sensory mechanism).

As the list of operations states, Sensation is something that every creature must do.  If organisms couldn’t detect things around them, they’d have no way of responding or adapting to their environment – which means that life as we know it wouldn’t exist.

It’d be pretty hard to argue that Sensation is the main function of the brain, since all life forms in the universe (and possibly even non-life forms, such as advanced A.I.s) would need to perform Sensation in order to survive/reproduce/function.

But since we already know all this, let’s move onto the next operation: “Signal Confirmation, a.k.a. Identification.”

***

SECTION 2: Signal Confirmation, a.k.a. Identification

I’m defining Signal Confirmation as “the process wherein an entity senses an object or unit of data, then uses some separate, functionally-unique mechanism (or mechanisms) to detect an additional element (or elements) of that particular object/unit of data, in order to confirm the state or identity of the initial signal.”

So umm… what is this, exactly?  And why did I include it in the list of operations?

Well there’s a reason this operation is also named “Identification.”

Let’s say you’re a single-celled organism, and you’ve just sensed something.  Now being a creature this small, you don’t really have a whole ton of energy to spend, in general.  So you probably don’t want to waste your energy acting on whatever this thing is, unless you’re sure it’ll help you accomplish your goals in some way.

Even if you have a Unique Sensory Mechanism – it’s possible that your USM is somehow getting tricked, or just having a bad day.

So in this situation, it might be good to have an extra mechanism or system on your body which can verify that this info is indeed what you’re looking for: a mechanism which helps you identify the object you’re currently sensing.

And as it happens, there are creatures in the world that might be doing this.

HIV, for example, must bind to two separate receptor proteins in order to infect cells. It’s a complex process, but the basic rundown is: after HIV binds to the CD4 receptor, HIV must bind to a second receptor (either CCR5 or CXCR4)… which then permits infection. ([14] Alkhatib, “Current Opinion in HIV and AIDS,” 2009, p. 96-103.)

Now it’s possible that this “secondary binding mechanism” is simply a physical requirement for infection.  But it’s also possible that one of the reasons for this “secondary binding” is to confirm or verify that the thing HIV detected actually is a cell it can infect.

You might be wondering, “Okay, but even if Signal Confirmation is a real thing… how is it any different from Information Detection?”

Well technically, all Signal Confirmation is a form of Information Detection.  But not all Information Detection is Signal Confirmation.

Here’s the important difference: Info Detection is an operation which can (hypothetically) occur by itself, without being connected to a separate structure or system.  Some “information detection systems” (such as the camera in your phone) don’t even need to be attached to a living creature, in order to technically function.  But that wouldn’t be true for Signal Confirmation.

In this thesis, Signal Confirmation cannot occur by itself.  By definition, for Signal Confirmation to happen, there must be some other thing (i.e. some sensory info) that has already been detected beforehand… which means the mechanism or mechanisms that perform Signal Confirmation must always be connected to a separate thing.

Also, the purpose of Signal Confirmation wouldn’t just be to detect data (per se), but to “make sure” that a piece of data is what it seems to be.
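That difference could be sketched as a two-stage gate, loosely inspired by the HIV two-receptor example.  The names and checks here are purely illustrative – the point is only that the second detector is functionally separate, and only runs after (and because of) the first:

```python
# Toy sketch of Signal Confirmation as a two-stage gate (all names are
# illustrative, loosely modeled on the HIV two-receptor example).
# Stage 1 is plain Information Detection; stage 2 is Signal Confirmation,
# which by definition only happens AFTER something was already detected.

def detect_primary(signal):
    """Stage 1: basic Information Detection (can exist on its own)."""
    return signal.get("primary") == "CD4-like"

def confirm_secondary(signal):
    """Stage 2: a separate, functionally-unique check on the SAME signal."""
    return signal.get("secondary") in ("CCR5-like", "CXCR4-like")

def attempt_action(signal):
    if not detect_primary(signal):
        return "no detection"
    if not confirm_secondary(signal):
        # detected something, but identity unconfirmed: don't waste energy
        return "detected, but not confirmed"
    return "confirmed: act"
```

Notice that `confirm_secondary` is meaningless on its own – it only matters as a follow-up to a prior detection, which is exactly what separates Signal Confirmation from plain Information Detection in this thesis.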

Now if we were to take a more in-depth look at the various creatures mentioned in Chapter 2, we’d see that most of them don’t seem to have any “identification” or “signal confirmation systems.”  So we can probably conclude that Signal Confirmation isn’t necessary for all organisms to perform (which is why I labeled it as a “non-universal” operation).

Of course, it’s also possible that Signal Confirmation is an operation NO creatures perform, and is simply something I dreamed up in a state of enhanced delirium.  But if it is a real process, due to what it is, I don’t believe it can exactly be classified as basic “Information Detection.”

“Why can’t Signal Confirmation be the main function of the brain?”

Because brains probably don’t perform it.

If Signal Confirmation happens at all, it’s most likely a small (predetermining) organism phenomenon.  Think about it: the only reason why a creature would ever have to perform Confirmation is because it can’t sense much info, and must rely on strange or unique systems to help “identify” things.

But since nervous systems have the capacity to sense wider varieties of info, there should be no need for a separate “identification” process.

In fact, the way brains/nervous systems would likely “identify” things is by using neural connections (which, if you read the last chapter, *may be* how Value-Assignment happens).  Therefore, in the brain, Value-Assignment would probably replace Signal Confirmation.

The only reason I included Signal Confirmation in the Value-Assigning order-of-ops is because it’s hypothetically possible (in some alternate universe or unknown case) for an Assigning creature to perform Signal Confirmation. But in our universe, it appears Assigning organisms don’t do that.

***

SECTION 3: Determination of Effective Action

If you look at the operation list, you’ll see the next one is “Determination of Value.”  But since we’ve already technically covered that, I’m gonna skip it for now (we’ll circle back to it at the end).

After “Determination of Value” is “Determination of Effective Action.”  Determination of Effective Action is another universal operation, which means all organisms will have to perform it in some way (even if it doesn’t always occur actively or consciously). But what exactly is “effective action?”

I define it as “action that an entity would take to achieve, or bring itself closer to, its goal(s).”

Now the first thing to mention is this: when I use the term “goal(s)” in the above definition, I’m not just talking about biological goals (although biological goals can be part of it).  In this definition, the term “goal(s)” can also apply to “smaller,” “less important,” or more arbitrary goals that an entity might have.

Let’s say you’re playing a game of basketball, you’ve got the ball, and you’re trying to score – but there’s somebody in front of you, blocking you from shooting.  In this scenario, there are a few options you have for achieving this “small” goal: you could pass the ball to a teammate (who could then score), you could try to move away from the blocker and then shoot, you could try taking the shot in place… etc.

Before you do anything though, you’re gonna have a short moment to figure out which action is best for you – based on your position, your goal, and your understanding of the game (i.e. learned experience).  Whatever option you’d go with (whatever option you determine is best), is basically the “effective action” in that moment.

(Some of you might be thinking “that sounds an awful lot like Decision-Making”… don’t worry, we’ll get to that later in this chapter.)

Of course, in life, there are usually better ways & worse ways of achieving goals.  For example, if you’re playing chess, there are almost always gonna be better moves you can make, and worse moves you can make.  (And sometimes you’re not even gonna know which moves are better, and which are worse.)

But “effective action” does NOT mean the best possible action you can take to achieve a particular goal.  Effective Action simply means the action that you’ve determined is the best, based on your position, goals, and/or understanding/perspective.

Another thing to mention is that effective action can exist without any action happening.  Effective action is simply the action you would take in order to achieve a goal.

Back to the basketball example: let’s say you quickly scan your surroundings, and you judge that your best move is to take the shot in place. But before you start to shoot, the blocker steals the ball. In that situation, you didn’t actually take the shot you wanted (i.e. the effective action). But since you would’ve taken that shot to reach your goal, it means you still determined the effective action… at least in this scenario.

Lastly, to execute an effective action, you must have some physical system or body part, which is capable of carrying out the effective action.  And not only that, but the way that system/body part works, moves, or is built should depend on what the effective action is.

“Why can’t Determination of Effective Action be the main function of the brain?”

Because it relies too much on value.

To formulate effective action, you need something to act on (a.k.a. sensory info), and a goal or goals to act toward. But that’s not all, no! You must also figure out the relationship between that sensory info and those goals… so you can determine which actions are (or will be) effective.

Well what’s another word for “the relationship between sensory info and goals?” Yup, it’s our old buddy “value.”

Ultimately, this tells us four things:

a.) It won’t be possible to come up with effective action without the use of value.

b.) Value must always be determined before or alongside effective action.

c.) The actions which constitute “effective action” should always (in some way) be linked to a specific value or values.

d.) Also, these two operations (Determination of Effective Action and Determination of Value) are always going to “travel together” within the order-of-operations: so when you move or change one, you must always move/change the other. (This is why Effective Action Determination happens right after Value Determination in both order-of-ops.)

Because these two operations are so closely connected, it means that Predetermining and Assigning creatures will most likely “determine effective actions” differently.

When it comes to Predetermining organisms, we know that values are fixed.  So we can probably conclude that, for these creatures, effective actions will be fixed as well.

Which means, for these organisms, the process of determining effective action should be pretty straightforward.  In fact, Predetermining creatures won’t need to consciously “determine” effective actions at all.  Instead they’ll simply need to build the systems/body parts which carry out effective actions (i.e. the “response mechanisms” of the organism)… which is something that happens automatically, at birth.

Also – because this process is “automatic,” and because these effective actions are “fixed” – the systems/body parts (which carry out these effective actions) will probably be tailored to specific values, and will probably be unchanging in terms of how they function.

For Value-Assigning organisms, “determining effective action” might often be more like the basketball example, where an entity must actively “figure out” the best course of action in a particular circumstance.

The “response mechanisms” of Assigning creatures probably won’t be tailored to specific values (at least in many cases), and will probably be more adaptable.

(There’s one exception: if an Assigning system were to have “genetically predisposed” values/neural connections, the effective actions attached to those values – and the mechanisms which carry them out – would also likely be “predisposed.”  More on this in Chapter 6, Note 3.)

“Hold on a minute, you just said that effective action requires sensory info and a goal – but according to order-of-operations at the beginning of the chapter, Predetermining organisms create effective action BEFORE SENSING INFO.  How can these creatures ‘determine effective action’ when no sensory info is there?”

Easy: if these organisms can create values before sensing info, then they should be able to do the same thing with effective actions.

Predetermining organisms may not have sensory info right now – but if a Predetermining creature already “knows” (or has determined beforehand) what info it’s going to act on, and what the value of that info will be, then it should be possible to determine an effective action for that info… even if it hasn’t sensed anything yet.  Because again, effective action depends directly on value.

***

SECTION 4: Action

The next operation on the list is “Action.”

For the purposes of this book, Action means “any broad physical movement, concerted change, or patterned behavior an entity engages in.” Technically, Action does not have to serve the biological goals of an organism, although it often does.

Out of all the operations, Action is probably the one that needs the least discussion.  We all know what an “action” is.

Still, there is one thing I’d like to mention about “action” before we move on (something I haven’t talked about yet).

In Predetermining creatures, actions/behaviors are very predictable (due to their systems being “fixed”).  And in Assigning systems, we know that actions/behaviors would generally be less predictable & more “varied.”

However, there should always be patterns of behavior & physical commonalities we can observe in Assigning species (including humans, if we’re indeed an Assigning species) – and there’s one reason why:

Ultimately, even though values may be more “diverse” in Assigning systems, it’s still true that all organisms must accomplish biological goals (survival/reproduction) to continue inhabiting this planet.

And since those biological goals are unchanging, the actions of Assigning creatures should often coincide (i.e. line up) with those goals – even if the connection isn’t always direct or obvious.  In other words, despite having more “adaptable” systems, there should still be some predictability to the actions/traits of all Value-Assigning organisms.

(If behavior has no predictability, you’re probably not looking in the right place, or through the right lens.)

“Why can’t Action be the primary function of the brain?”

There are two main reasons; you can pick whichever one you like.

a.) All organisms, including ones without brains, must perform actions to survive/reproduce.

b.) Without value, Action is almost always wasteful for biological creatures – because Action, by definition, is an expenditure of energy.

Consider this: all biological organisms have a limited amount of energy they can spend before dying.

Organisms could hypothetically take actions which have no goal behind them.  But if a species of creature only took actions with no goal, all they’d be doing is spending more and more energy.  And thus, that species would almost certainly die off.

Actions must be connected to values, at least on a regular basis, for an organism to accomplish its goals (survival/reproduction) and keep the species going – which means that action is going to depend on value (along with whatever process was used to determine value).

Plus, if Action, by itself, was the primary function of the brain (or of any biological system), you’d probably see a lot more random and needless behavior in such systems (i.e. actions with no goal).

Thus, I think we can safely conclude that Action is NOT the main function of the brain.

***

SECTION 5: Decision-Making, Determination of Value, and The End of The Main Thesis

The last operation on the list is “Decision-Making.”

Now unfortunately, the discussion on Decision is so big and multi-layered that I can’t really delve into the whole thing here (or this would turn into a chapter which is mostly about Decision-Making).  So instead, I’m gonna dedicate the next chapter to that discussion.

However, there are a couple questions about Decision-Making that I will address here:

First off, “Why is ‘decision’ part of the Assigning order-of-ops at all – couldn’t Assigning creatures accomplish their goals without ever making decisions?” Also, “Why can’t Predetermining creatures use Decision-Making?”

I’m gonna try to answer both of these questions at the same time.

As we’ve discussed, Predetermining organisms are set up in such a way that their values are locked-in-place (i.e. fixed).  So every operation which directly depends on those values (such as Action) should also be “fixed” – at least in terms of how they’re executed.

What this ultimately means is that the actions/behaviors of a Predetermining creature should always be “automatic”: occurring without any intentionality or decision.

But Assigning systems will be a bit different.

Value-Assigning systems, in general, are going to sense wider varieties of info.  When a Value-Assigning system is dealing with enough sensory info, inevitably, much of that info is gonna be “noise” (meaning it’ll have no relevance to the creature’s goals, and will thus be unnecessary for the creature to act on).

The problem is, as I stated above, creatures only have a limited amount of energy to spend.

So when “important” info is mixed in with a bunch of “noise,” and when there’s only limited energy to spend, an organism may need some way of selecting what to act on, and what not to.

This is where decision enters the picture.

Presumably, the main purpose of Decision-Making is to allow certain entities (entities dealing with lots of sensory “noise”) to “select,” “choose,” or “intentionally control” what they act on (as well as when & how to act), so that they’re not wasting energy responding to every trivial thing they sense.

How much info does an Assigning system need to detect before decision is necessary? That I can’t tell you. Also, how decision works (as well as where it originates in the brain) is a totally separate issue, which will be discussed in the next chapter.

But this is the basic explanation of why decision would happen in Assigning organisms & not Predetermining ones.

“Why can’t Decision-Making be the primary function of the brain?”

Because, in most cases, Decision-Making will be dependent on the process of Value-Assignment.

Just like Action, the only way for an organism to make an effective decision (i.e. a decision which achieves/moves it closer to its goals) is by using some kind of value.  And just like Action, decision without value is biologically wasteful most of the time (at least when that decision results in an action).

It is possible for a creature to make decisions without goals behind them (at least hypothetically).  But unless those decisions have goals at least some of the time, they’ll simply be exhausting energy.

Besides, if I’m correct, the entire purpose of Decision-Making is to support the Action operation (allowing actions to occur “selectively”).  And in order to do that, you generally need value.

Of course, there are a couple key differences between Decision-Making and Action that should be pointed out.  One is that Actions can happen “automatically,” whereas decisions always appear to have a “non-automatic” quality to them (a topic which’ll be discussed more in Chapters 5 & 6).

Also, unlike Actions, it’s possible that some Value-Assigning systems won’t perform Decision-Making.

If decision is only needed when there’s high amounts of sensory “noise,” then maybe some Value-Assigning systems are so small that “noise” is not as much of an issue. In these instances, Decision-Making might not be necessary, or might be restricted to specific circumstances.

Now on one hand, it’s true that we don’t really see decision occurring outside of Value-Assigning systems – meaning that Decision-Making is an operation which is unique to the brain/nervous systems.

But just because decision might be a unique operation (happening only in nervous systems), doesn’t mean decision is the brain’s “main function.”  Remember, if what I’ve argued is correct, Value-Assignment is also a unique operation.

Which leads us to an important observation:

We’ve now gone over every operation on the list, and all of them are either

a.) not happening in the brain,

b.) not unique to the brain, or

c.) highly dependent on Value-Assignment…

… that is, except for Value-Assignment itself. Under this hypothesis, Value-Assignment is the only unique operation in the brain, which doesn’t depend on some other (unique) operation.  In other words, it’s “one of a kind.”

If the main difference between Predetermining and Assigning systems is that Determination of Value (and all operations which depend on it) is active in Assigning systems – then it might suggest that this operation’s “active-ness” is the reason why Assigning systems exist in the first place (i.e. the “main function” of the brain).

But I’m getting ahead of myself.  Maybe this argument isn’t compelling or convincing to you.  Maybe this all just sounds like a bunch of hoopla.

I mean, aren’t all operations in the brain dependent on each other?

Yeah sure, Action and Decision-Making might depend on Value-Assignment… but doesn’t Value-Assignment technically depend on Sensation?  (It’s not like the brain could function without Sensation.)

According to the beginning of this chapter, creatures need to complete all (universal) operations to work/function/run properly – not just Value-Assignment.  So if the brain needs all these operations, couldn’t you say that the brain has no main function?

Indeed, one could look at it that way.  But here’s something you might not have thought about:

Value-Assigning systems always cost more energy than Predetermining systems.

Remember, in Assigning systems, there are two extra operations which become active (Determination of Value and Determination of Effective Action).  This means any creature using an Assigning system must spend extra energy performing those two operations.

Why does this matter?

Because, if we assume that the fundamental goals of all biological systems are survival & reproduction, and if all biological systems have limited energy to spend, it brings up a critical question:

If Predetermining systems (the old systems) are perfectly capable of surviving & reproducing on their own, without modification, then why would Assigning systems (the new systems) exist at all?

The conditions of the natural world don’t exactly have a history of being pleasant and forgiving – quite the opposite, in fact.  How could a new system succeed (and even surpass the older, more “experienced” competitor), when this new system spends more energy to accomplish the same goals?

It doesn’t make sense: a system which is less efficient at spending energy should, by definition, be less viable/able to survive in nature…

… that is, unless there’s something the new system can do, that the old systems cannot (i.e. some kind of major advantage the new system gives to the organism).

Well if everything I’ve argued is true, Assigning systems don’t just have one advantage, they have a few.

Not only do Value-Assigning systems have the obvious traits we discussed (e.g. the ability to discriminate & detect more info), but – by allowing creatures to form values on the spot – the process of Value-Assignment itself would allow organisms to adapt to new environments & information quickly… whereas Predetermining systems (because they have a “fixed” structure) could take generations to adapt to a new environment.

I don’t know about you, but that seems like a pretty big advantage to me.

So if the brain is indeed a Value-Assigning system, then all the previous arguments, when taken together, seem to point to the idea that the brain has a “fundamental function” which it (may have) evolved for.

Under this thesis, Value-Assignment is the first & foremost reason why a system like the brain would ever be necessary.  Plus, if we take the operations to mean anything, Value-Assignment is the only operation which would be unique to brains, which doesn’t depend on other unique operations.

And that is why Value-Assignment (may be) the primary function of the brain.

***

Now the book doesn’t end here, of course.  There’s still plenty left to discuss, questions that must be answered, and topics that must be explored.

But the biggest question that still needs addressing is, “How would Decision-Making work?”

This is one subject I’ve avoided talking about so far, and for good reason.  Although this thesis has been (mostly) straightforward up to now, the conversation about decision is not so clean & simple.

Fortunately, there may be a specific reason why Decision-Making is so hard to solve: a problem, at the root, which might be complicating things.  That very problem is what the next chapter’s about (it’s also the most interesting chapter, in my opinion).  Up next, Chapter 5: The Problem of the Decision-Maker.

Value-Assignment: The Primary Function Of The Brain

Sebastian Rey

Chapter 5: The Problem of The Decision-Maker

____________________

So we’ve now gone over this book’s main hypothesis: why Value-Assignment (may be) the brain’s primary function.  It’s finally done.  Finished.  Through.

This chapter will be a little different from the others.  Rather than leading to a single conclusion, I’m simply going to talk about a problem that exists.  Specifically, it’s one of the biggest remaining problems that must be solved if we want a complete understanding of the brain’s fundamental operations (according to how I’ve outlined them).

Like the chapter’s title states, I call this problem “The Problem of The Decision-Maker” (or “PDM” for short).

However, since this problem involves so many elements & topics, I’m gonna be splitting this chapter into “parts” (kind of like the “sections” from the last chapter, but longer).

In Part 1, I’m gonna define the term “decision.”  In Part 2, I’ll describe The Problem of The Decision-Maker itself.  And in Part 3, I’ll answer various (important) questions people might have after hearing about the PDM.

Now you don’t necessarily have to read the first four chapters to understand what’s in this chapter.  But I’d recommend doing so, because it will give greater context to what’s being discussed, and because some stuff from previous chapters will be referenced here.

This chapter will be the longest continuous chapter of the whole book (Chapter 6 is technically longer, but Ch.6 is really just a bunch of separate topics/discussions rolled into one).  This chapter is also, in my opinion, the most interesting chapter.  And of course, I’ve written it in such a way that nearly anyone can understand it.

So get your frontal lobes ready, and let’s start this business.

***

PART 1: The Definition of “Decision”

Before we can talk about The Problem of The Decision-Maker, there’s a question we need to answer: “What is a decision?”

Well unfortunately, the term “decision” is difficult to define accurately, because we don’t yet understand exactly how decisions work in nervous systems, from a scientific standpoint.  (It’s hard to precisely define something when you’re not totally sure of what it is, mechanically-speaking.) This means that any definition I use is going to rely heavily on my own assumptions and interpretations of what’s going on.

Of course, there are various dictionary definitions of “decision” we could use: definitions which are less technical.  But since this is a book about brain function, and since I’m aiming to discuss Decision-Making on a technical level, I’m not going to use any popular dictionary definition.

Instead, I’m going to use my own definition, which will hopefully work for the purposes of this book.  I can’t claim that my definition will satisfy everyone (especially since it’s not complete), but it should at least be a good starting point for discussing Decision-Making in nervous systems.

That said, I’m defining “a decision” as

“an action-selection process – normally resulting in an action, which

a.) can’t be wholly explained by factors, forces, or processes outside of (or not originally part of) the action-taker,

and which

b.) appears non-inexorable to the action-taker.”

The reason this definition is incomplete is because it doesn’t tell us anything about the neurological or mechanical components which go into decisions.  Also, terms like “outside of” and “not originally part of” can be interpreted in different ways.  And also, the definition might sound too science-y & long-winded.

But just like with everything else in this book, I’ll take things slowly and explain it one step at a time – so that we can all understand what’s going on here.

Let’s start with the first part of the definition: “an action-selection process – normally resulting in an action”

I probably don’t have to tell you what an action is.

I defined Action in the last chapter as “any broad physical movement, concerted change, or patterned behavior an entity engages in”… but that’s just technical speak for something that you do.  In fact, pretty much anything you do (or that any organism does) can be considered an “action.”

Therefore, action-selection is obviously just the process by which an entity selects actions.  And of course, when you select an action, you usually end up carrying out that action, if you can… which means decisions normally result in actions.  (Because of this, people often think of decisions as being a certain kind of action.)

However, the actions which come from decisions aren’t just any old actions.  Instead, decisions result in special kinds of actions.

WHAT kinds of actions, you ask?

Actions… “which can’t be wholly explained by factors, forces, or processes outside of (or not originally part of) the action-taker.”

Here’s the easy way of thinking about this: if something in the world makes you act a certain way (as in, physically makes you), that action was definitely not a decision, or a result of decision.

For instance, if Person A is sitting still at a desk (reading a comic book or something), and Person B comes along and kicks Person A’s foot over to the side, Person A’s foot movement clearly wasn’t Person A’s decision.

The only actions which could possibly be “decisions” are actions which were not “forced to happen” by some outside factor or process.

(Decisions can, of course, be influenced by outside factors.  For example, if you heard a sudden, loud sound behind you – you’d probably want to turn around to look at who or what is making that sound.  But when a behavior is completely a result of outside factors… if the entire movement has an external, physical cause, or was “forced” to happen… then we know that the behavior wasn’t decision-related.)

The last part of the definition says “action… which appears NON-INEXORABLE to the action-taker.”

So… what the hell does this mean?  What is “inexorable?”

Really, “inexorable” is just a fancy word for “unstoppable.”  Therefore, “non-inexorable” means “not unstoppable.”  So basically, if something “appears non-inexorable,” that thing appears like it can be stopped.

This is crucial, because every action we associate with “decision” is something that appears like it can be stopped (by the person or thing taking the action).  And I can prove it.

If we saw someone tossing a ball up and down, most of us wouldn’t view the ball, itself, as a decision-maker: because the ball is simply reacting to physical forces enacted on it.  The ball cannot stop itself from being tossed.  However, the person tossing the ball is normally viewed as a decision-maker, because (we assume) the person can stop themselves from tossing it, at any point, if they wanted to.

When something can’t be stopped, we never view it as a decision.

If a meteor was hurtling towards a planet, and then crashed into it, we wouldn’t view the meteor (or the planet) as decision-makers, because neither the hurtling nor the crashing could (presumably) be stopped by the entities/objects involved in the crash.

Or how about this: let’s say you’re walking down the street at night… then out of nowhere, someone approaches you, points a [laser pointer] at you, and tells you to hand your wallet over.  And instead of resisting, you proceed to give them your wallet.

Most people would look at that behavior (you handing your wallet over) as a decision.

Why?  Because, even though your actions are being coerced (a.k.a. influenced) in the moment, it is still possible for you not to give the thief your wallet… even though it may mean getting [lasered].  (In other words, you could technically stop your behavior, if you really, really wanted to.)

But let’s say, instead of holding a [laser pointer], the thief is wielding a futuristic, high-tech device that can control people’s minds.  And then (using that device) the thief commands you to hand the wallet over.

In that case, most people wouldn’t view your behavior as a decision.  Because in that hypothetical situation, it’d probably be impossible for you not to hand them your wallet.  (You cannot stop yourself.)

In both cases, you’re performing the exact same action.  But only one of the actions is viewed as a decision: the one that you can (presumably) stop from happening.

“What if EVERYTHING IN THE UNIVERSE is unstoppable (including everything that happens in our brain)?  What if everything we do is essentially involuntary, automatic, or ‘forced’ to happen?”

For example, our reality could just be a digital simulation, where all our thoughts & behaviors have been pre-programmed by a high-powered computer intelligence.  Or it could just be the case that everything in the universe is deterministic & predestined, including brain function.

If one of these possibilities was true, would “decision” still exist?

I’m glad you asked.  The answer is, yes, decision would still exist.  Here’s why:

If you paid attention, the definition of “decision” didn’t just say “appears non-inexorable”… it said “appears non-inexorable TO THE ACTION TAKER.”

This is the most important part of the definition, and what differentiates “decisions” from all other things in the universe.

Think about it: whenever you do something that you think of as a decision (like picking up a glass of water), that action won’t just seem “stoppable” to others – it will also seem stoppable to you.  When you move your arm to pick up a drink, even you unconsciously & instinctively assume that this movement can be stopped.  In other words, the action appears to the action-taker (you) as non-inexorable.

Even if everything in the universe was fundamentally unstoppable, the fact that certain actions appear (even to the entity taking the action) to be “stoppable,” is what ultimately causes us to recognize & think of these actions differently from other actions.  These are the actions which many call “decisions.”

Now for a variety of reasons (which I explained in previous chapters), creatures with nervous systems seem to be the only creatures in the world capable of making – what I consider – decisions.

With this, we can move on…

***

PART 2: The Problem

So now that we have a basic definition of “decision,” we can ask the next question:

“Why has Decision-Making been so difficult for Scientists & Psychologists to get a grasp on?”

I mean, shouldn’t it be pretty easy for us to figure out how it works?  Really, what’s so hard about deciphering the brain mechanisms behind how a chimpanzee picks up a berry, or how a bird chooses to tilt its head, or how a person selects what to eat?

These behaviors all appear extremely simple; so it might seem like they’d be easy to map out, scientifically.  But such is not the case.

Scientists & medical professionals have, of course, spent plenty of time studying the brain.  There is no shortage of research (especially in the last century) showing how certain patterns of brain activity correlate with certain actions, emotions, psychological states, etc.

Psychologists have also spent lots of time studying the outward behavior of organisms with brains.  Because of this, they can identify general trends in the behavior of an individual, a species, or a group; as well as the likelihood of specific actions under certain conditions, in certain environments, or when subjects are exposed to a particular stimulus.

Unfortunately, in order to fully grasp Decision-Making, it’s not enough to study the general likelihood of behaviors… you need some kind of model, framework, or algorithm which consistently (and accurately) predicts decisions.  In other words, you need a Theory to go along with the observations you’re making: a Theory which explains exactly how environment, brain activity, and decision/behavior are linked.

And at the current moment, outside of highly-controlled settings, no one has a theoretical model which can consistently & precisely predict a large percent of decisions any organism with a brain will make.  At least not that I’m aware of.

“So why has a Theory on decision been so difficult to create?”

Well some of it is due to the limits of Neuroscience.

There are a giant number of variables which factor into brain function, and that alone is going to make it tough to build any psychological Theory.

In the average human brain, there are billions of neurons & millions of neural structures – and all those structures are constantly shifting, in terms of their activity & communications.  To develop some kind of “theoretical framework” which predicts a large percent of decisions, you’d have to come up with a framework that accounts for a large percent of these constantly-shifting variables.

On top of that, (because of this complexity) it can be hard to tell exactly which brain activities & structures relate to which behaviors & characteristics.

With modern technology, neuroscientists have made good progress in mapping out these structures and activities – at least in humans.  And yet, to this day, psychologists don’t agree on how to model all this, theoretically (in many cases, I’ve watched brain theorists flat-out contradict each other’s conclusions).

This is where The Problem of the Decision-Maker comes into play.  See, I suspect there’s another reason why Theories on decision are so difficult to construct/agree on…

When people attempt to build Theories on Decision-Making, most start from the assumption that all brain processes are deterministic.  (In other words, they assume that a strict set of rules is guiding the brain’s activities & decisions; similar to how a set of rules would guide the behavior of a robot or computer program.)

If all brain processes are deterministic, then all you’d have to do, to solve Decision-Making, is come up with the correct mathematical formula or set of rules… easy, right?

Even I assumed this at first.

There’s just one, big issue with this kind of approach: the number of sensory inputs in most nervous systems is so large that it usually makes deterministic solutions absurdly difficult.

I’ll show you.


Let’s say you’re a programmer, trying to program a tiny robot.  And on this robot’s head are two tiny sensors, which can detect different colors.

One of the sensors can detect the color Red, when something Red is in front of the robot.  The other sensor detects the color Yellow, when something Yellow is in front of the robot.

Basically, to the robot, Red means “threat,” and Yellow means “food.”  When the robot sees something Red, the robot is programmed to move backward.  When the robot sees something Yellow, it’s programmed to move forward.

Simple.  No problems here…
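To make the setup concrete, here’s a minimal sketch of the robot’s two rules in Python (the function and its names are hypothetical, purely for illustration):

```python
# Hypothetical sketch of the two-sensor robot's programming.
# Red = "threat" -> move backward; Yellow = "food" -> move forward.
def robot_step(sees_red: bool, sees_yellow: bool) -> str:
    if sees_red:
        return "move backward"
    if sees_yellow:
        return "move forward"
    return "???"  # nothing in view: the program says nothing about this case

print(robot_step(True, False))   # -> move backward
print(robot_step(False, True))   # -> move forward
```

Notice the `"???"` case, and notice that `robot_step(True, True)` silently favors Red just because it’s checked first.  Those gaps are exactly where the trouble starts…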

… but what happens if the robot sees nothing at all – what’s it supposed to do in that situation?  And how should the robot react if it sees both Red and Yellow at the same time?

Well if your goal is to create a robot that can function in the world by itself (without humans needing to change & update the robot all the time), you probably want your robot to be as versatile as possible. Which means you have to maximize the number of environments, situations & scenarios the robot can handle…

Therefore, with just two sensory inputs (Red and Yellow), the number of scenarios you, as the programmer, must take into account is actually four: Red, Yellow, Red + Yellow, and None.

If you add a third sensor (let’s say Green), things become even more complicated – because you must now include each of the combinations involving Green.

So with three inputs, the number of scenarios you must now program the robot for is now eight: Red, Yellow, Green, Red + Yellow, Red + Green, Yellow + Green, Red + Yellow + Green, and None.

If you add a fourth sensor (Blue), the number of scenarios becomes sixteen: Red, Yellow, Green, Blue, Red + Yellow, Red + Green, Red + Blue, Yellow + Green, Yellow + Blue, Blue + Green, Red + Yellow + Green, Red + Yellow + Blue, Red + Blue + Green, Blue + Yellow + Green, Red + Yellow + Blue + Green, and None.

Five sensors will give you thirty-two scenarios.  Six gives you sixty-four… etc.

If you look closely, there’s a pattern here: with each added input, the number of scenarios you have to manage doubles.  This is because (when you’re building a robot to navigate as many situations & environments as possible) the sensors alone aren’t the only thing that matters.  What also matters is the fact that multiple sensors (or none) can be active at once.

Of course, when your robot has a very small number of sensors, this isn’t too much of a concern.  But things quickly get out of hand the more inputs you add on.

For instance, with just eighteen sensors, the robot will have to account for 262,144 possible scenarios.  More than thirty sensors, and you’re talking billions of scenarios to manage.  You can check this all for yourself if you want.
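If you’d rather not do the arithmetic by hand, a few lines of Python will verify the doubling (the sensor names are just illustrative):

```python
from itertools import combinations

def scenario_count(n_sensors: int) -> int:
    # Each sensor is either "seeing something" or not, so n sensors
    # yield 2**n on/off combinations (the "None" scenario included).
    return 2 ** n_sensors

print(scenario_count(2))   # -> 4
print(scenario_count(3))   # -> 8
print(scenario_count(18))  # -> 262144

# Enumerating the eight three-sensor scenarios explicitly:
sensors = ["Red", "Yellow", "Green"]
scenarios = [combo for k in range(len(sensors) + 1)
             for combo in combinations(sensors, k)]
print(len(scenarios))      # -> 8
```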

Now for those who aren’t aware, the average adult human brain has around 86 billion neurons.  And a substantial number of them are what neuroscientists call “sensory neurons” (neurons capable of detecting some kind of information outside or inside the body, a.k.a. the “inputs” of the brain).

Needless to say, this creates a quandary.  When you have that many sensory inputs in a system, the number of scenarios a deterministic Decision-Maker will have to account for is… well, ludicrous.  So coming up with a set of rules that accounts for all those scenarios is gonna be insanely, inhumanly difficult.

If that wasn’t bad enough, this kind of system would also need some way of computing the different actions the robot could take, in each relevant scenario – and must also have a way of picking which action to take.

If the robot is highly advanced, it might even need a way of switching the goals it pursues, as well as updating how it pursues those goals, so it can continue operating without human intervention.  (Because if the robot’s environment changes drastically enough, the robot’s goals – or at least the way it pursues those goals – will likely need to change as well.)  And this creates an even larger number of things to calculate.

And of course, the robot must also “view its own actions as non-inexorable.”

All of this combined is what I refer to as The Problem of The Decision-Maker (or “PDM”).  You could also call it “The Many Sensors Problem,” or “The Doubling Problem”… whatever floats your boat, man.  (I’m sure there’s a more official name for it somewhere.)


Still, some people might not see this as a very big hurdle.

And I admit, the problem itself is actually quite simple, when you boil it down.  It’s so simple, an intelligent person might think, “This whole PDM thing sounds like a bunch of hootenanny.  You can probably just solve it with a few basic tricks.” Well that leads us to the next part of the chapter…

***

PART 3: Important Q&A

Here are a few (important) questions some of you might have, after hearing about The Problem of The Decision-Maker.

1.  "What if the robot's sensors don't always have to be on?  In fact, what if ALL of mister robot's sensors are 'off,' except for a small area/region where the sensors are basically 'active' (sort of like a 'Point-of-Focus')?  (Maybe the robot could even be built so that it can change it's Point-of-Focus when needed.)

If you programmed it this way, the robot would only need to process the sensors which are in the 'active' region... so wouldn't that reduce the total amount of scenarios the robot has to calculate?"

Technically yes, it would.  But this raises the question: what set of rules would drive that “Point-of-Focus?”

Also, how will the robot know which inputs to turn on & off, and when?

These sound like easy questions to answer, but the second one leads to a paradox.

Imagine, for a moment, that you have a magical closet; and in this closet are many magical shirts.  Now these shirts (being magical & all) will sometimes morph in shape, size, color, and design.  However, there’s a catch.  The interior of the closet is almost pitch black, so you can’t see any of the shirts… except for the one you’re wearing.

You can swap shirts whenever you want, and once you pick out a shirt, it stops morphing; but this closet also makes it so you can only be touching one shirt at a time (meaning you have to take off the shirt you’re wearing and put it back on the rack if you want to pick out another).  So the conundrum is: if you’re inside this magical closet, how can you determine which shirt is the best to wear… if the only shirt you can ever see, while in the closet, is the shirt you’re currently wearing?

It’s a paradox: if you want to pick the best shirt, you’d have to see all the shirts currently available, which you can’t do.

The only practical solution to this paradox is to keep picking shirts at random until you find one you like, which is also the right size.  But if you’re in a hurry & need to get dressed at this very moment, that solution probably won’t be optimal.

This, in essence, is the issue with our robot having a “Point-of-Focus” where certain sensors are active, and the rest are inactive.  How can the robot know when to change the information it’s looking at, if the only thing it can base that change on is the information it’s currently looking at?

The robot could keep switching its “focus-point” at random until it finds something it needs to act on.  And this might even work occasionally.  But if you’re trying to create a robot which achieves goals in an efficient and effective manner, that tactic probably won’t be ideal…
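To put a rough number on the “random switching” tactic, here’s a toy simulation of the closet (the shirt count and trial count are arbitrary choices of mine, purely illustrative).  With N shirts and blind re-drawing, you need about N tries on average before landing on the right one:

```python
import random

# Toy model of the magical closet: one of `num_shirts` shirts is "right,"
# but you can only evaluate the shirt you're currently wearing.
def draws_until_right(num_shirts: int, rng: random.Random) -> int:
    right_shirt = rng.randrange(num_shirts)
    draws = 0
    while True:
        draws += 1
        if rng.randrange(num_shirts) == right_shirt:
            return draws

rng = random.Random(0)
trials = [draws_until_right(100, rng) for _ in range(2000)]
print(sum(trials) / len(trials))  # averages near 100 draws for 100 shirts
```

Fine when you have all day to get dressed; not so fine when the “shirts” are sensory regions and the robot needs to act now.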

2.  "Okay, but what if the 'Point-of-Focus' works differently than that?  What if the sensors outside the robot's focus don't necessarily have to be 'off' or 'inactive?'

Our minds seem like they can focus on things, right?  Well when a person focuses on something, it's not like the sensory info outside our focus-point just completely & totally shuts off (at least, it doesn't appear to).

Maybe the robot could be built so that all of its sensors are 'on' - but only SOME of its sensors (the ones inside the focus-point) are being 'processed' or 'calculated'… while the other sensors (the ones outside the focus-point) can be used for some other task.  (For example, the sensors outside the focus-point could be placed on a sort of 'standby' mode, where they won't be actively processed, but can still inform the robot about various things in the vicinity that might be important to focus on.)

The robot could even have some type of 'memory' of past inputs (so that it doesn't need to constantly look around to figure out what's in its environment)."

Now we’re getting somewhere.  Yes, it’s possible that this could work.  This may even be something that occurs in organisms with nervous systems, at least roughly-speaking.

Unfortunately, even if we know this, it still doesn’t answer our previous question: what mathematical formula or set-of-rules would drive the robot’s “Point-of-Focus?”  (If “processing” can only occur inside the focus-point, then the focus-point will need to be big enough that your robot can actually “see” what it needs to see, but small enough that the system won’t be “overworked.”  And getting that balance just right won’t be easy.)

Also, where and how would that memory be stored?

Remember, the PDM isn’t just about the number of scenarios the robot must calculate – it’s also about the number of possible actions that must be calculated (as well as when & how to take those actions). And if your robot is able to have memory of these scenarios & actions as well, you may now have a problem which is even more complicated than the one I described in Part 2.

Lastly, even if you could build a system like this that works for small robots/systems, it still might not solve The Problem of the Decision-Maker in the human brain.

Because even when you reduce the number of sensors being processed, the human brain is so large that there could still be millions – potentially even billions – of inputs which are active inside a human brain’s “Point-of-Focus” at any given moment… which means you’ll still have to calculate an unreasonable number of scenarios & actions.

3.  "Okay, well what about other species?  Animal brains/nervous systems don't all have as many neurons as the human brain does, so maybe it would be easier to figure out how SMALLER nervous systems make decisions.  And if we can figure out how they make decisions, maybe we could use that knowledge to solve human Decision-Making, deterministically."

Sure, this could be true.  And yes, other species do have different nervous systems, with different numbers of sensory inputs.  But generally-speaking, most animal nervous systems still have a large enough number of sensory inputs to pose a problem.

For instance, the average ant is estimated to have about 250,000 neurons in its brain ([15] syfy.com, “To Make Collective Decisions, Ants Behave like Individual Neurons in a Larger Brain,” 2022) – which is far fewer than the human brain has.

Well hypothetically, if only 20% of an ant’s neurons (50,000) were “sensory neurons,” and if the ant could only “focus on” about 5,000 of those sensors at one time, that’s still…


141,246,703,213,942,603,683,520,966,701,614,733,366,889,617,518,454,111,681,368,808,585,711,816,984,270,751,255,808,912,631,671,152,637,335,603,208,431,366,082,764,203,838,069,979,338,335,971,185,726,639,923,431,051,777,851,865,399,011,877,999,645,131,707,069,373,498,212,631,323,752,553,111,215,372,844,035,950,900,535,954,860,733,418,453,405,575,566,736,801,565,587,405,464,699,640,499,050,849,699,472,357,900,905,617,571,376,618,228,216,434,213,181,520,991,556,677,126,498,651,782,204,174,061,830,939,239,176,861,341,383,294,018,240,225,838,692,725,596,147,005,144,243,281,075,275,629,495,339,093,813,198,966,735,633,606,329,691,023,842,454,125,835,888,656,873,133,981,287,240,980,008,838,073,668,221,804,264,432,910,894,030,789,020,219,440,578,198,488,267,339,768,238,872,279,902,157,420,307,247,570,510,423,845,868,872,596,735,891,805,818,727,796,435,753,018,518,086,641,356,012,851,302,546,726,823,009,250,218,328,018,251,907,340,245,449,863,183,265,637,987,862,198,511,046,362,985,461,949,587,281,119,139,907,228,004,385,942,880,953,958,816,554,567,625,296,086,916,885,774,828,934,449,941,362,416,588,675,326,940,332,561,103,664,556,982,622,206,834,474,219,811,081,872,404,929,503,481,991,376,740,379,825,998,791,411,879,802,717,583,885,498,575,115,299,471,743,469,241,117,070,230,398,103,378,615,232,793,710,290,992,656,444,842,895,511,830,355,733,152,020,804,157,920,090,041,811,951,880,456,705,515,468,349,446,182,731,742,327,685,989,277,607,620,709,525,878,318,766,488,368,348,965,015,474,997,864,119,765,441,433,356,928,012,344,111,765,735,336,393,557,879,214,937,004,347,568,208,665,958,717,764,059,293,592,887,514,292,843,557,047,089,164,876,483,116,615,691,886,203,812,997,555,690,171,892,169,733,755,224,469,032,475,078,797,830,901,321,579,940,127,337,210,694,377,283,439,922,280,274,060,798,234,786,740,434,893,458,120,198,341,101,033,812,506,720,046,609,891,160,700,284,002,100,980,452,964,039,788,704,335,302,619,337,597,862,052,192,280,371,481,132,164,147,186,514,169,090,917,191,
909,376

… scenarios that a deterministic Decision-Maker would have to account for at any given moment.  That’s pretty bananas, if you ask me.
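(That giant number, by the way, is just 2 raised to the 5,000th power – two states per sensor, 5,000 sensors in focus.  Python’s arbitrary-precision integers can reproduce it exactly:)

```python
# 5,000 in-focus binary sensors -> 2**5000 on/off combinations.
scenarios = 2 ** 5000
digits = str(scenarios)
print(len(digits))   # 1506 -- the number above is 1,506 digits long
print(digits[:7])    # 1412467 -- matching the opening digits above
```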

You could go smaller than an ant’s brain, to solve Decision-Making… however, when you’re dealing with animals that small, it’s gonna be hard to find commonalities/corollaries with the human brain.  Which means, even if you can figure out how decisions work in such tiny creatures, it will be hard to carry that knowledge over into human Decision-Making.  (I’m not saying it couldn’t be done, though.)

4.  "Okay, but can't today's technologies (such as modern computer systems, modern phones, internet services like Google, and modern A.I.) process large amounts of data?  And aren't they usually able to process that data extremely quickly?  So isn't it still possible for them to solve the PDM - just with sheer computing power?"

The answer to this is a bit more complicated.

Yes, it’s true that sophisticated technological systems & devices exist today.  And yes, they can generally handle very large amounts of info, very rapidly (and they’re only getting faster each year).

But even though modern technologies can do this, they still can’t (as far as I’m aware) solve the PDM – even with increasing amounts of computational power.  And it has to do with how these systems function.  More specifically, it has to do with how they process data.

I’m about to use a weird analogy, but it’ll make sense once I explain it.

Let’s say you’re in a gun-building competition with a dozen other people, and each of you has 24 hours to build a gun that must fire 1,000,000 bullets.  After the 24 hours are up, the person whose gun fires all 1,000,000 bullets the fastest, wins.

From a builder’s perspective, would it be easier to build a gun that fires all 1,000,000 bullets simultaneously, or would it be easier to simply build a gun that fires one bullet at a time?

Obviously the second option would be easier.  Why?  Because, all else being equal, building something that performs one action will usually cost less energy (and require fewer components) than building something that performs large amounts of that same action.

Plus, even if your gun only fires one bullet at a time, as long as you can get that gun to fire really fast, you might stand a chance at winning…

… then again, maybe you don’t need to build a gun that fires all one-million bullets simultaneously.  What if you could just build a gun that fires lots of bullets (like 10,000) at the same time?  In that case, you’d probably have a serious advantage over the “single-fire” guns (even if your “simultaneous-fire” gun could only shoot about once every minute or so).

It sounds odd, but the way brains & technology process data is sort of similar.

Modern computers & technological systems (although they’ve gotten faster over the years) are kind of like the “single-fire” gun: when you go down to the smallest level, the info-processing components of these systems (transistors, logic gates, memory cells, etc.) can generally only process information one bit at a time.

Of course, if you put billions of these “one bit at a time” components together, make them efficient, and get them to work quickly, you can achieve remarkably high processing speeds…

… but what if you had a component that was different? What if you had a tiny info-processor which – unlike computer technology – could handle many “bits” (as in, thousands) at the same time?

If you put billions of those components into one system, made them efficient, and got them to work quickly, you’d probably have a major processing advantage over computer-based technologies (or at the very least, you’d have a system with different advantages than a computer).

Well in a nutshell, that’s how the brain works.

Neurons (the smallest data-processing components in the brain) can typically “process” multiple signals at once: in some cases, a single neuron can receive signals from up to 10,000 other neurons simultaneously ([16] Zhang, arXiv:1906.01703v1, 2019). Even the most advanced technological systems, programs, and devices today simply can’t do that.
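Here’s the gun analogy in code form – a deliberately crude sketch of my own (not a model of real transistors or neurons), contrasting one-at-a-time processing with a component that integrates a whole bundle of inputs as a single step:

```python
# "Single-fire" style: one input handled per step.
def serial_process(inputs):
    total, steps = 0.0, 0
    for signal in inputs:
        total += signal
        steps += 1
    return total, steps

# "Simultaneous-fire" style: the whole bundle counts as one step.
# (An abstraction -- the claim is about the component, not Python's sum().)
def neuron_like_process(inputs):
    return sum(inputs), 1

signals = [0.001] * 10_000              # 10,000 incoming signals
print(serial_process(signals)[1])       # 10000 steps
print(neuron_like_process(signals)[1])  # 1 step
```

Same total signal either way; the difference is how many steps it takes to integrate it.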

5.  "Okay, well what if you just put thousands of super-fast computers together?  Couldn't you 'process' more info than a brain that way?"

Technically, yes, you could… but only if you’re looking at the total amount of information processed.

Since brains and computers handle data in fundamentally different ways, there are fundamental differences in the way these systems operate, and in what they can actually do with that data.

Even if you put many high-speed computers together, there will still be limitations that computers have, which brains don’t.  For example, a human can often glance at another person’s facial expression, and – in a fraction of a second – get a general sense of what that other person is feeling (I’m talking about basic emotional state, not psychological conditions such as depression).  Computer programs today still have a hard time with this.

Of course, there are plenty of limitations brains have, that computers don’t.  For example, there are computer programs which can – in a fraction of a second – examine hundreds of thousands of news headlines, pick out all headlines containing the word “bank,” and then list those news articles by date.  And they can do this with extremely high accuracy.  No human today is capable of that.
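That headline task really is trivial for a computer.  A sketch (the headlines here are invented by me, purely for illustration):

```python
from datetime import date

# Hypothetical headlines -- made-up data, purely for illustration.
headlines = [
    (date(2024, 3, 1), "Central bank raises interest rates"),
    (date(2024, 1, 15), "Local team wins championship"),
    (date(2024, 2, 7), "Regional bank merger announced"),
]

# Pick out every headline containing "bank," then list them by date.
bank_stories = sorted(
    (item for item in headlines if "bank" in item[1].lower()),
    key=lambda item: item[0],
)
for pub_date, title in bank_stories:
    print(pub_date, title)
```

Scale that list up to hundreds of thousands of entries and the computer barely notices; a human wouldn’t finish in a lifetime.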

Basically, these differences stem from how information is handled in these systems.

Sure, when it comes to fictional gun-building competitions, maybe the only thing that matters is the total number of bullets being fired.  But when it comes to solving the PDM, it’s probably not just about the total amount of data processed, it’s probably also about the way your system processes data.

6.  "What if you're wrong?  What if modern tech is getting SO advanced that it can solve The Problem of the Decision-Maker, even with the limitations you just talked about?"

Well that might be the case.  Humans have certainly developed many ingenious devices & technologies in the modern day.  I won’t deny that the PDM could be solved in the future, using some clever innovation(s).

But if you want evidence that the PDM is still currently unsolvable, all we need to do is look at some examples.

Take any group of technologies today (they can be the most advanced devices, programs, computers, or A.I.s you can think of), and most likely, they’ll all have one very specific commonality:

Every technology available to us today requires an outside party (either user or programmer) to give it a goal to perform.

Humans don’t seem to have that limitation.  If I wanted to drop everything I’m doing, and spend the rest of my life studying the art of trapeze, I could.  No one has to tell me what goals to perform, and nothing is preventing me from changing the goals I’m performing, at a moment’s notice.

This may sound silly or trivial, but it’s extremely significant, because all current-day technological systems (no matter how advanced) require goals to be administered by someone/something outside the system.

Take, for example, Stockfish, which is one of the strongest Chess programs in the world. Currently, Stockfish can beat any human player in the game of Chess… even the human world champion.  One thing Stockfish cannot do, however, is change what game it’s playing.  It’s programmed to play a specific game, using a specific set of rules (and typically requires another party to interact with it, in order to play).  If you wanted Stockfish to play a different game (like Othello), or perform different kinds of tasks, a programmer would have to re-work the system to accomplish these goals.

We could also look at ChatGPT, which is currently a popular A.I. interface.  At the moment, ChatGPT is capable of producing a wide array of outputs: from generating code, to teaching you a new language, to coming up with business ideas… it can even be programmed to control robotic movements in a limited capacity.

Yet, ChatGPT still requires a human to ask it a question or give it some kind of prompt (i.e. a goal), in order to generate an output/answer.  And when it does generate an answer, there are specific sets of rules (which programmers had to create) determining how it answers.  (So basically, the way goals are performed AND goals themselves must be determined by an outside party, at some level.)

Even in the cases where ChatGPT can move robots, a user or programmer still needs to give ChatGPT goals & tasks to perform while “in” the robot.

Humans, on the other hand, can not only decide our own actions/movements, we can also (seemingly) decide our own goals & priorities, to some extent.  (Other animals with brains may be capable of this as well.)

Yes, it’s true that all humans & organisms have biological goals – goals which there’s no way for us to control.

But even then, it’s still possible for us to act against our own biological goals.  (I can choose to starve myself or to not have babies, for instance.) We can also create new & unique goals, which exist separately from those biological goals.  (I can learn to drive race cars or play sports… or even invent a new sport… for instance).

The ability to create & choose goals is such a fundamental part of human Decision-Making, that it would be hard to imagine our lives without it.

And yet, no technology today can do the same.  No technology can determine its own goals (unprompted), create its own new & unique goals (outside of its programming), or act against its programmed goals.  And this is evidence that the PDM is still unsolvable, through modern technology.

7.  "One last question: what if you had a way of programming values into the robot?  Basically, the robot could associate sensory inputs with other sensory inputs, in order to create computeristic 'values' (similar to how a brain does).

You could even have some sort of 'hierarchy' that determines which values are the most important to act on (and which ones are less important), as well as how to act on them.  Then all the system has to do is compare 'values' to other 'values,' to determine the robot's actions.

So instead of calculating all the scenarios the robot might encounter, the robot only has to calculate values.  Couldn't this solve the PDM?"

Well believe it or not, people have built similar kinds of systems before: they’re called “Neural Nets.”  And Neural Nets are responsible for a great many technological innovations.

But there’s still a fundamental flaw with this technology, preventing it from “solving” the PDM… and it’s exactly what we talked about above: Neural Nets (along with all other modern technologies) still don’t/can’t determine their own goals & priorities.  The “hierarchies” that Neural Nets use to generate actions (i.e. the system that says which “values” are most important to act on & how) must still be determined by an outside party.

For example, if I want my Neural Net to control self-driving cars, I’ll normally need to spend time and resources “training” my Neural Net to accomplish that specific objective/goal.

But let’s say I get bored of self-driving cars & want my Neural Net to do other things (like recognize & categorize faces, by age).  In that case, I’ll most likely have to re-train the system, using different sorts of data, different “values,” and a different hierarchy (which I create).

I could make a Neural Net which performs many goals & tasks (like ChatGPT does), by building a complicated hierarchy, and by training the system with data of different kinds.  But for my system to take any actions, someone will still need to tell it what to do/what tasks to perform.

To put it simply, the best neural nets right now are more like sophisticated “pattern-identifiers,” rather than Decision-Making machines.  They don’t determine their own objectives, and they’re generally still reliant on a programmer to define the system’s rules/hierarchies.  This is probably why you never see these types of machines gaining “sentience.”
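To make the “outside party” point concrete, here’s about the smallest pattern-identifier one can build – a single perceptron (a deliberately tiny stand-in for a real Neural Net, my own toy example).  Notice where the goal lives: in the labels and the error rule, both of which the programmer supplied.

```python
# A perceptron trained to fire when sensors 0 AND 1 are both active.
# The "goal" is entirely external: I chose the labels and the error rule.
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = y - pred  # "wrong" is defined by MY labels, not the net's
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

samples = [(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 0), (1, 1, 1), (0, 0, 1)]
labels  = [0, 0, 0, 1, 1, 0]   # the externally-supplied goal
w, b = train_perceptron(samples, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in samples]
print(preds)  # matches the labels after training
```

The net gets very good at the task I defined, but nothing in it could ever decide to pursue a different task.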

Now it is conceivably possible that, in the future, a Neural Net (or Neural Net-type system) could be able to “select its own goals,” or even create new goals (unprompted).  Maybe, somehow, a technology could even be built that’s able to “act against” its own programmed goals.

But…

… even if Neural Nets could do this…

… The Problem of the Decision-Maker still wouldn’t be solved.

Remember, to solve the PDM, your system must be able to make decisions – which means there must be some action-selection process inside the system, resulting in actions that appear non-inexorable to the action taker. (Basically, the system must be able to view its own actions as “stoppable.”)

I think many people assume that if we recreate a system which can behave & select goals like a human does, that this would automatically result in Sentience or Consciousness (turning it into a Decision-Making machine).  But that’s not necessarily the case.  The ability to take actions & select goals may exist separately from the ability to make decisions & be conscious.

Just because you can build a system which is able to act within a large & complex sensory environment, does not necessarily mean that the system will view its own actions as non-inexorable.  Because in order to “view one’s own actions” as anything, there must be something there “doing the viewing.” (In other words, it’s possible that some level of Consciousness may already need to be present in the system.) There must be something inside the robot which isn’t just sensing information, but is aware of (experiencing) that information.

However, you might not need to recreate decision itself to solve most of the PDM.  If you can create a system which is able to process large enough amounts of info (in a manner comparable to a brain), pick its own goals, create new goals, and take effective actions in nearly any sensory environment, that might be good enough to mimic human Decision-Making.

My definition of decision could also be wrong (at the very least, we know it’s incomplete).  So make of this what you will.

***

Of course, the other possibility is that decisions in the brain are not deterministic.  But I think, since so many people dedicate their lives to finding deterministic solutions & creating deterministic systems, that possibility often gets ignored.

It seems to me that the truth of the matter could cut either way: decisions in the brain might be deterministic, but they also might not be.  If neither case has been proven, then I think both possibilities should be considered.  At least, that would be the scientific approach.

Plus, even if decisions aren’t deterministic, there could still be some route to modeling/solving decisions technologically – it’ll probably just require some innovation & creative thinking.

In any case, I believe the discussion on decision will always be an interesting one worth exploring.

And that’s curtains for this chapter.

But of course, the book still isn’t done.  There are still thesis-related questions I haven’t answered (like how genetic predispositions can exist in a Value-Assigning system), subjects I’ve promised to discuss further (like Consciousness), and topics I want to talk about (like how Value-Assignment relates to emotions).

This is why the next chapter is titled “Chapter 6: Everything Left To Discuss.”

Value-Assignment: The Primary Function Of The Brain

Sebastian Rey

Chapter 6: Everything Left To Discuss (a.k.a. “The Lost Scrolls of Yore”)

____________________

Okay, so the main hypothesis of the book (along with the chapter about decisions) is finally done.  I’ve talked about Value-Assignment and Decision-Making so much that you’re probably sick of terms such as “value” and “decision.”

But there’s still important stuff to talk about, that I never got around to.

Like, where are the brain’s “goals?”  And, what role do emotions, memory, and other “functions” play, when it comes to the thesis?

Well this is the chapter where I go over those things… along with a few other topics.

Unlike the other chapters, this chapter won’t have a specific through-line or theme.  Instead, I’ll be going point-by-point, discussing various subjects & questions that I want to (or have previously promised to) talk about.  Some questions will have answers, some won’t.  Some points will be open-ended, some won’t.  Some of the topics are just things I find interesting.

You can think of it almost like a collection of “notes,” in the form of a chapter.

All of these “notes” will be connected to the main hypothesis in some way, but they won’t all be connected to each other, in terms of subject-matter.  They’re just various matters & materials I want to address, which I couldn’t fit in the main text.

Still, if you made it this far without quitting, this chapter will probably be worth a read.

Plus, once this chapter’s finished, the pain will all be over.  So let’s begin.


Note 1: The Consciousness Discussion

Let’s start off with one of the most significant topics I said I’d address: Consciousness.

Now in order to discuss Consciousness, the first thing we need to do is define the term “consciousness.”  And this is where we run into the first issue – because it seems that oftentimes, people who use the term “consciousness” are, in fact, referring to different things.

Some people use the term “conscious” to mean “being awake or aware of your surroundings” (e.g. “during surgery she was sedated, but it only took a few hours for her to become fully conscious”).  Others might use the term “conscious” to refer to a person’s intentions, or the deliberateness of someone’s actions (“he was conscious of the crime he committed”).  Others, still, might use the term “consciousness” to mean “one’s ability to think about & verbalize their own mental/emotional states.”

However, when philosophers & psychologists talk about “the hard problem of consciousness,” the type of Consciousness they’re referring to is generally the experience of sensation.

(Why is pain something we feel?  Why is color something we see?  Our bodies could just robotically respond to information in our environment, without needing to experience that information… so why don’t we do that?)

The experience of sensation (a.k.a. “conscious experience”) is – believe it or not – one of the most difficult problems to solve in psychology, which is why it’s called the “hard problem.”

Now unfortunately, like I mentioned in the Intro, my thesis won’t be solving any “problems of consciousness.”  However, there are certain ways in which my thesis connects to these problems (and hopefully, that can get us closer to some solutions).

1.  For example, the experience of sensation might be linked to the order-of-operations, somehow.

Yes, all organisms use an order-of-operations (according to this thesis)… but not all organisms have a conscious experience (as far as we know)… which means conscious experience can’t be part of all order-of-ops.  So how could conscious experience and the order-of-ops be linked?

Well one possibility is that conscious experience is tied to the “top-level” order-of-operations happening in our brain (I discuss the concept of “operation levels” in #4.1).

Under this thesis, there’s an order-of-operations present in every biological system, at every level of function (including one at the “top” level, governing whole-body functions).

If there is a “top-level” order-of-operations happening in the human brain (as well as other Value-Assigning systems), it would at least make sense for “conscious experience” to be attached to it.  (If conscious experience was tied to “lower-level” systems – like the peripheral nervous system, for instance – the way we interact with the world would be very different.)

Sure, it could be that each of your body’s systems has its own “conscious experience.”  But when you think about your experience of the world, I’d venture to guess that you don’t feel like a hair or a pimple.

And maybe not all brains/nervous systems have a conscious experience.  (It’s hard to say, but there might be plenty of creatures with nervous systems – such as worms and tiny insects – which can’t experience sensation.  If that is the case, we’ve gotta ask: why does the experience of sensation happen in some brains and not others?  And that’s a question I can’t answer.)

But even if conscious experience only occurs in some brains, as long as we know it’s happening in our brain (and is tied to a “top-level” order-of-ops), it would explain certain things about the human condition.

For one, it would explain why we can make decisions.

According to my definition from Chapter 5, to make decisions, an entity must be able to “view its own actions as non-inexorable.”  And for an entity to “view its own actions as non-inexorable,” there must be a “viewer” (something which is experiencing sensory information).

If there’s a single, “top-level” order-of-ops in our brain, which conscious experience is connected to, it would explain why there’s a “viewer” for decisions, and why that “viewer” seems to be like one, single thing (because there’d hypothetically be only one “order-of-operations” playing out at the top level, in our brain).

This “top-level” hypothesis would also explain why our decisions appear unified (technically, there are millions of “tiny actions” being taken in your body when you “decide” something… but you still probably feel as if your decisions are tied to one “big” action, or at least a small number of “big” actions).

Of course, my thesis doesn’t explain how conscious experience happens (or even how a “top-level” order-of-operations arises, in the first place).  It also doesn’t explain how conscious experience becomes attached to that “top-level” order-of-ops.

But ultimately, if these two things were attached, it would make sense of the above phenomena.

2.  It’s also possible that certain “types” of Consciousness are tied to specific operations occurring at the top level.

The first operation in the Value-Assigning order-of-ops is Sensation… so it probably wouldn’t be surprising if the experience of sensation was connected to that first op.

However, like I mentioned above, some people use the term “consciousness” to refer to intentions or the deliberateness of someone’s actions.

Well both “intentionality” and “deliberateness” seem to be related to Decision-Making (in fact, “intention” and “deliberation” may just be different parts of, or terms for, the Decision-Making process)… because both of them have to do with action-selection.

3.  It’s also possible that some “types” of Consciousness are actually emergent properties (I discuss what “emergence” is in Note 6).  As a matter of fact, in recent history, more & more scientists seem to support such an idea.

However, “emergence” doesn’t answer every question about Consciousness.

For instance, in karat #1.2, I asked “Why is it possible for beings/systems with many individual sensory nodes to have one single, subjective experience of the world?”

This is essentially a question about Consciousness, because conscious experience appears to be singular.  (In other words, nobody has multiple conscious experiences at a time… we only have one experience, at least from how it appears.  This happens despite the fact that each brain is made up of billions of individual parts.)

Some people refer to this phenomenon as “information integration.” But regardless of what term you use for it, emergence (alone) doesn’t explain why it happens.  (If conscious experience was linked to a “top-level” order of ops… that could explain why our experience is singular.  But what do I know?)

Emergence also doesn’t explain why (like I stated in karat #5.5) the brain can be damaged – sometimes so severely that parts are destroyed, removed, or separated from one another – and still retain that singular experience.

Still, emergence does explain certain types of Consciousness.

Like I mentioned above, some people use the term “conscious” to refer to “being awake or aware of your surroundings.”  Well how awake someone is (or feels) might be an emergent property.  If certain areas of the brain are responsible for the feeling of “awakeness,” then the more activity we see in those areas, the more awake you might be (or feel).  Same goes for “awareness of surroundings.”

In these ways, my thesis might pertain to Consciousness. Obviously, though, more work needs to be done on all of this.


Note 2: The Brain’s “Goals”

Here’s another important question I said I’d address: “Where exactly do ‘goals’ exist in the brain (and other biological systems)?”

In Chapter 1, I stated that a value is basically a relationship between a.) information that an entity senses, and b.) a goal that an entity has.

But this brings up a problem: we know that all animals use specific, observable physical structures to detect information in the outside world (brains sense info using neurons, other systems might use “proteins” or “sensory hairs” to detect info).  However, a “goal” is more of an abstract idea, not an actual physical structure…

So how is it possible for an organism to create a physical connection between “sensory signals” and “goals,” when a “goal” is not really a physical thing that can be “connected to?”

I think there are a couple answers to this question.

A. As I argued in Chapter 2, the sensory & response mechanisms that Predetermining creatures (creatures without brains) carry should essentially be “hardwired” to accomplish specific tasks, at all times (which is what makes their behavior consistent & predictable).

So if we’re talking about Predetermining organisms, the “goals,” you could say, are built into or part of the sensory/response mechanism itself.  No “physical connection” needs to be made in these systems, because the values already come “pre-built” (most likely when the creature is born), under this thesis.

B. But what about in nervous systems/the brain?

If this book is correct, brains are different from Predetermining systems, because brains can assign values.  If you think about it, this means (in the brain) “sensory info” and “goals” must start off being physically DIS-connected from one another… which means that the “goals” of the brain must exist, somewhere, as a physical thing (at least to begin with).  So where exactly are those “goals?”

One possibility is that goals have stand-in neural structures in the brain.

Neuroscientists have long understood that each of your body parts is “mapped onto” or “represented in” certain areas of the brain: areas referred to as the “homunculus” ([18] StatPearls, “Neurosurgery, Sensory Homunculus,” 2024). Well in a similar way, it’s also possible that biological goals are “mapped onto” or “represented in” certain regions of any nervous system.

So basically, when the brain forms a relationship between “sensory info” and “goals,” what the brain *might* actually be doing is forming a connection between a.) neurons which detect some kind of info in the world, and b.) neurons which “stand in for” or “represent” a specific goal.

These “stand-in” nervous structures could be spread throughout the system, or they could be concentrated in one location – but either way, this is one technique that nervous systems/brains could use to accomplish Value-Assignment.

C. It’s also possible that “stand-in” neural structures aren’t always necessary to create values in the brain.

If the only thing required for value creation is sensory info and a goal, then maybe any kind of goal (even if it’s an “indirect,” abstract, or trivial goal) could suffice.

Let’s say I want to drive to the store.  In order to do that, there’ll be several small steps I have to take.  The first step is to get up from where I am.  The next step is to find my keys.  The next step is to walk to my vehicle… etc.  If each step in this process is like a miniature “goal,” then the neural connections or associations required to complete each of these goals could hypothetically be considered “values.”

So depending on what kinds of things are defined as “goals,” it’s possible many different kinds of neural connections could be part of the Value-Determination procedure.
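To make the “miniature goals” idea concrete, here’s a toy sketch in Python.  The sub-goals and sensory cues below are invented purely for illustration – this isn’t a claim about how neurons actually encode anything:

```python
# Hypothetical sketch: one "big" goal decomposed into miniature sub-goals.
# Under this thesis, each (sensory cue <-> sub-goal) pairing would itself
# count as a "value."  All names here are made up for illustration.

subgoals = [
    ("standing up",    "get up from where I am"),
    ("keys in hand",   "find my keys"),
    ("at the vehicle", "walk to my vehicle"),
    ("engine running", "start the car"),
    ("store in sight", "drive to the store"),
]

def run(subgoals):
    """Pair each sensed cue with its sub-goal -- one 'value' per step."""
    values = []
    for cue, goal in subgoals:
        # The relationship between the sensed cue and the sub-goal is
        # what the book calls a "value."
        values.append({"sensory_cue": cue, "goal": goal})
    return values

values = run(subgoals)
print(len(values))  # 5 -- one miniature "value" per sub-goal
```

The point of the sketch is only structural: if each small step counts as a “goal,” then each cue-to-goal link counts as a “value.”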

“What sorts of tasks would be considered goals?” (Could something as small as moving your own pinky finger, or changing what you look at, count as a “goal?”)

That, I don’t have a definitive answer to.

“What about genetically predisposed neural connections – where do the goals & values come in?”

That’s discussed in the next note.

“Which exact neural structures ‘stand in’ for goals?”

That I also can’t tell you… most likely, you’ll need a Neuroscientist to figure that one out.

Still, I think A., B., and C. above are good enough to explain how goals broadly work, in the brain and other biological systems.


Note 3: Genetic Predispositions & Value-Assignment

I mentioned in karat #4.4 that it’s likely many neural connections in the brain are “genetically predisposed.”

But if that’s true, it brings up a problem: a genetically predisposed value can’t possibly come from “Value-Assignment” (because Value-Assignment is a process which occurs after an organism senses information, and genetic predisposition occurs before an organism can sense anything).

So what’s up with that?  How can you have a genetically predisposed, Value-Assigning system?  If researchers were to find out that most neural connections in the brain are (in some way) genetically predisposed, could you even call the brain a Value-Assigning system?

Well there are a few things to discuss here.

1.  First, if the brain can assign values at all (even if a large percent of connections were somehow not “assigned”), then you can probably still call the brain a “Value-Assigning” system… because the other types of biological systems that exist (Predetermining systems) cannot assign values, in any capacity, under this hypothesis.

So basically, the fact that nervous systems/brains are even capable of Value-Assignment, is what would make the brain a Value-Assigning system (at least in my view).

2.  Secondly, would a “genetically predisposed” neural connection, itself, be considered an “assigned” connection, or a “predetermined” connection (and also, which order-of-operations would apply)?

This one’s complicated.

We know that “genetically predisposed” neural connections & neural circuits must at least use values (meaning there must be a “sensory signal” and some kind of “goal,” for these predispositions to have any effect on behavior).

How do we know this?

Because every “genetic predisposition” you can think of, fits this model.

Take, for example, the “autonomic” or “involuntary” functions of the body, such as the “heart pump” function.  You don’t have to learn to pump your heart (meaning this function is probably “inherited” or “genetically predisposed”).  To perform this function, however, a certain process must take place:

First there must be a sensory signal which “kicks off” or “initiates” the function (otherwise your heart would never pump).  Then the system must determine (or have determined beforehand) the “importance” of that sensory signal (meaning there must be a value somewhere in there, so that the system knows what to do with the signal).  Then the system must take some kind of action (such as contracting the muscles of your heart, causing it to “pump”).  The process then repeats.

(Obviously it’s more complex in real life – this is just a simplified overview.)

If this process sounds kind of like an “order-of-operations”… that’s because it is.  The heart pump mechanism, breathing, hair growth, every reflex you have, every motor skill… all of these functions (under this thesis) will involve some “order-of-operations.”
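To illustrate, the simplified heart-pump loop above (sensory signal → value → action → repeat) could be sketched like this in Python – with made-up signal names, and no pretense of biological accuracy:

```python
# Hypothetical sketch of a "genetically predisposed" order-of-operations,
# modeled on the simplified heart-pump loop: sense -> value -> action -> repeat.
# Signal names and the lookup table are invented for illustration.

def sense():
    """Step 1: a sensory signal 'kicks off' the function."""
    return {"signal": "pacemaker_impulse", "strength": 1.0}

def determine_value(signal):
    """Step 2: a pre-built 'value' tells the system what the signal means."""
    # In a predisposed circuit, this lookup is fixed in advance.
    importance = {"pacemaker_impulse": "contract_heart"}
    return importance[signal["signal"]]

def act(action):
    """Step 3: take the action the value points to."""
    return f"performed: {action}"

log = []
for _ in range(3):                # the process then repeats
    signal = sense()
    action = determine_value(signal)
    log.append(act(action))

print(log[-1])  # performed: contract_heart
```

Notice that the “value” step here is just a fixed lookup – which is exactly what makes the question below (Predetermining vs. Assigning) worth asking.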

The question is: which order-of-ops are these “genetic predispositions” using?  Is it the “Predetermining” one, or the “Assigning” one?

I think the answer is: it depends.

Yes, in karat #3.3 I said that individual neurons often behave similarly to “predetermining” mechanisms (because they can respond to highly specific stimuli, and can have highly predictable reactions).

Well the same could be said of “genetically predisposed” neural connections & neural circuits (because these connections/circuits can also respond to highly specific stimuli, and have highly predictable reactions).

But there’s more to it.

In karat #3.2 I also said “an assigned value can be described as a ‘conditional’ or ‘modifiable’ value; because this type of value is CAPABLE OF BEING MODIFIED… and even if an assigned value never IS modified or adjusted after creation… as long as it CAN BE MODIFIED in a direct and active manner… then it can still be considered an ‘Assignment-type’ value…”

(In this case, I’d define “direct and active” as “modifiable by the activity of surrounding neurons or neural systems.”)

So because of this, there are a couple things we can conclude:

If the activation pattern (i.e. the overall stimulus response) of a “predisposed” connection/circuit has never been modified, the process of Value-Assignment obviously hasn’t come into play.  However, I’d argue that these “predisposed” connections can still be considered “Value-Assigning mechanisms,” as long as these connections can be modified, directly and actively.

In this book, for a system/mechanism to be considered “Predetermining,” it must be fixed in how it responds to information.  And a connection which can be changed (directly and actively), can’t necessarily be considered “fixed.”

It may be fixed at the moment, but not in all possible future scenarios.  For example: if a car is currently driving straight forward, would we say that the car must be a “single-direction car?”  Not necessarily.  Even if the car has only ever traveled in a straight line, as long as it can turn (through mechanisms which were built into the car), we’d still likely define it as a “turning car.”

So even if no modification has occurred, as long as a neural connection can be altered (through surrounding neural mechanisms), I’d still generally consider it to be “Assignment-type” (which would mean that the Assigning order-of-ops applies, when it comes to these connections).

Only when the activation pattern of a connection can’t be modified directly or actively, could that connection be considered “Predetermining,” within this thesis (which is indeed possible in some cases).  In such instances, the Predetermining order-of-ops would obviously apply.

(Of course, things get more complicated when you look at actual neurology – because it’s not always easy to say which connections can be modified, and which can’t. Also, how modifiable a connection is, can vary quite dramatically.  But in general, this is my thinking on the matter.)
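The classification rule I’ve just described can be stated very compactly in code: what matters is whether a connection *can* be modified, not whether it ever *has* been.  The class and field names in this Python sketch are hypothetical, purely for illustration:

```python
# Toy illustration of the book's criterion: a connection is
# "Assignment-type" if it CAN be modified directly & actively,
# even if it never has been; only unmodifiable connections count
# as "Predetermining."  (Names are hypothetical, not neuroscience.)

from dataclasses import dataclass

@dataclass
class Connection:
    modifiable: bool          # can surrounding neural activity alter it?
    ever_modified: bool = False

def classify(conn: Connection) -> str:
    # Note: ever_modified plays no role -- only the capacity matters,
    # just like the "turning car" that has only ever driven straight.
    return "Assignment-type" if conn.modifiable else "Predetermining"

print(classify(Connection(modifiable=True, ever_modified=False)))  # Assignment-type
print(classify(Connection(modifiable=False)))                      # Predetermining
```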

3.  Lastly, if you’re gonna ask, “What percent of our values are ‘predisposed,’ and what percent do we actually ‘learn’ (i.e. Assign)?” – what that really boils down to is a question of Nature vs. Nurture.

The truth is, I can’t give you a precise answer to that question.  It’s possible each brain fluctuates, in terms of those “percentages.”  But what I do know is that you need both predisposed and “learned” connections for a system like the brain to function the way it does.

Without genetic predispositions (i.e. “pre-set” values/neural circuits), you wouldn’t be able to do very basic things needed for your body’s functioning, such as breathing… you’d have to learn that.

Without genetic predispositions, you’d also have to learn to experience things like pain & hunger.  (Seriously, imagine having to learn pain – imagine trying to psych yourself into feeling hurt, after stubbing your toe or getting stung by a bee.)

Without genetic predispositions, you’d probably even have to learn the basic aspects of sensing information, such as hearing sound, seeing color, and feeling touch.

These abilities are clearly something most of us are born with, not something we have to learn.  And the people who aren’t born with these abilities can’t usually teach themselves these abilities (someone who’s born colorblind can’t teach themselves to see color, as far as we know).

The way we behave can also be influenced by genetic predispositions.  For example, the fear response is almost universal among humans.  Of course, learning can affect when we experience fear, but the fear response itself (the way your mind & body physiologically react to “threatening” information) seems to be something most of us are born with.

In some cases, genetic predispositions can even affect the learning process. For example, research has shown that there are particular ages where it’s “easier” for children to learn languages ([19] mit.edu, “Cognitive scientists define critical period for learning language,” 2018).

And lastly, without genetic functioning, you wouldn’t even have a brain at all… because genes (presumably) are what code for your brain’s structure, at birth.

At the same time, genetic predispositions cannot be responsible for all of the brain’s activities.  It’d simply be impossible, because your DNA cannot predict the exact scenarios & environments you’re gonna be in, throughout your life.

Predetermining creatures can essentially “predict” what they’ll sense and respond to, because of how they’re built (see Chapter 2).

But since Assigning organisms (organisms with brains) can generally sense much wider varieties & types of info than Predetermining creatures, there’s usually no way of “predicting” what exact info will be encountered.  Which means Assigning systems will always need some ability to adapt to individual circumstances – by making dynamic adjustments to values & behaviors.

In fact, Value-Assignment itself could be described as a mechanism of learning & environmental adaptation (because it allows organisms to make dynamic adjustments to values, and thus behaviors).

Beyond that, it’s possible that Value-Assignment can affect processes which are genetically predisposed.

One example is the flinching reflex.  The flinching reflex (a reflex which occurs in response to a fast-moving object approaching) is a near-universal reaction that can be observed in humans, so we can infer that this reflex is probably genetically predisposed.  However, for someone to become a competent fighter, they must typically “unlearn” or “untrain” their flinching reflex (which is possible with enough practice) so that they’re more adept during a fight.

In some cases, environmental influences (particularly those which lead to physiological changes in the body) can even affect genomic expression – which can then be passed down to children – in a process we refer to as “epigenetics” ([20] cdc.gov, “What is Epigenetics?,” 2022). And some of those “environmental influences” may be mediated by the process of Value-Assignment (you can’t have a fear response without threat detection – which, under this thesis, requires value).

But again, just because learning influences brain activity, doesn’t make the brain a blank slate.  And just because genes affect the brain’s functioning, doesn’t mean that all brain activity is controlled by genetic predispositions.

In other words, it’s not an “either-or” scenario: the brain isn’t just a Nature machine or just a Nurture machine.  Both “nature-type” and “nurture-type” influences must be a part of the system, by necessity.

However, I can’t tell you exactly what percent of values/behaviors stem from Nature or Nurture.  (If I had to guess, a balance must continually be struck between these two “influences,” in all organisms.)


Note 4: How do we observe Determination of Effective Action, in the brain and other systems?

It’s hard to say.

Like I mentioned in #4.3, it’s possible Determination of Effective Action (DoEA) and Determination of Value (DoV) are the exact same thing: indistinguishable from one another biologically.  (If they are the same, it would mean Determination of Effective Action does not occur in the brain – or in any biological system – and that Determination of Value is the important operation to observe.)

However, assuming both of these operations occur, we know from Chapter 4 that Determination of Effective Action (in the brain) must, in some way, be connected to the process of Value-Assignment.  We also know (from the order-of-operations) that effective action will always be determined after or while value is determined, and that each “effective action” in the brain will necessarily be linked to some kind of “value.”

But here’s the issue:

I hypothesized in #4.1 & #4.3 that the way operations occur might depend on the level at which they happen (i.e. how many distinct components/systems are involved).  Operations involving more neural components, systems, & brain regions (i.e. “higher-level” operations) might work differently than operations involving fewer components, systems, & brain regions (“lower-level” operations).

And since “effective actions” will always be tied to specific values (values which can obviously vary extremely), it may mean that the way DoEA is performed will be highly situational, varying based on the value(s) you’re looking at and the level at which it occurs.

However, there are a couple things that should be true:

If we assume that most actions which occur in a biological system or mechanism (at any level) are the effective actions of that system/mechanism, then

a.) by working backwards, we should often be able to infer the value(s) connected to those effective actions, and

b.) if action is known and value is known, then it should logically be possible to derive the process by which Determination of Effective Action occurs (assuming such a process is occurring at all).

“What if Decision-Making is actually a way of Determining Effective Action (in other words, what if decision is just a specific type of DoEA)?”

This is an interesting idea, and one that I think has plausibility.

Of course, we know that not all creatures can make decisions.  And even inside nervous systems, not all actions/values involve a Decision-Making process (for example, the way your eyes move when you’re asleep isn’t something you really “decide”).  This means that not all cases of “effective action determination” (even inside the brain) can be an instance of Decision-Making.

So if Decision-Making is a type of DoEA, then DoEA can only turn into decision some of the time.

Still, there are a few arguments in favor of such an idea:

  • DoEA becomes active only in Value-Assigning systems… and Decision-Making (which is also active) only happens in Value-Assigning systems, as well.
  • DoEA and Decision-Making are right next to each other, in the Assigning order-of-operations.
  • Decision-Making is a process that requires an organism to disregard/exclude potential actions (when you decide to take one action, you’re necessarily excluding the other actions you could possibly take).  Well in order to determine what actions are effective (in some cases) it might also require an organism to exclude potential actions (if there are a large number of actions you could take, most likely, not all of them will be effective – which means determining effective action might require you to exclude some of those actions).

There are some implications to this, though.

For one, if all decisions are “Determination of Effective Action,” it would mean that each decision an organism makes must be attached to a specific value or set of values… which isn’t that crazy of an idea.  But it would also mean that anytime decisions change, it’s because the underlying value or values have changed (if you remember what I argued in Ch.4, the process of “value determination” is necessarily attached to the process of “determining effective action”).

This would also mean that “Decision-Making” and “Determination of Effective Action” are NOT separate operations, in the Assigning order-of-ops, but are instead interchangeable.  And if what I said in #4.5 is correct (that decision and action might be interchangeable), it would really complicate the Assigning order-of-ops.

But like I also said in #4.5, I don’t think decision and action are, in fact, interchangeable… I think that’s just a vague proposition.  So if decision is indeed DoEA, the order-of-ops would probably look more like [detect information → (potentially) confirm the signal → determine the value of that information → determine an effective action/decision → action].
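As a toy illustration, here’s that bracketed sequence written as a Python pipeline, with Decision-Making standing in for DoEA.  The stimulus, the value lookup, and the action options are all invented for the example:

```python
# Sketch of the revised Assigning order-of-ops, with DoEA and
# Decision-Making folded into one step:
# detect -> (potentially) confirm -> determine value -> decide/DoEA -> act
# Every name and table below is hypothetical, for illustration only.

def detect(world):
    return world.get("stimulus")

def confirm(signal):
    return signal is not None          # "potentially" confirm the signal

def determine_value(signal):
    # Hypothetical lookup linking sensed info to a goal.
    return {"loud_noise": "avoid_threat"}.get(signal)

def decide(value):
    # Decision-as-DoEA: select one effective action, excluding the rest.
    options = {"avoid_threat": ["freeze", "flee"], None: []}
    chosen = options[value][:1]        # keep one option, exclude the others
    return chosen[0] if chosen else None

def act(decision):
    return f"acting: {decision}" if decision else "no action"

world = {"stimulus": "loud_noise"}
signal = detect(world)
result = act(decide(determine_value(signal))) if confirm(signal) else "no action"
print(result)  # acting: freeze
```

The key structural feature is in `decide()`: selecting one action necessarily excludes the alternatives, which is the overlap between Decision-Making and DoEA argued for above.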

“When would DoEA turn into Decision-Making?”

Well since decision appears to be a conscious process (at least in humans), and since conscious experience might be tied to the “top level” order-of-ops (as I speculated in Note 1), then DoEA might become decision only at the “top level” as well.

Or maybe this is all just conjecture.


Note 5: Other questions about the “order-of-operations”

If we assume (like #4.1 states) that an “order-of-operations” occurs at the “top” of every biological system/organism/loop (i.e. in every neuron, every neural circuit, and at the highest level, in every brain) – then it would mean that the brain technically has billions of “order-of-operations,” all running at the same time.

So how is this possible?  How could a system function like that?  And how does a “top level” order-of-operations even materialize, in a system as complex as the brain?

Similar to Consciousness, exactly how it comes about is a mystery to me.  I can’t tell you precisely how it happens – only that it appears to happen, and explains much of what the brain does.

For instance, like I said in Note 1, the existence of a “top-level” order of operations (which occurs alongside, but separately from, other/”lower” operation sets) can explain why we can have a “unified” experience of the world, but can still perform functions “in the background” (functions which often happen outside the host’s awareness).

However, even if this is true, it still leaves important questions to be answered.

For example: “How would we actually separate & measure the ‘higher-level’ operation sets occurring in the brain?”

Sure, at low levels, measuring the “order-of-operations” might be somewhat simple (the behavior of individual neurons and neural circuits has already been outlined by researchers, with good degrees of precision).  But measuring “higher-level” operations would not be so easy – because these operations are going to involve a lot more systems/components.

How would you even tell the difference between “higher level” operation sets, when they can involve millions of components/systems?

Also, couldn’t the same components, systems, and brain regions be involved in multiple high-level operation sets?  (If so, would those “higher-level” operation sets even be separate?)

Well first off, I would speculate that, yes, the same components/systems/regions could be involved in multiple operation sets.  I’d also speculate that those operation sets could (depending on the case) be considered separate.

As for how to differentiate “higher-level” operation sets from one another – I’d argue that it has to do with activity loops.

What is an activity loop?  Well, my thesis is that any biological mechanism which results in recurring behavior, recurring function, recurring types of function, or a retracement of activity (activity which “circles back” to the point of origin) could potentially be a biological activity loop… meaning that an order-of-ops should, hypothetically, exist “at the top of” that loop, determining that loop’s trajectory.

Of course, loops don’t have to be infinite.  In fact, a loop can occur only once, and still be a loop.

But I think it’s possible that “higher-level” operation sets can eventually be measured & separated by observing these “loops.”

Again, I could be wrong.


Note 6: The Other “Functions”

What about the other “functions” the brain performs?

Many of us, when looking into anything psychology-related (especially anything having to do with the brain), hear about these “functions” happening in the brain (such as motor control, long-term memory, short-term memory, “pattern recognition,” coordination, cognition, etc.).  But if the brain really IS performing these “functions,” then wouldn’t they have to occur at some point in the order-of-operations?

Well basically, I think there are two main possibilities as to where these “functions” come from:

1.  It’s possible that many of these “functions” are either required for, or exist to support, one of the six operations listed in Chapter 4.

For example, what scientists call “motor control” and “coordination” could just be the neural systems required for Action to take place, at a high level, in brains.  And what psychologists label “cognition” could be one of the neural mechanisms which supports human Decision-Making.

(Of course motor control, coordination, and cognition are each complex things.  But if we’re talking about where these functions fit in regards to the order-of-ops, this is one possibility.)

2. The second possibility is that many of these “functions” stem from the activity of many “low-level” operation sets combined… in other words, an emergent property of neural activity/organization.

What do I mean by “emergent property?”

Well I think of it like Economics.  All economic activity in a society is made up of transactions.  If you go to the store and buy a watermelon, that’s one transaction – which means you’re technically participating in Economics.

Now one single transaction, on its own, is not a highly complicated thing: I give you money, and you give me something else in return.  But if you were to analyze millions or billions of transactions, certain “patterns” would appear.  For example, you’d be able to see where money flows and where that flow is concentrated.

If you were to study those patterns over long periods of time, those patterns may even take certain “shapes”… almost as if there are certain “overall goals” those transactions are meant to perform.
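You can actually simulate this in a few lines of Python.  Each transaction below follows one trivial rule, yet aggregating thousands of them reveals a concentration of “flow” that no single transaction contains (the agents and the rule are made up purely to illustrate emergence):

```python
# Toy illustration of "emergence": each transaction is trivial on its own,
# but aggregating many of them reveals a pattern (money concentrating at
# a hub).  Agents and the buying rule are invented for the example.

import random

random.seed(0)
agents = ["store", "alice", "bob", "carol"]
flow = {a: 0 for a in agents}

for _ in range(10_000):
    buyer = random.choice(["alice", "bob", "carol"])
    # One trivial rule per transaction: people mostly buy from the store.
    seller = "store" if random.random() < 0.7 else random.choice(agents)
    flow[seller] += 1            # one unit of money flows to the seller

# No single transaction "contains" this pattern, yet it reliably appears:
print(max(flow, key=flow.get))   # store
```

No individual line of the loop mentions a “hub,” yet the hub shows up anyway – which is the sense of “emergence” being described here.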

It could be a similar thing with many “functions” of the brain.

If you were to look at the areas of the brain where “memory” takes place, and examine the neural activity happening on a small scale (the “individual transactions”), you probably wouldn’t see anything special.  Those neurons wouldn’t be performing a special “memory” procedure, or operating extremely differently from neurons in the cerebral cortex.

(Some of those neurons might have a different anatomical structure than the rest, and the precise behaviors of those neurons might have small variances, but the way they fundamentally work – the reliance on action potentials, the usage of axons & dendrites, etc. – would probably be similar to most other neurons in the brain.)

However, if you were to scale up & observe larger amounts of interactions in that part of the brain, all of a sudden a certain “function” might start to appear: the function of “memory” (maybe short-term memory, maybe long-term memory, or maybe a mix of both).

So basically, certain “functions” of the brain might not exist to support “higher-level” operations (per se), but instead exist because they “emerged” from lower-level activity.

It’s also possible that some functions could be both an emergent property and “supportive of” a higher level operation.  But this all depends on the case.

As a side note: this is why the six operations are so broad/vague, in how they’re written.  (The term “Action,” for instance, could mean many, many different things… it doesn’t tell you what action is occurring.  “Determination of Value” doesn’t tell you what value is being determined.)  The reason for this “broadness” is ultimately that it should be possible for “operations” & operation sets to have extremely diverse effects – in terms of the behaviors/functions they produce, and the way they play out in brains & other systems.


Note 7: The Emotion Discussion

I’ve discussed a lot of things so far, but the topic of emotions and emotional states (and how they fit into the thesis) hasn’t been discussed yet.  So here we go.

First off, what exactly is an “emotion?”

Well part of the difficulty with the concept of “emotion” is that it’s not really one, easily-definable thing.  When someone gets poked with a needle and feels pain, that painful feeling could be described as an “emotion.”  And because Emotion is primarily something you experience (just like with Consciousness), it’s gonna be challenging to say exactly where Emotion comes from.

If you define an emotion as “any experience or sensation,” then you must deal with the hard problem of Consciousness (discussed in Note 1).  If you define emotions as “SPECIFIC KINDS of experiences and sensations,” then you’d have to determine which sensations/experiences count as an “emotion”… which is also difficult.

However, unlike “Consciousness,” there do seem to be particular feelings that people describe as “emotions” (even if they’re not entirely well-defined).  And because of that, we can sometimes measure the effects of different emotions & emotional states (i.e. the behaviors which go along with them), and the brain activity which is correlated with them.

I think this discussion goes further, though… in fact I think, in many cases, “emotions” are often connected to another psychological phenomenon: one we’ve been discussing throughout this book.

See, many people (due to bad education) have been led to believe that Emotion is its own separate process in the brain – as if emotions are just these random, unpredictable things that happen to us, separate from logic & reason (or whatever cognitive functions lead us to logic & reason).

But I don’t believe that’s normally true.

In order to prove that Emotion isn’t normally random, all you have to do is think about when people typically feel emotions.  Do people typically feel emotions at completely random times, for no reason, out of the blue?  Or is there usually some event, experience, or perceptive change that happens before the emotion?

In rare cases the first is true, but most of the time, it’s the second.

So if the second is normally true, then we’ve gotta ask: “What exactly is it that happens before emotional changes?”

Well I think the answer (at least in many cases) is disturbingly simple.

Say you’re driving on the freeway, and your dog is in the passenger seat beside you.  Let’s also say, for the sake of the example, that your dog is looking out of the front window & watching traffic, the same as you.

Then, right as you’re about to exit the freeway, a car cuts you off and prevents you from taking your exit.

Naturally, you feel annoyed.

But you look over, and your dog doesn’t appear to be upset at all… in fact doggo over there seems to be completely unaffected by this harrowing traffic brouhaha.

So why do you and the dog have different emotions?  The dog (essentially) watched the same event happen.  Why isn’t the dog perturbed like you are?

Most likely, it’s because the emotion you’re feeling was not caused by the information itself, but by your interpretation of the information.  The dog might’ve witnessed the same relative thing, but the dog interpreted the information very differently than you did (you saw it as an offense, but the dog probably just saw it as some visual noise).  And thus, the dog is experiencing a different emotion than you.

In other words, the process of Value-Assignment might actually be influencing emotions – at least some of the time.
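To caricature the point in code: here’s a toy sketch (not a scientific model – every name in it is invented for illustration) of the claim that emotion follows from the value assigned to an event, rather than from the event itself.

```python
# Toy illustration: the same event yields different emotional reactions
# depending on the value each observer has assigned to that kind of event.

def emotional_response(event, value_map):
    """Look up the observer's assigned value for an event; events with
    no assigned value register as mere 'noise' (indifference)."""
    return value_map.get(event, "indifference")

# The driver has learned to value "being cut off" as an offense...
driver_values = {"cut_off_at_exit": "annoyance"}
# ...while the dog has assigned no such value to the event.
dog_values = {}

print(emotional_response("cut_off_at_exit", driver_values))  # annoyance
print(emotional_response("cut_off_at_exit", dog_values))     # indifference
```

Same input, different stored values, different emotional output – which is all the driving example is meant to show.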

Now, of course, there are exceptions to this.

For instance, many illicit substances & psychoactive drugs can change how you feel (i.e. your emotional state) in pretty dramatic ways.  Why can drugs do that?  Because, as scientists have discovered, emotions & emotional states are often connected to neurochemistry.  And since psychoactive drugs can temporarily alter your brain’s neurochemistry, those same drugs can also have an effect on emotional state.

There are even medical conditions that can alter the neurochemistry of the brain.

On top of that, there are also instances where your “neurochemical state” can be altered directly by bodily changes/bodily trauma.  For example, if someone came up to you out of nowhere and smacked you in the arm with a baseball bat, that would probably affect how you feel – causing pain and possibly anger.  Also, when you go through puberty, a vast array of hormonal changes occur, which can influence neurochemistry, and thus emotion.

In these cases, you can’t say that emotions are purely a result of “interpretation.”  (I’d argue that interpretation can still play some role… just not as big of a role.)

I should also mention that you can’t always control how you interpret things.  Many (one could argue most) of the “values” your brain creates & uses are not created or used consciously.  And even values which are “conscious” may not necessarily be “directly controllable” – since Value-Assignment is typically based on experience (i.e. Sensation), and your experiences are often something you can’t “control.”

Still, if interpretative changes can lead to emotional changes, then it *might* mean that Value-Assignment is connected to emotional state, to some extent, much of the time.  (Remember, the definition of “value” has to do with interpretation of sensory information.  And Value-Assignment is the process by which an organism actively changes values.)

Exactly how values and emotions are connected is not always clear.  In fact, the manner in which they’re connected might vary, depending on the case.

It could be that emotions, themselves, evolved as a way of supporting Value-Assignment, by supporting memory (we tend to remember emotional experiences better than anything else in life… which means “emotion” might be the brain’s way of getting us to remember specific values more easily).  But this is only speculation.


Note 8: An argument against “Neurochemistry, Genetics, and Environment” as the primary factors driving Psychology

In western Psychology, there’s an idea (which has been around for a number of years) that Psychology is basically driven by three major elements: Neurochemistry, Genetics, and Environment.

I have some things to say about this: things I’ve wanted to get off my chest for a long time.  One could even call it a “beef.”

First, I will acknowledge that this viewpoint isn’t completely illogical.  I understand how one might reach this conclusion.

In Physics, you often need to understand how smaller elements function, in order to get a grasp of the “bigger picture” (i.e. the functioning of larger elements).  This is because the smaller elements are the pieces which construct the “bigger picture.”

So basically, if you were trying to figure out the smallest or simplest possible elements affecting Psychology – then Neurochemistry, Genetics, and Environment might appear to be the three most important.  (And of course, each of these elements do play a significant role in Psychology.)

However, viewing Psychology as “a combination of Genetics, Chemistry, and Environment” (I’d argue) is a bad way of thinking about Psychology.

To show you what I’m talking about, let’s take a brief look at each element.

A. Do Genetics influence Psychology?

Almost certainly, yes.  Genes are responsible for building the structures which you use to think, feel, and behave – and because of that, Genetics must necessarily influence the way in which you’re able to do these things (such as the limitations of your sense organs, the body parts you use to act out your ideas, and the very composition of your brain).  This means, in a very literal manner, Genetics affect Psychology…

But what are the methods by which Genetics affect large-scale Psychological processes (such as thought and feeling)?  How much of your experience is influenced by learning, and how much is a result of genetic predisposition?  (As we discussed in Note 3, genes can’t predict what you’ll encounter throughout your life, so learning has to be a large part of brain function.)

These questions are a bit harder to answer.  Simply slapping the label of “Genetics” on it & calling it a day, doesn’t really do justice to the actual mechanisms at play.

B. What about Environment?  We know that an individual’s Environment impacts their Psychology quite substantially… but how much of that is Psychology being controlled by Environment, vs. merely influenced by it?

Learning is obviously a fundamental part of Psychology; but if learning were completely controlled by your Environment, then two individuals placed in the exact same Environment should learn the exact same things, in the exact same manners.

But normally-speaking, this isn’t what we see: two animals with brains, placed in the same Environment at birth, can have drastically different experiences from one another (and learn drastically different things).  Which means there must be some other Psychological component or components (along with “Environment”) which affect the learning process… and those components may not fit neatly into the category of “Genetics” or “Neurochemistry.”

So what are those components?  And what’s the precise mechanism by which Environment interacts with said components, to create Psychology?

This is a little harder to answer.  Simply reducing the mechanism to “Environment” strips away all nuance from the matter.

C. The truth is, both A. and B. have been discussed by psychologists before… but an argument I never hear anybody make has to do with the third element: Neurochemistry (or “Chemistry” for short).

Now, most people reading this book know that Chemistry is necessary for the brain to physically operate.  Neurons in the brain (usually) communicate with each other using “neurochemicals.”  And because of that, some people believe that neurochemicals are the minimum components necessary for brain function (remember, if you’re viewing the brain like a physicist, the minimum components are going to be extremely important).

But I think this is simply the wrong way of looking at it.

Let’s pretend you’re someone who’s never heard of “the government” before today, and you want to understand how government works.  Where would you start?

You’d probably start by first learning about what a government is, and the different goals that a government aims to achieve.  Then you’d probably study the different groups & institutions involved in achieving those goals.  From there, you might examine how individual workers perform jobs to further the goals of each governmental body.  You might also look at how different groups work with (and against) each other, to accomplish those goals.

What you probably wouldn’t do is spend all your time analyzing the cell phones and communication methods that government workers use.  Why?  Because in the grand scheme of things, cell phone facts & statistics are not really important to understanding the overall behavior of a government.

Well it’s a similar problem with “neurochemistry.”

Essentially, every aspect of Psychology stems from the activity of multiple parts working together.  When you combine and coordinate those parts, you get thought, feeling, and behavior.

It’s true that you need chemical activity for your brain to function at all (just like government workers have to communicate with each other somehow, in order for the government to function).  But neurochemistry does not determine overall brain function, the same way that cell phones do not dictate overall government function.

This is because brain function is fundamentally a problem of organization.  In order to understand brain function, it’s less about examining the communication methods; and more about examining a.) the goals of neural communication, b.) what neurons are communicating to one another, and c.) how they’re coordinating to accomplish their objectives.

Here’s one more example to drive the point home:

Most of us know that, currently, doctors will often treat psychological conditions with prescription drugs; because drugs can alter your neurochemistry.

Now these drugs can be effective in many cases, but I’ve always found it interesting that… even when a medication is effective… when you take someone off their medications, there’s a good chance they’ll slip back into the same behavior patterns as before.

Why could that be?

When you alter the brain’s neurochemistry, there will obviously be some effect (since neurochemistry is vital to neuron communication).  But my guess is that, when you introduce neurochemistry-altering medications, you’re usually not changing the fundamental properties of the brain’s operations.  You’re not changing the fundamental values, goals, or organization of the system.

If anything, you’re only modifying the communication method of the workers.

(It’d be like if everyone in government was forced to use fax machines to talk, instead of cell phones.  Fax machines would obviously have a big impact on the efficiency & effectiveness of certain parts of government.  But fax machines would probably not change the objectives, policies, or priorities that any government group has.  And because of that, fax machines would do little to change what the government does overall.)
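The fax-machine analogy can be sketched as a toy program (all names here are invented for illustration): the system’s output is determined by its goals and organization, and swapping out the communication channel leaves those decisions unchanged.

```python
# Toy sketch: a "government" whose decisions depend on its goals,
# not on the channel its parts use to communicate.

def run_agency(goals, send):
    """Each department pursues its goal and reports via some channel.
    The channel affects how messages move, never what is decided."""
    decisions = []
    for department, goal in goals.items():
        decisions.append(send(f"{department}: pursue {goal}"))
    return decisions

goals = {"transport": "build roads", "health": "fund clinics"}

cell_phone = lambda msg: msg   # fast channel
fax_machine = lambda msg: msg  # slow channel, same content

# Swapping the communication method leaves the decisions unchanged.
print(run_agency(goals, cell_phone) == run_agency(goals, fax_machine))  # True
```

In this caricature, efficiency might differ between channels, but the objectives and policies – the “organization” – stay the same.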

In fact, this same reasoning could be applied to Genetics and Environment as well.  Even if we accept that Genetics and Environment (and Neurochemistry) are the simplest “components” of Psychology – what none of these “components” take into account is neural organization.  And arguably, organization is the most important of these things to look at, if you’re trying to understand the broader behavior of the creature.

This is ultimately why I think viewing Psychology as “a combination of Genetics, Neurochemistry, and Environment” is a bad model for Psychology.


Note 9: Why can we do things that aren’t survival or reproduction-related?

Throughout this book, I’ve regularly mentioned how values are a result of sensory info & goals: specifically the goals of “survival” and “reproduction.” But many animals (humans, in particular) seem to be capable of taking actions & valuing things that have nothing to do with survival or reproduction.

One example is how many mammal species will play with one another, particularly when they’re young.  Another example is how humans often try to make each other laugh (and sometimes that humor can take on complex forms).  Yet another example is how people will sometimes engage in art-creation or philosophy, or carry strong beliefs about social & technical issues (issues which often have nothing to do with survival/reproduction – at least not directly).

So why would all this stuff be possible, if values – according to this thesis – are supposed to be about survival & reproduction?

There are a couple potential answers.

One hypothesis is that all actions & values are related to survival or reproduction, in some roundabout manner.  For example, some scientists believe that animals might engage in play, when young, as a way of “practicing” for real life scenarios.

In my personal opinion, though, that’s not necessarily what’s happening.  I don’t think all of our actions, thoughts, & values have to be guided by survival or reproduction (even indirectly).

Sure, when humans first showed up to the party, maybe all our “values” were related to survival & reproduction (because that was probably necessary).

However, it appears that evolution often selects for intelligence (animals which are more intelligent are generally more likely to pass on their genes).  In fact, humans have likely become the dominant species on the planet because of our capacity to organize, and our capacity for intelligence.

And intelligence (in my view) almost always involves abstraction.

What is “abstraction?”

It essentially means “detached” or “removed from.” I mentioned in Chapter 1 that many creatures have to perform indirect tasks, in order to accomplish their greater goals… well basically, “indirect” is just another term for “abstract.”  Here’s an example:

Let’s say I’m trying to figure out a better way to hunt for food, instead of just using my bare hands.  If I can think of a creative way around that problem (for instance, if I can craft some kind of weapon), not only will that give me a survival advantage, but it will require me to do things which are indirect (i.e. “abstract”) from the main goal I’m trying to accomplish.

Instead of going straight for my food, I have to think of a potential weapon, then spend energy trying to build that weapon (finding the parts, putting it together, etc.).  The more steps I must take (to think of & build my weapon), the more “abstract” my ideas/actions must be.

That’s not the only type of abstraction, either.

Let’s say I’m fighting with someone over some food.  If I can think several steps ahead of my opponent, and then take some kind of action to preempt their moves, not only will that give me an advantage in the fight, but it will require me to do something which is indirect/abstract:

Instead of going straight at my opponent, I must imagine what my opponent might do, and then come up with a plan to counteract the thing I’ve imagined them doing (to prevent them from doing it).  Each “step ahead” that I plan, is another step away from my starting task (to fight).  In other words, the more “steps ahead” I can think, the more “abstract” my ideas/actions will be.
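Both examples share one measurable feature: the number of indirect steps standing between the agent and its end goal.  Here’s a toy sketch of that idea (the plans and their contents are invented for illustration):

```python
# Toy sketch: "abstraction" measured as the number of indirect steps
# an agent takes before reaching its end goal.

def abstraction_depth(plan, end_goal):
    """Count the intermediate steps that precede the end goal."""
    return plan.index(end_goal)

# Hunting bare-handed: the goal is pursued directly.
direct = ["catch prey"]

# Crafting a weapon first: each intermediate step is one level of
# indirection ("abstraction") away from the goal itself.
with_weapon = ["imagine a spear", "gather wood and stone",
               "assemble the spear", "catch prey"]

print(abstraction_depth(direct, "catch prey"))       # 0
print(abstraction_depth(with_weapon, "catch prey"))  # 3
```

Under this caricature, “more intelligent” simply means “able to construct and execute plans with greater depth.”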

Why is this important?

Because if intelligence and abstraction are related, then the more intelligent I become, the more it’ll be possible for me to engage in abstraction… and the more I engage in abstraction, the more I’ll be able to do things & think things which are not (directly) related to survival or reproduction.

In fact, the term “intelligence” may just be another way of saying “one’s ability to perform abstract thinking” – either by “thinking outside of the box” (what I call “Creative Intelligence”) or by “thinking several steps ahead” (what I call “Process Intelligence”).

Eventually, if I become intelligent enough, it might be possible for nearly all the things I do, the thoughts I have, and the value systems I use, to be abstract/disconnected from my biological goals.

Of course, any animal is still gonna need to do some things strictly for survival & reproduction purposes – otherwise the species would die out.

But ultimately, this “abstraction hypothesis” would explain why some animals (as well as humans) can do things, think things, carry values, and perform goals which have nothing to do with survival or reproduction.


Note 10: Value Assignment & The Placebo Effect

First off, what is the “placebo effect?”  If you’re unfamiliar, it’s basically a scientific phenomenon where a person experiences some kind of benefit (either psychological or physiological) from a drug or treatment, even though that drug or treatment was meant to be ineffective.

Say a doctor gives pills filled with sugar to a patient, and then tells the patient that the pills actually contain medicine… if the patient “gets better,” or experiences a reduction in symptoms, then that effect probably wasn’t from the pills (because the pills contain no medicine), but instead stemmed from the patient’s belief or perception that the pills would make them better.

The placebo effect is a well-known phenomenon, and has been useful in medical research for a very long time.  But of course, since its discovery, scientists have been curious about it.

And it’s easy to see why.  The placebo effect seems almost magical, in the way it works.  How is it possible for beliefs alone to influence someone’s psychological & physiological state, to the extent that even bodily processes can be altered?

Well, I’d argue that Value-Assignment may have something to do with it.

Let’s start with a question: “How does any creature determine what’s real and what isn’t?”

(It may seem like a dumb question at this point in the book – but it matters to this discussion.)

Science (and scientific tools such as critical thinking, logic, math, and simple observation) certainly might help one to know more about “reality”… but obviously not all creatures can use science, can use it properly, or can use it at all times.  Which means organisms can’t always depend on science or scientific tools, to determine what’s “real” (or what seems real), on a moment-to-moment basis.

In order to have any consistent/regular conception of “reality,” creatures (at least ones with brains) must usually rely on something else: their own beliefs & perceptions.

But what are “perceptions” and “beliefs” – and more importantly, how are they formed?

(It’s likely that Consciousness is required for “perception” or “belief”… and on top of that, some people use the term “perception” to refer to a type of Consciousness… but if we take the Consciousness discussion out of it, what are the fundamental components of perception/belief?)

We know that Sensation is at least required for you to perceive or believe something.  But that can’t be all.

For you to “believe” something, it’s not just about sensing information.  To have a perception or belief, you must be able to understand what you’re sensing, on some basic level.

So where do these “understandings” come from?

Well like I argued in #1.3, in order to “understand” information, you must have some way of connecting or relating sensory info, to something else.

And under this thesis, values are the most basic way that organisms do that.

So if that’s the case – if perceptions & beliefs are (fundamentally) just individual “understandings” of sensory info – then the things we call “perceptions” and “beliefs” may just be “values.”

Why is this important?

Well if “values” are the primary way in which organisms “understand” reality, then it would make perfect sense for the body to align its own actions/reactions with new values.  In fact, if we couldn’t do this (if our body couldn’t adjust itself, in response to new values), then we probably wouldn’t be able to function at all.  That’s what I’ve been arguing this whole book, at least.

What we call “the placebo effect” may simply be a mechanism (or mechanisms) by which that “adjustment” occurs.

Of course, the placebo effect can’t be caused by just any kind of value.  Yes, it’s possible that all “perceptions” and “beliefs” are values… but not all values are perceptions and beliefs.

For example, Predetermining creatures/systems (under this thesis) use values – but these creatures most likely can’t “perceive” or “believe” anything, or create new values.

Value-Assigning systems can (potentially) believe and perceive, and can also create new values.  And furthermore, if this thesis is right, the brain is a Value-Assigning system.

But not all values (even inside the brain) involve perception/belief.  For example, all “involuntary” functions of the body require values, under this thesis.  But involuntary functions are not something you’re normally aware of, or something you “perceive.”

Therefore, it must be a particular type of value that is responsible for the placebo effect.

Which type of value?

In my estimation, perceptions and beliefs (and ultimately the placebo effect) most likely have to do with top-level values in the brain.

In #4.1, I hypothesized that an “order-of-operations” governs every biological activity loop – from the smallest neural circuit to the “highest level” brain activity.  I also mentioned that operations probably work differently, at different levels.

Well it’s my speculation that “beliefs,” “perceptions,” and conscious experience itself may come into play at the “top level” order-of-operations, in brains/nervous systems.

Since we (seem to) have only one conscious experience at a time, and because “perceptive events” which happen inside that conscious experience can impact bodily functioning (for example, when I identify something as threatening, my heart rate will speed up & adrenaline will be released into my blood) – it may be the case that the placebo effect is simply an “outcropping” or “variation” of that process.

Furthermore, we know that your body normally adjusts itself according to perception.  (When you get up to use the bathroom, when you speak to a friend, when you go to sleep… in each of these cases, you perceived a certain need or desire, and then moved/shifted yourself in some way to accomplish that goal.) So if perception affects the mundane tasks your body performs, why couldn’t perception affect you in “less mundane” ways?

If Value-Assignment is indeed the primary function of the brain, and if “top-level” Value-Assignment can influence the “large scale” actions of the body, then there should be no reason why these values (and the effects they produce) would be limited to the mundane.
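The proposed chain – perception updates a top-level value, and the body adjusts to the value rather than to the raw stimulus – can be caricatured in a toy sketch (every name and mapping here is invented for illustration; this is not a medical model):

```python
# Toy sketch: the body adjusts to the *value assigned* to a stimulus,
# not to the stimulus itself.

def interpret(stimulus, beliefs):
    """Assign a value to a stimulus based on the observer's beliefs;
    stimuli with no assigned value read as neutral."""
    return beliefs.get(stimulus, "neutral")

def adjust_body(value):
    """Bodily response keyed to the assigned value."""
    responses = {"threat": "raise heart rate",
                 "remedy": "reduce symptom response",
                 "neutral": "no change"}
    return responses[value]

# Two patients receive the same sugar pill; only their beliefs differ.
believer = {"sugar_pill": "remedy"}
skeptic = {}

print(adjust_body(interpret("sugar_pill", believer)))  # reduce symptom response
print(adjust_body(interpret("sugar_pill", skeptic)))   # no change
```

Identical stimulus, different top-level values, different bodily adjustment – which is the shape of the placebo-effect argument above.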

Of course, just because a cancer patient believes their cancer will subside, that doesn’t necessarily mean it will.  There are plenty of experiments showing that the placebo effect doesn’t impact everyone, and doesn’t always have the same effect (or the same degree of effect).  So for most people, the placebo effect can only go so far in determining health outcomes.

This ultimately means that beliefs/perceptions can’t control every aspect of your body’s functioning.  But it does make sense that beliefs have some influence over bodily processes, some of the time.  Because presumably, the “top-most” values of the system are generally going to be the “top priority,” when it comes to meeting the organism’s needs (or perceived needs).

There it is.  The book is done.  And that, as they say, is that.

As for who says that… well, one can only guess.
