• 0 Posts
  • 17 Comments
Joined 2 months ago
Cake day: July 7th, 2024

  • i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than knowing its biological background.

    Personally, no offense, but I think this is a contradiction in terms. If we cannot define “consciousness,” then we cannot say we don’t understand it. Don’t understand what? If it has not been defined, then saying we don’t understand it is like saying we don’t understand akokasdo. There is nothing to understand about akokasdo because it doesn’t mean anything.

    In my opinion, “consciousness” is largely a buzzword, so there is just nothing to understand about it. When we actually talk about meaningful things like intelligence, self-awareness, experience, etc., I can at least have an idea of what is being talked about. But when people talk about “consciousness,” it becomes entirely unclear what the conversation is even about, and in none of these cases is it ever an additional substance that needs some sort of special explanation.

    I have never been convinced of panpsychism, IIT, idealism, dualism, or any of these philosophies or models because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness so vaguely that you can’t pin down what it is, which makes people think we need some sort of special theory of consciousness. But if you can’t pin down what consciousness is, then we don’t need a theory of it at all, as there is simply nothing of meaning being discussed.

    They cannot justify themselves in a vacuum. Take IIT for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but “consciousness” would just be defined as whatever IIT is quantifying. The issue is that IIT has not given me a reason why I should care about what it is quantifying. There is a reason, of course; it is implicit. The implicit reason is that what IIT quantifies is the same as the “special” consciousness that supposedly needs some sort of “special” explanation (i.e. the “hard problem”), but this implicit reason requires you to not treat IIT in a vacuum.


  • Bruh. We literally don’t even know what consciousness is.

    You are starting from the premise that there is this thing out there called “consciousness” that needs some sort of unique “explanation.” You have to justify that premise. I do agree there is difficulty in figuring out the precise algorithms and physical mechanics that the brain uses to learn so efficiently, but somehow I don’t think this is what you mean by that.

    We don’t know how anesthesia works either, so he looked into that and the best he got was it interrupts a quantum wave collapse in our brains

    There is no such thing as “wave function collapse.” The state vector is just a list of probability amplitudes, and you reduce that list of probability amplitudes to a definite outcome because you observed what that outcome is. If I flip a coin and it has a 50% chance of being heads and a 50% chance of being tails, and it lands on tails, I reduce the probability distribution to 100% probability for tails. There is no “collapse” going on here. Objectifying the state vector is a popular trend when talking about quantum mechanics, but it has never made any sense at all.
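    The coin analogy above can be sketched in a few lines: the “collapse” is nothing more than conditioning a probability table on what you observed. Nothing physical happens to the coin when the table is updated. A minimal illustration in Python:

    ```python
    import random

    # Epistemic description of a fair coin before we look at it.
    dist = {"heads": 0.5, "tails": 0.5}

    outcome = random.choice(["heads", "tails"])  # the world settles on an outcome

    # "Collapse" is just conditioning: once we see the result, we update our
    # description to 100% for what we observed. The update is to our knowledge,
    # not to the coin.
    dist = {k: (1.0 if k == outcome else 0.0) for k in dist}
    print(dist)
    ```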

    So maybe Roger Penrose just wasted his retirement on this passion project?

    Depends on whether or not he is enjoying himself. If he’s having fun, then it isn’t a waste.


  • It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.
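    A small sketch of the point above: the description (amplitudes) is continuous and complex-valued, but any actual observation yields a discrete 0 or 1. The amplitudes here are hypothetical, chosen only for illustration:

    ```python
    import numpy as np

    # Continuous, complex-valued description: probability amplitudes for |0> and |1>
    psi = np.array([1 + 1j, 1 - 1j]) / 2.0
    assert np.isclose(np.vdot(psi, psi).real, 1.0)  # properly normalized

    probs = np.abs(psi) ** 2                     # Born rule: continuous probabilities
    outcome = np.random.choice([0, 1], p=probs)  # what we observe: a discrete bit
    print(probs, outcome)
    ```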


  • The only observer of the mind would be an outside observer looking at you. You yourself are not an observer of your own mind, nor could you ever be. I think it was Feuerbach who originally made the analogy that if your eyeballs evolved to look inwardly at themselves, they could not look outwardly at the outside world. We cannot observe our own brains, as they exist only to build models of reality; if the brain had a model of itself, it would have no room left over to model the outside world.

    We can only assign an object to be what is “sensing” our thoughts through reflection. Reflection is ultimately still building models of the outside world but the outside world contains a piece of ourselves in a reflection, and this allows us to have some limited sense of what we are. If we lived in a universe where we somehow could never leave an impression upon the world, if we could not see our own hands or see our own faces in the reflection upon a still lake, we would never assign an entity to ourselves at all.

    We assign an entity onto ourselves for the specific purpose of distinguishing ourselves as an object from other objects, but this is not an a priori notion (“I think therefore I am” is lazy sophistry). It is an a posteriori notion derived through reflection upon what we observe. We never actually observe ourselves, as such a thing is impossible. At best we can observe reflections of ourselves and derive some limited model of what “we” are, but there will always be a gap between what we really are and the reflection of what we are.

    Precisely what is “sensing your thoughts” is yourself derived through reflection which inherently derives from observation of the natural world. Without reflection, it is meaningless to even ask the question as to what is “behind” it. If we could not reflect, we would have no reason to assign anything there at all. If we do include reflection, then the answer to what is there is trivially obvious: what you see in a mirror.


  • Classical computers compute using 0s and 1s, which refer to something physical, like voltage levels of 0 V or 3.3 V respectively. Quantum computers also compute using 0s and 1s that refer to something physical, like the spin of an electron, which can only be up or down. These qubits differ, though, because with a classical bit there is just one thing to “look at” (called an “observable”) if you want to know its value. If I want to know whether the voltage level is 0 or 1, I can just take out my multimeter and check. There is just one single observable.

    With a qubit, there are actually three observables: σx, σy, and σz. You can think of a qubit like a sphere where you can measure it along its x, y, or z axis. These often correspond in real life to actual rotations; for example, you can measure electron spin using something called a Stern-Gerlach apparatus, and you can measure a different axis by physically rotating the whole apparatus.
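    The three observables can be written down concretely as the Pauli matrices. Each one has exactly two eigenvalues, +1 and −1, which is why every axis still only ever yields one of two discrete outcomes. A sketch with NumPy:

    ```python
    import numpy as np

    # The three qubit observables: sigma_x, sigma_y, sigma_z (Pauli matrices)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # Measuring along any axis can only return an eigenvalue, and each matrix
    # has exactly two: +1 and -1 (conventionally relabeled as bits 0 and 1).
    for s in (sx, sy, sz):
        print(np.linalg.eigvalsh(s))
    ```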

    How can a single 0 or 1 be associated with three different observables? Well, the qubit can only have a single 0 or 1 at a time, so, let’s say, you measure its value on the z-axis, so you measure σz, and you get 0 or 1, then the qubit ceases to have values for σx or σy. They just don’t exist anymore. If you then go measure, let’s say, σx, then you will get something entirely random, and then the value for σz will cease to exist. So it can only hold one bit of information at a time, but measuring it on a different axis will “interfere” with that information.
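    This can be checked directly: prepare a state with a definite σz value (the “0” state), then compute the outcome probabilities for a subsequent σx measurement. They come out 50/50, i.e. entirely random, exactly as described above. A sketch:

    ```python
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)  # definite sigma_z value: "0"

    # The two possible sigma_x outcomes correspond to the states |+> and |->
    plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)
    minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

    # Born-rule probabilities for measuring sigma_x on a definite-sigma_z state
    p_plus  = abs(np.vdot(plus,  ket0)) ** 2
    p_minus = abs(np.vdot(minus, ket0)) ** 2
    print(p_plus, p_minus)  # 0.5 0.5: the sigma_x outcome is entirely random
    ```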

    It’s thus not possible to actually know the values for all the different observables, because only one exists at a time, but you can still use them in logic gates that condition on an axis with no definite value. For example, if you measure a qubit on the σz axis, you can then pass it through a logic gate that will flip a second qubit or not depending on whether σx is 0 or 1. Of course, if you measured σz, then σx has no value, so you can’t say whether it will flip the other qubit, but you can say the two qubits will be correlated with one another (if σx is 0 it will not flip it; if it is 1 it will; and thus they are related to one another). This is basically what entanglement is.
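    The standard textbook version of this (with the roles of the axes swapped relative to the wording above) is a Hadamard gate followed by a CNOT: the control qubit is given a definite σx value, so its σz value doesn’t exist, yet the CNOT flips the target conditioned on σz. The result is the correlated (entangled) Bell state. A sketch:

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # gives control a definite sigma_x value
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])               # flip target iff control's z-value is 1

    state = np.kron(H @ [1, 0], [1, 0])  # control in |+>, target in |0>
    state = CNOT @ state
    print(np.round(state, 3))  # (|00> + |11>)/sqrt(2): outcomes perfectly correlated
    ```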

    Because you cannot know the outcome when you have certain interactions like this, you can only model the system probabilistically based on the information you do know, and because measuring qubits on one axis erases its value on all others, then some information you know about the system can interfere with (cancel out) other information you know about it. Waves also can interfere with each other, and so oddly enough, it turns out you can model how your predictions of the system evolve over the computation using a wave function which then can be used to derive a probability distribution of the results.

    What is even more interesting is that if you have a system like this, one you have to model using a wave function, it turns out it can in principle execute certain algorithms exponentially faster than classical computers. So they are definitely nowhere near the same as classical computers. The cost of simulating a quantum computer classically scales up exponentially: every additional qubit doubles the complexity, so it becomes really difficult to simulate even small numbers of qubits. I built my own simulator and it uses 45 gigabytes of RAM to simulate just 16. I think the world record is literally only like 56.
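    For a sense of scale: how much memory a simulator needs depends on the representation. Storing just the state vector of n qubits takes 2^n complex amplitudes, but a naive simulator that builds full 2^n × 2^n gate matrices lands in the tens of gigabytes at 16 qubits, which is consistent with the figure above. A rough calculation, assuming 16-byte complex128 entries:

    ```python
    def statevector_bytes(n):
        # 2**n complex amplitudes at 16 bytes each
        return (2 ** n) * 16

    def dense_operator_bytes(n):
        # a full 2**n x 2**n matrix, as a naive simulator might build per gate
        return (2 ** n) ** 2 * 16

    for n in (8, 16, 24):
        print(n, statevector_bytes(n), dense_operator_bytes(n))
    # at n=16 a single dense operator is 2**32 * 16 bytes = 64 GiB,
    # while the bare state vector is only 1 MiB
    ```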



  • Even if you believe there really exists a “hard problem of consciousness,” even Chalmers admits such a thing would have to be fundamentally unobservable and indistinguishable from something that does not have it (see his p-zombie argument), so it could never be something discovered by the sciences, or something discovered at all. Believing there is something immaterial about consciousness inherently requires an a priori assumption and cannot be something derived from a posteriori observational evidence.


  • bunchberry@lemmy.world to 196@lemmy.blahaj.zone · Rule elitism
    edited · 1 month ago

    We feel conscious and have an internal experience

    It does not make sense to add the qualifier “internal” unless it is being contrasted with “external.” It makes no sense to say “I’m inside this house” unless you’re contrasting it with “as opposed to outside the house.” Speaking of “internal experience” is a bit odd in my view because it implies there is such a thing as an “external experience.” What would that even be?

    What about the p-zombie, the human person who just doesn’t have an internal experience and just had a set of rules, but acts like every other human?

    The p-zombie argument doesn’t make sense, as you can only conceive of things that are remixes of what you’ve seen before. I have never seen a pink elephant, but I’ve seen pink things and I’ve seen elephants, so I can remix them in my mind and imagine it. But if you ask me to imagine an elephant in a color I’ve never seen before? I just can’t do it; I wouldn’t even know what that means. Indeed, a person blind since birth cannot “see” at all, not in their imagination, not even in their dreams.

    The p-zombie argument asks us to conceive of two people who are not observably different in any way yet still different, because one is lacking some property that the other has. But if you’re claiming you can conceive of this, I just don’t believe you. You’re probably playing some mental tricks on yourself to make you think you can conceive of it, but you cannot. If there is nothing observably different about them, then there is nothing conceivably different about them either.

    What about a cat, who apparently has a less complex internal experience, but seems to act like we’d expect if it has something like that? What about a tick, or a louse? What about a water bear? A tree? A paramecium? A bacteria? A computer program?

    This is what Thomas Nagel and David Chalmers ask, and they then settle on “mammals only” because they have an unjustified mammalian bias. Like I said, there is no “internal” experience; there is just experience. Nagel and Chalmers both rely on an unjustified premise: that “point-of-view” is unique to mammalian brains. Supposedly, objective reality is point-of-view independent, and since experience clearly has an aspect of point-of-view, experience must be a product purely of mammalian brains. They then demand the “physicalists” prove how non-experiential reality gives rise to the experiential realm.

    But the entire premise is arbitrary and wrong. Objective reality is not point-of-view independent. In general relativity, reality literally changes depending on your point-of-view. Time passes a bit faster for people standing up than for people sitting down, lengths of rulers can change between observers, and velocities of objects can change as well. Relational quantum mechanics goes even further and shows that all variable properties of particles depend upon point-of-view.
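    The standing-versus-sitting example is real but minuscule: in the weak-field approximation, the fractional rate difference between two clocks separated by height Δh is gΔh/c². A rough estimate, assuming about 1 m of height difference:

    ```python
    # Weak-field gravitational time dilation: higher clocks tick faster.
    g = 9.81       # m/s^2, Earth's surface gravity
    c = 2.998e8    # m/s, speed of light
    dh = 1.0       # m, assumed head-height difference (standing vs. sitting)

    fractional_rate_difference = g * dh / c**2
    print(fractional_rate_difference)  # ~1e-16, detectable with modern optical clocks
    ```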

    The idea that objective reality is point-of-view independent is just entirely false. It is point-of-view dependent all the way down. Experience is just objective reality as it actually exists: independent of the observer, but dependent upon the point-of-view they occupy. It has nothing to do with mammalian brains, “consciousness,” or subjectivity. If reality is point-of-view dependent all the way down, then it is not even possible to single out intelligent beings as special for occupying a point-of-view, because everything occupies its own unique point-of-view, even a rock. Point-of-view is not a byproduct of the “conscious mind” but just a property of objective reality: experience is objective reality independent of the observer, but dependent upon the context of that experience.

    There’s a continuum one could construct that includes all those things and ranks them by how similar their behaviors are to ours, and calls the things close to us conscious and the things farther away not, but the line is ever going to be fuzzy. There’s no categorical difference that separates one end of the spectrum from the other, it’s just about picking where to put the line.

    When you go down this continuum what gradually disappears is cognition, that is to say, the ability to think about, reflect upon, be self-aware of, one’s point-of-view. The point-of-viewness of reality, or more simply the contextual nature of reality, does not disappear at any point. Only the ability to talk about it disappears. A rock cannot tell you anything about what it’s like to be a rock from its context, it has no ability to reflect upon the point-of-view it occupies.

    You’re right that there is no hard-and-fast line for cognition, but that’s true of anything in nature. There’s no hard-and-fast line for anything. Take a cat for example: where does the cat begin and end, both in space and in time? Try to create a rigorous definition of its borders; you won’t be able to do it. All our conceptions are human creations and therefore a bit fuzzy. Reality is infinitely complex, and we cannot deal with the infinite complexity all at once, so we break it up into chunks that are easier to work with: cats, dogs, trees, red, blue, hydrogen, helium, etc. But you always find, when you look at these things a little more closely, that their nature as discrete “things” becomes rather fuzzy and disappears.


  • You should look into contextual realism. You might find it interesting. It is a philosophical school from the philosopher Jocelyn Benoist that basically argues that the best way to solve most of the major philosophical problems and paradoxes (i.e. mind-body problem) is to presume the natural world is context variant all the way down, i.e. there simply is no reality independent of specifying some sort of context under which it is described (kind of like a reference frame).

    The physicist Francois-Igor Pris points out that if you apply this thinking to quantum mechanics, then the confusion around interpreting it entirely disappears, because the wave function clearly just becomes a way of accounting for the context under which an observer is observing a system, and that value definiteness is just a context variant property, i.e. two people occupying two different contexts will not always describe the system as having the same definite values, but may describe some as indefinite which the other person describes as definite.

    “Observation” is just an interaction, and by interacting with a system you are by definition changing your context, and thus you have to change your accounting for your context (i.e. the wave function) in order to make future predictions. Updating the wave function then just becomes like taring a scale, that is to say, it is like re-centering or “zeroing” your coordinate system, and isn’t “collapsing” anything physical. There is no observer-dependence in the sense that observers are somehow fundamental to nature, only that systems depend upon context and so naturally as an observer describing a system you have to take this into account.


  • Quantum mechanics is incompatible with general relativity; it is perfectly compatible with special relativity, however. I mean, that is literally what quantum field theory is: the unification of special relativity and quantum mechanics into a single framework. You can indeed integrate all aspects of relativity into quantum mechanics just fine, except for gravity. It’s more that quantum mechanics is incompatible with gravity than that it is incompatible with relativity, as all the other aspects we associate with relativity are still part of quantum field theory, like the passage of time being relative, relativity of simultaneity, length contraction, etc.


  • That’s actually not quite accurate, although that is how it is commonly interpreted. The reason is that Bell’s theorem simply doesn’t show there are no hidden variables, and indeed Bell himself states very clearly what the theorem proves in the conclusion of his paper:

    In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[1]

    In other words, you can have hidden variables, but those hidden variables would not be Lorentz invariant. What is Lorentz invariance? Well, to be “invariant” basically means to be absolute, that is to say, unchanging based on reference frame. The term Lorentz here refers to Lorentz transformations under Minkowski space, i.e. the four-dimensional spacetime described by special relativity.

    This implies you can actually have hidden variables under one of two conditions:

    1. Those hidden variables are invariant under some other framework that is not special relativity. This basically means the signals would have to travel faster than light, which would contradict special relativity, so you would need to replace it with some other framework.
    2. Those hidden variables are variant, meaning they do indeed change based on reference frame. This would allow local hidden variable theories, and thus even allow current quantum mechanics to be interpreted as a statistical theory in a more classical sense, as it even evades the PBR theorem.[2]
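    For context, the constraint Bell derived can be checked numerically. Using the quantum prediction for a spin singlet, E(a, b) = −cos(a − b), the CHSH combination of correlations reaches 2√2, exceeding the bound of 2 that any local (Lorentz-invariant) hidden variable theory must satisfy. A sketch with the standard angle choices:

    ```python
    import math

    def E(a, b):
        # quantum correlation for a spin singlet measured at angles a and b
        return -math.cos(a - b)

    # Standard CHSH measurement angles
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
    ```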

    The first view is unpopular because special relativity is the basis of quantum field theory, and contradicting it would mean contradicting one of our best theories of nature. There has been some fringe research into reformulating special relativity to make it compatible with invariant hidden variables,[3] but given that quantum mechanics has been around for over a century and nobody has figured this out, I wouldn’t get your hopes up.

    The second view is unpopular because it can be shown to violate a more subtle intuition we all tend to have, but is taken for granted so much I’m not sure if there’s even a name for it. The intuition is that not only should there be no mathematical contradictions within a single given reference frame so that an observer will never see the laws of physics break down, but that there should additionally be no contradictions when all possible reference frames are considered simultaneously.

    It is not physically possible to observe all reference frames simultaneously, and thus one can argue that such an assumption should be abandoned because it is metaphysical and not something you can ever observe in practice.[4] Note that inconsistency between all reference frames considered simultaneously does not mean observers will disagree over the facts, because if one observer asks another for information about a measurement result, they are still acquiring information about that result from their own reference frame, just indirectly, and thus they would never run into a disagreement in practice.

    However, people still tend to find this notion of simultaneous consistency too intuitive to abandon, so the view remains unpopular, and most physicists choose to just interpret quantum mechanics as if there are no hidden variables at all. The case against #1 you can argue is enforced by the evidence, but rejecting #2 is more of a philosophical position, so ultimately the view that there are no hidden variables is not “proven” outright, but only proven if you accept certain philosophical assumptions.

    There is actually a second way to restore local hidden variables which I did not go into detail on here: superdeterminism. Superdeterminism basically argues that if you had not just a theory describing how particles behave now, but a more holistic theory that includes the entire initial state of the universe, going back to the Big Bang and tracing out how all particles evolved to their present state, then you could place restrictions on how that system develops such that it always reproduces the correlations we see, with hidden variables that are indeed Lorentz invariant.

    The obvious problem, though, is that it would never actually be possible to have such a theory: we cannot know the complete initial configuration of all particles in the universe, so it’s not obvious how you would derive the correlations between particles beforehand. You would instead have to just assume they already “know” how to be correlated, which makes them equivalent to nonlocal hidden variable theories, and thus it is not entirely clear how they could be made Lorentz invariant. I’m not sure anyone has ever put forward a complete model in this framework either; the same issue applies to nonlocal hidden variable theories.



  • bunchberry@lemmy.world to 196@lemmy.blahaj.zone · damn…
    edited · 2 months ago

    You shouldn’t take it that seriously. MWI has a lot of zealots in the popular media who act like it’s a proven fact, kind of like some String Theorists do, but it is actually rather dubious.

    MWI claims it is simpler because it gets rid of the Born rule and so has fewer assumptions, but the reason the Born rule is in QM is because… well, it’s needed to actually predict the right results. You can’t just throw it out. It’s also impossible to derive the Born rule without some sort of additional assumption, and there is no agreed-upon way to do this.[1]
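    For reference, the Born rule being discussed is the postulate that for a system in state |ψ⟩, the probability of observing outcome i is the squared magnitude of the corresponding amplitude:

    ```latex
    P(i) = |\langle i \mid \psi \rangle|^{2}
    ```

    This is exactly what MWI has to recover from the unitary dynamics alone, which is where the derivation disputes below come in.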

    This makes MWI actually more complicated than traditional quantum mechanics because they have to add different arbitrary assumptions and then add an additional layer of mathematics to derive the Born rule from it, rather than assuming it. These derivations also tend to be incredibly arbitrary because the assumptions you have to make to derive it are always chosen specifically for the purpose of deriving the Born rule and don’t seem to make much sense otherwise, and thus are just as arbitrary as assuming the Born rule directly.[2] [3]

    If you prefer a video, the one below discusses various “multiverse” ideas including MWI and also discusses how it ultimately ends up being more mathematically complicated than other interpretations of QM.

    https://www.youtube.com/watch?v=QHa1vbwVaNU

    MWI also makes no sense for a separate reason. If you consider the electromagnetic field for example, how do we know it exists? We know it exists because we can see its effect on particles. If you drop some iron filings around a magnet, it conforms to the shape of a field, but ultimately what you are seeing is the iron filings and not the field itself, but the effects of the field. Now, imagine if someone claimed the iron filings don’t even exist, only the field. You’d be a bit confused because, well, you only know the field exists because of its effects on the filings. You can’t see the field, only the particles, so if you deny the particles, then you’re just left in confusion.

    This is effectively what MWI does. We live in a world composed of spacetime containing particles, yet wave functions describe, well, waves made of nothing that exist in an abstract space known as Hilbert space. Schrodinger’s derivation of his famous wave equation is based on observing the behavior of particles. MWI denies particles even exist and everything is just waves in Hilbert space made of nothing, which is very bizarre because then you would be effectively claiming the entire universe is composed of something entirely invisible. So how does that explain everything we see?

    [I]t does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world. The Many Worlds interpretation does not explain them clearly. It is not enough to know the ψ wave and Schrödinger’s equation in order to define and use quantum theory: we need to specify an algebra of observables, otherwise we cannot calculate anything and there is no relation with the phenomena of our experience. The role of this algebra of observables, which is extremely clear in other interpretations, is not at all clear in the Many Worlds interpretation.

    --- Carlo Rovelli, Helgoland: Making Sense of the Quantum Revolution

    The philosopher Tim Maudlin has a whole lecture you can watch below on this problem, pointing out how MWI makes no sense because nothing in the interpretation includes anything we can actually observe. It quite literally describes a whole universe without observables.

    https://www.youtube.com/watch?v=us7gbWWPUsA

    Not to rain on your parade or anything if you are just having fun, but there is a lot of misinformation on websites like YouTube painting MWI as more reasonable than it actually is, so I just want people to be aware.


  • The traditional notion of cause and effect is not something all philosophers even agree upon. Many materialist philosophers have largely rejected the notion of simple cause-and-effect chains that go back to a “first cause” since the 1800s, and that idea is still pretty popular in some eastern countries.

    For example, in China they teach “dialectical materialist” philosophy as part of the required “common core” in universities for any degree, and that philosophical school sees cause and effect as, in a sense, dependent upon point of view: describing an effect as having a particular cause is just a way of looking at things, and the same relationship under a different point of view may in fact reverse what is considered the cause and the effect, viewing the effect as the cause and vice-versa. Other points of view may even ascribe entirely different things as the cause.

    It has a very holistic view of the material world so there really is no single cause to any effect, so what you choose to identify as the cause is more of a label placed by an individual based on causes that are relevant to them and not necessarily because those are truly the only causes. In a more holistic view of nature, Laplacian-style determinism doesn’t even make sense because it implies nature is reducible down to separable causes which can all be isolated from the rest and their properties can then be fully accounted for, allowing one to predict the future with certainty.

    However, in a more holistic view of nature, it makes no sense to speak of the universe being reducible to separable causes as, again, what we label as causes are human constructs and the universe is not actually separable. In fact, the physicist Dmitry Blokhintsev wrote a paper in response to one by Albert Einstein, criticizing Einstein’s distaste for quantum mechanics as based on his adherence to the notion of separability, which stems from Newtonian and Kantian philosophy, something which dialectical materialists, as Blokhintsev self-identified, had rejected on philosophical grounds.

    He wrote this paper many years prior to the publication of Bell’s theorem, which showed that giving up on separability (and by extension absolute determinism) really is a necessity in quantum mechanics. Blokhintsev would then go on to write a whole book called The Philosophy of Quantum Mechanics, in which he argues that separability in nature is an illusion and that under a more holistic picture absolute determinism makes no sense, again purely on materialist grounds.

    The point I’m making is ultimately just that a lot of the properties people try to ascribe to “materialists” or “naturalists,” and then try to show quantum mechanics contradicts, belong to large umbrella philosophies with many different sects, and there have been materialist philosophers criticizing absolute determinism as even being a meaningful concept since at least the 1800s.




  • I have never understood the argument that QM is evidence for a simulation because the universe is using fewer resources, or something like that, by not “rendering” things at that low a level. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity can grow exponentially faster than that of classical systems. That’s the whole reason for building quantum computers: it is so much more computationally expensive to simulate this that it is more efficient just to have a machine that can do it. The laws of physics at a fundamental level get far more complex and far more computationally expensive, not the reverse.