• 3 Posts
  • 99 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • If this works, it’s noteworthy. I don’t know whether similar results have been achieved before, because I don’t follow developments that closely, but I expect biological computing to attract a lot more attention in the near-to-mid-term future. Because of its efficiency, and the increasingly tight constraints that environmental pressure imposes on us, I foresee it eventually eclipsing silicon-based computing.

    FinalSpark says its Neuroplatform is capable of learning and processing information

    They sneak that in there as if it’s just a cool little fact, but this should be the real headline. I can’t believe they just left it at that. Deep learning cannot be the future of AI, because it doesn’t facilitate continuous learning. Active inference is a term that will probably be thrown about a lot more in the coming months and years, and, as evidenced by all kinds of living things around us, wetware architectures are highly suitable for instantiating agents that do active inference.
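    To make the contrast concrete: below is a tiny toy sketch (my own illustration, not anything from the article; the two hidden states, the likelihoods, and all the numbers are made up) of the continuous belief-updating half of such a perceive-update-act loop. The point is only that there is no separate “training phase” - beliefs are revised with every single observation. A full active inference agent would additionally pick actions that minimise expected free energy, which I’ve left out to keep it short.

    ```typescript
    // Toy online Bayesian loop: two hypothetical hidden states, one noisy
    // binary observation ("warm" or not). Every cycle updates the belief
    // from the latest observation - learning never stops.
    const pWarmGivenState = [0.9, 0.2]; // assumed observation likelihoods
    let belief = [0.5, 0.5];            // prior over the two hidden states
    const trueState = 0;                // the world the agent is embedded in

    function observeWarm(state: number): boolean {
      return Math.random() < pWarmGivenState[state];
    }

    function updateBelief(sawWarm: boolean): void {
      // posterior ∝ likelihood × prior, recomputed on every observation
      const unnorm = belief.map(
        (p, s) => p * (sawWarm ? pWarmGivenState[s] : 1 - pWarmGivenState[s]),
      );
      const z = unnorm[0] + unnorm[1];
      belief = unnorm.map((u) => u / z);
    }

    for (let t = 0; t < 20; t++) {
      updateBelief(observeWarm(trueState));
      // "act" on the running belief - here just a guess, printed each step
      const guess = belief[0] >= belief[1] ? 0 : 1;
      console.log(`t=${t} belief=[${belief[0].toFixed(2)}, ${belief[1].toFixed(2)}] guess=${guess}`);
    }
    ```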


  • I don’t know about Google, because I don’t use it unless I really can’t find what I’m looking for, but here’s a quick DuckDuckGo search with a very unambiguous and specific question; sampling only the top 9 results, I see 2 that are at all relevant (the 2nd and 5th):

    To answer my question, I first need to mentally filter out 7 of the 9 results visible on my screen, then open both of the relevant ones in new tabs and read through lengthy discussions to find out whether anyone has shared a proper solution.

    Here is the same search using Perplexity’s default model (not Pro, which is a lot better at breaking down queries and including relevant references):

    And I don’t have to verify all the details, because even if some of it is wrong, it is immediately more useful information to me.

    I want to re-emphasise, though, that using LLMs for this can be incredibly frustrating too, because they will often assertively insist on falsehoods and generally act really dumb, so I’m not saying there aren’t pros and cons. Sometimes a simple keyword-based search and manual curation of the results is preferable to the nonsense produced by a stupid language model.

    Edit: I didn’t answer your question about what counts as malicious, but I can give some examples of what I consider malicious, and you may agree that it happens frequently enough:

    • AI generated articles
    • irrelevant SEO results
    • ads/sponsored results/commercial products or services
    • blog spam by people who speak out of ignorance
    • flame bait
    • deliberate disinformation
    • low-quality journalism
    • websites designed to exploit people/optimised for purposes other than to contribute to a healthy internet

    etc.


  • Maybe I can share some insight into why one might want to.

    I hate searching the internet. It’s a massive mental drain for me to try to figure out how to put my problem into the words that others with similar problems will have used before me - it’s my mental processing power wasted on purely linguistic overhead instead of on trying to understand and learn about the problem.

    I hate the (dis-/mis-)informational assault I open myself to by skimming through the results, because the majority of them will be so laughably irrelevant, if not actively malicious, that I become a slightly worse person every time I expose myself.

    And I hate visiting websites. Not only because of all the reasons modern websites suck, but because even if they are a delight in UX, they are distracting me from what I really want, which is (most of the time) information, not to experience someone’s idiosyncratic, artistic ideas for how to organise and present data, or how to keep me ‘engaged’.

    So yes, I prefer a stupid language model that will lie about facts half the time and bastardise half my prompts if it means I can glean a bit of what the internet has to say about something, because I can more easily spot plausible bullshit and discard it, or quickly check its veracity, than I can magic my vague problem into a suitable query only to sift through more ignorance, hostility, and implausible bullshit conjured by internet randos instead.

    And yes, LLMs really do suck even in their domain of speciality (language - because language serves a purpose, and they do not understand it), and they are all kinds of harmful, dangerous, and misused. Given how genuinely ignorant people are of what an LLM really is and what it is really doing, I think it’s irresponsible to embed one the way google has.

    I think it’s probably best to… uhh… sort of gatekeep this tech so that it’s mostly utilised by people who understand the risks. But capitalism is incompatible with niches and bespoke products, so every piece of tech has to be made with absolutely everyone as a target audience.



  • I don’t remember encountering the particular bug they’re describing. I was hoping it was about the behaviour of drag-and-dropping something into the browser, such as those “drop a file here to upload” areas. I am often simply unable to make that work, because instead of the thing being dropped into the webpage’s element, the browser opens the file instead, which is not really something I ever want it to do.
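    For what it’s worth, my guess (an assumption on my part, not something taken from the bug report) is that the “file opens in the browser instead” behaviour usually happens when a page forgets to cancel the browser’s default drag-and-drop handling. A minimal sketch of what a drop zone has to do, with a hypothetical #drop-zone element:

    ```typescript
    // The element id and logging are made up for illustration.
    const dropZone = document.querySelector<HTMLElement>("#drop-zone");

    dropZone?.addEventListener("dragover", (event) => {
      event.preventDefault(); // without this, the 'drop' event never reaches the element
    });

    dropZone?.addEventListener("drop", (event) => {
      event.preventDefault(); // without this, the browser navigates to / opens the dropped file
      const files = event.dataTransfer?.files;
      if (files && files.length > 0) {
        console.log(`received ${files.length} file(s), first: ${files[0].name}`);
      }
    });
    ```

    If either handler is missing, or throws before preventDefault runs, the drop falls through to the browser’s default behaviour, which matches what I keep running into.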




  • Not to mention the problem of what life is even supposed to do beyond a certain point of development. The depressing fact is that there is a finite amount of knowledge to be gained, a finite amount of resources to harvest, a finite diversity of life to contend with or thrive alongside. Once a pocket of life in this massive universe begins to run out of things to do and stagnates, then what? What is there to think about; to feel; to experience?

    There’s little point in exploring space if one knows how this universe works. One knows the rules, knows all the ways it can play out, and there’s no surprise waiting on the other end of any venture one can imagine embarking on.

    That’s my theory. The Great Filter is just depressive boredom. We don’t see other life because by the time a civilisation is able and ready to spend thousands of years travelling through deep space, they’ll have already lost any motivation they might have had to do so.

    I suspect that there’s at best a very short window wherein a species is both knowledgeable enough to dream of space exploration and technologically capable of sending any significant amount of artificial constructions out there.

    Not to mention that anything an alien species might send into interstellar space is unimaginably unlikely to be recorded at precisely the moment it passes another lump of matter - especially if the window is as short as I fear.


  • This comment tells me that you do not fully understand reversible computing, thermodynamics, or what I am trying to say. The snark does not motivate me to be patient or pedagogical, but I’ll still give it a shot.

    By interfering with a closed system as an entity outside of that system (for example, by extracting information through a measurement of any of its component subsystems, such as the position or momentum of a particle), you introduce a dependency of that formerly closed system’s state on your state and that of your environment. By state I mean quantum state, and by interfering I mean entangling yourself (and your environment) with the system, because our reality is fundamentally quantum.

    Entanglement between an observer and a system is what makes it appear to the observer as if the system’s wave function collapsed to a (more) definite state. The observer never experiences the branching out of its own quantum state, even though the wave function of the now combined system describes a superposition of all possible state combinations (the observer’s (and their environment’s) preceding state × the system’s preceding state × the state of whatever catalyst joined them together). The reason an observer never experiences this branching out is that the branches are causally disconnected, so each branch describes a separate reality, with all the other realities becoming forever inaccessible. This inaccessibility entails a loss of information, and that loss of information is irreversible.

    So there you have it. You can never extract useful work from a closed system without losing something in the process. That something is usually called “heat”, but what is lost is not merely heat: it is the potential usefulness of the thing of interest. It really all boils down to information. Entropy increases as information is lost, and all of this is relative to an observer: heat dissipation represents “useless information”, or a loss of useful, extractable energy, as it concerns an entity embedded in a quantum wave function.
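    For anyone who prefers symbols, here is the same chain of reasoning in standard textbook form (nothing below is specific to this thread; α and β are just amplitudes):

    ```latex
    % Measurement as entanglement: the observer O couples to the system S,
    % and the joint state branches rather than collapsing.
    \[
      |O_{\text{ready}}\rangle \otimes \bigl(\alpha|0\rangle + \beta|1\rangle\bigr)
      \;\longrightarrow\;
      \alpha\,|O_0\rangle|0\rangle + \beta\,|O_1\rangle|1\rangle
    \]

    % Tracing out the branches the observer can no longer access leaves a
    % mixed state; the "lost" information shows up as von Neumann entropy.
    \[
      \rho_S = \operatorname{Tr}_O\bigl(|\Psi\rangle\langle\Psi|\bigr), \qquad
      S(\rho_S) = -\operatorname{Tr}\bigl(\rho_S \ln \rho_S\bigr)
                = -|\alpha|^2 \ln|\alpha|^2 - |\beta|^2 \ln|\beta|^2
    \]

    % Landauer's bound ties that lost information to a minimum heat cost:
    % erasing one bit dissipates at least k_B T ln 2.
    \[
      Q \;\ge\; k_B T \ln 2 \quad \text{per bit erased}
    \]
    ```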