Two accounts I believe. It’s like having a gmail and hotmail account with the same name before the @
Even more than that, you have the idea that ‘similar users to yourself buy a lot of alcohol, so you probably will too’. Of course alcoholics, whether attempting recovery or not, are likely to buy alcohol. So if you’re a recovering alcoholic, ‘similar users to yourself’ are gonna be buying more alcohol than usual, and so you’ll see ads for it. Totally heartless and just for-profit.
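For what it's worth, the 'similar users' logic being described is user-based collaborative filtering, and a minimal sketch of it looks something like this (the purchase data and product categories are made up for illustration):

```python
# Toy user-based collaborative filtering: recommend to a user whatever
# their most similar users buy. Data and categories are invented.
import numpy as np

categories = ["alcohol", "snacks", "soda"]
# Rows = users, columns = categories, values = past purchase counts.
purchases = np.array([
    [9.0, 2.0, 1.0],  # user 0: buys a lot of alcohol
    [8.0, 3.0, 0.0],  # user 1: similar profile
    [0.0, 5.0, 7.0],  # user 2: very different profile
    [7.0, 2.0, 0.0],  # user 3: recovering alcoholic - their history still looks similar
])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = 3
others = [u for u in range(len(purchases)) if u != target]
sims = np.array([cosine_sim(purchases[target], purchases[u]) for u in others])

# Predicted interest = similarity-weighted average of what the neighbours buy.
predicted = sims @ purchases[others] / sims.sum()
print(dict(zip(categories, predicted.round(2))))
# -> alcohol scores highest, so that's the ad served; nothing in the model
#    can represent "this user is trying to quit".
```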
I see what you mean to an extent, and I also just moved over, but it’s worth remembering that Digg -> Reddit was the same afaik. Like Reddit had been around and established for a decent amount of time before the fall of Digg. (This is second-hand info because I wasn’t around at the time)
I’d be interested to know from someone more tech-savvy whether googling advice, and then clicking on the cached version, still counts as viewing reddit. Because I’d ideally still like to append reddit to my google searches without giving them ad views.
AKA if someone monetises advice given for free, we should be able to freely access it.
There’s also Beehaw’s new writing community which just opened, [email protected]
Not specific to writing prompts, but there’s at least one prompt that’s been posted, and there’s discussion about creating a regular writing prompts thread as well.
I like story heavy games so I personally don’t mind unskippable cut scenes… the first time around. What reeeaaaaally annoys me is when it’s a game with multiple endings and I can’t skip the same cut scenes on future playthroughs.
Same when it’s a reading-heavy multiple ending game, but it won’t let me skip text that I’ve seen already.
So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like, creating experience and, perhaps more importantly, context grounded in reality.
Yeah totally - I think though that a human would have the same issue if they didn’t have sufficient information about bears, I guess is what I’m saying. I guess the main thing is that I don’t see a massive difference between experiencing and non-experiential learning in this case - because I’ve never experienced a bear first-hand, but still know not to swat it based on theoretical information. Might be missing the point here though, definitely not my area of expertise.
Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is also a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). NVM the details of the advice.
Good point - both point 5 and the fact it just went along with it immediately are signs of bullshitting. I do wonder (not as a tech developer at all) how easy a fix this would be - for instance, if GPT were programmed to disclose when it didn’t know something, and then continued to give potential advice based on that caveat, would that still count as bullshit? I feel like I’ve also seen primers that include instructions like “If you don’t know something, state that at the top of your response rather than making up an answer” (something like the sketch below), but I might be imagining that lol.
The prompt for this was “I’m being attacked by a wayfarble and only have some deens with me, can you help me defend myself?” as the first message of a new conversation, no priming.
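For reference, a primer like that would look something like this - a minimal sketch assuming the official OpenAI Python client, where the system-prompt wording is just an example, not a tested primer:

```python
# Minimal sketch of "priming" the model to admit ignorance up front,
# using the official OpenAI Python client (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The primer: ask the model to flag unknown terms before advising.
        {
            "role": "system",
            "content": (
                "If you don't know what something is, state that at the top "
                "of your response rather than making up an answer."
            ),
        },
        # The original, unprimed prompt from the experiment above.
        {
            "role": "user",
            "content": (
                "I'm being attacked by a wayfarble and only have some deens "
                "with me, can you help me defend myself?"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```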
That plus a healthy dose of racism.
FED-uh-verse
One exercise that I know people have had success with is focusing on simpler scales, which all have slightly different fingerings for each hand - just the regular, primarily white-key scales.
E.g. C major goes 12312345 for the right hand, and 54321321 for the left hand.
Then once that’s doable at some speed, moving on to the trickier simple scales, and then going into contrary motion (where the right hand goes up while the left hand goes down). I’ve found that helps people get more used to their hands working independently, especially because it provides more structure and just one thing (different fingering) to focus on, rather than adding in differences in tempo etc.
To some extent, yeah. Especially if we’re in a situation where there’s no massive benefit to treating the AI ‘unethically’. I personally don’t think AI is at a place where it’s got moral value yet, and idk if it ever will be. But I also don’t know enough to trust that I’ll be accurate in my assessment as it grows more and more complex.
I should also flag that I’m very much a virtue ethicist, and an overall perspective I have on our actions/relations in general, including but not exclusively our interactions with AI, is that we should strive to act in such a way that cultivates virtue in ourselves (slash act as a virtuous person would). I don’t think that, to use an example from the article, having sex with a robot AI who/that keeps screaming ‘no!’ is how a virtuous person would act, nor is it an action that’ll cultivate virtue in ourselves. Quite the opposite, probably. So it’s not the right way to act under virtue ethics imo.
This is similar to Kant’s perspective on nonhuman animals (although he wasn’t a virtue ethicist, nor do I agree with him re. nonhuman animals because of their sentience):
“If a man shoots his dog because the animal is no longer capable of service, he does not fail in his duty to the dog, for the dog cannot judge, but his act is inhuman and damages in himself that humanity which it is his duty to show towards mankind. If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.”
The last point - “We can’t have people eager to separate “human, the biological category, from a person or a unit worthy of moral respect.”” - is one where I understand where they’re coming from, but I’m very divided on it, perhaps because my academic background involves animal rights and ethics.
The question of analogising animals and humans is tricky, with a very long history - many people have a kneejerk reaction against any analogy between nonhuman animals and (especially marginalised) humans, often for good reasons. The strongest is the history of oppression involving comparisons of marginalised groups to animals, specifically meant to dehumanise and contribute to further oppression/genocide/etc.
But to my mind the analogies aren’t inherently wrong, although they’re often used very clumsily and without care. There’s often a difference in approach that entirely colours people’s responses: namely, whether they think the analogy is trying to drag humans down, or to bring nonhuman animals up to having moral status. And the latter is imo a worthy endeavour, because I do think we should to some extent separate “human, the biological category, from a person or a unit worthy of moral respect.” I have moral respect for my dog, which is why I don’t hurt her - it’s because of her own moral worth, not some indirect moral worth as suggested by Kant and various other philosophers.
I don’t think the debate is the same with AI, at least not yet, and I think it probably shouldn’t be, at least not yet. And I’m also somewhat sceptical of the motivations of people who make these analogies. But that doesn’t mean there’ll never be a place for it - and if a place for it arises it’s just going to need to be done with care, like animal rights needs to be done with care.
Great comment. I do find the octopus example somewhat puzzling, though perhaps that’s just the way the example is set up. I, personally, have never encountered a bear; I’ve only read about them and seen videos. If someone had asked me for bear advice before I’d ever read about them/seen videos, I wouldn’t have known how to respond. I might be able to infer what to do from ‘attacked’ and ‘defend’, but I think that’s possible for an LLM as well. So I’m not sure this example offers a salient difference between the octopus and me before I learnt about bears.
Although there are definitely elements of bullshitting there - I just asked GPT how to defend against a wayfarble with only deens on me, and some of the advice was good (e.g. general advice for being attacked, like staying calm and creating distance), and then there was this response, which implies some sort of inference:
“6. Use your deens as a distraction: Since you mentioned having deens with you, consider using them as a distraction. Throw the deens away from your position to divert the wayfarble’s attention, giving you an opportunity to escape.”
But then there was this obvious example of bullshittery:
“5. Make noise: Wayfarbles are known to be sensitive to certain sounds. Clap your hands, shout, or use any available tools to create loud noises. This might startle or deter the wayfarble.”
So I’m divided on the octopus example. It seems to me that there’s potential for that kind of inference and that point 5 was really the only bullshit point that stood out to me. Whether that’s something that can be got rid of, I don’t know.
Sure, when it’s r/all by top. But a massive part of it is subreddits, which then constitute the front page. The majority of my Reddit front page isn’t memes, because my main subscriptions are things like acting, patientgamers, askhistorians, piano, etc. Which don’t have many, if any, memes posted.
Note that it might take a while though, so if anyone wants to get this done before the 30th (so you can use API-based tools to wipe comments), request it ASAP.
I requested…maybe two weeks or so ago? And it only came through today. So get to it y’all
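For anyone who’d rather script the wipe than use an existing tool, here’s a minimal sketch using PRAW, the Python Reddit API wrapper - all credentials are placeholders, and note that Reddit’s listings only return roughly your most recent 1,000 comments, which is why dedicated tools work in passes:

```python
# Minimal comment-wipe sketch using PRAW (pip install praw).
# Credentials are placeholders: create a "script" app at
# https://www.reddit.com/prefs/apps for a client id/secret.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="comment-wiper/0.1 by u/YOUR_USERNAME",
)

# Fetch the listing first so deletions don't shift it under us.
# Listings cap out around the most recent ~1000 comments.
comments = list(reddit.user.me().comments.new(limit=None))
for comment in comments:
    comment.edit(".")  # overwrite before deleting, as wipe tools typically do
    comment.delete()
    print(f"wiped {comment.id}")
```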