• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: October 11th, 2024

  • And you’ve never hit a cat that was hiding under your car? Are you sure? How can you prove it? Have you gotten out each time you drove away to make sure there wasn’t a cat left behind?

    I don’t find this convincing. Have you asked the Waymo taxi the same thing? I can check if I’ve run over a cat, and I’m naturally inclined to care. I can’t say the same about a robot. Especially one that isn’t open source.

    If they can avoid animals now (which they can, and do), they can improve that detection and/or logic for cats that have disappeared under the car and not reappeared. That’s not even an assumption, much less a “big” one.

    I develop software for a living. It is a big assumption to think that this will be fixed with a software update. I don’t know why you act as if it’s a sure thing.

    I personally don’t like the idea of driverless cars.

    And there is your bias.

    Yes I am biased against driverless cars. They are a new technology that is being tested without our consent, and they are dependent on corporations rather than humans being held accountable when things go wrong (something that we currently struggle with as a society). The fact that you think I should default to the contrary is strange to me.

    No one argues self-driving cars are “needed.” The point is, they are a significant improvement over humans when developed correctly.

    I’d rather gravitate towards a driverless society where we invest in public transit and infrastructure rather than further ingraining cars into our society and adopting private companies (who use us as unwitting beta testers) as the solution to our problems.

    How are people this fucking stupid? Really? I don’t want you to answer that. I would need some rational and intelligent discussion on the subject.

    You need to calm down. Attacking my intelligence isn’t helping your argument. I think I’m done engaging with you now.


  • This, at least, can likely be remedied fleet-wide and permanently with a software fix.

    Oh? That seems like a pretty big assumption. Even if the company themselves said that a software update could fix running over a living creature, I would be skeptical.

    These people are just looking for an excuse to rail against automation

    Excuse or valid criticism from a negatively affected community? I personally don’t like the idea of driverless cars. I don’t think they are at all necessary to society. I don’t see them as inevitable infrastructure or even a good path forward. I don’t think my stance is unreasonable.

    as if a human driver would have definitely seen the cat.

    There are plenty of cats in my neighborhood and I’ve never hit one. I’d expect an automated vehicle to drive better than a human, not worse.

    You talk about people “railing against automation” but is it more productive to make reflexive excuses for its failures? The fact of the matter (IMO) is that we shouldn’t be beta test subjects for these companies and this new technology.

    Also, keep cats inside.

    This I can agree with.

  • Political intervention is what started Google, so I don’t see the problem.

    How about taking responsibility and just not using services that require it.

    Google has shaped the web into what it is over decades so that they could maintain their position of power. This is the very essence and purpose of a monopoly. Yet here you are trying to blame anything but the monopoly for the monopoly’s existence.

    Nothing like convincing hundreds of millions of people to abandon a company rather than put any pressure on the small group of greedy people who own it.


  • my experience was that Wikipedia was specifically called out as being especially unreliable and that’s just nonsense.

    Let me clarify then. It’s unreliable as a cited source in academia. I’m drawing parallels and criticizing the way people use ChatGPT, i.e., taking it at face value with zero caution and using it as if it’s a primary source of information.

    Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how’s it different from “some guy told me…”? I think your professors did a bad job of teaching you about how to read sources. Maybe because they didn’t know themselves. :-(

    Did you read beyond the sentence that you quoted?

    Here:

    I can get summarized information about new languages and frameworks really quickly, and then I can dive into the official documentation when I have a high level understanding of the topic at hand.

    Example: you’re a junior developer trying to figure out what this JavaScript syntax is const {x} = response?.data. It’s difficult to figure out what destructuring and optional chaining are without knowing what they’re called.

    With ChatGPT, you can copy and paste that code and ask “tell me what every piece of syntax is in this line of JavaScript.” Then you can check the official docs to learn more.
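    For anyone curious, here’s a minimal sketch of what those two features actually do (the `response` object and its shape are made up for illustration):

    ```javascript
    // Hypothetical API response used only for illustration.
    const response = { data: { x: 42 } };

    // Optional chaining (?.): if `response` were null or undefined,
    // `response?.data` would evaluate to undefined instead of throwing.
    // Destructuring ({x} = ...): pulls the `x` property off the object
    // on the right-hand side into a local constant named `x`.
    const { x } = response?.data;
    console.log(x); // 42

    // With a missing response, optional chaining yields undefined, and
    // destructuring undefined throws a TypeError; a `?? {}` fallback
    // is one common way to guard against that.
    const missing = undefined;
    const { x: fallbackX } = missing?.data ?? {};
    console.log(fallbackX); // undefined
    ```

    Knowing the names “destructuring,” “optional chaining,” and “nullish coalescing” is what makes the official MDN docs searchable in the first place.
    
    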


  • I think the academic advice about Wikipedia was sadly mistaken.

    Yeah, a lot of people had your perspective about Wikipedia while I was in college, but they are wrong, according to Wikipedia.

    From the link:

    We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn’t be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.

    I personally use ChatGPT like I would Wikipedia. It’s a great introduction to a subject, especially in my line of work, which is software development. I can get summarized information about new languages and frameworks really quickly, and then I can dive into the official documentation once I have a high-level understanding of the topic at hand. Unfortunately, most people do not use LLMs this way.



    Throughout most of my years of higher education, as well as K-12, I was told that citing Wikipedia was forbidden. In fact, many professors/teachers would automatically fail an assignment if they felt you were using Wikipedia. The claim was that the information was often inaccurate, or changing too frequently to be reliable. This reasoning, while irritating at times, always made sense to me.

    Fast forward to my professional life today. I’ve been told on a number of occasions that I should trust LLMs to give me an accurate answer. I’m told that I will “be left behind” if I don’t use ChatGPT to accomplish things faster. I’m told that my concerns about accuracy and ethics surrounding generative AI are simply “negativity.”

    These tools are (abstractly) referencing random users on the internet as well as Wikipedia and treating both as legitimate sources of information. That seems crazy to me. How can we trust a technology that just references flawed sources from our past? I know there are ways to improve accuracy with things like RAG, but most people are hitting the LLM directly.

    The culture around Generative AI should be scientific and cautious, but instead it feels like a cult with a good marketing team.