• 0 Posts
  • 15 Comments
Joined 9 months ago
Cake day: March 9th, 2025



  • I’ve also seen it that way and have been coached by my psychologist on it. Ultimately, for me, it was best to set an expiration date. The date on which I could finally do it with minimal guilt. This actually had several positive impacts in my life.

    First, I quit using suicide as a first or second resort when coping. Instead it has become more of a fleeting thought, as I know I’m “not allowed” to do so yet (while obviously still lingering, as seen by my initial comment). Second was giving me a finish line: a finite date where I knew the pain would end (chronic conditions are the worst). Third was a reminder that I only have X days left, so make the most of them. It turns death from this amorphous thing into a clear-cut “this is it”. I KNOW when the ride ends, down to the hour.

    The caveat to this is the same as literally everything else in my life: I reserve the right to change my mind as new information is introduced. I’ve made a commitment to not do it until the date I’ve set, but as the date approaches, I’m not ruling out examining the evidence as presented and potentially pushing it out longer.

    A LOT of peace of mind here.



  • I’ve talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I’ve low-key wanted to kill myself since I was 8 years old. For me it’s just a part of life. For others it’s usually REALLY uncomfortable to talk about without them wanting to tell me how wrong I am for thinking that way.

    Yeah I don’t trust it, but at the same time, for me it’s better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.



  • I asked OpenAI the same. Only without the snark, and noted that statements from US politicians and OpenAI press releases should not be considered evidence without supporting facts:

    Great question—and a healthy dose of skepticism.

    If we assume U.S. lawmakers may be misrepresenting the situation or overstating the threat (intentionally or otherwise), then it’s worth asking: What counter-evidence or independent information exists that disputes the claims against DeepSeek?

    Here’s what we can currently analyze:

    1. OpenAI’s Role and Statements

    • The accusation is that DeepSeek used OpenAI data without authorization.
    • OpenAI has not released detailed technical evidence (as of now) proving DeepSeek “stole” data.
    • It is also unclear what “unauthorized use” means here—was it scraping publicly available content generated by ChatGPT? That’s not the same as hacking or violating terms.

    Counterpoint: Unless OpenAI provides forensic evidence (e.g., logs showing data scraping from banned accounts or circumvention of APIs), their claim is not independently verifiable.

    2. Open-source AI Training Practices

    • DeepSeek R1 and other models are trained on publicly available data, much like LLaMA, Falcon, Mistral, etc.
    • Many open-source LLMs train on similar corpora, including data from books, code repositories, and even internet crawls, which may contain ChatGPT outputs.
    • No proof has been provided that DeepSeek used proprietary datasets beyond what’s publicly available or scraped from the internet.

    Counterpoint: If DeepSeek simply trained on web data that happened to contain ChatGPT responses (which are abundant online), that wouldn’t constitute theft by typical AI research standards.

    3. DeepSeek’s Own Transparency

    • DeepSeek has published detailed model cards and training parameters for R1, a transparency move rare among Chinese AI firms.
    • This suggests a desire to appear cooperative or aligned with global AI openness norms—not necessarily what a covert “spy tool” would do.

    Counterpoint: Spyware tools or state-controlled AI systems don’t usually go open source and document their architectures.

    4. No Independent Audit or Third-Party Verification

    • As of now, there’s no independent audit of DeepSeek’s model, training corpus, or infrastructure.
    • The accusations rest heavily on political framing, not technical forensics released to the public.

    5. Broader Pattern of U.S. Tech Nationalism

    • U.S. lawmakers have a history of labeling Chinese tech firms as threats without providing concrete, public-facing evidence (Huawei, TikTok, etc.).
    • This doesn’t mean the threats are false, but it does mean we should demand proof beyond committee soundbites.

    TL;DR – Is there counter-evidence?

    Not conclusive counter-evidence, but there’s a lack of compelling public proof supporting the claim. The case against DeepSeek is built on:

    • OpenAI’s statements (without shared evidence),
    • Political concerns about China,
    • and suspicions rooted in strategic competition.

    That’s not enough to say DeepSeek is innocent—but it’s not enough to prove guilt either.