Asthfghl ([personal profile] asthfghl) wrote in [community profile] talkpolitics, 2025-07-24 02:58 pm

The only outcome of the Ukraine war

The war in Ukraine will end when Putin loses, or at least when he realizes he can't win. Many in the West still mistakenly believe a deal can be struck with him. But we've got to realize Putin isn't just waging a war over territory: he's trying to erase Ukrainian identity, destroy post-WW2 international norms, and challenge the democratic ideals that threaten his regime.

Sure, Trump's relationship with Putin is complex. He admires unchecked power and sees Putin as a kind of ally, even a friend. Over decades, Trump has benefited financially from Russian ties; this isn't a conspiracy theory, it's well documented. And while some in Washington are pushing for stronger sanctions, there are still people in his orbit looking to cut deals with Russia for personal or financial gain.

Putin, meanwhile, miscalculated the whole thing right from the start. He underestimated Ukraine's resistance and the West's willingness to support it. Still, his strategy has always had three parts:

1. Rebuild a Russian empire, with Ukraine forcibly brought back into the fold.

2. Crush the democratic and anti-corruption ideas born from the 2014 Maidan revolution.

3. Undermine the global rules-based order to build a world where might makes right.

Countries like Hungary and Slovakia have taken different stances, largely due to internal corruption, economic dependence, or political self-interest. Hungary's leader, Viktor Orban, has openly chosen to side with Russia, even while facing protests at home. The EU needs to rethink its approach to such members, maybe even consider suspending their voting rights - that could be a start. They're a hindrance to Europe's geopolitical position: useful idiots, Russia's Trojan horses.

It's no surprise that Putin doesn't understand the power of democratic ideas or the depth of Ukraine's national identity. But the West must understand this: the war ends only when Putin fails. If he doesn't, the rest of us in Eastern Europe are next.
nairiporter ([personal profile] nairiporter) wrote in [community profile] talkpolitics, 2025-07-23 02:34 pm

My 2 cents on the monthly subject: AI

AI is changing our world fast - helping with tasks, generating text, even thinking for us. It's tempting and convenient, and that's exactly the danger: it feeds our natural laziness. Especially for younger people, it's easy to stop making effort when a machine can do it all.

But AI isn't the enemy. Like every big invention - the printing press, photography, the internet - it causes fear at first. Eventually, though, we adapt and find balance. The key is to stay active as thinkers, creators and humans. If we do, AI can amplify our work, not replace us.

In fields like history, AI might even help us face uncomfortable truths. It can detect lies, patterns and gaps in the stories we've been told. But it won't rewrite history for us - it'll just make it harder to ignore the facts.

AI doesn't bring truth - it brings pressure for truth. And whether we use it to grow or to hide will say more about us than about the machine.
luzribeiro ([personal profile] luzribeiro) wrote in [community profile] talkpolitics, 2025-07-18 01:39 pm

Friday LOL. Oh, that damn Windows!

It's crap, I know, but we can't live without it. Or can't we?

Anyway, Windows can be annoying AND funny at the same time, depending on the angle you look at it from (and your current mood). Want examples? You insist on examples? Okay, you asked for it:

[images not preserved]

And many MORE!
garote ([personal profile] garote) wrote in [community profile] talkpolitics, 2025-07-16 02:35 pm

Originality is the art of concealing your source

Late last year I wrote this. Since it's on-topic, I'd like to see what everyone here thinks...

Search engines used to take in a question and then direct the user to some external data source most relevant to the answer.

Generative AI in speech, text, and images is a way of ingesting large amounts of information specific to a domain and then regurgitating synthesized answers to questions posed about that information.  This is basically the next evolutionary step of a search engine.  The main difference is that the answer is provided by an in-house synthesis of the external data, rather than a simple redirect to the external data.
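
To make that contrast concrete, here's a deliberately toy sketch in Python - the pages, URLs, and function names are all hypothetical stand-ins, not any real search or model API. The classic engine hands back pointers to external sources; the generative one hands back an in-house blend of those sources and no pointers at all.

    # Toy contrast between the two models. PAGES stands in for the open web;
    # no real search engine or AI is involved.
    PAGES = {
        "https://example.org/a": "jazz began in new orleans around 1900",
        "https://example.org/b": "jazz spread north along the mississippi",
    }

    def classic_search(query: str) -> list[str]:
        """Return URLs of matching pages; the user then visits the sources."""
        terms = set(query.lower().split())
        return [url for url, text in PAGES.items() if terms & set(text.split())]

    def generative_answer(query: str) -> str:
        """Return an in-house blend of the ingested pages; no URLs appear."""
        # A real model would condition on the query; this toy just blends
        # everything it has absorbed and returns it without attribution.
        return " ... ".join(PAGES.values())

    print(classic_search("where did jazz begin"))     # the sources get the visit
    print(generative_answer("where did jazz begin"))  # the middleman keeps it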

This is being implemented right now on the Google search page, for example.  Calling it a search page is now inaccurate.  Google vacuums up information from millions of websites, then regurgitates an answer to your query directly.  You never perform a search.  You never visit any of the websites the information was derived from.  You are never aware of them, except in the case where Google is paid to advertise one to you.

If all those other pages didn’t exist, Google's generative AI answer would be useless trash.  But those pages exist, and Google has absorbed them.  In return, Google gives them ... absolutely nothing, but still manages to stand between you and them, redirecting you to somewhere else, or ideally, keeping you on Google permanently.  It's convenient for you, profitable for Google, and slow starvation for every provider of content or information on the internet.  Since its beginning as a search engine, Google has gone from middleman, to broker, to consultant.  Instead of skimming some profit in a transaction between you and someone else, Google now does the entire transaction, and pockets the whole amount.

Reproducing another's work without compensation is already illegal, and has been for a long time.  The only way this new process stays legal is if the work it ingests is sufficiently large or diluted that the regurgitated output looks different enough (to a human) that it does not resemble a mere copy, but an interpretation or reconstruction.  There is a threshold below which any reasonable author or editor would declare plagiarism, and human editors and authors have collectively learned that threshold over centuries.  Pass that threshold, and your generative output is no longer plagiarism. It's legally untouchable.
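
As a purely illustrative aside - this is not how courts or editors actually decide, and the cutoff value and function names below are made up - the mechanical analogue of that threshold is a similarity score with a cutoff, e.g. Jaccard overlap of three-word shingles:

    # Toy plagiarism check: Jaccard overlap of 3-word shingles, with an
    # arbitrary cutoff standing in for the human/legal threshold.
    def shingles(text: str, n: int = 3) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def looks_like_a_copy(original: str, output: str, cutoff: float = 0.5) -> bool:
        a, b = shingles(original), shingles(output)
        overlap = len(a & b) / len(a | b) if (a | b) else 0.0
        return overlap >= cutoff  # above: "mere copy"; below: "interpretation"

Dilute the ingested material enough and the score drops below any cutoff you pick - which is exactly the escape hatch described above.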

An entity could ingest every jazz performance given by Mavis Staples, then churn out a thousand albums "in the style" of Mavis Staples, and would owe Mavis Staples nothing, while at the same time reducing the value of her discography to almost nothing.  An entity could do the same for television shows, for novels - even non-fiction books - even academic papers and scientific research - and owe the creators of these works nothing, even if they leveraged infinite regurgitated variations of the source material for their own purposes internally.  Ingestion and regurgitation by generative AI is, at its core, doing for information what the mafia needs to do with money to hide it from the law:  It is information laundering.

Imitation is the sincerest form of flattery, and there are often ways to leverage imitators of one's work to gain recognition or value for oneself. These all rely on the original author being able to participate in the same marketplace that the imitators are helping to grow. But what if the original author is shut out? What if the imitators have an incentive to pretend that the original author doesn't exist?

Obscuring the original source of any potential output is the essential new trait that generative AI brings to the table.  Wait, that needs better emphasis:  The WHOLE POINT of generative AI, as far as for-profit industry is concerned, is that it obscures original sources while still leveraging their content.  It is, at long last, a legal shortcut through the ethical problems of copyright infringement, licensing, plagiarism, and piracy -- for those already powerful enough to wield it.  It is the Holy Grail for media giants.  Any entity that can buy enough computing power can now engage in an entirely legal version of exactly what private citizens, authors, musicians, professors, lawyers, etc. are discouraged or even prohibited from doing. ... A prohibition that all those individuals collectively rely on to make a living from their work.

The motivation to obscure is subtle, but real.  Any time an entity provides a clear reference to an individual external source, it is exposing itself to the need to reach some kind of legal or commercial or at the very least ethical negotiation with that source.  That's never in their financial interest.  Whether it's entertainment media, engineering plans, historical records, observational data, or even just a billion chat room conversations, there are licensing and privacy strings attached. But, launder all of that through a generative training set, and suddenly it's ... "Source material? What source material? There's no source material detectable in all these numbers. We dare you to prove otherwise." Perhaps you could hire a forensic investigator and a lawyer and subpoena their access logs, if they were dumb enough to keep any.

An obvious consequence of this is that, to stay powerful or become more powerful in the information space, these entities must deliberately work towards the appearance of "originality" while at the same time absorbing external data, which means increasing the obscurity of their source material.  In other words, they must endorse and expand a realm of information where the provenance of any one fact, any measured number, any chain of reasoning that leads outside their doors, cannot be established.  The only exceptions allowable are those that do not threaten their profit stream, e.g. references to publicly available data.  For everything else, it's better if they are the authority, and if you see them as such.  If you want to push beyond the veil and examine their reasoning or references, you will get lost in a generative hall of mirrors. Ask an AI to explain how it reached some conclusion, and it will construct a plausible-looking response to your request, fresh from its data stores. The result isn't what you wanted. It's more akin to asking a child to explain why she didn't do her homework, and getting back an outrageous story constructed in the moment. That may seem unfair, since generative AI does not actually try to deceive unless it's been trained to. But the point is ... if it doesn't know, how could you?

This economic model has already proven to be ridiculously profitable for companies like OpenAI, Google, Adobe, et cetera.  They devour information at near zero cost, create a massive bowl of generative AI stew, and rent you a spoon.  Where would your search for knowledge have taken you, if not to them?  Where would that money in your subscription fee have gone, if not to them?  It's in the interest of those companies that you be prevented from knowing. Your dependency on them grows. The health of the information marketplace and the cultural landscape declines. Welcome to the information mafia.

Postscript:

Is there any way to avert this future? Should we?

We thoroughly regulate the form of machines that transport humans, in order to save lives. We regulate the content of public school curriculums according to well-established laws, for example those covering the Establishment Clause of the First Amendment. So regulating devices and regulating information content is something we're used to doing.

But now there is a machine that can ingest a copyrighted work, and spit out a derivation of that work that leverages the content, while also completely concealing the act of ingesting. How do you enforce a law against something that you can never prove happened?
luzribeiro ([personal profile] luzribeiro) wrote in [community profile] talkpolitics, 2025-07-16 03:47 pm

On the monthly subject: "AI Regulation: Striking the Balance"

I'm all for smart guardrails that help us harness AI safely without suffocating innovation. Now, the US has been highly reactive (with over 550 AI‑related bills in 45 states) but lacks cohesive federal direction. Meanwhile, the EU’s sweeping “AI Act” sets high standards but could overburden smaller innovators:
https://www.wired.com/story/plaintext-sam-altman-ai-regulation-trump/
https://www.mdpi.com/2078-2489/14/12/645
https://time.com/7213096/uk-public-ai-law-poll/

So, how about:

Targeted regulation: Instead of painting all AI with the same brush, focus on where the risks lie, like bias in hiring tools or misuse in facial recognition.

Outcome over technology: Don’t regulate the tech itself; regulate its applications.

Enforceable rules: We need real teeth - clear accountability, not toothless charters.

Bottom line: What we need are fine-tuned, enforceable, risk-adaptive policies, so AI can thrive while protecting people.

Thoughts?
oportet ([personal profile] oportet) wrote in [community profile] talkpolitics, 2025-07-11 02:52 pm

nonstop non-story nonsense

So....

The 'list' of Epstein 'clients' that existed before never existed.

The security camera footage outside his cell that didn't exist before - does exist (and rumors are it's been altered).

Now there are also rumors of a little they-go-or-I'm-gone standoff between a few higher-ups in the Trump administration (Bondi vs. Bongino).

Someone resigning or getting fired seems inevitable at this point - who do you think it will be?

Who do you believe Epstein was? A disgraced financier with a sick side job? CIA? Mossad? All of the above?

If you were an advisor to Trump, would you advise him to say nothing, do nothing, and wait for it to go away?

Or is this not going away?