The people on this discussion board are certainly not representative of potential IRA contributors. Do you have any objective evidence for your claim?
This is a case study in "get woke, go broke."
Update (1625ET): And there it is; the Trump administration has designated Anthropic a ‘supply-chain risk’
In a Friday evening post on X, Secretary of War Pete Hegseth said that this week, “Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” adding that Anthropic and CEO Dario Amodei have “chosen duplicity.”
Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
The fact is that the vast majority of low-income persons not contributing to an IRA aren’t doing so because they don’t understand it, or because they can’t afford the contributions. A $1k match isn’t going to make them any more comfortable with something they don’t understand, and it won’t do anything to help them afford the contributions.
I suspect the number of people holding out for a slightly sweeter deal before starting an IRA is little more than a rounding error. People inclined to respond to the incentives are already taking advantage of the existing ones.
A corporation asked the government to agree to its terms of service, which are not woke; they asked the government to follow the law. The government said it already follows that law, and yet it declined to agree to the ToS. WHY??? The government threw a fit like a baby. If you don’t agree to the ToS, you don’t get candy. I don’t see any problem here.
And they aren’t going broke, they have the best AI models right now. That’s probably the reason the government wants it so much.
This is SecWar Hegseth’s response to Wokethropic
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
Literally throwing a fit like a baby. Have you ever heard of a government reacting so publicly to a business that doesn’t bend the knee? I can’t recall anything like this.
To be fair, that response offers nothing tangible to support any of what it says. It’s just grandstanding, basically taking a page from the typical liberal playbook.
Maybe someone could detail what it is about the ToS that is so arrogant and such a threat to safety?
Seems serious to me
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
On Friday, OpenAI said it would also draw the same red lines as Anthropic: no AI for mass surveillance or autonomous lethal weapons.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
That’s what this fight is about. The government stated in one way or another that mass surveillance is already illegal and that there will always be a human in the loop, but they still threw a fit.
The fight is about who will enforce the agreement. Here’s what Secretary Hegseth said in his post.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
Well, yeah, they are serious about the grandstanding. Nowhere do they even hint at what exactly Anthropic is demanding or requiring. Hegseth is just throwing a tantrum.
See Hegseth’s post. Anthropic wants veto power over how customers like the Department of War use their software. If someone sells you a car and makes you agree that you will not break the law, does that give the vendor the right to put someone in the car to enforce the agreement?
Yeah, that’s the same interpretation as when liberals insist that the true objective of people who support bathroom laws is to erase trans people from existence.
I expect the actual terms, not vague rhetoric designed to make it seem horrible without ever noting what “it” is.
Sorry, not following this statement. What does trans have to do with AI software?
Apparently OpenAI was able to come to an agreement satisfactory to them and their customer, the US government.
This is how the free enterprise system works. A vendor and a customer come to a mutually beneficial agreement. No one is forcing anthropic to do business with anyone. But their potential customers can also act in their best interest.
Anthropic seems to think there are plenty of woke customers who will use their product. Let’s see if they are right.
Claiming to know their “true objective” instead of referencing what they actually say/do/think/want.
I’m guessing the reality of what they think is orders of magnitude less inflammatory. Hence the grandstanding. Why detail the differences when you can prod everyone to just assume the extremes? As I noted, that’s the typical liberal playbook.
More get woke, go broke
Asset management firm Vanguard is paying $29.5 million to multiple states to settle an antitrust lawsuit over its environmental, social, and governance investing goals.
The suit, filed by Texas and a dozen other states, accused Vanguard, BlackRock, and State Street of conspiring to raise coal prices by pushing climate activism onto companies supplying the resource. The investment management firms allegedly pressed such companies to adopt green energy policies that would inevitably lead to higher energy prices, thereby violating antitrust laws.
Someone’s being dishonest about what this is really about. If it’s illegal, and the government would never do any mass surveillance (NSA surveillance 2.0, better with AI now) or use AI in autonomous weapons, why not agree to these terms of service? It’s a little like me doing a credit card application and not paying attention to what APR they’re going to charge me if I carry a balance. I agree to whatever APR because I’ve never carried a balance in my life.
Hence my conclusion is they just want to use the best AI on the market without any liability as to legality, and without respecting a company’s right to refuse to be party to just about any illegal activity. Sure, they’ll promise not to use it for illegal activities. They have an obedient DoJ, and they’ll deal with any whistleblowers who report them when they do break the law behind the American public’s back. What are they gonna do after the fact? Apologize? Impeach? LOL, yeah right. How will they make amends to those killed by autonomous weapons making unethical/illegal decisions?
It stinks of a power play to bully companies into providing AI without any restrictions. If you go down the line far enough, I’m sure one unscrupulous company out there will take them up on the offer.
Maybe more than one?
AFAIK xAI has not been heard from.
My conclusion is they don’t want some AI company arbitrarily determining, in real time, what is and isn’t appropriate use. Not that they intend to use the tech for illegal purposes, but that they’d be subject to the provider’s definition/interpretation of “illegal.” Which these days is often based on nothing more than “we don’t think it should be like that.”
It’s less bullying than it is preventing these corporations from retaining their own bully card.
That statement was pathetic, but the underlying sentiment wasn’t too far off. A little less grandstanding and a lot more rational explaining would go a long way.
From what I’ve read, OpenAI got the exact same terms that Anthropic was demanding. Exactly the same two red line items – no mass surveillance, no autonomous war machines.