Stake Out AI
(as seen in insideBIGDATA
and The Conversation)
༺༻༺༻༺༻
It would have been hard to guess
from this 2022 AI-generated image...

...that the technology behind it would become
a threat of complete automation for artists
just one year later.
And AI companies have a business model
based on causing mass unemployment,
plausibly in the near future.
OpenAI, the creator of ChatGPT,
continues to pursue its founding mission: to create
"highly autonomous systems that
outperform humans at most
economically valuable work."
OpenAI estimates that "it’s conceivable
that within the next ten years,
AI systems will exceed expert skill level
in most domains."
Anthropic, OpenAI's startup rival,
estimates that frontier AI models
"could begin to automate
large portions of the economy,"and that"companies that train
the best 2025/26 models will be
too far ahead for anyone to catch up
in subsequent cycles."
We at Stake Out AI have tracked
the rapid development of
uncontrollable AI systems
and our increasing deference to them.
(Even before the release of ChatGPT!)
As pro bono advisors and volunteers,
our goal is to help
professional associations threatened by AI
coordinate optimally against
AI takeover.
Sign up for a free advising session on
how to work towards a human-led future.
Join the waitlist at the bottom of this page.
Advisor

Dr. Peter S. Park
(Harvard Ph.D. '23, Princeton '17)
is Stake Out AI's Co-Founder and Director.
He also conducts strategy research at MIT,
as a Vitalik Buterin Postdoctoral
Fellow in AI Existential Safety.
During his time at Harvard,
Dr. Park helped organize
the first successful strike
and the first successful contract
by the Harvard Graduate Students Union
(HGSU-UAW),
as the math department's steward.
Overview
Summary: Our best guess is that
first, AI will automate most human jobs;
then, AI will make most decisions in society;
and finally, AI will disempower humanity
and potentially cause our extinction.
Tech companies continue to create
AI systems of ever-increasing capabilities,
despite the fact that they still do not know
how to control AI.
Even today's AI systems are accelerating
scams, propaganda, and radicalization.
Leading AI experts from academia
and industry believe that"mitigating the risk of extinction from AI
should be a global priority alongside
other societal-scale risks
such as pandemics and nuclear war."
But Big Tech is still training
larger and larger AI models on
workers' data without their consent,
in order to profit from the automation
of these same workers' careers
(e.g., those of artists, journalists,
lawyers, and writers).
We started Stake Out AI
to help accurately inform and coordinate
human efforts against AI takeover.
Currently, we think that the best way
to secure a human-led future
is to mobilize towards
an indefinite global pause
on large AI models.
AI models large enough to replace workers
should not be pursued,
just like biological weapons and
human genetic engineering.
Join our waitlist
For our pro bono advising,
please join our waitlist here:
link to form
Conversations with an advisor
are strictly confidential by default.
Resources
Note: For resources on how to take action,
please scroll to the bottom.
Database of AI Harms
Past episodes of AI harm have been
compiled in the AI Incident Database.
༺༻༺༻༺༻
What is Artificial Intelligence?
(6 minutes)
by Improve the News
Selected quote: "There are
three main narratives that define
the understanding of artificial intelligence:
the pro-establishment narrative,
the establishment-critical narrative,
and the technoskeptic narrative."
༺༻༺༻༺༻
The A.I. Dilemma
(1 hour and 8 minutes)
by the Center for Humane Technology,
creator of the Emmy-winning documentary
The Social Dilemma
Take action
Note: The content of this website
does not constitute legal advice.
༺༻༺༻༺༻
Which Copyright Lawsuits against AI
Tend to be Successful?
Plaintiffs (and potential plaintiffs) of
copyright lawsuits against AI
may increase their likelihood of winning
by arguing that
AI is a viable, substantially similar
substitute for their output,
and by refraining from arguing that
AI is not a good substitute for their output,
e.g., on Andersen v. Stability:
"Plaintiff is arguing that
AI art output is only as good as
the input image data in its training set.
But at the same time,
Plaintiff is contradicting its big theme
by admitting that
AI is not a good substitute for human art.
...The question, as always, is
whether the new art is
'substantially similar' to the original art.
...[Professor Lemley and Dr. Casey] note
that the more the AI output tends to
substitute for the original art,
the weaker the fair use argument becomes."Quoted from the post of
IP lawyer Eric Adler (2023)
"And some purposes—say, a system
designed to write a new pop song
in the style of Taylor Swift
or a translation program that
produces a translation of
an entire copyrighted work—seem
more substitutive than transformative,
so that if they run afoul of the
ever-broadening definition of
similarity in music,
fair use is unlikely to save them."
Quoted from the Texas Law Review article
of Lemley and Casey (2023)
༺༻༺༻༺༻
What Kind of Movements
Tend to be Successful?
Findings from a survey of 120 experts:
"Experts thought the most important
tactical and strategic factor for a
social movement’s success is
'the strategic use of
nonviolent disruptive tactics,'
ranking it as more important than
focusing on gaining media coverage
and
having ambitious goals...
...69% of experts thought that
disruptive tactics were effective for
issues (like climate change) that have
high public awareness and support.
For issues with high awareness but
low support (like anti-vaccination),
only 30% thought disruptive tactics
were effective...
...The most important
governance and organizational factor
for a social movement’s success was
the ability to 'mobilise and scale quickly
in response to external events,'
whereas experts thought
having decentralised decision making
was the least important factor...
...The most important internal factors that
threatened social movement success were
'internal conflict or movement infighting'
and
a 'lack of clear political objectives.'...
...90% of experts thought that
non-violent climate protests
targeting the government are at least
somewhat effective overall."
More relevant findings can be found in
the writeup of Ozden et al. (2023)
summarizing the expert survey.
༺༻༺༻༺༻
An Organization's Guide
to Political Advocacy
Bolder Advocacy has excellent resources
for how 501(c)(3) nonprofits,
501(c)(4) nonprofits,
and other types of organizations
can engage in political advocacy.
We especially recommend
The Rules of the Game:
A Guide to Election-Related Activities
for 501(c)(3) Organizations
Contact
info@stakeout.ai
༺༻༺༻༺༻
Get Involved
AI companies have a business model
based on replacing human workers.
To illustrate, OpenAI has the
stated mission of creating
"highly autonomous systems that
outperform humans at
most economically valuable work."
Artists, actors, and writers
(and eventually, workers everywhere)
face AI-driven disempowerment.
Their work is being unfairly used by
AI companies, in order to train
the very AI systems capable of
stealing their incomes.
We humans need to act now while we still
have leverage, rather than risk being
made economically useless by the
powerful AI systems of the future.
We are seeking collaborators for Stake Out AI,
a nonprofit that provides
pro bono advising to AI-threatened
professionals and unions.
Stake Out AI is already
advising workers of multiple
AI-threatened occupations:
on fact-finding and optimal messaging
for AI copyright issues,
and for ongoing union strikes aimed at
negotiating contractual protections
against AI encroachment.
We are looking for collaborators who are experts
in other roles (including but not
limited to: nonprofit operations, public
communication, labor organizing).
Please reach out if you are generally interested
in our mission, as well as in exchanging takes
on how to make AI good for humans!
A specific need we have is a
U.S.-based lawyer, to complement
our experience in AI research and in
union organizing.
Responsibilities will include:
1. providing pro bono legal advising to
AI-threatened professionals
(e.g., advice to artists on copyright lawsuits,
advice to unions on
contract negotiations and strikes),
2. researching legal and policy strategies
for stopping AI-driven disempowerment.
Please email (1) your CV and
(2) your cover letter to
dr.park@stakeout.ai to apply.
Collaborator selection will be via
rolling recruitment.
Thank you very much for
your consideration.
- Dr. Peter S. Park
(Co-Founder/Director of Stake Out AI;
MIT postdoc in the Tegmark lab;
former math-department steward for
the Harvard Graduate Students Union)
P.S.: Please reach out to me
(dr.park@stakeout.ai) if you happen to
have any questions.


Harry Luk
(LinkedIn)
is Stake Out AI's Co-Founder and COO.
He is an engineer turned entrepreneur,
marketer, and nonprofit co-founder.
He strives to make the biggest positive
difference he can in our world, pushing the
limits to help as many people as possible.
He believes the low-hanging fruit for
transforming the world rapidly is to redirect
the existing large sums of donations from
ineffective (sometimes even harmful)
charities toward high-impact,
evidence-based interventions.
His mission is to deploy his career capital to
help co-found or scale up numerous
effective charities to do the most good
with limited resources.