StakeOut.AI

StakeOut.AI fights to
safeguard humanity from
AI-driven disempowerment.

We use evidence-based outreach
to inform people of the threats
that advanced AI poses
to their economic livelihoods
and personal safety.

Our mission is to create a
united front for humanity,
driving national
and international coordination
on robust solutions
to AI-driven disempowerment.

(as seen in insideBIGDATA
and The Conversation)

༺༻༺༻༺༻


Consider the fate of the digital artists.

In 2022, AI art models were on the rise,
yet their aesthetic quality was
not yet economically competitive.


Some artists foresaw the looming changes,
and responded with the necessary urgency.


But other artists were instead convinced
to wait and see.


Fast forward one year,
and the aesthetic quality of AI art
took a substantial leap.


Many digital artists lost their incomes
and their leverage overnight.


When one is blindsided by
an imminent threat,
opportunities are likely to be missed.


As with preventing pandemics, it is
important to act early:
to act well before
clear signs of the threat manifest.

༺༻༺༻༺༻
What is the threat?

An extremely small number of
overconfident AI companies
are working to
replace most human livelihoods,
without the public's consent.


OpenAI, the creator of ChatGPT,
continues to pursue its founding mission:
to create
"highly autonomous systems that
outperform humans at most
economically valuable work."


Also, consider the following
quote from Rich Sutton,
the first-ever advisor of Google DeepMind:

"Rather quickly, they would
displace us from existence...
It behooves us to
give them
every advantage,
and to bow out when we can
no longer contribute…

...I don't think we should fear succession.
I think
we should not resist it.
We should embrace it and prepare for it.

Why would we want greater beings,
greater AIs, more intelligent beings
kept subservient to us?"

When AI industry leaders
say things like this,
they are not shunned or stopped.
After Sutton gave his talk AI Succession
at the World AI Conference in Shanghai,
he was invited into a partnership
with Keen Technologies
to build autonomous AI systems that
outperform humans at most capabilities.


These AI companies continue to create
AI systems of ever-increasing capabilities,
despite the fact that they still do not know
how to control AI.


Even today's AI systems are accelerating
scams, propaganda, and radicalization.


Leading AI experts from academia
and industry believe that
"mitigating the risk of extinction from AI
should be a global priority alongside
other societal-scale risks
such as pandemics and nuclear war."



But Big Tech is still training
larger and larger AI models on
workers' data without their consent,
in order to profit from the automation
of these same workers' careers:
those of artists, journalists,
lawyers, actors, and writers;
and eventually, everyone's.
༺༻༺༻༺༻
Who are we?

StakeOut.AI is an initiative to
raise public awareness of
the threats posed by advanced AI
to people’s economic livelihoods
and personal safety.


We publicize educational materials
via Internet-based media channels.


We also provide pro bono advising
to people who are or will be
threatened by advanced AI.


Economist Daron Acemoglu predicts that
without a course correction,
"a very large number of people will
only have
marginal jobs,
or not very meaningful jobs.”


Philosopher Aaron James goes further,
and predicts that “in due course at least,
[AI] really might cause
lasting structural unemployment
on a mass scale.”


In this scenario,
“...jobs are steadily automated,
year after year.

In the old days, for every job destroyed,
a new one was eventually created,
leaving total employment more or less
unchanged.

Now deep-learning machines,
aided by clever entrepreneurs,
race ahead and
do the new tasks as well...

...Many people — most people,
even in their prime years — simply
can’t find tolerable work
and stop looking.”


Most people do not want their
economic livelihoods or personal safety
jeopardized by advanced AI.


Our mission is to inform
the people of the world about
the size and imminence of this threat,
and what they can do about it
before it's too late.


Advisors

Dr. Peter S. Park
(Harvard Ph.D. '23, Princeton '17)
is StakeOut.AI's Co-Founder and Director.
He conducts AI strategy research at MIT,
as a Vitalik Buterin Postdoctoral
Fellow in AI Existential Safety, working
closely with Max Tegmark (one of TIME's
Top 100 Most Influential People in AI).
Dr. Park's AI research has been
cited by leading experts in the field, such
as Geoffrey Hinton and Yoshua Bengio.
During his time at Harvard,
Dr. Park served as the math department's
steward in the Harvard Graduate Students
Union (HGSU-UAW) and helped organize
its first successful strike
and its first successful contract.

Amy Frieder
(Harvard J.D. '22, Cornell '15)
is StakeOut.AI's Co-Founder and CLO.
As a worker-side lawyer, she has
represented private-sector employees in
discrimination cases while working at a
civil rights and employment law firm,
and has advocated for better conditions and pay for public-sector employees while
working at a federal employee union.
She also served in the U.S. Air Force as an
intelligence analyst within U.S. Cyber
Command's Cyber National Mission Force.


Join our waitlist

For our pro bono advising,
please join our waitlist here:
link to form

Conversations with an advisor
are strictly confidential by default.

If you have other requests,
please reach out via email.
(info @ stakeout.ai)

Resources

Note: For resources on how to take action,
please scroll to the bottom.

Database of AI Harms

Past episodes of AI harm have been
compiled in the AI Incident Database.

༺༻༺༻༺༻

What is Artificial Intelligence?
(6 minutes)

by Improve the News

Selected quote: "There are
three main narratives that define
the understanding of artificial intelligence:
the pro-establishment narrative,
the establishment-critical narrative,
and the technoskeptic narrative."


༺༻༺༻༺༻
The A.I. Dilemma
(1 hour and 8 minutes)

by the Center for Humane Technology,
creator of the Emmy-winning documentary
The Social Dilemma


Take action

Note: The content of this website
does not constitute legal advice.
༺༻༺༻༺༻
Which Copyright Lawsuits against AI
Tend to be Successful?
Plaintiffs (and potential plaintiffs) of
copyright lawsuits against AI
may increase their likelihood of winning
by arguing that
AI is a viable, substantially similar
substitute for their output,
and by refraining from arguing that
AI is not a good substitute for their output.


For example, on Andersen v. Stability:
"Plaintiff is arguing that
AI art output is only as good as
the input image data in its training set.
But at the same time,
Plaintiff is contradicting its big theme
by admitting that
AI is not a good substitute for human art.

...The question, as always, is
whether the new art is
'substantially similar' to the original art.
...[Professor Lemley and Dr. Casey] note
that the more the AI output tends to
substitute for the original art,
the weaker the fair use argument becomes."
Quoted from a post by
IP lawyer Eric Adler (2023)


"And some purposes—say, a system
designed to write a new pop song
in the style of Taylor Swift
or a translation program that
produces a translation of
an entire copyrighted work—seem
more substitutive than transformative,
so that if they run afoul of the
ever-broadening definition of
similarity in music,
fair use is unlikely to save them."
Quoted from the Texas Law Review article
by Lemley and Casey (2023)



༺༻༺༻༺༻
What Kind of Movements
Tend to be Successful?
Findings from a survey of 120 experts:
"Experts thought the most important
tactical and strategic factor for a
social movement’s success is
'the strategic use of
nonviolent disruptive tactics,'
ranking it as more important than
focusing on gaining media coverage
and
having ambitious goals...

...69% of experts thought that
disruptive tactics were effective for
issues (like climate change) that have
high public awareness and support.
For issues with high awareness but
low support
(like anti-vaccination),
only 30% thought disruptive tactics
were effective...

...The most important
governance and organizational factor
for a social movement’s success was
the ability to 'mobilise and scale quickly
in response to external events,'
whereas experts thought
having decentralised decision making
was the least important factor...

...The most important internal factors that
threatened social movement success were
'internal conflict or movement infighting'
and
a 'lack of clear political objectives.'...

...90% of experts thought that
non-violent climate protests
targeting the government are at least
somewhat effective overall."


More relevant findings can be found in
the writeup of Ozden et al. (2023)
summarizing the expert survey.



༺༻༺༻༺༻
An Organization's Guide
to Political Advocacy
Bolder Advocacy has excellent resources
for how 501(c)(3) nonprofits,
501(c)(4) nonprofits,
and other types of organizations
can engage in political advocacy.
We especially recommend
The Rules of the Game:
A Guide to Election-Related Activities
for 501(c)(3) Organizations.



Harry Luk
(LinkedIn)
is StakeOut.AI's Co-Founder and COO.
He is an Engineer turned Entrepreneur,
Marketer, and nonprofit Co-Founder.
He strives to make the biggest positive
difference he can in our world, pushing the
limits to help as many people as possible.
In his 14+ year career in Internet-based
marketing, he has managed $6,217/day
($186,510/month) in direct response ad
spend; created conversions
in 5+ languages, 20+ markets, and 88+
countries; and singlehandedly delivered five-
and six-figure product launches (one promo
achieved six figures in three days, with
$0 in ad spend).



༺༻༺༻༺༻

Contact

We'd love to hear from you.
Please send us any comments,
questions, feedback, or ideas.
You can reach us via email:
(info @ stakeout.ai)

