Artificial intelligence

From agingresearch
Revision as of 18:23, 3 February 2023 by Admin (talk | contribs) (18 revisions imported)

This is Vladimir Shakirov's version of this article (editing rules). Alternative versions of the article would appear here; currently no alternative versions are available.

One of the best resources for tracking progress in AI is https://paperswithcode.com/sota
A brief list of the most impressive AI achievements

My concise opinion on AI is as follows:
(1) AI is developing VERY fast. It is the fastest-developing technology today, and it will clearly define our future: the 2020s, the 2030s, and beyond.
Much of the progress in life extension will be closely tied to progress in AI.
(2) AI development brings great hope as well as many risks, and nobody knows which will outweigh the other. AI safety research might be the most important undertaking for humanity. It is funded to some extent, though it is perhaps the most underfunded field relative to its importance for humanity. Since the Asilomar conference on beneficial AI (2017), the topic has been discussed more widely, even among top researchers, where the full spectrum of opinions on AI safety is more or less represented.
(3) As for overall progress in AI, I don't see any easy way to help it, except perhaps trying to join DeepMind; but that is hard, it is only for the most talented, and the field already has thousands of brilliant people. The AI field (as opposed to the radical life extension field or the AI safety field) does not face any big problems that could plausibly be solved by a small dedicated team of enthusiasts.
(4) As for progress in AI safety, it is a really good topic to concentrate on. While there are anywhere from dozens to hundreds, perhaps thousands, of people working on different aspects of AI safety (depending on how broadly we define the field), the field still has relatively few people, funds, and public attention. So even one highly motivated person, or a group of enthusiasts, can potentially improve some important aspects of the AI safety field considerably.
(4.1) For relatively young people (say, born in 1970 or later), there are arguably higher chances of dying from AI-safety-related causes (not just an AGI killing all humans, though that is also possible, but also humans waging war with AGI tools, etc.) than from aging-related causes.
(4.2) Still, subjectively, for me (Vladimir Shakirov) the AI safety field is really tough and depressing. It is much nicer and more rewarding to research something more positive, like life extension, or perhaps to collect reasons why AI will probably turn out safe.
(4.3) One such reason why AI would be safe goes like this: the problem of AI safety might look really hard. But then, the problem of creating something like AlphaFold, AlphaZero, or ChatGPT also looked tremendously hard, and DeepMind/OpenAI succeeded. So they will probably succeed at the further task of AGI safety as well; they are really smart, after all, and have a good track record of solving tremendously hard problems.

Examples of how AI can be useful for life extension

AlphaFold - an existing topic
Deep-learning-based body growing - a speculative topic