Everything posted by zurew
-
I think this is kind of good, but I wouldn't necessarily call this an honest approach, because laws that restrict companies from making such moves are already kind of baked in. Related to DALL-E 2, one real concern could be around the art/design job market: how it will revolutionise the market, how many people will lose their jobs, what kind of new jobs could be created, how we can take care of those artists who will lose their income, what will happen with art and design schools, and what will happen with art and design teachers. That would be one way to think about this specific issue in a systemic way and to think ahead before the shit kicks in.
Related to GPT-3, one big concern could be misinformation. 1) In the future, how can we differentiate between human- and AI-generated information, articles and scientific papers? 2) How will social media sites be able to differentiate between AI-operated and human-operated profiles and accounts (and how can we help them be prepared before we make GPT-3 public)? 3) How will GPT-3 make the writer's job a lot less valuable, how will GPT-4 probably totally destroy the writer job market, and what alternative solutions can we provide for those people?
Related to deepfakes: how can we differentiate between faked and genuine images, videos and audio files? When talking on the phone, how can we determine whether we are talking with an AI or with a real person? Related to self-driving cars and trucks: what alternative jobs/solutions can we provide for those who will lose their jobs in the near future (truck, bus, train and taxi drivers)? Related to the entertainment market: how can we take care of comedians and musicians, who will probably lose their jobs in the next decade, because AI will be able to generate super funny memes, messages, videos and what not, and any kind of music in much greater quality and with much greater efficiency than a human ever could?
In the future, when most jobs are occupied by AI, how can we wrestle with the meaning crisis, where most people lose their motivation, hope and purpose to do anything, because there will be UBI and humans will be worthless (in terms of market value and labour)? So basically, what artificial pillar(s) can we create that provide the same or a higher level of meaning to people than jobs and religions combined? I could go on and on, but the point is that we should think about how to create a system that incentivises us and companies to think ahead about these issues and to try to find solutions before the problems occur. I know some of these are further away than others, but some of these problems are so big and so complex that they crave a lot of brainpower and time to find solutions, and they will inevitably emerge, so we'd better start somewhere. I also know that some of these problems can't be addressed by only one company or agent, because they are too big, and some of them are collective or global issues. So the relevant question would be: how can we create a system where we help these companies and incentivise them to think about ethical issues, and how can we create a trustworthy relationship with them, where companies working on AI can safely and willingly provide information about what tech they are working on, so that the government can create systems directly aimed at solving the problems that will emerge from those AI services/products? One other relevant question would be: where can we tangibly see companies or any government taking these concerns/issues seriously? Now that one holds some weight. Thanks for this article, it's good to see something like this.
-
Every preference has its own advantages and disadvantages.
-
I think Chris Duffin could be considered yellow.
-
Where do you see those considerations being manifested in practice? Btw, some of them might be aware, but that's not the point. The point is that they are not incentivised to care about those ethical concerns. Do you think that message was intended as a solution, or more as a pushback to your overly positive narrative? Okay, then it's all clear now; it seems there is a lot less that we disagree on, so let's focus on the remaining disagreements.
-
This theory is falsifiable. We could give some shrooms to a monkey and then monitor his brain. Of course this wouldn't be a 2-day-long experiment; it would probably take hundreds or even thousands of years to see whether the theory is actually true or not. But to see whether the theory has some validity to it, we wouldn't need to wait thousands of years: if the shroom has even a slight effect on the monkey's brain, that should be detectable without waiting around for hundreds of years. We could also check whether the descendants of that particular monkey have slightly different brains than normal ones. Of course that wouldn't automatically mean that the theory is true, but it might strengthen its potential validity.
-
I believe this will be possible for us. We can learn about immortality from several species: the "immortal" jellyfish Turritopsis dohrnii, Hydra, and Planaria.
-
Why not ask him to elaborate further before you cry about him not giving you a 20-page-long, extremely detailed answer?
-
A high-libido guy won't be able to control his instincts unless he has good sex very frequently. It's easy to control your urges if you have a low libido and you don't even want to have sex 90% of the time, but it's a different story when you have the urge to have sex at least once a day.
-
Yeah, I agree. In the current system it's not a realistic option. That message was mostly targeted at Scholar. Yes, the question is how you can incentivise all members to work towards the same goals. The answer will be an overly complex Game B answer that we haven't totally figured out yet. So the goal is to work towards that Game B system and help the Game B guys in our own ways.
-
No, the vast majority of professionals don't give a single fuck about contemplating how to prevent a fuckup scenario. Yeah, maybe some laymen who don't know shit about AI and tech might be sceptical about it and might come up with a lot of doomsday scenarios, but the professionals who could actually have a direct impact on it don't give a fuck, and even if some of them do, they can't do shit, because other professionals will push it mindlessly anyway. Also, there is a huge difference between coming up with doomsday scenarios and taking action to prevent those outcomes from happening. That's not my job here; my job was to point out that we shouldn't just naively believe and assume that everything will be okay on its own. I have some solutions in mind, but I had to react to your naive positivity on this subject. Being overly sceptical or naive about this subject are both bad and not useful. We can only solve the problems that we can recognise. If we naively assume that everything is okay, then there isn't anything to prepare for or to be solved. Identifying problems as they are, and knowing what the potential problems are, is the process that opens the gates to solving these problems and preventing them from escalating. There are no easy answers here; it's a systemic problem. Some of these problems can't be solved without radical changes. Not just that: focus both on the potential problems and on the potential opportunities as well. This is not a subject where you can revert your fuckups and mistakes; that's exactly why we have to be really careful, calculated and smart, and think about this issue in a systemic way. The vast majority of professionals are naively pushing this subject, and they are way too positive about it, to the point where, if the "potential danger" talk comes up, they don't give a fuck about it, or they change the subject, or they leave the convo, or they get heated about it.
I wouldn't consider myself pessimistic about this subject; I would consider myself realistic and careful. I can clearly see the potential opportunities and how much an AI will be capable of, both in a good and a bad way. It's already much more advanced than a human in some ways; it's good, it's fantastic etc., but it's also dangerous if it's used mindlessly. Nice, so one shouldn't point out potential problems if he/she can't solve them right away? The problem with people like you is that you create and give people a false sense of safety and hope. I had to push back with the negative side to balance your overly naive positive side; this way people can see both sides of the coin and have a more whole view of this subject. Right now what we need is to slow the fuck down and think. The last thing we need here is to push this subject even further without thinking in depth about the dangers and the solutions.
-
@The Mystical Man @Scholar Being wise means being able to recognise a threat for what it is. That's the first step that needs to be taken. After that we can talk about the positive views and aspects and how good things can get, but first we have to drop the naive notion that everything will be okay on its own.
-
Do you watch it because you are too horny and you guys don't have enough sex, or is there a different reason? For instance, if you want to have sex every day and your girl doesn't, of course you will want to find a way to satisfy your needs. Your sexual needs won't just go away, so you have to talk with her, because sexual satisfaction is a main pillar in a relationship. If you don't get your sexual needs met, then even though you two are compatible in all other ways, that one pillar is reason enough to quit the relationship, because you won't be able to maintain it down the road. Some girls and guys have low libido and others have high libido; you need to find a partner whose libido matches yours. If she has the same level of libido as you, then you guys could just have more sex, and that way you could satisfy your sexual needs. If the sex isn't good enough, then that's a different problem and a different talk.
-
That sounds good, but in practice it's not that easy to just create an AI to counter an AI. It's always easier to destroy and to deceive than to protect and to tell truth from fakery. It's really good at faking stuff. It's so good at faking people's voices that some scams are already in place.
"Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find" https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=3478015a7559
"Criminals are using deepfakes to apply for remote IT jobs, FBI warns" https://techmonitor.ai/technology/cybersecurity/deepfakes-it-jobs-fbi
If it's a sophisticated scam, a scammer could create a fake version of you doing something in a video, talking about some stuff (using your voice), and if the scammer has enough data about your speech (for instance, this forum could be used for data gathering), the AI could predict pretty well what words you would use in what context, so it would be a really, really convincing scam. There is an AI that can remove objects from a video. It makes it look like that particular object wasn't even there, and it's pretty convincing; this kind of tech could be used to make even more disinfo and to be even more misleading. It's not that hard to access some of these sites and use their AI to generate pretty convincing stuff for you. Of course it's not so easy that a layman could do it, but we don't need several people to make a big fuckup. One person is enough, someone who has access to a large database and an advanced AI that can generate thousands of fake scientific papers, articles and statistics in a few minutes. That one person wouldn't have to be an outsider; it could be done as an "inside job", and that would be enough to generate so much misinfo that it fucks up our sensemaking ability a lot.
We agree that AI doesn't understand stuff. What we don't agree on is how dangerous AI currently is, how dangerous it can get in the future, and how hard those problems are to counter (the big assumption here being whether they are even counterable). It's relatively cheap to create an AI drone that can shoot a person down based on face ID. It's accessible and affordable for normal people, because it's not that expensive and you can get a face-ID chip relatively easily. I could mention the hacking part as well: there are AIs that can be used for hacking purposes. There are counter-AIs as well, but again, it's easier to attack than to counter, and a normal layman doesn't have the defensive capability to defend him/herself from an AI's attack, be it physical or cyber. We could go deeper into how AI could be used for military purposes, because that is a huge problem as well. So even assuming all those things will be or could be countered and solved, we still haven't talked about the fuckup problem, where we push the current AI technology to make it more and more advanced, without thinking for one second about the ethical consequences, and without making sure not to take a wrong step from which there is no going back.
-
Yeah, I'm impressed as well, and I'm honestly kind of scared because of how good the deepfakes currently are. I'm just sceptical of the NLP part, but other than that, we can create specific AIs that can do tasks a human could never do, and they can sometimes do other impressive stuff as well. I think the biggest problem we will have to face will be associated with deepfakes. Unfortunately, the better an AI can parrot our language structure, the better it can fake our voices, and the better it can create deepfake images and videos, the harder it will get to recognise which one is fake and which one is real. Btw, I appreciate this thread, because we will have to face reality, and yes, a lot of jobs will be out of the market, because AI will take most people's place. I will say this: over the next two decades, 20-30% of current jobs will be completely taken over by AI, but that percentage might be higher; we will see.
-
No, the current training methods and models have limits. If you learn in more depth how you train an AI and how neural networks work, you realise that it has its own limits; the question is how far the model can be pushed. That will be answered in the next decades, but as I said, NLP is the hardest part here, and it won't be solved easily, that's for sure. It's naive to think that AI will just figure it out without any major changes in its training. It might figure it out, or it might not; we will see. Passing the Turing test doesn't really mean shit. It's parroting human text and human behaviour, but parroting doesn't require an inner understanding of that particular thing. GPT-3 is still very bad at understanding text; in fact, it doesn't really understand anything. It just predicts what the next word should be, or what answer should be given for a particular question, based on what answer a human would give. GPT-3 is way too overhyped.
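The "predicts the next word without understanding" point can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in its training text and always emits the most frequent continuation. This is not how GPT-3 works internally (GPT-3 is a large neural network), just a minimal demonstration that next-word prediction alone requires no comprehension; the function names and the toy corpus are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the continuation seen most often in training, or None."""
    continuations = follows.get(word.lower())
    if not continuations:
        return None
    return continuations.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> cat  ("cat" follows "the" most often)
```

The model produces plausible-looking continuations purely from co-occurrence counts; ask it about a word it never saw and it has nothing at all, which is the sense in which pure prediction differs from understanding.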
-
NLP (natural language processing) is still in its infancy. There will be a lot, and I mean a lot, of work that will need to be done before we can talk about an AI that can comprehend text better than a human. Current AI isn't capable of abstract understanding; it doesn't have an inner understanding of things like a human does, it's just trying to find patterns in the given data. Current AI isn't even capable of understanding text like a normal human can, and it's a question whether it can even reach human level using the current training methods and models. Don't forget that when it comes to NLP, AI can only be trained on data that was produced by humans, so for instance, when it comes to making text summaries, at best a summary will only be as good as one produced by a human. Summarising is just one part; the other big problem is trying to figure out the meaning, because one sentence can be interpreted in many different ways. Not only can you interpret one sentence on its own in many different ways, but if you try to add context to it, you have to be able to see the big picture and the whole context, and then the smaller context, to get the "real" meaning out of that text. I think it's one of the hardest problems to be solved. That being said, DALL-E 2 and its later upgrades will be a real threat to artists.
-
You need to make it super fucking hard for yourself to reach for your mobile phone, or for the things you want to limit, or the things you don't want to do anymore but do because of the dopamine. You will automatically do the things that give you the most dopamine and that are easily attainable (easily reachable + high dopamine is the deadliest combination here). The attainability part is super important, because if you make an environment for yourself where the only things you can do and reach for are the things you know you have to do, then your brain won't have much of a choice but to do the "right" things. In practice this could mean deleting YouTube from your mobile phone, or downloading an app that actually blocks YouTube as a site and as an app, or limits your time on it. You could even turn your mobile off, put it far away from yourself, and let yourself literally suffer for the next 1-3 days until your brain recovers a little from the enormous stimulation it gets 24/7. So in a nutshell: contemplate which things you have a problem with, and either delete those things from your life or make them super hard to reach or use.
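One concrete way to "block the site outright" on a computer is pointing distracting domains at localhost in the hosts file. A minimal sketch, with a few assumptions: the domain list and the `hosts.blocked` output file are made up for the example, and the code writes to a local copy so it is safe to run; actually applying it means copying the result over `/etc/hosts` (or the Windows equivalent) with admin rights.

```python
# Hypothetical list of domains to block; the browser will then resolve
# them to 127.0.0.1 and fail to load the site.
BLOCKED = ["youtube.com", "www.youtube.com"]

def add_blocks(hosts_text, domains):
    """Append a 127.0.0.1 entry for each domain not already present."""
    new_lines = [f"127.0.0.1 {d}" for d in domains
                 if f"127.0.0.1 {d}" not in hosts_text]
    return hosts_text.rstrip("\n") + "\n" + "\n".join(new_lines) + "\n"

# Write to a local copy so the sketch is safe to run as-is.
with open("hosts.blocked", "w") as f:
    f.write(add_blocks("127.0.0.1 localhost\n", BLOCKED))
```

A dedicated blocker app does the same job with less friction, but the hosts-file trick has the advantage that undoing it takes just enough effort to interrupt the automatic reach-for-it habit.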
-
One big problem I have with this video is that he is not talking about how to motivate yourself consciously. He is talking about things that are actually applicable, but I don't think they are widely applicable for guys who are highly unmotivated and sad. This video has some tangible advice in it (but it's still surface-level, because he doesn't talk about how to actually do the steps he mentions; he just says you need to do these 4-5 things, but not how to start doing them or how to keep doing them, etc.). The very group he is trying to target with his message will get a false sense of knowledge and hope that they can actually do it, and after that they will become even more depressed (once the false sense of hope from the motivational video runs out), because they will think they are dumb, or that something is wrong with them, because they couldn't get shit done even with Kinobody's advice. This video is more of a motivational video than a real practical one (imo). Most depressed people already know that they want a certain kind of lifestyle; they know what relationship they want; they know that eating shit food is not good; they know that fapping all day and night is not the best for your mood, etc.; and if they don't know that already, they won't suddenly get enlightened by this video, because what they need is actual guidance, not a surface-level motivational speech. The reason some of them won't be able to do it is that they don't really get guidance on how to do it. This is a stage Orange level analysis and advice, and a big generalisation as well. His advice could be good for some people, but for others it's terrible, because they will feel even shittier after they fail to change their lives on their own. He is also blurring the line between feeling temporarily sad and actually being clinically depressed.
That being said, this video still has its own value, because it is more of a motivational video, and it might motivate some people to start doing shit (and hopefully they will be able to maintain that thing/lifestyle). But at the end of the day, this guy can only take credit for the motivational aspect, and not for the most important aspect (which is how you can actually change your life in a way you can maintain without falling back, and how you can consciously motivate yourself).
-
This was a really good debate.
-
I've read a little bit about it, and blockchain tech definitely makes this kind of service much safer for users to use and for investors to invest in. So one big benefit that is exclusive to companies like this is the high level of safety, which couldn't really be achieved without a blockchain. One other thing, closely related to security, is the decentralised structure and business model, which makes this whole thing much less prone to corruption.
-
Why is the token part needed there? Couldn't you support such companies in other ways than buying tokens or crypto?
-
This is maybe the wisest answer here. @Jannes There would be one more thing here. Sometimes the best thing to do is to search directly for the thing you actually want, not for the parts (in this case, search for bots that are specifically created for Clash of Clans). So you could directly search for Clash of Clans bots, and you might find one that works; that way you don't need to create it yourself, and you don't need to hire anyone to do it for you. But of course, you have to be careful not to download a virus, so you will need to make sure it's something reliable before you try it out.
-
I think if you have some time to learn a little Python (one of the easiest programming languages), then it's better to do it yourself. But if you don't want to spend any time on it and you have enough money to pay for it, then you could search for bots on freelance sites. I think a Clash of Clans bot would be a cheap project. Here is a freelance site: https://www.fiverr.com/gigs/bot-development Other freelance sites: https://www.upwork.com/ https://www.freelancer.com/jobs/?keyword=bot
-
It's hard to make an AI and then train that AI to play video games for you. I think there are better ways to do the automation (for example, creating a bot), especially if we are not talking about complex actions. It's not that hard to make a bot that can do some easy repetitive steps for you. Sometimes the problem is that some game engines can detect that you use a bot, and getting around that can be a challenge. Another thing you will need to be aware of is that after every game update, you will need to check whether your bot is still working fine or not.
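The "easy repetitive steps" kind of bot can be sketched as a small replay loop. Everything here is hypothetical illustration: the `run_bot` function, the step names, and the injected `perform` callable are made up; in a real game bot, `perform` would map each step name to actual mouse clicks or keypresses (e.g. via a GUI-automation library such as PyAutoGUI), and `delay` would be tuned so the timing doesn't look inhuman.

```python
import time

def run_bot(steps, perform, rounds=1, delay=0.0):
    """Replay a fixed list of named actions, round after round.

    steps:   list of action names, e.g. ["collect_gold", "train_troops"]
    perform: callable executing one named action (a real bot would
             translate names into clicks/keypresses here)
    delay:   pause in seconds between actions
    Returns the log of every action performed, in order.
    """
    log = []
    for _ in range(rounds):
        for step in steps:
            perform(step)
            log.append(step)
            time.sleep(delay)
    return log

# Dry run with a stand-in action that just prints the step name:
history = run_bot(["collect_gold", "train_troops"], perform=print, rounds=2)
```

Keeping the action list as plain data is also what makes the update problem above manageable: after a game patch, you adjust the `perform` mapping (new button positions, etc.) without touching the loop.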
-
So the rationale here is that if someone can't protect literally everything the same way, then he is full of contradictions? Scholar never took a position where you need to hold everything to the same moral standards, so he doesn't need to defend that position. If you want to ask why one would value some things over others, the answer is really easy: because no one can live up to those standards. No one is denying these things. But just because we can't live up to those ultimate standards, where you need to be able to save and protect everything, that doesn't mean it's hypocritical to take a position where you want to protect animals. Right now you are attacking a position that was taken by no one.