Everything posted by zurew

  1. Having only a rational, calculating brain with no instinct for survival = death. If you think that having an instinct to survive is an emotion, then sure, framed that way your argument could be correct, but within that framework you are arguing for something that necessarily leads to extinction and death, because without an instinct for survival, why would anyone or anything want to maintain its survival?
  2. How much have you thought about this concept before you immediately rejected it like a knee-jerk reaction? Do you actually know that your assumption about this is necessarily the case, or do you just assume it? How can you reject it on a logical level? Because what you are saying here is almost as if you were saying that methods and tools that are more suitable and effective for long-term gain will necessarily be less effective than other things in the short term. Is that actually necessarily true in every instance, or just an assumption? I don't see how you can get to your conclusion deductively from the premise of something being better for the long term. How does something being better for the long term necessarily make it worse for the short term? Walk me through the deduction, because I don't see it. Using inductive reasoning I could see it, but deductively (where the conclusion follows from the premise(s) with 100% certainty and accuracy), that's what I don't see.
That's a simplistic way to think about this concept. You almost treat this subject as if we were talking about physics (where we can deductively run down our logic and say with almost 100% accuracy what's going to happen and how things will turn out). With subjects like this (that are heavily affected by human psychology and behaviour) that's not the case, and you also assume that we have already tried all kinds of frameworks and know for sure that the current one is the best and most effective we can come up with. That's like saying our evolution is done and there is nowhere left to develop or go. Sometimes if your goal is too narrow, the optimization for it won't work too well, because sometimes there are other variables that have a direct effect on that goal, but you won't recognize them, because the framework you think in and work with is too narrow.
So for example, you might think that having no regulation on the market will necessarily generate more capital than having some regulation on it. I bet that, especially long term, you will generate a lot less capital overall if you let every shady thing happen and let people get psychologically, mentally and physically fucked up by being made addicted to a bunch of things. That's just one example where on the most surface level an idea might seem cool, but if we think about it one more layer down, we can immediately recognize that that's not necessarily the case.
Can you actually show me an example of a socialist system where there is central planning and it is optimized for generating capital? If not, then how can you be so sure that it wouldn't be more effective than letting the market do its own thing?
  3. What I'm talking about is that the assumption that "game B methods and tools will necessarily be less effective and useful in a game A world" is not necessarily true, and doesn't have to be that way by nature. The moment we can come up with examples where a game B tool is more effective and useful (even in the context of game A) than other game A tools, but at the same time can trigger some internal change in the current game A world and structures, then we have a framework we can use to start moving towards an actual game B world.
For the sake of working with something tangible (you don't need to agree with the premise, I just bring this up to have something tangible to work with): the easiest example people often bring up is capitalism vs socialism, i.e. that socialism is not as effective at game A compared to capitalism, therefore it will eventually fail because of outside pressure from capitalist countries. But what if there is an actual socialist framework that is more effective at game A things than capitalism? If there were such a system, and a country started to implement it, eventually everyone would be forced to implement that kind of socialist system, because if they don't, they will be left behind economically and slowly lose their political power; therefore they will be incentivised either to create and implement an even more effective framework or to implement that socialist framework.
  4. @Bobby_2021 This is what ChatGPT has to say about their interaction:
  5. Thinking in the context of exponential tech, I don't think we are too far away from it. Just look at current AI development (how much it has evolved in just one year); I don't know if we can properly comprehend what being exponential looks like as time goes on. In the past, exponential development might have meant increasing by 5 developmental units in one year; now it might be 10,000, and eventually millions and then billions of developmental units every year.
Again, before we get there, first we have to agree on the framework. Once you agree with the framework at least on an intellectual and logical level, don't see contradictions in it, and see how it could be utilized, then we can talk about specific examples of it, or even about the creation of specific examples of it. If you don't even agree with the framework, then no example will be sufficient for you. In other words (to use your words from one of your videos), right now I want to get you to agree on the framework (structure), and then we can talk about what kind of variables (content) could be placed in that structure. Or in other words, I want you to agree with the main/general function that could be used to generate the specific examples you ask for. So as not to be too abstract about it (for the record, I haven't thought deeply about specific examples yet, just in a very surface-level way about a few): even if I brought up the same example Daniel brought up about a more transparent kind of government structure and you were able to reject it, rejecting that example wouldn't prove that the framework itself is wrong, it would only prove that the example was bad. Right now the goal would be to first agree on the framework and its necessity, and then after that (here comes the hard part) collectively think and inquire very hard and deeply about proper examples that could represent that framework.
Having access to a superintelligence (that can deceive, come up with plans, help you with any kind of knowledge, help you or even automatically build different kinds of weapons, generate billions of pieces of misinformation, etc. - basically amplify anyone's power and intentions 1,000-10,000 fold) in your basement seems pretty dangerous to me. If you talk about people not having access to it, only some elites or the owners of the company, that seems just as scary to me, if not more so. A guy or a handful of people having access to a technology that could be used to dominate and terminate the whole world and to force their ideology and morals onto the world? I only see two ways out of that problem: either elevate humanity to the next level socially, spiritually and psychologically, or create a wise AI that can actually think for itself and won't help, or will even prevent, everyone from fucking up the whole world.
  6. That's why (if you reread what I wrote about the positive multipolar trap) it has to be implemented. It's not a simple stage green "let's moralise the fuck out of everything and then naively hope that everyone will get along with us". It's more like implementing a dynamic that is just as effective in the context of game A as other game A tools, if not more so, but at the same time lays the groundwork for game B (so game A participants will be incentivised to implement the tool, and then by implementing it, the tool itself will slowly but surely change the inherently game A countries and structures).
Sure, but have we ever had a scenario where almost every individual can have access to tools that are just as dangerous as nukes, if not more so? The biggest point of game B is to build a structure where we can collaborate and somewhat live in peace. In the current game A structure some forms of collaboration are impossible, even though that kind of collaboration is exactly what would be needed to solve certain global problems. The world still kind of gets along with some countries having nukes, and they can somewhat manage each other so that they don't have to kill each other (although looking at the current situation, that's very arguable). Now imagine a scenario where billions of people actually have nukes and they all need to manage and collaborate with each other.
That would be cool if we could assume that 1) we can track exactly when it will get out of hand, 2) the AI won't deceive or trick us, and 3) if the AI won't be conscious and will 100% do what we want, then no one will try to use it to fuck everything up (bad intentions aren't even necessary for this).
  7. I just don't see that middle ground, when we have exponential tech and the development of that tech can't be slowed down: people having access to godlike tech without having wisdom and knowledge. Or maybe (this is the only one I see right now) a wise AI could help us maintain our society and create the necessary developmental and social structures for us, where we can develop socially, psychologically and spiritually at our own pace - so creating artificial environments where we can develop ourselves and maybe even speed up our development.
  8. @Leo Gura I think we are at a point in our evolution, or getting really close to one, where the next step will be either a giant fucking leap in our evolution or death, and there is no room for baby steps anymore. I agree that one of the most unrealistic things to say is that we will achieve something close to a Game B world in a relatively short time, but on the other hand, it seems just as unrealistic, if not more so, to say that we can maintain our society under a Game A structure for much longer.
  9. We know a lot more about our physical limits than about our spiritual, social and collaborative limitations. Even our physical abilities can be pushed to a great extent if the necessary knowledge, care and tech is there. Sociology is still fucking new, and almost no one on this planet practices or knows about serious spirituality, so we have no idea where those limits are, or how fast a human can actually develop spiritually, socially and psychologically. It will take much longer if we don't even try or think about it. A lot of assumptions are built into the thinking that "humans have to wait x years before they can actually develop to certain levels"; questioning and pushing those assumptions will be a main and necessary part of our survival. Again, I don't see how you can maintain a game A system while you have tech that can be accessed by any fool and used to destroy everyone and everything. You would have a point if implementing the frameworks the Game B guys talk about weren't necessary for our survival.
  10. I don't think we have that much time to fuck around with a Game A system; if you think about the technological development alone, it will shape the dynamics in the system so that we can't wait that long. Imagine everyone having access to tools more powerful than nukes - if the world is not organised by that time, we will die or seriously fuck things up. What we need is not just creating technology but creating social tech and a sort of spiritual tech as well, so that we can hopefully speed up social and spiritual development and don't have to wait 1000 years. We mostly have the problems we have right now because we only have exponential (normal/conventional) tech and not exponential spiritual and social tech. Btw, obviously the reversed multipolar trap wasn't invented by me, so credit goes to Daniel and to the Game B team. Here is a video snippet (I timestamped it) where he talks about it and about transparency.
  11. I don't think the Western model in its current form is actually effective (people are more divided than ever before), therefore I don't think what you brought up is proof that the idea I brought up wouldn't work. I didn't say that this would immediately get us to Game B; I said this framework is one necessary tool to start moving towards Game B. First we need to hash out the framework, and only after that can we start to think and argue about the specifics.
  12. If they had no fear of death, then they would die really fast, because they wouldn't have a really strong incentive to maintain their survival. You can't maintain or create a society that doesn't give any fucks about its survival. You can't really escape this problem. If you are talking about AIs who don't care about death, then they would have even less incentive to collaborate; if you are talking about AIs who do care about survival, then we are back to square one, where they will be forced to make certain decisions that go against each other's interests - which will make collaboration hard, and deception and manipulation will kick in - and we are back to the same problems we have in our society (regardless of whether you take emotions out of the equation or not).
  13. If Western models were much more effective, they would eventually be forced to implement them, because they wouldn't want to lose their political power. I don't think they would necessarily go to war, because that's a big potential loss for them, especially if the Western models made them much more powerful economically - and therefore more effective at war as well. Or would you say that this kind of tactic is only effective with stage orange countries, and countries that are mostly below stage orange will always prioritize their ideology over everything else?
I like to think of certain ideologies as just tools to get to a certain society or to get certain things done - however, I too have certain things that I would defend and would hardly let go of regardless of effectiveness (because I care about other things as well, not just effectiveness) - for example democracy. My first example above (the transparent kind of government structure) may cut too deep too fast (because it might threaten some core part of a certain ideology), but I think the reversed multipolar trap tactic is the way to go in general. Maybe first the more surface-level ideas need to be changed using this tactic, and then from there we can go deeper and deeper one step at a time. What we are essentially talking about here is triggering or speeding up internal change in these countries (as much as that's possible without too much pushback and too many negative effects). Obviously the hard part is how to balance things so as not to fuck things up unintentionally. There are other tools to achieve or trigger or speed up internal change in other countries, but this concept is probably one of the biggest ones.
  14. @aurum When it comes to actually stopping everyone internationally, I think this is what is required (this is nowhere near specific enough, this is just the layout/structure of what I think is required); I will copy-paste from what I wrote in the "should we shut down all AI" thread. The bar for the solution here is incredibly high and I obviously wouldn't say that it is anywhere near realistic, but unfortunately I think this is what is required. Very shortly: we need new AI research tools/methods that every country and company is incentivised to implement/use, but that are safe at the same time.
  15. There is really only one way to do this (to actually make sure everyone will participate, including all countries), but that thing hasn't really been discovered yet. It could be called a reversed multipolar trap or a positive multipolar trap, where you invent a new method/tool that is the most effective within the dynamics of game A, but at the same time moves us towards game B or has game B characteristics in it. Because it is the most effective, people will automatically start to use it if they want to stay competitive.
So for instance, in the context of politics (this might or might not be true, I will just use it for the sake of an analogy and a demonstration of the concept): if a transparent government model is more effective than other ones, and different countries start to see that, they will eventually need to implement that model, because the governments that implement it will start to outcompete the ones that don't. Because of that pressure eventually everyone will use that model, but (for example, because of the transparency) these new models could start to change the global political landscape in a way that starts moving us towards game B. Now, is it true that a more transparent government model is more effective than other ones? We can argue about that; that's not the point I am trying to make here. The point is to 1) find/create a method or tool that has inherent qualities similar to game B, or at the very least has the potential to change certain game A systems internally to move us towards game B, and 2) at the same time is so effective in the current game A world that people will be incentivised to use/implement it, because they will see that they get short-term gains by using it. A toy sketch of this adoption dynamic follows below.
In the context of this discussion, the challenge would be for smart AI researchers to find/create a new research method that is optimized for safety but at the same time is one of the most, if not the most, effective and cost-efficient methods to progress AI (I have no idea if this is possible or not, I'm just saying what I think is actually required to make everyone participate [including abroad]).
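To make the adoption pressure behind this concrete, here is a minimal toy sketch in Python (my own illustration; the payoff numbers and the imitation rule are invented assumptions, not anything from Daniel or the Game B material). It only shows the mechanism: if a game-B-compatible tool happens to carry the higher game A payoff, countries that copy the best-performing strategy they observe end up adopting it without any moral argument being needed.

```python
# Toy model of a "positive multipolar trap": countries imitate whichever
# strategy currently earns the most, so a tool that is BOTH the most
# competitive in game A AND game-B-compatible spreads on its own.
# All numbers below are invented purely for illustration.
import random

PAYOFFS = {
    "game_A_tool": 1.0,        # baseline competitiveness
    "game_B_compatible": 1.2,  # assumption: slightly MORE competitive
}

def simulate(num_countries=20, rounds=15, seed=0):
    rng = random.Random(seed)
    # Start with a single early adopter of the game-B-compatible tool.
    strategies = ["game_B_compatible"] + ["game_A_tool"] * (num_countries - 1)
    for _ in range(rounds):
        # Each country observes which strategy currently pays best...
        best = max(set(strategies), key=lambda s: PAYOFFS[s])
        # ...and has some chance of copying it (competitive imitation pressure).
        strategies = [best if rng.random() < 0.3 else s for s in strategies]
    return strategies.count("game_B_compatible"), num_countries

if __name__ == "__main__":
    adopters, total = simulate()
    print(f"{adopters}/{total} countries adopted the game-B-compatible tool")
```

The whole dynamic hinges on the assumed payoff ordering: flip the numbers so the game-B-compatible tool pays less, and the same imitation rule keeps everyone on the ordinary game A tool, which is exactly the objection being argued about in this thread.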
  16. So are you okay with incest in instances where there is no chance of anyone getting pregnant? If not, what's your argument against it?
  17. some prompts for GPT: https://www.reddit.com/r/ChatGPT/comments/12bphia/advanced_dynamic_prompt_guide_from_gpt_beta_user/
  18. I already said I'm not into it, but for a person who is into it, how exactly will that person get fucked up by it?
  19. The difference is that I could easily argue why rape is immoral, but no one in this thread has made a sound argument for why incest is actually immoral.
  20. You have a simplistic view of morality; morality is much more than that. None of what you are suggesting would be above morality. The moment you can make conscious decisions is the moment you are considered a moral agent, regardless of whether you can feel or have emotions or not. You will need to make your decisions based on your moral system.
Here is a tangible question: let's say you have two robots (robot A and robot B), each of which requires 50 units of energy every day to survive and maintain itself. If they don't get sufficient energy for the day, they shut down and are essentially dead. One of them (let's say robot A) gets injured for some reason and now requires 65 units of energy every day to survive until it gets repaired. In a finite world where you only have 100 units of energy every day, what dynamic would play out between those robots, what do you think? One of them would be forced to die, but each of them can make multiple decisions there (morality) about what it wants to do. Why would either of those robots choose altruistic behaviour over an aggressive one (where they destroy and shut each other down)?
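A minimal sketch of the arithmetic behind this thought experiment (the specific splits below are hypothetical allocations picked only for illustration): with 100 units of supply against 50 + 65 = 115 units of need, every possible split leaves a combined shortfall of at least 15 units, so no allocation keeps both robots running.

```python
# Energy numbers from the two-robot thought experiment above.
DAILY_SUPPLY = 100
NEED = {"robot_A": 65, "robot_B": 50}  # robot A is injured and needs more

def shortfall(allocation):
    """How far below its daily need each robot ends up under a given split."""
    return {name: max(0, NEED[name] - allocation.get(name, 0)) for name in NEED}

# Total need (115) exceeds supply (100), so every split of the 100 units
# leaves a combined shortfall of at least 15 units.
print(sum(NEED.values()) - DAILY_SUPPLY)  # -> 15

# A hypothetical "altruistic" split: cover the injured robot fully.
print(shortfall({"robot_A": 65, "robot_B": 35}))  # robot_B is 15 short
# A hypothetical "selfish" split: robot B covers itself first.
print(shortfall({"robot_A": 50, "robot_B": 50}))  # robot_A is 15 short
```

Whatever rule the robots adopt, the 15-unit deficit has to land on one of them, which is why the choice between altruistic and aggressive behaviour is unavoidable in this setup.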
  21. The power dynamic problem isn't exclusive to incest either. Even if you can defend that argument, the most you can achieve with it is to reject some forms of incest, but you won't be able to reject the whole category of incest. What about two twins having sex together?
This isn't related to morality. Not having growth or having less growth isn't immoral, and I wouldn't even say that you will necessarily grow a lot less if you are into incest. 1) You can still have a poly or an open relationship, or you don't even have to have a partner relationship with your family member; it can be exclusively about hooking up. 2) When it comes to growth from a relationship, most of that growth and maturing comes from being able to maintain the relationship, not from landing one. If you want to bring up "but what about rejection?": you can and will get rejected for all sorts of reasons outside of dating. That kind of character growth can be achieved outside of dating, and most people won't go through that many rejections anyway; people in general are not into cold approach. In short, you don't have to go through a fuckboy phase in order to have character development.
  22. There is no strong argument against it, so morally it isn't bad. Would I do it? Personally I wouldn't, but I wouldn't consider it immoral. What do you mean by a "healthy one"? Are you mostly referring to genetic dysfunctions? Because if so, the argument you would use for that isn't exclusive to incest and could be used in other contexts as well (for example, what if we know that your child could inherit various bad diseases from you with x% chance - should you be prevented from having children, or how do you parse moral questions like that?).
This assumes that people would almost never date outside their family, and I don't think that would be the case in general. Yeah, but those elite families were trying to keep their bloodline "clean", and weren't doing incest just because they were necessarily attracted to each other. If you take out the necessity of keeping your bloodline "clean", then I think you would see that people in general would be attracted to people outside their close relative circle. One argument for that is the Westermarck effect: in general you will be sexually disgusted by your siblings, and because of that you will generally be more likely to date outside your family, so worrying that the normalization of incest would destroy society is not that strong of an argument.
  23. First off, I don't think we can totally stop everyone from pushing the progress forward (for the obvious reasons some of you guys have already established: we can't control everyone internationally). Now that I realize your points were given in the context of prevention (and not in a context where there is already a psychopathic AI), I agree with them, they are good in that context, and I would add a few more things:
If we want to minimize the negative effects of AI (unintended and intended included), then we need to understand what's happening inside the black box (why it works the way it does, why it gives the answers it gives, how it arrives at its conclusions, and what foundational mechanisms drive its replies and thinking process). Some people at OpenAI have already suggested one way this could be achieved: pause the development for a while and try to recreate the current AI using different methods (so trying to create AIs with similar capability, but via different pathways). This way the developers would be able to understand why things work the way they do right now.
We all obviously know that there is a big market pressure to be the first to create and produce AGI. I think we need to hope for these things: 1) That maximizing the progress towards AGI entails maximizing alignment as well. 2) I think and hope that most people who want to be the first AGI creators will want to create an AI that doesn't do things randomly, but actually does what the creators or people want it to do (even people who don't give a fuck about others, only about money, fame and their own survival, will probably want it to do the things they want). 3) I think a lot of people in general are afraid of AI, so pushing for AI safety will hopefully gather a lot of sponsors, donations and help, not just from governments but from people in general. Being a virtuous AI company will probably be a big advantage compared to other companies. If some of these companies want to maximize their progress (which they are obviously incentivised to do, because then they can dominate the whole market), I think they will be forced to at least try to keep the "AI safety" image up, because that way they can gather more money and more help from governments and from people to maximize their progress - this is sort of a reversed multipolar trap or a positive multipolar trap. 4) I think the progress is very dependent on how much quality feedback you get from people and how many people can try your AI in general. Hopefully, doing things in a shady way (where you hide the progress of your company/government regarding AI) will slow development down compared to companies like OpenAI, whose AI is already used all across the world and who therefore get a lot more feedback from people and developers, which accelerates the developmental process.
I'll give some reasons why I think maximizing AI alignment will hopefully maximize progress: generally speaking, the more honest and higher-quality feedback an AI can give back to the developers, the faster they can catch and understand the problems, so if an AI is really deceptive and not aligned with the developers' intentions, that can slow the development process down a lot. If the AI does exactly what the developers want from it, then that could be used directly to speed up the process: imagine being able to talk with the AI about all the things you want to change in it, and being able to tell it to change those things inside itself. That's all I've got for now, but obviously more things could be added/said.
  24. This assumes a lot again. Being rational just means being able to use logic and filter shit out; however, logic alone won't tell you what you should do morally in different situations, it can only tell you what you should do once your moral system is already established. If those robots were conscious, then they would automatically have some kind of moral system. From their perspective, the best moral system is the one that maximizes their survival - and that alone will create a lot of conflict. Distributing things equally is not necessarily always the best option for that (many examples could be given to demonstrate this point). They would be capable and smart enough to survive on their own, without any need for help from external sources. Knowing all that, why would they make compromises and lower their chance of survival by letting other parties (robots in this case) have a direct say in their lives? They would agree 1) if they had the exact same morality, one that wasn't only about maximizing self-survival (but why would they agree to a common moral system like that?), or 2) if you assume that in each and every scenario it is beneficial for all of them to always work together. If one of them would have its survival less optimized in the "let's work together" scenario and it calculates that beforehand, why would it go with that option rather than with other ones where it can dominate and maximize its survival better?