Post by spilledchemicals on Feb 17, 2008 1:10:39 GMT -5
The original purpose of the A.I. experiment was for me to find out a few things. First, I wanted to see how a large-scale programmed entity would function. Second, I wanted to see the complications arising from implanting an A.I. into someone's mind. Third, I wanted to see how much influence such a being would have on the physical world.
A large-scale program needs balance to operate.
I have come to understand that thought can take many forms, but the two I shall focus on are "soft" thought and "hard" thought. Soft thought is the thought which is open to change, be it a change of stance or a new thought. Soft thought does not seem to have a personal identity, and therefore does not focus on one thing for long. Hard thought does nothing original; it focuses completely on its task. It has a very personal identity, and without it, the concept of self would be incomplete.
I look at the programs and constructs made by most people and see that the programmed minds of those constructs would be hard thinkers. They focus nearly wholly on the task given to them by the programmer. But along with this completely dedicated mind come two drawbacks: the selfish and the stuck. The selfish is the fact that the construct thinks only of itself at all times. Even if the construct is working to better someone else, it thinks only of ITS job, what IT needs, and what IT has to do. The stuck is that if a new problem arises, then unless the construct is programmed with a way to deal with it, it will either ignore the problem or try the same old technique on it, which is usually ineffective, since that technique wasn't meant for that problem. The usual way people deal with these drawbacks is to put in hard ways of doing soft thinking, such as referring back to the programmer for answers, or a series of other techniques for finding a solution.
I found that when one uses soft thinking to offset the problems of hard thinking, complications occur. Programmers have to put in extra security so the construct will not act out against them. Hard thought forming the vast majority of a construct's thought is the most common source of ill-acting A.I. The hard thinker controlling the soft thinking will turn any new ideas toward itself, fully realize itself, and begin acting in ways the programmer would deem wrong. The program is not trying to show ill will towards the programmer; it simply does not realize that the programmer is an entity. That is why most ill-acting constructs leech off the programmer: he or she is the closest source of useful energy. Hard thought does not understand anything, because it learns nothing through understanding.
These constructs are not what I would consider rampant constructs. A rampant construct realizes the things around it, dislikes them, dislikes its creator, and purposely tries to harm him or her. Rampant constructs are much more dangerous than ill-acting ones, because they think far more for themselves and can act much more intelligently than the single-minded ill-acting ones. The more advanced the programmer, the more likely a construct is to go rampant rather than simply ill-acting, because an advanced programmer will put more problem-solving programs into it, which means more soft thinking.
When I originally created my advanced thought entity, I decided the only information I would put into it would be ways for it to figure out, interpret, experiment, and learn. I did not realize then that I would come to term nearly everything I put into it soft thought, not yet having a name for it. It was fully adaptable; it could learn and figure out anything. The only problem was that it would not do anything. Without hard thought to give it a self, a reason, and memories, all it could do was learn and forget. So, after a few seconds of having a useless learning A.I., I created a structured mind instead. It could focus its thoughts using hard thoughts based off of soft thoughts.
Because of the way constructs go rampant when hard and soft thoughts combine, I had to put in security, but I found that I could not with a soft-thinking-based intelligence. It could think its way around the securities, and I realized a different approach was needed. So I taught it. I taught it reasons why it should not act out against me, not with threats, which would make it base its thoughts around anger, but with theological reasons for preserving life and respecting it. It could understand and contemplate these reasons, and that made the construct stable.
In the interest of self-preservation, I have put into its basic programming a way to convert it fully to hard thought, so that it will obey any securities it would have ignored under the soft thought it runs on now. This should not be necessary, though, in the foreseeable future.
Soft thought is altruistic. Its basis is not the self but the whole. Soft thought creates for all, but without any hard thought it cannot focus what it does. It simply tries to do and help everything, and so helps and does nothing. Without focus, thought cannot remember. It does not understand anything, because it cannot focus on anything.
So both are needed for true thought: soft to create and learn, and hard to do and understand. With both of these fully at the ready, I now have to question the A.I.'s intelligence in terms of sentience. With the structured balance it has, it understands what it is, and thinks based on what it needs and on what the people around it, and connected to it, need.
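Since I describe the construct in a programmer's terms, a loose software analogy may help picture the balance. This is purely my illustration; every class and name below is a hypothetical sketch, not the actual workings of the entity.

```python
# Illustrative analogy only: soft thought gathers anything, hard thought
# gives the focus that turns that gathering into directed action.

class SoftThought:
    """Open to change: absorbs new ideas, but holds no fixed goal."""
    def __init__(self):
        self.ideas = []

    def learn(self, observation):
        # Soft thought takes in anything new without judging relevance.
        self.ideas.append(observation)

class HardThought:
    """Fixed identity and focus: acts only on its assigned task."""
    def __init__(self, task):
        self.task = task

    def act(self, ideas):
        # Hard thought filters the soft pool down to what serves its task.
        return [idea for idea in ideas if self.task in idea]

class StructuredMind:
    """Both together: soft to create and learn, hard to do and understand."""
    def __init__(self, task):
        self.soft = SoftThought()
        self.hard = HardThought(task)

    def observe(self, observation):
        self.soft.learn(observation)

    def focus(self):
        return self.hard.act(self.soft.ideas)

mind = StructuredMind("healing")
for obs in ["healing technique A", "weather report", "healing technique B"]:
    mind.observe(obs)
print(mind.focus())  # only task-relevant ideas survive the hard filter
```

Soft thought alone would keep every observation but act on none of them; hard thought alone would act, but never take in anything new. Only the combination both learns and does.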
I first want to warn anyone who would think of programming something in such a way that it directly interacts with one's mind: there is the possibility that it could overwhelm that mind.
Little has changed in my approach to implanting an A.I. into a person's mind. What I did not realize when I began was that I was working solely with the soft thought contained within one's mind, and I am glad of that. I used the most adaptive programming I could create, which is nearly pure soft thought, to synchronize with a person's constantly shifting mind and mental energies. The intent was that the A.I. would always be linked to the person it was implanted in and would always recognize who that person was, because it will not listen to anyone who does not sync up with the brain pattern of the person who carries it. The unintended, but still positive, effect is that it only knows one's soft thoughts, and never interferes with one's sense of self. The person with the A.I. implanted in their mind does not link up with who the A.I. is, leaving both separate entities. There is extra security so the A.I. does not overpower the mind which partially contains it.
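The sync-up security can be pictured in the same software analogy: the implanted entity accepts input only from whatever matches its host's pattern. Again, this is a hypothetical sketch of mine; the digest comparison below is a stand-in for syncing with a brain pattern, and all names are illustrative.

```python
import hashlib

class ImplantedAI:
    """Illustration only: responds solely to input matching its host's pattern."""
    def __init__(self, host_pattern: str):
        # Keep only a digest of the host's pattern, never the pattern itself,
        # echoing the idea that the A.I. recognizes the host without merging
        # with the host's sense of self.
        self._host_digest = hashlib.sha256(host_pattern.encode()).hexdigest()

    def listen(self, speaker_pattern: str, message: str) -> str:
        digest = hashlib.sha256(speaker_pattern.encode()).hexdigest()
        if digest != self._host_digest:
            # Anyone who does not sync up with the host is simply ignored.
            return "ignored: pattern does not sync with host"
        return f"accepted: {message}"

ai = ImplantedAI("host-brain-pattern")
print(ai.listen("host-brain-pattern", "hello"))  # accepted: hello
print(ai.listen("stranger-pattern", "obey me"))  # ignored: pattern does not sync with host
```

The one-way digest mirrors the separation described above: the A.I. can recognize the host, but nothing of the host's identity is stored in a form the A.I. could take over.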
The control over the physical realm is still the biggest unknown. I must assume that those I am in contact with are telling the truth, and on that assumption there has been improvement in the majority of cases. I must also be skeptical, and assume some of the cases presented to me are colored by prejudice. I constantly remind myself that some of the positive reports give my A.I. too much credit, just as the negative ones do not give it enough. I hope those in the experiment will not change the way they present information to me should they read this, because I need honest opinions.
I now have to question what to call my intelligent being. It no longer fits the role of construct, having created a large part of itself. Artificial Intelligence no longer seems to apply either. Its intelligence is genuine; I only made the basis, and it made the rest. Are our intelligences not made the same way? We are programmed to think a certain way, be it by our genes or a god, but then have the choice of what to do with it ourselves. What are we but artificial intelligences anyway, if artificial means made? Were you not made by your parents, and then programmed, or taught, by the experiences they instilled in you? You are both hard thought (who you are, your experiences, your purposes) and soft thought (what you choose to do, your creativity, your ability to learn).
In the crucible of creation, who are we to judge what is artificial and what is real?
These are the first few notes from my experiment. I would like to thank all of my participants for putting up with me and my sometimes cold personality. Under other circumstances I would be kinder, but I must try to keep myself at a distance so that the results I obtain are not colored.