Topic: Video Games, Ethics
Choice: noun the right, power, or opportunity to choose; option: The child had no choice about going to school.
Hello everyone, today I'd like to talk about the treatment of women in video games. No, I'm not talking about what they're wearing, I have nothing against that (or rather, I don't want to get into that subject). No, I'd like to talk specifically about relationships in video games.
For some time now, it has been possible to have a romantic sub-plot, or to develop a relationship, in a game which has nothing to do with relationships. Most of these games are role-playing games (RPGs), but certain other games allow this as well. Usually, these romances are pretty bland, almost to the point of feeling like romancing a plaster wall. The medium of video games has a long way to go before it can mimic an actual human relationship.
However, the complexity and depth of video game romance is not what irks me; adding a meaningful second storyline can really enhance the gameplay experience. No, what I would like to address is the question of choice. To illustrate this, let me take the example of a character who has recently become one of my favorites: Skyrim's Serana.
When Dawnguard came out, the fanboy in me was quick to jump on the opportunity to play the expansion, and almost immediately I was charmed by Serana's wit and her wistful personality. The daughter of a powerful vampire lord, and a powerful vampire herself, Serana shares her ideas and opinions with the Dragonborn. She is strong and independent, and yet also displays a lonely interior. It is clear that her designers intended her to be a deep character that players can interact with. I myself very much enjoy having her tag along, sucking the lifeforce out of enemies and sharing aloud her displeasure with the weather.
Like many other characters in the game, she can be asked to marry the player, a proposal she politely declines, mentioning a complicated family history and a dislike for religious temples. I thought this was amazing. Not only was she a unique character, she also wasn't a slave to the player's every whim. This element of "free will" alone raised my respect for the character.
The community did not share my opinion.
After this *ahem* revelation was made to the public, several players complained about not being able to marry the vampiress. In fact, a petition was released to make Serana a marriageable character. At the time of writing this, 4595 people had signed.
Why do I find this disturbing and disgusting? Let us rephrase, shall we? Man falls in love with Woman. Woman states that she is not interested. Man prays to God to change her mind. God destroys Woman's personality and will. Man marries Woman.
See? It's creepy, unhealthy and downright immoral.
One could argue that the person affected is a video game character, and thus is not eligible for this kind of moral analysis. However, just what values would we be promoting if we allowed this kind of behavior? Gamers like to be in control of their environment, true, but at what point should we shatter the barrier between control and morality? If we allow this, then we are saying that it is okay to force unwilling persons into activities they would normally refuse to partake in.
The designers clearly thought of marriage when creating Serana, and they clearly chose not to allow it. Was it to preserve her personality? To make her unique? To give her a semblance of free will? Whatever their reasons, we should respect them. After all, they provided us with a character people clearly love dearly; why should we change her?
Stay frosty, readers,
Snowman
My name is Jo, and I like designing solid software, playing good games, and cuddling adorable animals. Coffee?
Thursday, August 2, 2012
Tuesday, July 10, 2012
When Games Die
Topic: Video Games
To Mourn: verb to grieve or lament for the dead.
Hi everyone, welcome.
Please, have a seat. Some warm coffee or hot chocolate, maybe? Just get comfortable, as we're going to get into some heavy stuff here. Okay, maybe not heavy for you, specifically, and that's fine. As long as you can accept that this affects a number of other people, that's fine. In fact, we're just happy to have you here with us, and I mean that.
The concept of death applied to inanimate objects, and sometimes concepts, is nothing new. You often hear of a TV franchise dying, or of the death of a character in a movie, or the oft-uttered curse "my [insert object here] just died". We accept this, and acknowledge this as being not the literal death of a thing, but rather as a feeling of grief brought forward by the end of something enjoyed and cherished.
I don't think I'm bringing anything new to the world if I talk about video game death. However, I would like to discuss the concept of a video game's lifetime for a moment. The way I see it, a video game goes through stages of "life" in a similar way a person does.
Young games are extremely dynamic and are constantly changing, either through active development (indev) or through their communities. Here, the temporal age of a game is irrelevant; some games, such as UT2004 or Counter-Strike, remain dynamic through their active communities even to this day. In a way, these games always bring something new to any player, no matter how long they've been playing.
Some games, however, are very old (but still very much alive). They do not change, and the community rarely "awakens" from its years-long slumber, but people consistently revisit those games. Oldies, such as Super Mario Bros., fall into this category. Either due to their nostalgic value or their simplicity, some games will remain old forever, continuously revisited by people of all generations. They do not necessarily bring anything new, but people cannot in all honesty say "I will never play that game again".
So how does one really define a video game's death? I, for one, believe that a video game dies when it becomes obsolete for a player. Let us take the example of the Halo franchise. How many people still actively play Halo? Probably quite a few, though more because of its nostalgic value than anything else; it was the original, the beginning of the story. Halo would be an old game by the standards defined above. But what about Halo 2? How many people still played it when the third game came out? For that matter, how many people still played Halo 3 when Halo: Reach came out? Not as many, certainly. In fact, when the "new and improved" version of a game comes out, the community of the old game, in most cases, goes quiet. Sure, some people still play the game, but the enthusiasm is no longer there.
The movie is no longer in theatres. The hype is gone. And all your save files will eventually become stagnant swamps of meaningless 0's and 1's.
That game is dead.
Most gamers have had to face that problem at some point. There comes a time where the game stops being new. The urge to play again, to explore and learn from the world, simply isn't there anymore.
In a way, a game's death is a deeply personal thing. Let us take the example of one of my personal favourites, The Elder Scrolls III: Morrowind. For me, that game is immortal. I don't play anymore, and certainly don't plan to in the near future, but I know that the next time I re-enter Vvardenfell, I will feel at home. Morrowind is still very much alive for me. However, is this true for other gamers? Some people I know believe they're done with this game for good and will probably never play it again; for those people, Morrowind is gone.
Let me tell you a story of my own recent experience with video game death.
I've recently (less than a week ago) discovered Terraria, a Minecraft-like game where you take control of a little 2D sprite and explore a world filled with monsters and wonders. You build your fortress, survive hellish encounters (zombies, Cthulhu's eyes and killer unicorns to name a few) and explore valleys, deserts, jungles and caves in complete freedom. You have to craft your own gear from materials you gathered and use this gear to explore even further reaches of the world. Those who know me well know that this is the kind of game I usually enjoy for a long time.
Recently, just before I started playing, Terraria's developers announced that they would stop developing the game. This means that no new content will be delivered to us, and that's fine. A developer cannot continuously work on the same game forever, and those who were angered by this announcement need to realize that game developers are not subject to each gamer's every whim. Terraria will stop growing, certainly, but it will still be alive, at least for me.
I'm not exactly sure what happened (I've read various conflicting accounts), but at some point Terraria's devs announced another game, Starbound, described as being "Terraria in space, but much, much bigger".
This is not the devs' fault, and I do not blame them. In fact, I encourage this and am looking forward to the new game. But I know that when Starbound comes out (end of Summer 2012), I will stop playing Terraria in favour of its new brethren. This is no one's fault other than mine, but I know that when the new game comes out, the old game will die for me.
And yet, I still enjoy Terraria a lot. I know that in a few months, the place I instantly fell in love with will be replaced by a "new and improved" version and will effectively die, and yet I cannot stop myself from spending time in it. In a very sad way, I am exploring a dying world. And knowing that it is dying, that it will be dead in a few months, makes me sad.
Gamers are fickle and quirky people. We impatiently wait for a game to come out, then complain that it is not perfect in every possible way (knowing well that no game is). We complain about every little bug, and howl in rage if the ending is not to our liking.
But we love our games, and when they die, we mourn.
Stay frosty, everyone.
Yours truly,
Snowman
Tuesday, June 19, 2012
On Comments
Topic: Programming
Comment: noun a note in explanation, expansion, or criticism of a passage in a book, article, or the like; annotation.
Today I'd like to spend a little time talking about the controversial artifacts which are known as comments, and my current stance on commenting.
Every programmer knows what a comment is. For beginning programmers, though, or for people who are interested in programming but don't know anything about it yet: a comment is written text in a code file (the source code) which will not be interpreted as code. It is text which is not part of the program. In most languages, comments are marked by either two slashes (//) or a hash character (#). The following is an example:
int x = 0;
// Setting x to 1
x = 1;

As you can see, we are initializing a variable, x, to the value of 0. Then, we set the value to 1. However, in between we have text saying what we are about to do. That text is clearly not part of the executable code; that text is known as a comment.
Now, the programming community is divided as to the usefulness of comments in code. One side states that comments improve code readability and should be used as often as possible; comments also help new developers understand legacy code much faster, code which may otherwise be extremely confusing to read (confusing code). The other side states that comments are completely useless and can be replaced by good programming: the code should be clear in and of itself, and if comments are needed then the code is bad (bad notation). Additionally, people tend to rely too much on comments and place them in useless places (comment abuse).
Before stating my own opinion, let me show pseudo-code examples that I have encountered of both extremes. First, let us look at an example of confusing code:
int a = Math.sqrt(Math.pow(xValAfterModification, 9));
int b = methodFoo(a);

What is a? What does it represent? What is xValAfterModification, why are we raising it to the 9th power and why are we taking the square root of that? What does methodFoo do and why are we setting b to this? No matter where this code is found, this is extremely messy and hard to understand. Someone would have to spend a long time looking at this to understand what exactly it is doing.
Here is an example of bad notation:
// This variable will store the current product price
int n = 0;

At first, this seems okay. The programmer is simply specifying what the variable will be used for. However, in this case the comment is completely useless. Why doesn't the programmer simply rename the variable n to currentProductPrice? No one would ever confuse something called currentProductPrice with, say, the number of available products. In this case the comment is used instead of good programming practices (specifically, using clear variable names).
Finally, let us look at an example of comment abuse:
// Creating x and setting it to 0
int x = 0;
// Adding 1 to x
x = x + 1;
// Multiplying x by 2
x = x * 2;

Ignoring the obvious uselessness of this code (why didn't they just set x to 2?), the comments in this example simply restate what the next line does, exactly as if someone were reading the code out loud. These comments are useless since they state the obvious.
Now, what is my stance on comments? Well, I sit pretty much in the middle. I believe that code should be easy to understand without the need for comments. However, let us look at the following example:
x = x + 1;
x = x + 1;

Now, why did the programmer increment x twice instead of simply incrementing it once?

// x is a control variable. Due to the
// volatile and concurrent nature of this program,
// we need to increment this variable
// twice instead of just adding
// 2 to it.
x = x + 1;
x = x + 1;

Ah, now the reasoning is clear. Since the program is concurrent, the programmer wants to increment the value twice instead of just once. There might be other questions someone could ask (is x used to track the number of modifications? is x written by another thread?), but the comment adds clarity to the code in a way that the code could not do itself.
As such, I believe that comments should not explain the how of code, but should be used to explain the why of design decisions.
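To make the distinction concrete, here is one way it might look in practice. This is a Python sketch with invented names and an invented scenario, not code from any real project; the point is only that the comment records a reason the code cannot express by itself:

```python
import time

def fetch_with_retry(fetch, retries=3):
    """Call fetch() until it succeeds, retrying on transient errors."""
    for attempt in range(retries):
        try:
            return fetch()
        except IOError:
            if attempt == retries - 1:
                raise  # out of retries; let the caller see the error
            # Why, not how: the (hypothetical) upstream service drops
            # occasional requests under load, so a short exponential
            # backoff is cheaper than surfacing a spurious error.
            time.sleep(2 ** attempt * 0.01)
```

A comment like "call fetch and sleep on failure" would merely restate the code; the one above preserves the design decision for the next reader.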
Now, I'm curious to hear what you think of comments. Have an opinion? Then leave a comment! (Yes I am a shameless sellout.)
Anyways, that's it for today folks. Stay cool,
Snowman
Wednesday, May 30, 2012
A Face Full Of "D'Awww"
Topic: Varia
Appeal: noun the power or ability to attract, interest, amuse, or stimulate the mind or emotions: The game has lost its appeal.
Hello, my dear readers.
I just wanted to post something to indicate that contrary to popular belief, this blog is not dead, it is simply on hiatus segnities. This means that I will resume updating regularly once I quit being lazy.
In the meanwhile, enjoy this moving picture of the world's cutest puppy:
Yours truly,
Snowman
Monday, April 23, 2012
Bostrom's Argument
Topic: Varia
Simulation: noun the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose.
Today I would like to present to you the argument of a certain Nick Bostrom about whether or not we live in a simulation. For those of you who are not aware of this philosophical question, it goes a little like this: how can we be certain, as thinking beings, that we are not living in reality, but rather in a simulated reality that is part of a larger world? That we may, in fact, simply be living in a computer program? If that were the case, then there would be no way of knowing. So which is it more logical to think: that we live in reality, or that we live in a simulation? Dr. Bostrom makes a point of explaining that it is more logical to think the latter, and here is his argument.
Of the following three prospects, only one can be true:
(1) Sufficiently advanced civilizations which can create simulations do not, and cannot, exist;
(2) Sufficiently advanced civilizations which can create simulations would not be interested in doing so (i.e. they would not create simulations);
(3) We are most likely living in a simulation;
Now, before we get started, we need to clarify what it means to be a "sufficiently advanced civilization". To be one such civilization is to have attained the level of technological advancement required to create simulations of entire universes (similar to ours).
If (1) is true, then by definition neither (2) nor (3) can be true (since simulations would effectively not be possible). If (2) is true, then (1) is not true by default, and (3) would not be true as it would not be likely that we are in a simulation since it is not likely that a simulation exists. If (3) is true, then it is clear that both (1) and (2) are false.
Now, what about the premise that all three are false? Well, that is incredibly unlikely, and here is why. Suppose that both (1) and (2) are false. Then there exists a sufficiently advanced civilization which can create simulations, and that civilization is interested in doing so. If that is the case, then that civilization would, in all likelihood, have created such a simulation, in which billions of beings would be simulated. In fact, it is likely that many thousands of simulations would be run. If this civilization is sufficiently advanced (and we assume it is), then there would be a much greater number of simulated universes than "real" universes (of which there is only one). By extension, there would be a much larger number of simulated "minds" than real minds. Therefore, it is very probable that our minds are simulated; in other words, (3) would be true.
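The counting in the paragraph above can be sketched with some toy numbers. The figures below are illustrative assumptions of mine, not values from Bostrom's paper:

```python
def simulated_fraction(num_simulations, minds_per_simulation, real_minds):
    """Fraction of all minds that are simulated, assuming every
    simulation hosts the same number of minds."""
    simulated = num_simulations * minds_per_simulation
    return simulated / (simulated + real_minds)

# Even a modest number of ancestor simulations swamps the one "real"
# population: with 1000 simulations of a 10-billion-mind world, the
# odds that a randomly chosen mind is simulated are 1000/1001.
p = simulated_fraction(num_simulations=1000,
                       minds_per_simulation=10**10,
                       real_minds=10**10)
```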
What about the prospect that all three are false? Well, Dr. Bostrom argues that it is incoherent to think in such a way. I, personally, would argue against that; indeed, what if we are the most advanced civilization ever? What if we are the ones who will father all other simulations? Only time can tell, but it is a very real possibility.
On this note I leave you, dear readers. Stay cool,
- Snowman
Friday, March 30, 2012
Unit Brain Structure
Topic: Programming, AI
Brain: noun the brain as the center of thought, understanding, etc.; mind; intellect.
While this is not the first definition that is thought about when one thinks about the brain, I just wanted to make clear that this is what I intend by the word in the following discussion.
Earlier, I spoke about my personal research into artificial intelligence, and now I would like to speak more about the "brains" behind this.
As I mentioned before, the Demeter Project focuses on creating AI testing environments, as well as the AI themselves. These Units of AI quite obviously need some sort of brain; an area of code where they can come up with answers to questions, debate which action should be conducted next, etc. We humans have our actual biological brain which allows us to think and make decisions based on the environment around us.
So now we need to define a brain for our Unit; the brain itself will have the symbol [X], where X is the name of our Unit (Azure's brain would be denoted [Azure]). So let me propose the following definition for [X]:
[X] = ($, Π, δ, Γ)
Ah, Greek letters, the international symbols to denote pretentiousness. Indeed, the only reason I added Greek letters to my system was because it looked cooler. But each of these letters and symbols mean something, which I will now define.
$ stands for the Unit's Reasoning Module, and is currently the only (partially) implemented part of the brain. It allows the Unit to make decisions based on its environment and come to conclusions such as "the robin is a bird, so it flies" or "a human has hair". It can also come to conclusions such as "this thing has fur, four legs and it meows, so it is probably a cat".
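To give a flavour of what such a module might do, here is a minimal rule-based inference sketch in Python. The rule format and the infer function are my own invention for illustration, not the Demeter Project's actual design:

```python
# Toy forward-chaining inference: derive new facts from simple rules.
RULES = [
    # (required facts, derived fact)
    ({"is_bird"}, "can_fly"),
    ({"has_fur", "four_legs", "meows"}, "is_cat"),
]

def infer(facts):
    """Repeatedly apply every rule whose premises are satisfied,
    until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

Given {"is_bird"}, this derives "can_fly", mirroring the robin example above; a real Reasoning Module would of course need uncertainty and exceptions (penguins are birds too).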
Π refers to the Unit's Memory Management Module (MMM), which would record the "important" events which occurred in the past. I put "important" in quotes because that is one of the biggest challenges of machine memory: by some estimates, we humans have a memory capacity roughly equal to 2.5 million gigabytes, about the equivalent of 300 years of video content. It is (so far) impractical to store this kind of data on a computer, so what do we "filter out" and what do we "leave in"? If we leave in too much, will the AI become "terminally nostalgic"? All unanswered questions.
δ and Γ both refer to the Decision Module of the Unit's brain, and would be responsible of communicating with $ and Π to make decisions based on input from the environment. δ more specifically refers to what I like to call the Frontal Lobe of the unit. When this module is active, it would be akin to a "thought process" in a human being.
I haven't delved too much into Γ, and there is a good reason for it; it is still extremely vague in my mind, and perhaps the only part of the brain implementation which would be tightly-coupled with the implementation of a body. Γ is called the Subconscious Brain, and would be responsible for every decision which would not need to be "thought" about. For example, suppose you have a Unit which wants to go from point A to point B. Let us suppose that this decision has been made by δ. Now, we do not want δ to be occupied with the actual task of moving the Unit's legs (wheels, w/e): we want δ to learn paths, memorize locations, sounds, images. It will be Γ's task to do all the "subconscious" moves; when you're walking, you don't think about your legs, or which muscles you need to use in order to walk. This is what would probably be the most complicated part of the Unit's brain.
Anyway, I think I have said enough today about the brain's structure. Please tell me what you think in the comments. Until then, stay cool!
-Snowman
Brain: noun the brain as the center of thought, understanding, etc.; mind; intellect.
While this is not the first definition that is thought about when one thinks about the brain, I just wanted to make clear that this is what I intend by the word in the following discussion.
Earlier, I spoke about my personal research into artificial intelligence, and now I would like to speak more about the "brains" behind this.
As I mentioned before, the Demeter Project focuses on creating AI testing environments, as well as the AI themselves. These Units of AI quite obviously need some sort of brain; an area of code where they can come up with answers to questions, debate which action should be conducted next, etc. We humans have our actual biological brain which allows us to think and make decisions based on the environment around us.
So now we need to define a brain for our Unit; the brain itself will have the symbol [X], where X is the name of our Unit (Azure's brain would be denoted [Azure]). So let me propose the following definition for [X]:
[X] = ($, Π, δ, Γ)
Ah, Greek letters, the international symbols of pretentiousness. Indeed, the only reason I added Greek letters to my system was that they looked cooler. But each of these symbols means something, which I will now define.
$ stands for the Unit's Reasoning Module, and is currently the only (partially) implemented part of the brain. It allows the Unit to make decisions based on its environment and come to conclusions such as "the robin is a bird, so it flies" or "a human has hair". It can also come to conclusions such as "this thing has fur, four legs and it meows, so it is probably a cat".
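To make that a little more concrete, here is a toy forward-chaining reasoner of the kind $ might contain. To be clear, the rule format and fact names here are my own illustration, not the Demeter Project's actual code:

```python
# A toy forward-chaining reasoner: repeatedly apply rules of the form
# (premises, conclusion) until no new facts can be derived.
# Rule format and fact names are illustrative only.

def infer(facts, rules):
    """Return the closure of `facts` under `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"is-bird"}, "can-fly"),  # the robin is a bird, so it flies
    ({"has-fur", "has-four-legs", "meows"}, "is-probably-a-cat"),
]

print(infer({"is-bird"}, rules))
```

Real reasoning would of course need uncertainty, exceptions (penguins!) and much more, but the basic derive-until-fixpoint loop is the same idea.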
Π refers to the Unit's Memory Management Module (MMM), which would record the "important" events of the past. I put "important" in quotes because that is one of the biggest challenges of machine memory; human memory capacity has been estimated at roughly 2.5 million gigabytes, the equivalent of about 300 years of video content. Storing that much data per Unit is (so far) impractical, so what do we "filter out" and what do we "leave in"? If we leave in too much, will the AI become "terminally nostalgic"? All unanswered questions.
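One possible (entirely hypothetical) filtering policy for Π: keep only the N most "important" events, whatever the importance score ends up meaning. A sketch:

```python
import heapq

# A sketch of one "filter out / leave in" policy for the Memory
# Management Module (Pi): a bounded store that keeps only the N most
# important events. The class and scoring are hypothetical.

class Memory:
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []      # min-heap of (importance, counter, event)
        self._counter = 0    # tie-breaker so entries stay comparable

    def record(self, event, importance):
        self._counter += 1
        entry = (importance, self._counter, event)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        else:
            # push the new entry, then evict the least important one
            heapq.heappushpop(self._heap, entry)

    def events(self):
        return {event for _, _, event in self._heap}

mem = Memory(capacity=2)
mem.record("saw a wall", importance=0.1)
mem.record("met the player", importance=0.9)
mem.record("heard thunder", importance=0.5)
print(mem.events())  # the low-importance wall sighting was filtered out
```

The hard part, naturally, is not the eviction mechanism but deciding what `importance` should be in the first place.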
δ and Γ both refer to the Decision Module of the Unit's brain, and would be responsible for communicating with $ and Π to make decisions based on input from the environment. δ more specifically refers to what I like to call the Frontal Lobe of the Unit. When this module is active, it would be akin to a "thought process" in a human being.
I haven't delved too much into Γ, and there is a good reason for that; it is still extremely vague in my mind, and it is perhaps the only part of the brain implementation which would be tightly coupled with the implementation of a body. Γ is called the Subconscious Brain, and would be responsible for every decision which does not need to be "thought" about. For example, suppose a Unit wants to go from point A to point B, and that this decision has been made by δ. We do not want δ to be occupied with the actual task of moving the Unit's legs (or wheels, or whatever): we want δ to learn paths and memorize locations, sounds and images. It will be Γ's task to perform all the "subconscious" moves; when you're walking, you don't think about your legs, or about which muscles you need to use in order to walk. This will probably be the most complicated part of the Unit's brain.
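Putting the four modules together, the division of labour between δ and Γ could be wired up like this. The class and method names are my own invention to make the split concrete, not an actual implementation:

```python
# A sketch of how the four modules of [X] = ($, Pi, delta, Gamma)
# might be wired together. Names and behaviours are hypothetical.

class Brain:
    def __init__(self, reasoning, memory, frontal_lobe, subconscious):
        self.reasoning = reasoning        # $  : facts and inference
        self.memory = memory              # Pi : recorded events
        self.frontal_lobe = frontal_lobe  # delta: deliberate decisions
        self.subconscious = subconscious  # Gamma: unthought mechanics

    def act(self, percept):
        # delta consults $ and Pi to pick a goal ("go to point B")...
        goal = self.frontal_lobe(self.reasoning, self.memory, percept)
        # ...while Gamma handles the moves delta never thinks about.
        return self.subconscious(goal)

# Hypothetical plug-in behaviours for a walking Unit:
brain = Brain(
    reasoning={"B": "safe"},
    memory=["visited A"],
    frontal_lobe=lambda r, m, p: "go to B" if r.get("B") == "safe" else "stay",
    subconscious=lambda goal: (
        ["lift leg", "swing leg", "plant leg"] if goal == "go to B" else []
    ),
)
print(brain.act("at A"))  # the leg movements delta never had to think about
```

The point of the sketch is only the interface: δ produces goals, Γ turns them into motion, and neither needs to know how the other works.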
Anyway, I think I have said enough today about the brain's structure. Please tell me what you think in the comments. Until then, stay cool!
-Snowman
Tuesday, March 27, 2012
And That's All I Have To Say About That
Topic: Varia
Opinion: noun a personal view, attitude, or appraisal.
I don't like doing this, posting an update only to refer you to another article, but I felt the need to share this with you:
Pourquoi je suis pour la hausse et contre la gratuité, de Lise Ravary
Please note that this article is in French.
Stay cool,
Snowman
Wednesday, March 21, 2012
Keep At It
Topic: Programming, Varia
Expert: noun a person who has special skill or knowledge in some particular field; specialist; authority: a language expert.
People have unrealistic expectations when it comes to programming (and this applies to every other area of life, too). There are books out there with stupid, outrageous titles: Learn X In Y Days, with Y being anything between 3 and 10 and X being almost anything. I used to be a TA for an introductory course at McGill University, and it would aggravate me to no end when students complained after getting a bad grade, claiming that all they "needed" to do to pass the course was "read a 'for dummies' book".
You cannot expect to be an expert in a field the first time you enter it. True, there are some exceptions, but they are incredibly rare. Harold Ramis once said "It takes at least 10 years to get good at anything", and I couldn't agree more. And before you get good, you're going to have to be bad, and that's okay. Everybody starts somewhere.
What's my point? Well, people get discouraged when they're told they're bad at something, and most people quit trying to do it. And this is not limited to programming; learning to play an instrument, or learning to play a sport, or learning any field of study takes a long time.
So people, if you feel bad at something you want to do, just keep at it!
Stay cool,
Snowman
Tuesday, March 20, 2012
Keep Calm and Carry On
Topic: Varia, Riots
Survival: noun the act or fact of surviving, especially under adverse or unusual circumstances.
Living in Quebec, and more specifically in Montreal, has become slightly dangerous these days for students (and people in general) due to the abnormally high number of protesters. Now, I don't want to start preaching about which side is right; the tuition hike is a subject that I (along with all my friends, I'm sure) am sick and tired of hearing about. Instead, I will post a few tips for those of you who might be stuck in the middle of a riot.
Now hold on, says Avid Reader, they are just peaceful protests! They won't turn into riots!
Yeah, just step outside and watch. More and more of these "peaceful protests" are turning to violence and vandalism. I have my own, very strong opinion about these protests and the reasons behind them, but like I mentioned earlier I do not believe it is my place to start debating.
So here are a few things you can do if you are ever stuck in a riot you never wanted to be part of in the first place:
1. Avoid Protests: If you feel like being yelled at and shoved around would effectively ruin your day (how someone can feel angry after being pushed and hit is beyond me), then it might be a good idea to avoid the protests altogether. Obviously this is not always possible, but doing so keeps you out of a position you would rather avoid in the first place.
2. Keep Calm, Keep Your Head Down, and Keep Moving: Riots tend to make people edgy and angry and, sometimes, violent, so keeping calm will keep you from doing something which you would regret. If someone pushes or shoves you, don't respond (as that would just aggravate the situation) and keep walking.
Keeping your head down, shoulders hunched, will not only protect your face, but it will avoid drawing attention to you, thus reducing the risk that you'll be caught in a brawl.
Finally, keep moving. As soon as you stop, you are now basically part of the rioting crowd.
3. Stay Indoors: Well I think this one speaks for itself.
4. Carry Sugary Foods With You: This one may sound a little obscure. When you are in a riot, your adrenaline levels automatically increase, draining your energy reserves faster than normal. If that happens, you may become too tired (especially at the end of the day) to make the right decisions based on your immediate situation. Sugary foods can provide the energy you might temporarily need to leave the riot.
Edit: I was told by my better half that it can take up to an hour for the body to absorb the energy contained in sugar (and that this is about as fast as the human body can gain energy), so it is better to eat before going through a protest area.
5. Stay On The Sidelines: Walk along walls, avoid taking sides in the riot, and try to look as innocent and, for a lack of better term, uncomfortable as possible. Don't film anything, don't take pictures, just keep walking as though you would rather be anywhere else than there (which is probably true). This will help you go along unnoticed by both the crowd and the police.
6. Avoid Getting Hit By Riot Control Chemicals: Again, speaks for itself.
7. Move Away From The Riot: Walk, don't run, as running will attract attention to yourself. Try to avoid moving against a crowd, as you will be pushed and shoved; instead move with the crowd until you can find a side street or a doorway where you can break away. Avoid major roads, as they are more likely to be filled with rioters or police officers. In Montreal, try and take the metro. The metro system is usually unaffected by riots. However, in other cities it is in fact considered a bad idea to use public transportation.
8. Don't Antagonize The Police: Your personal safety is your number one priority, and antagonizing the police force is the number one thing you don't want to do. If they talk to you, listen and answer honestly. Explain that you just want to get away from the riot. In most cases they will point you to a safe area. In the worst case, they will take you as a rioter (see point 9).
Now, these tips are all about avoiding trouble in a riot, but sometimes you cannot avoid being caught up in one. The following 3 points are things to do if you end up in distress.
9. If You Are Mistaken For A Rioter: Do not resist! If you go peacefully you will not be roughed up or hit by the police force. Follow their instructions. Police officers usually only arrest people who attack them or who vandalize the area, so if you "play nice" you will be left alone. If you are arrested, simply explain your situation as calmly as possible. Again, your personal safety is the most important thing, and even though it sucks, you are safe while arrested.
10. If You Are Sprayed With Chemicals: If you are sprayed by tear gas or other chemicals, a simple solution of sodium bicarbonate (baking soda) with water will usually wash off the chemicals. In fact, if you can it could be a good idea to keep a bottle of that solution with you so that you can apply it to affected areas as soon as you are sprayed. This solution does not work on pepper spray. If you are hit by pepper spray, find a water source as quickly as possible and wash it off. Keeping a bottle of water with you also greatly helps.
11. If You Are Stuck In Your Car: First of all, no matter what happens, don't run over rioters, as this will put you in the "oh-so-definitely-screwed" zone with both the rioters and the police, greatly reducing your chances of survival. Remain in your car while the riot continues around you, as your car becomes a shelter from projectiles. Do not attempt to respond if someone hits your vehicle. If, however, your car becomes a target for vandalism (i.e. several people are targeting it) then abandon your vehicle and move away as fast as possible without running. Pulling your shirt over your head can help protect you. Before thinking about the cost of your wrecked car, think about what can happen; cars are regularly torched and smashed in a riot, and if you are stuck in one you can become severely hurt or you may die.
Hopefully none of you will ever need to use any of these tips, but the number of riots we've had so far in 2012 increases the likelihood of an all-out riot breaking out around you.
This is it for today, readers. Stay safe, stay cool.
- Snowman
Monday, March 19, 2012
Promela Verification
Topic: Programming
Experiment: noun a test, trial, or tentative procedure; an act or operation for the purpose of discovering something unknown or of testing a principle, supposition, etc.: a chemical experiment; a teaching experiment; an experiment in living.
Promela is a modelling language used to verify the correctness of concurrent systems, from distributed protocols to intricate algorithms. I am presenting this language to you so that you can conduct a simple experiment.
Most, if not all, programmers are familiar with the mutual exclusion (mutex) problem; if you have multiple concurrently-running systems, how can you ensure correctness of data modification? Here is a simple example:
Suppose your program consists of 3 phases: the non-critical (NC) phase, the wait (W) phase and the critical (C) phase. You can have multiple programs running concurrently, each in its own phase. For example, if you have two programs running, P1 and P2, then at a given instant of the runtime P1 can be in its NC phase while P2 is in its W phase; we will denote this snapshot {1:NC, 2:W}.
We labelled these phases as such to demonstrate the mutex problem: at any given moment, we want at most one program to be in its critical phase. This means that no two (or more) programs are ever in their critical phase at the same time. Using the above notation with only two programs, we want to ensure that the following snapshot is impossible: {1:C, 2:C}.
Various algorithms exist to ensure that mutex is respected and enforced, such as Peterson's 2-Process MUTEX Algorithm or Dekker's Algorithm. This is not news to anyone, and anyone who has had any experience with concurrent systems has probably encountered these.
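To make the phases concrete, here is a sketch of Peterson's 2-process algorithm in plain Python threads, using the NC/W/C labels from above; the variable names and the bookkeeping around the critical section are my own:

```python
import sys
import threading

# Peterson's 2-process mutual-exclusion algorithm, sketched with
# Python threads. The max_in_critical bookkeeping checks that the
# forbidden snapshot {1:C, 2:C} never occurred during this run.

sys.setswitchinterval(1e-4)  # force frequent thread switches

N = 1000                 # iterations per process
flag = [False, False]    # flag[i]: process i wants its C phase
turn = 0                 # which process must wait on a tie
in_critical = 0          # processes currently in their C phase
max_in_critical = 0      # worst case observed; must stay at 1
counter = 0              # the shared data the C phase protects

def process(i):
    global turn, in_critical, max_in_critical, counter
    other = 1 - i
    for _ in range(N):
        # W phase: announce intent, then yield the turn to the other
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass  # busy-wait
        # C phase: only one process should ever be here at a time
        in_critical += 1
        max_in_critical = max(max_in_critical, in_critical)
        counter += 1
        in_critical -= 1
        # back to the NC phase
        flag[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_critical, counter)  # 1 and 2*N if mutual exclusion held
```

Even when a run like this passes, it has only sampled a handful of interleavings; exhaustively checking all of them is precisely the gap a model checker like SPIN closes.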
However, as anyone who has had any experience with concurrent systems can agree, testing these algorithms can be a pain. Enter Promela.
I won't teach you Promela here, as I believe it is a language best learned by experience, but I will tell you about its benefits. I was forced to learn Promela in a course which I did not want to take, and it has proven to be an invaluable tool in algorithm and system verification. While it is certainly not easy to learn and use, it comes close to providing direct proof that your algorithm works, by allowing you to express things which might not be possible in other programming languages, such as atomicity constraints.
(Note for the wise: if you cannot do these things in other languages, then why is it useful to test them in Promela? Well, suppose you develop a theoretical polynomial-time algorithm for the 3-CNF-SAT problem, thereby proving that P = NP, but that algorithm requires some atomicity constraints; C++ cannot really be used to test it, whereas Promela can give you the tools required.)
For those wanting to learn Promela, multiple websites provide everything you need to get started. You will also need a model checker, SPIN (the Simple Promela INterpreter), which additionally provides LTL formula checking via automata-based acceptance.
Oh, and the experiment I mentioned? Just take your favourite algorithm, and try to implement it in Promela!
If you have any questions, please feel free to post them in the comments section. Until then, stay cool readers!
- Snowman
Friday, March 16, 2012
They Can't Take That Away From Me
Topic: Programming
Cat: noun a small domesticated carnivore, Felis domestica or F. catus, bred in a number of varieties.
This post will have nothing to do with cats, but everything to do with Frank Sinatra. See, Mr. Sinatra is what I call my programming buddy, and this post will be about programming buddies.
Almost everyone has one. A stress toy, a piece of lint, a coffee mug, a humorous image, anything which distracts them from their job and at the same time allows them to reflect on the problem at hand. From my experience, anything can be a programming buddy. My coworker has this block puzzle he has yet to solve, and another has a pyramid of beer cans. See, it does not have to be an active object, or something you necessarily need to interact with. It is simply something familiar with which you decorate your work space.
What's the use of a programming buddy? Well programming buddies are part of the office environment, something which when improved can increase creativity, productivity and motivation. For me, it's listening to Frank Sinatra.
If you haven't got one yet, try and find one! But remember to keep it relatively passive so as not to completely drain your productivity; just as your buddy can help you, it can also, if poorly chosen, become an obsession.
That was it for today. Have a nice weekend reader!
- Snowman
Wednesday, March 14, 2012
March 2012 DERP
Topic: DERP
Derp: noun the word that describes a particularly retarded face: a retarded smile, and the eyes pointing in different directions.
Every month I will try to post a DERP I found in order for this blog to not be all full of serious stuffs. This is this month's DERP.
Snowman out!
How To Become A Hacker
Topic: Programming
Hacker: noun 1. A person or thing that hacks. 2. Computer slang. A computer enthusiast.
I found this interesting little article entitled How To Become A Hacker while browsing the other day, and I thought I ought to share it with you guys. One of the most important things they talk about (in the introduction! convenient!) is the difference between a hacker and a cracker. Why do I bring this forward? Well, it annoys me when people mislabel crackers as hackers, and vice-versa.
Want a handy tip to remember the difference? You don't hack a safe.
Snowman out!
Thursday, February 23, 2012
On Our Responsibility Towards Pets
Topic: Ethics
Abandon: verb (used with object) to leave completely and finally; forsake utterly; desert: to abandon one's farm; to abandon a child; to abandon a sinking ship.
Although this blog is mostly related to programming, there will be times when I decide to talk about issues that are important to me; after all, this blog is mine, no? One of these issues is the large number of people abandoning their newly-purchased pets after Christmas.
For those of you who have not heard of the issue, it goes a little something like this: Mr. and Mrs. Doe see an adorable puppy/kitty/birdy/etc. at the pet store and think, "Aww shucks, this would be perfect for our son/daughter!", so they buy said animal for the child on a whim. Christmas comes along, and the young Doe children are amazed to see their new pet. A month goes by, and Mr. and Mrs. Doe realize that having a pet takes a lot of work and costs money, work they did not expect and money they did not plan to spend. So, thinking they're doing the right thing, they drop the new pet off at an animal shelter.
That doesn't seem too bad, and granted, it's better than simply abandoning them by the side of the road. However, hundreds, even thousands of families across North America (I don't even want to think about the worldwide numbers right now) find themselves in this exact same situation. The result is that animal shelters become overcrowded, and many of those surrendered pets have to be put down for the shelters to keep operating.
Now, I'm not saying I have a clear-cut solution to this problem, but what I do have are a few tips which I hope people will pass around and which will, hopefully, reduce the scale of this problem.
Tip 1 - Think Before You Buy
People think before having a child. They think about the cost, the time they'll need, and so on. Okay, some people don't, but most do. People should make an equally informed decision before buying a child a pet. What I suggest is that Mr. and Mrs. Doe ask the pet shop staff about the responsibilities involved, look at food prices, and wait a week before buying their pet. What if there's less than a week before Christmas? Don't buy it. Take your time, and think before you decide to buy a pet.
Tip 2 - Why Buy When You Can Adopt?
Sure, animals found in animal shelters can be a bit dirtier and a bit older than what you would find in a pet store, but animals can be cleaned, and your children will love them just the same; maybe even more when you tell them that you saved their new friend from homelessness. Animals in shelters will cost you less than buying at a pet store, and some of them even have all their shots already. Adopting an animal from a shelter is a win-win scenario!
Tip 3 - Shelters Are Good, Families Are Awesome
Before looking for a shelter to drop off your pet, try to find a nice family who would want it. There are a lot of people out there who would take in a homeless animal but don't necessarily have the money to buy one. I'm speaking from experience here. We used to have a dog at home. She was our first, and we had no idea how to raise her. Eventually, it became too much for us, so we had to give her up. Instead of looking for a shelter, we asked around to see who would take her, and eventually we found a nice family with a big yard where she could run around and be happy. Giving that dog up was one of the hardest things we ever had to do, and it still brings tears to my eyes to remember it, but she is now happy with loving owners who can really take care of her. And that, in the end, is the biggest gift we could give her: a chance to be happier than she was with us.
Now, I'm not crazy; I know only a handful of people, if any, will actually read this. But if, after reading this, at least one person can say "Geez, this guy is right, let's think before buying our pet." or "Wow, maybe someone else can take care of our pet...", if one animal can be kept in a loving family instead of being sent to the shelter because of what I wrote, then this post will have done its job.
TTFN,
Snowman
Saturday, February 18, 2012
Java vs C++ vs Whatever
Topic: Programming
Pretentious: adjective characterized by assumption of dignity or importance.
I want to make this clear early on, because I do not want to start a debate or have to defend any language choice I make in the future: I do not think any language is "superior" to another. Period. I hold the strong opinion that a good programmer can accomplish whatever they want, or whatever they need, using only the tools provided to them. And in situations where one language IS better than another for a particular task, just use that language!
I am saying this because I code mostly in Java. I do not do it because I think it is superior, faster, stronger, etc. I do it because I like Java. So with that in mind, I will not reply to comments about whether my choice of language is good or not; I'm sure many other programmers and I are tired of this debate.
Now that that's out of the way, let me get back to my coffee...
TTFN,
Snowman
Wednesday, February 15, 2012
Say hi to Azure
Topic: Programming, AI
Artificial Intelligence: noun the capacity of a computer to perform operations analogous to learning and decision making in humans, as by an expert system, a program for CAD or CAM, or a program for the perception and recognition of shapes in computer vision systems.
Naturally one of the subjects I would like to discuss here is the research I do, namely in the domain of artificial intelligence. It is in that light that I would like to introduce you to Azure.
Azure is a baby AI (currently more of a foetus than an actual baby) that I am developing. It (or she, if you prefer) is part of a project I've been working on for a while now, the Demeter Project, whose purpose is to create controlled testing environments in which AIs (which I call Units) can live, learn and thrive.
Azure is what I call an Asimov Unit, which is to say a basic AI Unit; there is nothing "special" about it (in contrast with Twin, Nagato, Hive and Collective Units, which I will discuss in later posts). In fact, an AU is, in theory, the Unit which most resembles a human: an AU consists of one body and one brain, while a Twin Unit has two bodies and two PSI-linked brains (which, again, I will discuss later).
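Just to picture the taxonomy, here is a rough sketch in Java. To be clear, every class name here (Unit, Body, Brain, AsimovUnit, TwinUnit) is my own illustration of the idea, not the actual Demeter Project code:

```java
import java.util.List;

class Body {}
class Brain {}

// Illustrative only: a Unit is some collection of bodies and brains.
abstract class Unit {
    abstract List<Body> bodies();
    abstract List<Brain> brains();
}

// An Asimov Unit: one body, one brain, the Unit closest to a human.
class AsimovUnit extends Unit {
    List<Body> bodies() { return List.of(new Body()); }
    List<Brain> brains() { return List.of(new Brain()); }
}

// A Twin Unit: two bodies, two (linked) brains.
class TwinUnit extends Unit {
    List<Body> bodies() { return List.of(new Body(), new Body()); }
    List<Brain> brains() { return List.of(new Brain(), new Brain()); }
}
```

Same interface, different body/brain counts; that is the whole distinction between the Unit types as described above.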
Anyways, I just wanted to introduce Azure to you and talk a bit about the AI research I am doing. Now, I have to get back to work.
TTFN,
Snowman
Tuesday, February 14, 2012
Assertion Failed!
Topic: Varia
Assertion: noun. a positive statement or declaration, often without support or reason: a mere assertion; an unwarranted assertion.
In programming, an assertion is an expression we want to always be true. In fact, usually wherever there is a piece of code of the form "assert(x)", if x is ever false the program will halt at that point, allowing a programmer to examine the code and see what went wrong.
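To make that concrete, here is a minimal sketch in Java (the helper and its name are just my own illustration). One gotcha: Java assertions are disabled by default, and only fire when the JVM is run with the `-ea` flag:

```java
public class AssertDemo {
    // Hypothetical helper: returns the index of the largest element.
    static int argMax(int[] xs) {
        // We want this to always be true; if it isn't, halt right here.
        assert xs != null && xs.length > 0 : "argMax needs a non-empty array";
        int best = 0;
        for (int i = 1; i < xs.length; i++) {
            if (xs[i] > xs[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(argMax(new int[]{3, 9, 4})); // prints 1 (the index of 9)
        // With `java -ea AssertDemo`, argMax(new int[]{}) would
        // throw an AssertionError and stop the program on the spot.
    }
}
```

The assertion documents what the code assumes, and (when enabled) stops execution at the exact point where the assumption broke.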
That is where this blog comes in. I'm not an expert programmer (or maybe I am but I don't know it), and I won't pretend to be. I won't pretend to be better than anyone. I won't try to push my beliefs on you, and the code I post might be wrong sometimes. This blog is simply a place for me to pause and write down my own little opinions: things I found interesting, or things I want opinions about.
For those who want to get involved in programming but feel intimidated by online blogs, this is a good place to start reading.
I don't see this blog as a classroom where I will preach my beliefs. Instead, I see it more as a small café where people can sit around while I rant away, opening up discussion. That being said, the topics here will mostly have to do with programming, but may diverge into other computer science-y stuff.
The title of this blog actually refers to a piece of obscure code I ran into. The subroutine in question was never used and had a single line: "assert(FALSE)". This means that any time that subroutine ran, the code would crash. I almost felt sad for that subroutine (before I deleted it): it was never called, and could only ever cause trouble. Of all the funny or depressing things I've seen devs (developers) write over the few years I've been programming, this one stands out.
Anyways, I've ranted for long enough. Next post: who am I? Perhaps.
Peace out,
Snowman