Tuesday, June 11, 2013

Out With The Old...

Topics: Programming, Life

Lesson: noun something to be learned or studied: the lessons of the past.

Today I learned that the very basic things you learn in school never cease to be relevant. For example, I had a missing import statement. I remember telling my students to always check for those things, yet I spent 15 minutes tracking down exactly that: a missing import.
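For anyone who hasn't been bitten by this yet, here is a minimal sketch of what the failure looks like (in Java; the exact wording of the error depends on your compiler):

// Without the two imports below, javac refuses to compile,
// reporting "error: cannot find symbol" on the lines using List and ArrayList.
import java.util.ArrayList;
import java.util.List;

public class ImportLesson {
    public static void main(String[] args) {
        List<String> lessons = new ArrayList<>();
        lessons.add("Always check your imports.");
        System.out.println(lessons);
    }
}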

Never be too quick to discard lessons you learned years ago.

That's all for now.

Stay frosty,
Snowman

Monday, April 22, 2013

A New Kind Of Game...

Topics: Programming, Video Games

Adventure: noun An unusual and exciting, typically hazardous, experience or activity.

Those who know me very well know that I have a great passion for video games, programming and storytelling. Therefore, it is no great secret that I've been trying to reconcile these three activities. While it would be pretty obvious to simply make a video game in order to convey a story, anyone who has dabbled in that medium knows it is no easy task: video games are still not quite accepted as a serious art form, and telling a story while keeping the player interested is a delicate balancing act, though it has been done in the past.

The text-based adventure has been of interest to me lately. Most budding programmers with an interest in video games and a pinch of creativity dabble in this guilty pleasure of generally bad design and simplified game mechanics. Many historically important games were text-based adventures, and some are still played today. Other games take advantage of the graphical simplicity to create complicated gameplay.

I remember the first game I created. It was based on a fantasy setting I've been working on since June 2003 (and I just realized that the setting in question, along with the multiverse I built around it, will be ten years old in a little over a month. Holy shit.) and it consisted mostly of limited exploration and combat, with a small engine to procedurally generate weapons and armour.

The idea of creating a text-based video game where I could tell my stories was abandoned for a time. Recently, however, it has been coming back. I have been wondering not only how to make a deep game story, but also how to make it accessible to the general public. I would like the engine to be available to other people as well, so that they can create their own stories without too much hassle.

I am still designing this system, which will be an interpreter for various "story" files. The goal is to create an easy-to-use system that lets people who are not programmers build their own narrative-based video games.
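To make the idea a little more concrete, here is a minimal sketch of what such an interpreter could look like. Everything here is hypothetical: the story structure, the class names, and the hard-coded passages are just my illustration of the concept, not the actual design.

import java.util.List;
import java.util.Map;
import java.util.Scanner;

public class StoryInterpreter {
    // A story is a set of passages; each passage has some text and
    // a list of choices leading to other passages.
    record Choice(String label, String target) {}
    record Passage(String text, List<Choice> choices) {}

    public static void main(String[] args) {
        // A tiny story, hard-coded here; the real system would parse
        // this from a "story" file instead.
        Map<String, Passage> story = Map.of(
            "start", new Passage("You stand at a crossroads.",
                List.of(new Choice("Go north", "forest"),
                        new Choice("Go south", "village"))),
            "forest", new Passage("The trees close in around you. The end.", List.of()),
            "village", new Passage("You reach the village at dusk. The end.", List.of()));

        Scanner in = new Scanner(System.in);
        Passage current = story.get("start");
        while (true) {
            System.out.println(current.text());
            if (current.choices().isEmpty()) break; // terminal passage
            for (int i = 0; i < current.choices().size(); i++)
                System.out.println((i + 1) + ". " + current.choices().get(i).label());
            int pick = in.nextInt() - 1; // assumes valid input, for brevity
            current = story.get(current.choices().get(pick).target());
        }
    }
}

What do you think?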

Stay frosty,
Snowman

Friday, March 1, 2013

On Learning

Topics: Learning, Programming

Learning: noun the acquisition of knowledge or skills through experience, practice, or study, or by being taught.

This post is for all those struggling, fledgling programmers out there worried about their lack of experience.

Throughout my years as a programmer (which are still few compared to those of my more experienced peers), I cannot say that I have learned many programming languages. At this time, I wouldn't be able to program myself out of a C++ paper bag, Perl sounds like something you put on a necklace, and I still have no idea how to Python myself a program.

Keeping this in mind, what, I ask you, is my worth as a programmer?

To the untrained eye, my economic worth would seem quite low: my lack of skills and experience might make me cheaper than an experienced developer, but my contribution would obviously not be as great.

Let us digress for a moment. In this extremely interesting article, Matt Gemmell discusses learning and problem solving, and argues that the true qualification of a software developer (or of a practitioner in any domain, really) is how willing that person is to learn something new.

Let's get back to the subject at hand: the economic worth of an inexperienced programmer. If we believe what Mr. Gemmell says, then the worth of a person (in a professional environment) is directly related to their willingness and desire to learn. Notice how I didn't mention anything about the number of languages someone speaks, or the number of years of experience they have.

My message to everyone worried about their lack of experience, in any domain, is to cultivate a desire to learn new things. It is quite easy to learn in school, when you are forced to do so, but it is significantly harder once you graduate and leave academia. At that moment, your personal growth becomes entirely your own responsibility, and sadly many people use that as an excuse to stop bettering themselves. But as long as you maintain a desire to learn and to grow, you will notice remarkable growth in your own abilities.

After all, we only stop learning once we think we know everything.

Stay frosty,
Snowman

Tuesday, June 19, 2012

On Comments

Topic: Programming

Comment: noun a note in explanation, expansion, or criticism of a passage in a book, article, or the like; annotation.

Today I'd like to spend a little time talking about the controversial artifacts which are known as comments, and my current stance on commenting.

Every programmer knows what a comment is. For beginning programmers, or people who are interested in programming but don't know anything about it yet, a comment is text written in a code file (the source code) which will not be interpreted as code; it is simply text which is not part of the program. In most languages, comments are marked by two slashes (//) or a hash character (#). The following is an example:
int x = 0;
// Setting x to 1
x = 1;
As you can see, we initialize a variable, x, to the value 0, and then set it to 1. In between, we have text saying what we are about to do. That text is clearly not part of the executable code; it is known as a comment.

Now, the programming community is divided as to the usefulness of comments. One side states that comments improve code readability and should be used as often as possible; they also help new developers understand legacy code much faster, code which may otherwise be extremely confusing to read (confusing code). The other side states that comments are unnecessary and can be replaced by good programming: the code should be clear in and of itself, and if comments are needed, then the code is bad (bad notation). Additionally, people tend to rely too much on comments and place them where they serve no purpose (comment abuse).

Before stating my own opinion, let me show pseudo-code examples of both extremes, based on code I have encountered. First, let us look at an example of confusing code:
double a = Math.sqrt(Math.pow(xValAfterModification, 9));
double b = methodFoo(a);
What is a? What does it represent? What is xValAfterModification, why are we raising it to the 9th power, and why are we taking the square root of that? What does methodFoo do, and why are we assigning its result to b? No matter where this code is found, it is extremely messy and hard to understand. Someone would have to stare at it for a long time to figure out what exactly it is doing.

Here is an example of bad notation:
// This variable will store the current product price
int n = 0;
At first, this seems okay. The programmer is simply specifying what the variable will be used for. However, in this case the comment is completely useless. Why doesn't the programmer simply rename the variable n to currentProductPrice? No one would ever confuse something called currentProductPrice with, say, the number of available products. Here, the comment is used as a substitute for good programming practice (specifically, clear variable names).

Finally, let us look at an example of comment abuse:
// Creating x and setting it to 0
int x = 0;
// Adding 1 to x
x = x + 1;
// Multiplying x by 2
x = x * 2;
Ignoring the obvious uselessness of this code (why didn't they just set x to 2?), the comments in this example merely state what the next line does, exactly as if someone were reading the code out loud. These comments are useless because they state the obvious.

Now, what is my stance on comments? Well, I sit pretty much in the middle. I believe that code should be easy to understand without the need for comments. However, let us look at the following example:
x = x + 1;
x = x + 1;
Now, why did the programmer increment x twice instead of simply incrementing it only once?
// x is a control variable. Due to the
// volatile and concurrent nature of this program,
// we need to increment this variable
// twice instead of just adding
// 2 to it.
x = x + 1;
x = x + 1;
Ah, now the reasoning is clear. Since the program is concurrent, the programmer wants to increment the value twice instead of adding 2 once. While there might be other questions someone could ask (is x used to track the number of modifications? is x written by another thread?), the comment adds clarity to the code in a way that the code could not do itself.

As such, I believe that comments should not explain the how of code, but should be used to explain the why of design decisions.

Now, I'm curious to hear what you think of comments. Have an opinion? Then leave a comment! (Yes I am a shameless sellout.)

Anyways, that's it for today folks. Stay cool,
Snowman

Friday, March 30, 2012

Unit Brain Structure

Topics: Programming, AI

Brain: noun the brain as the center of thought, understanding, etc.; mind; intellect.

While this is not the first definition that comes to mind when one thinks about the brain, I just wanted to make clear that this is what I mean by the word in the following discussion.

Earlier, I spoke about my personal research into artificial intelligence, and now I would like to speak more about the "brains" behind this.

As I mentioned before, the Demeter Project focuses on creating AI testing environments, as well as the AIs themselves. These Units of AI quite obviously need some sort of brain: an area of code where they can come up with answers to questions, debate which action should be conducted next, and so on. We humans have our biological brain, which allows us to think and make decisions based on the environment around us.

So now we need to define a brain for our Unit; the brain itself will have the symbol [X], where X is the name of our Unit (Azure's brain would be denoted [Azure]). Let me propose the following definition for [X]:
[X] = ($, Π, δ, Γ)

Ah, Greek letters, the international symbols of pretentiousness. Indeed, the only reason I added Greek letters to my system was that they looked cooler. But each of these symbols means something, which I will now define.

$ stands for the Unit's Reasoning Module, and is currently the only (partially) implemented part of the brain. It allows the Unit to make decisions based on its environment and come to conclusions such as "the robin is a bird, so it flies" or "a human has hair". It can also arrive at conclusions such as "this thing has fur and four legs and it meows, so it is probably a cat".

Π refers to the Unit's Memory Management Module (MMM), which would record the "important" events of the past. I put "important" in quotes because that is one of the biggest challenges of machine memory: we humans have a memory capacity roughly equal to 2.5 million gigabytes, the equivalent of about 300 years of video content. It is (so far) impractical to store this kind of data on a computer, so what do we "filter out" and what do we "leave in"? If we leave in too much, will the AI become "terminally nostalgic"? All unanswered questions.
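(As a sanity check on those figures, assuming roughly 1 GB per hour of video: 2,500,000 GB ≈ 2,500,000 hours ≈ 285 years of footage, which matches the ~300-year estimate.)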

δ and Γ both refer to the Decision Module of the Unit's brain, and would be responsible for communicating with $ and Π to make decisions based on input from the environment. δ more specifically refers to what I like to call the Frontal Lobe of the Unit. When this module is active, it would be akin to a "thought process" in a human being.

I haven't delved too much into Γ, and there is a good reason for it: it is still extremely vague in my mind, and it is perhaps the only part of the brain implementation which would be tightly coupled with the implementation of a body. Γ is called the Subconscious Brain, and would be responsible for every decision which does not need to be "thought" about. For example, suppose you have a Unit which wants to go from point A to point B, and that this decision has been made by δ. We do not want δ to be occupied with the actual task of moving the Unit's legs (wheels, whatever): we want δ to learn paths and memorize locations, sounds, and images. It will be Γ's task to handle all the "subconscious" movement; when you're walking, you don't think about your legs, or which muscles you need to use in order to walk. This would probably be the most complicated part of the Unit's brain.
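To make the structure a little more concrete, here is a minimal sketch of [X] as Java interfaces. Every name and method signature below is just my illustration of the four modules, not the Demeter Project's actual design:

import java.util.List;

interface ReasoningModule {               // $: draws conclusions from known facts
    String infer(String observation);     // e.g. "it has fur and meows" -> "probably a cat"
}

interface MemoryModule {                  // Π: records the "important" past events
    void remember(String event);
    List<String> recall(String cue);
}

interface FrontalLobe {                   // δ: conscious, deliberate decisions
    String decide(String goal);           // e.g. "reach point B" -> "take the north path"
}

interface SubconsciousModule {            // Γ: actions that need no "thought"
    void perform(String lowLevelAction);  // e.g. actually moving the legs (wheels, whatever)
}

// The brain [X] = ($, Π, δ, Γ) simply wires the four modules together.
class UnitBrain {
    final ReasoningModule reasoning;        // $
    final MemoryModule memory;              // Π
    final FrontalLobe frontalLobe;          // δ
    final SubconsciousModule subconscious;  // Γ

    UnitBrain(ReasoningModule r, MemoryModule m, FrontalLobe f, SubconsciousModule s) {
        reasoning = r;
        memory = m;
        frontalLobe = f;
        subconscious = s;
    }
}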

Anyway, I think I have said enough today about the brain's structure. Please tell me what you think in the comments. Until then, stay cool!
-Snowman

Wednesday, March 21, 2012

Keep At It

Topics: Programming, Varia

Expert: noun a person who has special skill or knowledge in some particular field; specialist; authority: a language expert.

People have unrealistic expectations when it comes to programming (and this applies to every other area of life as well). There are books out there with stupid, outrageous titles: Learn X In Y Days, with Y being anything between 3 and 10 and X being almost anything. I used to be a TA for an introductory course at McGill University, and it would aggravate me to no end when students complained after getting a bad grade, stating that all they "needed" to do to pass the course was "read a 'for dummies' book".

You cannot expect to be an expert in a field the first time you enter it. True, there are some exceptions, but they are incredibly rare. Harold Ramis once said, "It takes at least 10 years to get good at anything", and I couldn't agree more. And before you get good, you're going to have to be bad, and that's okay. Everybody starts somewhere.

What's my point? Well, people get discouraged when they're told they're bad at something, and most quit trying. And this is not limited to programming: learning to play an instrument, learning a sport, or learning any field of study takes a long time.

So people, if you feel you're bad at something you want to do, just keep at it!

Stay cool,
Snowman

Monday, March 19, 2012

Promela Verification

Topic: Programming

Experiment: noun a test, trial, or tentative procedure; an act or operation for the purpose of discovering something unknown or of testing a principle, supposition, etc.: a chemical experiment; a teaching experiment; an experiment in living.

Promela is a verification modelling language, used to check the correctness of systems ranging from distributed protocols to complex algorithms. I am presenting it to you so that you can conduct a simple experiment.

Most, if not all, programmers are familiar with the mutual exclusion (mutex) problem: if you have multiple concurrently running processes, how can you ensure that shared data is modified correctly? Here is a simple example:

Suppose your program consists of 3 phases: the non-critical (NC) phase, the wait (W) phase and the critical (C) phase. You can have multiple programs running concurrently, each in its own phase. For example, if you have two programs running, P1 and P2, then at a given instant of the runtime, P1 can be in its NC phase while P2 is in its W phase; we will denote this snapshot {1:NC, 2:W}.

We labelled these phases as such to demonstrate the mutex problem: at any given moment, we want at most one program to be in its critical phase. This means that no two (or more) programs may be in their critical phases at the same time. Using the above notation with only two programs, we want to ensure that the following snapshot is impossible: {1:C, 2:C}.

Various algorithms exist to ensure that mutual exclusion is respected and enforced, such as Peterson's 2-process algorithm or Dekker's algorithm. This is not news; anyone who has had any experience with concurrent systems has probably encountered them.
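As a refresher, here is a minimal sketch of Peterson's algorithm for two processes, written in Java (my own rendering; the volatile fields stand in for the ordering guarantees that Promela's semantics give you directly):

class PetersonLock {
    // Each process announces that it wants to enter its C phase...
    private volatile boolean wants0 = false, wants1 = false;
    // ...and politely yields the turn to the other process.
    private volatile int turn = 0;

    // id is 0 or 1. The busy-wait loop is the W phase.
    void lock(int id) {
        if (id == 0) {
            wants0 = true;
            turn = 1;
            while (wants1 && turn == 1) { } // W phase: wait our turn
        } else {
            wants1 = true;
            turn = 0;
            while (wants0 && turn == 0) { } // W phase: wait our turn
        }
        // Returning from lock() means we have entered the C phase.
    }

    void unlock(int id) {
        if (id == 0) wants0 = false; else wants1 = false; // back to NC phase
    }
}

Convincing yourself by inspection that the snapshot {1:C, 2:C} really is unreachable here is harder than it looks.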

However, as anyone who has had any experience with concurrent systems can agree, testing these algorithms can be a pain. Enter Promela.

I won't teach you Promela here, as I believe it is a language best learned by experience, but I will tell you about its benefits. I was forced to learn Promela in a course I did not want to take, and it has proven to be an invaluable tool for algorithm and system verification. While it is certainly not easy to learn and use, it comes close to providing direct proof that your algorithm works, by allowing you to express things that might not be possible in other programming languages, such as atomicity constraints.

(Note for the wise: if you cannot do these things in other languages, then why is it useful to test them in Promela? Well, suppose you develop a theoretical polynomial-time algorithm for the 3CNFSAT problem, thereby proving that P = NP, but that algorithm requires some atomicity constraints. C++ cannot really be used to test it, whereas Promela can give you the tools required.)

For those wanting to learn Promela, multiple websites provide everything you need to get started. Additionally, you will need an interpreter: SPIN (the Simple Promela INterpreter), which also provides on-the-fly LTL formula checking via automata-based acceptance.

Oh, and the experiment I mentioned? Just take your favourite algorithm, and try to implement it in Promela!

If you have any questions, please feel free to post them in the comments section. Until then, stay cool readers!

- Snowman

Friday, March 16, 2012

They Can't Take That Away From Me

Topic: Programming

Cat: noun a small domesticated carnivore, Felis domestica or F. catus, bred in a number of varieties.

This post will have nothing to do with cats, but everything to do with Frank Sinatra. See, Mr. Sinatra is what I call my programming buddy, and this post will be about programming buddies.

Almost everyone has one. A stress toy, a piece of lint, a coffee mug, a humorous image; anything which distracts you from your job while letting you reflect on the problem at hand. In my experience, anything can be a programming buddy. One coworker of mine has a block puzzle he has yet to solve, and another has a pyramid of beer cans. See, it does not have to be an active object, or something you necessarily need to interact with. It is simply something familiar with which you decorate your work space.

What's the use of a programming buddy? Well, programming buddies are part of the office environment, and improving that environment can increase creativity, productivity and motivation. For me, it's listening to Frank Sinatra.

If you haven't got one yet, try to find one! But remember to keep it relatively passive, so as not to completely drain your productivity; just as your buddy can help you, a poorly chosen one can become an obsession.

That was it for today. Have a nice weekend reader!

- Snowman

Wednesday, March 14, 2012

How To Become A Hacker

Topic: Programming

Hacker: noun 1. A person or thing that hacks. 2. Computer slang. A computer enthusiast.

I found this interesting little article entitled How To Become A Hacker while browsing the other day, and I thought I ought to share it with you guys. One of the most important things it talks about (in the introduction! convenient!) is the difference between a hacker and a cracker. Why do I bring this up? Well, it annoys me when people mislabel crackers as hackers, and vice versa.

Want a handy tip to remember the difference? You don't hack a safe.

Snowman out!