Before working on the first Interactive Prototype, I thought I'd take some time to look back at my Video Prototype.
I believe my Video Prototype was extremely effective at communicating and illustrating my idea. Although simulating and animating the game myself was tedious and took a long time, I think it was worth the effort. By showing the game itself, the video lets viewers see what the game looks like and how it plays without any verbal explanation. My aim was to accomplish everything visually, and I can say this has worked.
The explanation of the rules was well integrated into the game's animations, so viewers could easily understand them. Even the physical input, the yoke, was shown visually to explain how it works with the game. Again, this was integrated as part of the animation.
I also think that the theme of the video contributed to its success. The familiar arcade aesthetic, accompanied by the music, captured viewers' attention. Additionally, according to external feedback, the game-trailer-style effects made the game concept seem more legitimate and solid.
Overall, I believe the Video Prototype was a success and I am happy with the results.
Friday, 29 August 2014
Saturday, 23 August 2014
Feedback From Video Prototyping
This week we presented our Video Prototypes to the class. I collected feedback using a survey I created on Google Forms, which can be found here: http://tinyurl.com/alphabet-invaders
The results of the feedback can be found here.
Overall, I received positive feedback along with a few great suggestions to improve certain aspects of the game. Some conclusions I have drawn are:
- The concept of mashing up Hangman and Space Invaders is indeed creative.
- The rules of the game are very easy to understand.
- The interface is certainly familiar and easy to use.
- The yoke is fairly suitable to control the game.
- The user could have an option to import their own custom list of words.
- The game should be faster paced.
Week 4 Exercise 2 - Alarm Clock Prototypes
This exercise required us to design and describe a horizontal, a vertical and a diagonal prototype for an alarm clock application for smartphones. The features of the application include:
- Setting, editing and deleting multiple alarms
- Daisy-chaining alarms
- Setting different tones for different alarms
- Shaking the phone to snooze the alarm
Horizontal Prototype
User interface for alarm application
My horizontal prototype simply tests the user interface of the alarm application. There are four main screens:
- The first screen the user is presented with when they launch the application is the list of alarms. Here they can view all created alarms, enable or disable each one with a switch, and edit or delete individual alarms. Most importantly, the 'Add Alarm' button is located at the top for easy access.
- If the user presses 'Add Alarm' on the previous screen (or chooses to edit an existing alarm), they are presented with the second screen, where they can customise the different options for the alarm. Options include repeating, tone, a descriptive label, daisy-chaining and, of course, the alarm's time. If the user wishes to daisy-chain alarms, they can select that option and they will be taken to the next screen (#3).
- On this screen, the user is presented with a list of all existing alarms they can daisy-chain with. They can select multiple alarms or none at all. The layout of this screen is very similar to the screen where users choose the alarm's tone - it is simply a list.
- When the alarm rings, this screen is displayed (taking up the entire display because it requires the user's attention). The alarm's time is shown at the top, followed by the descriptive label. The two buttons are self-explanatory. On this screen, the user also has the option to shake the phone to snooze.
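To make the options on these screens more concrete, below is a minimal sketch of how each alarm could be represented behind the interface. This is purely illustrative Python with field names I have made up; they are not from any real platform's alarm API.

```python
# Minimal, illustrative model of one alarm from the list screen.
# All field names here are assumptions made for this sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alarm:
    alarm_id: int
    time: str                                              # e.g. "17:00"
    label: str = ""                                        # descriptive label (screens #2 and #4)
    tone: str = "Default"                                  # chosen from the tone list
    repeat_days: List[str] = field(default_factory=list)   # e.g. ["Mon", "Thu"]
    chained_ids: List[int] = field(default_factory=list)   # alarms daisy-chained to this one (screen #3)
    enabled: bool = True                                    # the switch on screen #1
```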
Vertical Prototype
The aim of my vertical prototype is to test the dismissing and snoozing of the alarm when it rings. Essentially, it will use screen #4 from above and test each aspect of it, including the overall layout, the readability of the alarm time and label, and the effectiveness of the snooze button (and the shake-to-snooze functionality).
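As a rough idea of how the shake-to-snooze part could be exercised, here is a tiny sketch that decides whether a burst of accelerometer readings counts as a shake. The threshold and peak count are made-up values, and reading the phone's real accelerometer is outside the scope of this sketch.

```python
# Illustrative shake detector: a shake is counted when enough recent
# accelerometer magnitude samples exceed a threshold. Values are assumptions.
def is_shake(samples, threshold=15.0, min_peaks=3):
    """samples: recent accelerometer magnitudes in m/s^2."""
    peaks = sum(1 for s in samples if s > threshold)
    return peaks >= min_peaks

# A vigorous shake snoozes the ringing alarm; gentle movement does not.
if is_shake([9.8, 18.2, 21.5, 9.9, 19.7]):
    print("Snooze for 10 minutes")
```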
Diagonal Prototype
The diagonal prototype aims to test a scenario where the user undertakes the following tasks:
- Launch the application and view your alarms
- Add a new alarm for 5:00pm with the label "Tea time". The alarm should repeat on Mondays and Thursdays. It should have the tone 'Xylophone' and should trigger (i.e. daisy-chain) Alarm 1.
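Using the alarm sketch from the horizontal prototype above, the 'Tea time' task could be expressed roughly like this (the IDs and the pre-existing Alarm 1 are assumptions for illustration):

```python
# Alarm 1 is assumed to exist already; the new alarm daisy-chains it.
alarm_1 = Alarm(alarm_id=1, time="07:00", label="Wake up")

tea_time = Alarm(
    alarm_id=2,
    time="17:00",                    # 5:00pm
    label="Tea time",
    tone="Xylophone",
    repeat_days=["Mon", "Thu"],
    chained_ids=[alarm_1.alarm_id],  # ringing this alarm also triggers Alarm 1
)
```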
Week 4 Exercise 1 - Car Dashboard
What components are relevant to driving behaviour and what are the interactions of those components with the driver when driving?
- Steering wheel - driver rotates to steer the car
- Speedometer, tachometer, fuel gauge, etc. - driver should be able to glance at them and get the desired information
- Gear stick - driver operates it by moving it into positions
- Handbrake - driver can pull or release (by pressing the button)
- Side-mirror adjustment controls - driver selects which mirror to adjust using a switch, and then adjusts the position using buttons
- The screen in the centre of the dashboard - driver can control it with a joystick
What would you test?
I would test the screen in the centre of the dashboard. I would like to know how effective it would be to control it with a joystick as opposed to using touch.
How would you test it?
I would test the controls by giving the user a task to accomplish while driving. I would observe whether they have difficulty navigating the interface and how long it takes them to complete the task.
Sunday, 17 August 2014
Saturday, 16 August 2014
Development of the Video Prototype
I have commenced development of the Video Prototype. I am using Adobe After Effects to animate the entire video. The video will show an emulation of my game. What better way to explain my game than to show the game itself? My aim is to present something along the lines of an actual game trailer.
The bulk of the video will be showcasing the game. I have also included an explanation of the physical input, which is now the yoke (instead of the toy gun mentioned in a previous blog post). I believe the yoke is more suitable for this type of game since the game consists only of horizontal movement and clicking (triggering a shot). The gun would have felt unintuitive since you wouldn't be able to aim it freely.
My explanation of the yoke simply shows what it looks like (stylised, not as a photograph) and animates the yoke rotating in sync with the player's ship at the bottom of the screen.
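For the later interactive version, the mapping I have in mind is simply yoke rotation to horizontal ship position, with the trigger firing a shot. A rough sketch of that mapping is below; the rotation range and screen width are assumed values, and reading the actual yoke hardware is not covered here.

```python
# Illustrative mapping from yoke rotation to the ship's horizontal position.
# Assumes the yoke reports an angle between -45 and +45 degrees.
MAX_ANGLE = 45.0
SCREEN_WIDTH = 800  # assumed playfield width in pixels

def ship_x(yoke_angle):
    """Map a yoke angle (degrees) to an x coordinate in pixels."""
    clamped = max(-MAX_ANGLE, min(MAX_ANGLE, yoke_angle))
    normalised = (clamped + MAX_ANGLE) / (2 * MAX_ANGLE)   # 0.0 .. 1.0
    return int(normalised * SCREEN_WIDTH)

print(ship_x(0))    # centred yoke -> 400 (middle of the screen)
print(ship_x(45))   # full right rotation -> 800
```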
I have created the game's sprites in Adobe Illustrator and imported them into After Effects where I have animated my game.
Animation took quite a while, mostly because I hadn't touched After Effects in a couple of years. But after brushing up on my skills, I was able to get into it pretty easily. Each shot fired and letter dropped was animated manually, as were the player's ship and the aliens.
I am using Sony Vegas to compile the raw animated footage with the music, as well as to clean up and trim the start and end of the video and audio.
Below is a screenshot of my progress nearing completion.
Getting busy in After Effects and Sony Vegas. What would I do without dual monitors?
Tuesday, 12 August 2014
Ideas for the Video Prototype
I have now begun to think about how I should create my video prototype. I will most likely animate the parts of the video that showcase the game itself in Adobe After Effects. The titling, editing and compiling will be done in Sony Vegas Pro. I will also need to show my physical interaction device, which is the toy gun. I plan to either take a photograph or record a video of me holding it to show how it works with the game. The aim is to move the gun back and forth horizontally in sync with the player in the video, which will make it appear as though the gun is controlling the player.
I have drawn up a very rough storyboard to illustrate the parts of the video.
Rough storyboard for the video
Saturday, 9 August 2014
Game Mashup Idea
After a long time deciding which games to choose, I have finally decided to mash up Space Invaders with Hangman.
The gameplay will be very simple and quite similar to the original Space Invaders. The objective of the game is to guess the word (as in Hangman). However, the way you select letters is different. The attacking aliens will drop random letters. To select a letter you want to form part of the word, you must let that letter fall to the ground. You can shoot any unwanted letters before they land. If an incorrect letter lands, the aliens will move down closer to the player.
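To pin down the core rule described above, here is a rough sketch of what could happen when a letter reaches the ground. It is purely illustrative Python, not taken from any actual implementation of the game.

```python
# Illustrative landing rule: a correct letter fills in its slots in the word,
# an incorrect one moves the alien formation one row closer to the player.
def on_letter_landed(letter, secret_word, revealed, alien_row):
    """revealed is a list of characters/None the same length as secret_word."""
    if letter in secret_word:
        for i, ch in enumerate(secret_word):
            if ch == letter:
                revealed[i] = letter   # reveal every matching position
    else:
        alien_row += 1                 # aliens advance towards the player
    return revealed, alien_row

word = "TELEPHONE"
revealed, aliens = on_letter_landed("E", word, [None] * len(word), alien_row=0)
print(revealed, aliens)  # all three 'E' slots revealed, aliens stay at row 0
```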
A sketch of the game is below.
Rough game sketch - the word to guess is 'Telephone'
Another idea I could add is that the letters fall in designated columns. This might add an extra challenge.
For the physical input/interaction, I am thinking about using an old Konami Justifier gun for the PlayStation 1. I cut its wire a long time ago so I could use it as a pretend gun and run around with it back when I was a young lad.
Konami Justifier Gun for the PlayStation
Shooting will be easy since all that is required is to trigger a click/key press when the gun's trigger is squeezed. The only problem is how to control the movement of the player. The gun does have two extra buttons (one on the side and one on the rear), but I don't think controlling movement with them would be intuitive or easy.
For the next few days I will be working on my Video Prototype. I'm thinking about doing either static mockups or an animated mockup of the game. I will also need to think about what other content I need to explain my idea.
Tuesday, 5 August 2014
Week 2 Exercise - Mixed/Augmented Reality Prototype
For this week's exercises we were required to think of a device we use regularly at home and design variations on the way we interact with that device. To test the different types of input, we were to design a mixed or augmented reality prototype.
One of the most common devices that everyone on our table uses is a television. A remote is the primary way to interact with a television. The most common commands we send to the TV are changing channels, adjusting the volume, switching video inputs and so on. However, most remotes are packed with various buttons, which can become overwhelming. What if there were another method of performing these basic commands in a much quicker way that is also familiar?
In today's world, most of us have smartphones with a touchscreen interface. The primary way we interact with them is through gestures such as swiping, sliding and grabbing/pinching. For operating the TV, we will bring these gestures into the mix.
Instead of a remote, the input device will be very similar to a tablet and slightly larger than your hand. Its surface is where you will perform your gestures:
- Swiping left and right will change the channel
- Sliding your fingers vertically will adjust the volume
- Grabbing/pinching with all fingers will make the TV image zoom out and display all video inputs
- All other controls will be available in an icon-based menu which can be accessed on the device
The prototype will mimic the results of the above commands using mixed/augmented reality in order to test the input device. The tablet remote will simply be a plain panel of glass or any smooth surface. The user (tester) will wear the augmented reality (AR) gear while holding the fake input device. The AR equipment will overlay imagery on the TV screen as well as on the device, all in the user's visual feed. The user can perform gestures on the device, making it appear as though they are actually giving the TV commands. For example, if the user swipes left or right to change channels, the AR visual feed will show the channel being changed on the TV screen.
Image reference: http://www.prweb.com/releases/2012/1/prweb9083508.htm
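To make the gesture mapping above a little more concrete, here is a small sketch of how the prototype could translate recognised gestures into TV commands. The gesture names and state fields are assumptions made purely for illustration.

```python
# Illustrative gesture-to-command dispatch for the tablet-style remote.
def handle_gesture(gesture, tv):
    """gesture: 'swipe_left', 'swipe_right', 'slide_up', 'slide_down' or 'pinch'."""
    if gesture == "swipe_right":
        tv["channel"] += 1
    elif gesture == "swipe_left":
        tv["channel"] = max(1, tv["channel"] - 1)
    elif gesture == "slide_up":
        tv["volume"] = min(100, tv["volume"] + 5)
    elif gesture == "slide_down":
        tv["volume"] = max(0, tv["volume"] - 5)
    elif gesture == "pinch":
        tv["view"] = "input_overview"   # zoom out to show all video inputs
    return tv

state = {"channel": 7, "volume": 30, "view": "live"}
print(handle_gesture("swipe_right", state))  # channel becomes 8
```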