Friday, November 30, 2012

Interlude 14 - Update on MineSweeper Sales


14.1 Overview


MineSweeper was initially launched at the end of August 2012. To date it has been downloaded by 2,484 people, which is an average of 20 per day. As the graph above shows, averages don't paint an accurate picture of the download distribution. The first month (September) accounts for 2,393, or 96%, of these downloads. Since the end of September, the average download rate has been 1 per day.


14.2 Version 1.7 Release


The first published update to MineSweeper was version 1.7, which made the App universal, included Game Center support and added iAds. You can see the bump in downloads on the release day in the graph above. This bump is a pale shadow of the initial release bump (11 vs 471).

14.3 Geographic Distribution


37% of downloads have been from the US, which is not unexpected, but the next largest region is Germany at 9%, which is curious.



14.4 What can we deduce?


These are statistics from only one application over a 3 month period, and the enhancements from v1.7 have only been around for 3 days, so we need to be careful not to identify trends that don't exist.
  1. iAds - Requests and Impressions can give you a feel for the usage of your App each day. It is very early days, so it is hard to conclude anything yet. That said, the download curve suggests that, to maximise revenue, you would be better off including iAds in your initial release rather than in a subsequent upgrade.
  2. Game Center - will also give you an indication of App usage. In the 3 days that Game Center has been available, 9 people have recorded high scores on the Easy Difficulty leaderboard.
  3. Geographic Distribution - Based on the geographic download distribution, if we were to localise the app then German would be the first language that we should implement.
  4. Upgrades - In the 3 days since v1.7 was released there have been 419 upgrade downloads. We would expect another 100 or so over time, but upgrades give you a feel for how many people keep your App on their device after the initial download.

Saturday, November 24, 2012

Tutorial 24 - Basic 3D Graphics

Figure 1. Ripple Shader.

24.1 Setting the Scene


Version 1.5 of Codea is a huge update. In addition to camera access, image blend modes and a tween library for simple animation, it includes full access to shaders and a shader editor. This feature gives you full access to GLSL (OpenGL Shading Language) vertex and fragment shaders (which can be used to apply the ripple shader effect shown in Figure 1). To understand how to implement and use shaders we need to take a few steps back and provide some graphical foundations.

24.2 OpenGL


OpenGL is a multipurpose, open-standard graphics library. Although it is actually a specification, it is usually thought of as an Application Programming Interface (API), which is the manifestation of that specification. The OpenGL API uses C, and GLSL is very similar in structure to C but has its own peculiarities. As a C API, OpenGL integrates seamlessly with Objective-C based Cocoa Touch applications. The OpenGL API is defined as a state machine (see Tutorial 5): almost all of the OpenGL functions set or retrieve some state, and the only functions that do not change state are those that use the currently set state to cause rendering to happen.

OpenGL for Embedded Systems (OpenGL ES) is a simplified version of OpenGL that provides a library which is easier to learn and implement on mobile graphics hardware. Apple provides implementations of OpenGL ES v1.1 and OpenGL ES v2.0. Codea uses v2.0.

OpenGL ES 2.0 is very similar to OpenGL ES 1.1, but removes functions that target the fixed-function vertex and fragment pipeline stages. Instead, it introduces new functions that provide access to a general-purpose shader-based pipeline. Shaders allow you to write custom vertex and fragment functions that execute directly on the graphics hardware (which is very fast). 

24.3 Rendering Graphics


Everything displayed on your iPad screen is a two dimensional array of pixels. Each pixel has a particular colour defined by red, green, blue and alpha (transparency) values in the range 0 to 1. It is the purpose of the graphics pipeline to determine what colour to put in each pixel to provide a representation of your image. Displaying a 2D image is fairly straightforward, but what about 3D? The process of converting a 3D world into a 2D image is called rendering.

There are many different rendering systems. The one that we will concern ourselves with is called rasterization, and a rendering system that uses rasterization is called a rasterizer. In a rasterizer, all objects that you see are represented by empty shells made up of many triangles. This series of triangles is called a "geometry", "model" or "mesh". We will use the term mesh, as that is what Codea uses.


Figure 2. Rasterizing a Triangle.

The process of rasterization has several phases. These phases are ordered into a graphics pipeline (Figure 3), where the mathematical model of your image, consisting of a mesh of triangles, enters at the top and a 2D pixel image comes out the bottom. This is a gross simplification but may help in understanding the process. The order in which triangles from your mesh are submitted to the pipeline can affect the final image. Pixels are square, so they only approximate the triangles (Figure 2), just as the triangles approximate the 3D object. The process of converting your triangles to pixels is called scan conversion, but before we can do this we need to perform some mathematics to check whether each triangle is visible and to convert it from a 3D to a 2D representation.




24.4 Graphics Pipeline Overview


Triangles are described by 3 vertices, each of which defines a point in three dimensional space (x, y, z). To represent these in two dimensions we have to project the vertex co-ordinates onto a plane. We maintain the illusion of depth by using tricks like perspective (i.e. things of the same size appear smaller the further away they are). We get to influence the graphics pipeline at two points: the vertex shader and the fragment shader, shown in orange in Figure 3.

If you are interested in a much more detailed explanation then we suggest that you read Andrew Stacey's tutorial on Using Matrices in Codea.

Step 1 - Vertex Shader (Clip Space Transformation)

The first phase of rasterization is to transform the vertices of each triangle into "clip space". Everything within the clip space region will be rendered to the output image, and everything that falls outside it will be discarded. In clip space, the positive x direction is to the right, the positive y direction is up, and the positive z direction is away from the viewer. Clip space is defined as a region of 3D space with the range [-w, w] in each of the x, y and z directions, and because w can differ from vertex to vertex, clip space can be different for different vertices within a triangle.

This is difficult to visualise and use so the vertices are normalised by dividing each co-ordinate (x, y, z) by w. After being normalised, the (x, y, z) co-ordinates will be in the range of -1 to +1. Dividing by w also applies a perspective effect to each of our triangles.

The entire process can be thought of as a mapping from the projection volume to a cube 2 units on each side, centred at the origin (0, 0, 0).
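As a quick worked example (using Codea's vec4 and vec3 types purely for illustration), a clip space vertex at (2, -1, 3) with w = 4 normalises like this:

local clip = vec4(2.0, -1.0, 3.0, 4.0)

-- divide each co-ordinate by w to get normalised device co-ordinates
local ndc = vec3(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w)

print(ndc)    -- (0.5, -0.25, 0.75), each component now in [-1, 1]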


Figure 4. Clip Space Transformation & Normalisation.

In terms of the graphics pipeline (Figure 3), this transformation is coded in the vertex shader. Open up the Shader Lab in Codea and tap on the vertex shader tab. In the main() function, the line:

gl_Position = modelViewProjection * position;

performs the clip space transformation for you. 

The inputs to the vertex shader consist of:
  • Attributes - per-vertex data supplied via vertex arrays (e.g. position, color and texCoord). They are signified by the attribute qualifier in GLSL;
  • Uniforms - constant data used by the vertex shader (e.g. modelViewProjection). Labelled as uniform in GLSL; and
  • Samplers - a specific type of uniform that represents the textures used by the vertex shader. These are optional.
The outputs of the vertex shader are called (somewhat redundantly) varying variables.
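To make this concrete on the Codea side, the sketch below shows where each of these inputs comes from: the mesh supplies the per-vertex attributes, the mesh texture is bound to the sampler, and assigning to properties of the shader sets its uniforms. It is only an assumption based on the Codea 1.5 Shader Lab examples, and "Documents:Ripple" stands in for a shader you have saved yourself.

function setup()
    m = mesh()
    -- the rectangle's corners supply the position and texCoord attributes
    m:addRect(WIDTH/2, HEIGHT/2, 400, 300)
    -- the mesh texture is bound to the shader's sampler
    m.texture = "Planet Cute:Character Boy"
    -- assumed: a shader previously saved from the Shader Lab as "Ripple"
    m.shader = shader("Documents:Ripple")
    -- assigning to a shader property sets the matching uniform
    m.shader.freq = 2.0
end

function draw()
    background(0)
    -- uniforms can be updated every frame
    m.shader.time = ElapsedTime
    m:draw()
end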

Step 2 - Primitive Assembly

A primitive is a geometric object which can be drawn by OpenGL ES (e.g. a point, line or triangle). In this stage, the shaded vertices are assembled into individual primitives.

Normalisation and Clipping will happen automatically in the Primitive Assembly stage between the vertex shader and fragment shader. Primitive Assembly will also convert from normalized device coordinates to window coordinates. As the name suggests, window coordinates are relative to the window that OpenGL is running within. Window coordinates have the bottom-left position as the x, y (0, 0) origin. The bounds for z are [0, 1], with 0 being the closest and 1 being the farthest away. Vertex positions outside of this range are not visible. The region of 3D space that is visible on the screen is referred to as the view frustum.
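As a rough worked example in plain Lua (768 x 1024 is chosen only because it is the iPad's portrait size in points), the mapping from normalised device coordinates to window coordinates looks like this:

-- normalised device co-ordinates, each in the range [-1, 1]
local ndcX, ndcY, ndcZ = 0.5, -0.25, 0.0
local winW, winH = 768, 1024

local winX = (ndcX + 1) / 2 * winW    -- 576
local winY = (ndcY + 1) / 2 * winH    -- 384, measured from the bottom-left origin
local winZ = (ndcZ + 1) / 2           -- 0.5, mapped into the [0, 1] depth range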

Step 3 - Rasterization

Rasterization converts the graphic primitives from the previous stage to two dimensional fragments. These 2D fragments represent pixels that can be drawn to the screen (Figure 2).

In this stage, the varying values are calculated for each fragment and passed as inputs to the fragment shader. In addition, the colour, depth, stencil and screen co-ordinates are generated and will be passed to the per-fragment operations (e.g. stencil, blend and dither).

Step 4 - Fragment Shader

The fragment shader is executed for each fragment produced by the rasterization stage and takes the following inputs:
  • Varying variables - outputs from the vertex shader that are generated for each fragment in the rasteriser using interpolation (e.g. vColor in the Ripple Shader Lab example);
  • Uniforms - constant data used by the fragment shader (e.g. time and freq in the Ripple Shader Lab example); and
  • Samplers - a specific type of uniform that represents the textures used by the fragment shader (e.g. texture in the Ripple Shader Lab example).
The output of the fragment shader will either be a colour value called gl_FragColor or it may be discarded (see Step 5).

Step 5 - Per Fragment Operations

The final step before writing to the frame buffer is to perform (where enabled) the following per fragment operations.
  1. Pixel ownership test - checks if the pixel is currently owned by the OpenGL context. If it isn't (e.g. the pixel is obscured by another view) then it isn't displayed.
  2. Scissor Test - if enabled, is used to restrict drawing to a certain part of the screen. If the fragment is outside the scissor region it is discarded.
  3. Stencil & Depth tests - if enabled, OpenGL's stencil buffer can be used to mask an area. The stencil test conditionally discards a fragment based on the value in the stencil buffer. Similarly, if enabled, the depth test discards the incoming fragment if a depth comparison fails.
  4. Blending - combines the newly generated fragment colour value with the corresponding colour value already in the frame buffer at that screen location (a worked example of the most common blend function follows this list).
  5. Dithering - simulates greater colour depth to minimise artifacts that can occur from using limited precision. It is hardware-dependent and all OpenGL allows you to do is turn it on or off.
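As promised above, here is a small worked example of the most common blend function (often called "source over" alpha blending), written as plain Lua arithmetic. It is only a sketch of the usual formula; it is not something Codea asks you to write yourself.

-- incoming fragment: 60% opaque red
local srcR, srcG, srcB, srcA = 1.0, 0.0, 0.0, 0.6
-- colour already in the frame buffer: opaque blue
local dstR, dstG, dstB = 0.0, 0.0, 1.0

-- out = src * srcAlpha + dst * (1 - srcAlpha), per channel
local outR = srcR * srcA + dstR * (1 - srcA)    -- 0.6
local outG = srcG * srcA + dstG * (1 - srcA)    -- 0.0
local outB = srcB * srcA + dstB * (1 - srcA)    -- 0.4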

24.5 A Simple Shader Example


Version 1.5 of Codea comes with a sample ripple shader (see Figure 1). The following fragment shader code will tint a texture with the tint colour by the tint amount.

// A basic fragment shader with tint.


// This represents the current texture on the mesh
uniform lowp sampler2D texture;

// The interpolated vertex color for this fragment
varying lowp vec4 vColor;

// The interpolated texture coordinate for this fragment
varying highp vec2 vTexCoord;

void main()
{
    // Sample the texture at the interpolated coordinate
    
    lowp vec4 texColor = texture2D( texture, vTexCoord );
    
    // Tint colour - red is currently hard coded.
    // Tint amount - select a number between 0.0 and 1.0
    // Alternatively you could pass the tint color and amount
    // into your shader by defining above:
    //
    // uniform lowp vec4 tintColor;
    // uniform lowp float tintAmount;
    
    lowp vec4 tintColor = vec4(1.0,0.0,0.0,1.0);
    lowp float tintAmount = 0.3;
    tintColor.a = texColor.a;

    // Set the output color to the texture color
    // modified by the tint amount and colour.
    
    gl_FragColor = tintColor * tintAmount + texColor * (1.0 - tintAmount);
}
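To try the tint shader out in Codea, something like the following sketch should work. It assumes the fragment source above is stored in a string called fragSrc, that a pass-through vertex shader (like the Shader Lab default) is stored in vertSrc, and that Codea 1.5's shader() constructor accepts the two source strings; treat these names as placeholders rather than part of the tutorial's downloadable code.

function setup()
    tinted = mesh()
    tinted:addRect(WIDTH/2, HEIGHT/2, 300, 300)
    tinted.texture = "Planet Cute:Character Boy"
    tinted.shader = shader(vertSrc, fragSrc)
    -- If you expose tintColor and tintAmount as uniforms instead of hard
    -- coding them, you could set them from Lua like this (remember that
    -- uniforms are read-only in GLSL, so the alpha adjustment would need
    -- to be made on a local copy inside the shader):
    -- tinted.shader.tintColor = vec4(1.0, 0.0, 0.0, 1.0)
    -- tinted.shader.tintAmount = 0.3
end

function draw()
    background(40, 40, 50)
    tinted:draw()
end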

24.6 Appendix - GLSL Precision Qualifiers


You will notice the lowp, mediump and highp precision qualifiers in the Shader Lab example. It is much faster to use lowp in calculations than highp. The required minimum ranges and precisions for the various precision qualifiers (taken from the OpenGL ES Shading Language specification) are:
  • highp float - range of at least (-2^62, 2^62), relative precision 2^-16
  • mediump float - range of at least (-2^14, 2^14), relative precision 2^-10
  • lowp float - range of at least (-2, 2), absolute precision 2^-8
  • highp int - range of at least (-2^16, 2^16)
  • mediump int - range of at least (-2^10, 2^10)
  • lowp int - range of at least (-2^8, 2^8)

Apple provides the following guidelines for using precision in iOS applications:
  • When in doubt, default to high precision.
  • Colours in the 0.0 to 1.0 range can usually be represented using low precision variables.
  • Position data should usually be stored as high precision.
  • Normals and vectors used in lighting calculations can usually be stored as medium precision.
  • After reducing precision, retest your application to ensure that the results are what you expect.

Saturday, November 17, 2012

Tutorial 23 - Implementing iAds with Codea


23.1 Introduction


A common business model in the App store is to give away your app for free and generate revenue by serving ads to your application. This model has benefits but can annoy your users if done in excess. Everyone likes stuff that is free, so setting a zero cost removes most of the barriers to downloading your app and should maximise download numbers.

Since iOS 4.0, the iAd framework has been available to developers. Using it, you can add banner or full screen advertisements to your application. If you are planning to include ads on a screen then you need to ensure that you have left space within your interface to display them. Full screen ads (referred to as interstitial ads by Apple) are only available on the iPad and are only suitable in certain situations, so we won't cover those in this tutorial.

You receive revenue when users view (called impressions) or tap advertisements in your app. Some apps use a bit of social engineering to maximise their click-through rate. For example, the Foxtel (cable TV) app in Australia has ads which appear at the top of the channel selection screen. As per the Apple guidelines, these ads are only displayed once the banner has been loaded from the ad server. When this occurs, the ad slides in from the top and moves the content down by the height of the banner. However, what happens 50% of the time (for us at least) is that you accidentally tap the ad when you are trying to select a channel.

It is difficult to estimate exactly what revenue you might receive from including iAds in your app. A graph of the impressions that we got over the last year is shown below. There were about 98k in total, with 90% coming from one app (Personality). According to Apple, the effective revenue per thousand impressions (eCPM) served over this period was $0.82. In practice, we get on average $0.25 per day. So while this may not be a get rich quick scheme, it is not bad passive income, particularly given that the app we derive this revenue from is not optimised for ads.

We suspect you could do much better with a free game app with high reusability (people tend to use Personality only 2-3 times at best). Apple says that "hundreds of developers are making more than $15,000 per quarter". Given that there are 275,000 iOS developers in the US alone, this is perhaps not that impressive - albeit not all of those developers have iAd-enabled apps.




23.2 Banner Views


From a Codea perspective, implementing iAds is easy. If you are happy to display ads on every screen then you just need to leave the appropriate amount of space on your screen to display the ad. In most cases there will be screens that you don't want to display ads on (e.g. the main game screen), so we have extended Codea Lua with a function, showBannerAd(), to turn ads on and off. For our example, we will only display ads on the Menu and High Score screens.

For the iPad, banner sizes are set at:
  • Portrait: 66 (height) x 768 (width) points - see Interlude 13 if you don't understand the difference between pixels and points.
  • Landscape: 66 (height) x 1024 (width) points.
For completeness, the iPhone banner sizes are 320 x 50 points for a portrait advertisement and 480 x 32 points for a landscape advertisement.

Banner ads are intended to appear at either the top or the bottom of the display, and are provided in two formats so that ads may be displayed when the device is in either portrait or landscape orientation.
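One way to honour these sizes in your Lua layout code is to reserve the banner's height before drawing anything else. The sketch below is only an illustration: isIPad is a hypothetical flag you would set using the device detection covered in Interlude 13, and only the portrait banner heights are handled.

-- returns x, y, width and height of the region that is safe to draw in
function adSafeArea()
    -- 66 points on the iPad, 50 points on the iPhone (portrait banners)
    local bannerHeight = 50
    if isIPad then bannerHeight = 66 end

    if isIPad then
        -- banner across the top of the screen: keep content below it
        return 0, 0, WIDTH, HEIGHT - bannerHeight
    else
        -- iPhone portrait: banner across the bottom, content above it
        return 0, bannerHeight, WIDTH, HEIGHT - bannerHeight
    end
end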


23.3 Setup iAds in iTunes Connect


As with Achievements and in-App purchasing, there are some things that you need to enable in iTunes Connect before your app will be able to receive ads from Apple's servers. Log in to your developer account, select iTunes Connect and then click on Manage your Apps. Click on the appropriate App icon and then click on the Setup iAds link (right hand side towards the top).

Answer the question regarding whether the primary target audience for your app is users under 17 years of age and then click Enable iAds and Save.

That's it! You can disable iAds just as simply but changes to your iAd settings will not take effect until your next app version is approved.


23.4 Creating Your Banner Views in the Codea runtime


You should only create a banner view when you intend to display it to the user. Otherwise, it may cycle through ads and deplete the list of available advertising for your application. As a corollary to this, avoid keeping a banner view around when it is invisible to the user. To demonstrate the implementation of iAds in the runtime we will use our old faithful MineSweeper application.

Based on our careful UI planning (or perhaps just through good luck) we happen to have enough space at the top of the MineSweeper Menu screen to fit in a banner ad, so that is where we will put it. As MineSweeper is now a universal application we will need to handle the iPhone sized banners as well. We have more room at the bottom of the iPhone Menu screen when it is in the portrait orientation, so when we detect that device and orientation we will place the banner at the bottom of the view. There is no room for a banner on an iPhone in landscape orientation, so we will disable ads when we detect this combination.

Banner views use a delegate to communicate with your application. Your application will thus need to implement a banner view delegate to handle common events. In particular we need to handle:
  1. when the user taps the banner.
  2. when the banner view loads an advertisement.
  3. when the banner view encounters an error.
Usually you would place this code in the relevant view controller but for simplicity we will handle this in the CodifyAppDelegate (the alternative is to extend the BasicRendererViewController class in the runtime).

If your application supports orientation changes, your view controller must also change the banner view’s size when the orientation of the device changes.

Step 1 - Add the iAd Framework to your App.

Fire up Xcode and load the runtime version of your Codea application. Click on the CodeaTemplate file at the top of the project navigator, then in the Summary tab scroll down to the linked libraries area. Click on the "+" button below your existing frameworks to add a new framework. Find iAd.framework and click on "Add".

Step 2 - Update the CodifyAppDelegate Files.

In CodifyAppDelegate.h you need to import the framework you just added using:

#import <iAd/iAd.h>

then modify the class interface definition so that it looks like this:

@interface CodifyAppDelegate : UIResponder<UIApplicationDelegate, ADBannerViewDelegate>

This indicates that the ADBannerViewDelegate protocol will be implemented. We will provide a link to download the updated CodifyAppDelegate files below. We have added two other things to the CodifyAppDelegate files:
  1. A boolean instance variable called displayingAds which is controlled from your Lua code via the modified aGameCenter_Codea class; and
  2. A call back method called iAdDisplayStatusChanged which is used to update the banner view if the MineSweeper game state or orientation changes. This also comes via the modified aGameCenter_Codea class.
Step 3 - Update the other Codea runtime Files.

As we saw in Tutorial 19 when integrating Game Center, there are changes required in LuaState.m, OSCommands.h and OSCommands.m, to allow Lua to communicate with our Objective C code. This mostly comprises registering our new showBannerAd(true or false) function.

In LuaState.m add the following to the  - (void) create method:

    //iAds Function
    LuaRegFunc(showBannerAd);

In OSCommands.h add the following below the similar Game Center definitions:

    //iAds
    int showBannerAd(struct lua_State *L);

Finally, in OSCommands.m add the following method below the Game Center definitions:

int showBannerAd(struct lua_State *L){
    [CodeaGameCenter showBannerAd:lua_toboolean(L,1)];
    return 0;
}

23.5 Codea Lua Code Changes


As mentioned in section 23.2, if you are happy to display ads on every screen then there are no changes required to your Lua code (assuming you left space for the ads). If, like us, you want to select when ads are displayed, then you need to use the new showBannerAd() function. If you pass true to this function then ads will be displayed; if you pass false, they won't.

As an example, the following shows our updated orientationChanged() function. 

function orientationChanged(newOrientation)

    print("Orientation Change Detected.")
    
    if setupHasRun and (gameState == stateRun or gameState == stateWon or 
        gameState == stateLost) then
            updateGridLocation()
    end
    
    if setupHasRun and (gameState == stateMenu or gameState == stateScore) then
        showBannerAd(true)
    else
        showBannerAd(false)
    end
    
end

We also use showBannerAd() when the gameState changes due to button taps.
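A couple of hypothetical button handlers illustrate the pattern (the handler names are ours, while the gameState values come from the MineSweeper code):

function playButtonPressed()
    gameState = stateRun
    showBannerAd(false)    -- no ads on the game screen
end

function highScoreButtonPressed()
    gameState = stateScore
    showBannerAd(true)     -- ads are shown on the High Score screen
end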

The other class we need to modify is the SplashScreen. In its fadeAnimationDone() callback, we turn the ads on once the splash screen has faded away.

function fadeAnimationDone()
    
    -- Call back function for Fader complete
    
    gameState = stateMenu
    showBannerAd(true)
    
end

23.6 Download the Updated Runtime & Codea Files


The aGameCenter_Codea and CodifyAppDelegate files are available on GitHub. In addition, the following links allow downloading individual files from DropBox:

Saturday, November 10, 2012

Interlude 13 - Pixels vs Points in iOS

In the previous tutorial we had to deal with a number of different resolutions in order to be able to develop a universal app. The following is a quick refresher on how to do this.

From iOS 4, dimensions are measured in “points” instead of pixels. All coordinate values and distances are specified using points, which are floating-point values. The measurable size of a point varies from device to device and is largely irrelevant. The main thing to understand about points is that they provide a fixed frame of reference for drawing.

The iPhone screen is 320 x 480 points on both iPhone 4 and older models. The iPhone 4 has a screen resolution of 640 x 960 pixels (i.e. a scale factor of 2.0) while older models have a resolution of 320 x 480 pixels (i.e. a scale factor of 1.0).

The iPad screen is 768 x 1024 points. This scales to the following pixel resolutions:
  • iPad 1 - 768 x 1024 pixels (i.e. a scale factor of 1.0)
  • iPad 2 - 768 x 1024 pixels
  • iPad 3 - 1536 x 2048 pixels (i.e. a scale factor of 2.0)
  • iPad 4 - 1536 x 2048 pixels
  • iPad Mini - 768 x 1024 pixels
Retina is a marketing term coined by Apple. It refers to devices and monitors that have such a high pixel density (normally greater than 300 pixels per inch) that most people can't discern individual pixels at a normal viewing distance. We won't use this term in our explanation because it is inexact and differs by device (e.g. the iPhone 4/5 is 326 ppi, the iPad mini is 163 ppi, the iPad 1 and 2 are 132 ppi, and the iPad 3 and 4 are 264 ppi - note that the iPad 3 and 4 don't strictly meet the Retina "standard").

Querying the scale factor is the best way to determine the relationship between pixels and points on the current device. In iOS 4 and later, UIScreen has a property called scale which is a floating point number that defines the pixel/point relationship. In Objective C you can get this number using:

[[UIScreen mainScreen] scale]

On an iPhone 4 or 5, one point corresponds to a 2 x 2 block of pixels, so if you draw a one-point line it shows up as two pixels wide. The theory is that you specify your measurements in points for all devices, and iOS automatically draws everything at the right proportion on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both. This is why you go to the trouble of creating your @2x images.

The iPhone 5 has a physical resolution of 1136 x 640 pixels, which equates to 568 x 320 points. Your WIDTH and HEIGHT constants return points, not pixels, so the best test for an iPhone 5 within Codea is to look for the 568 dimension.
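A minimal sketch of that test (remembering that WIDTH and HEIGHT swap when the orientation changes, hence the math.max) might look like this:

function detectDevice()
    local longSide = math.max(WIDTH, HEIGHT)
    if longSide == 1024 then
        print("iPad detected.")
    elseif longSide == 568 then
        print("iPhone 5 detected.")
    elseif longSide == 480 then
        print("Older iPhone/iPod Touch detected.")
    end
end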

Within your Objective C code you can determine the device using something like the following:

if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
        NSLog(@"iPad device detected.");
else if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone)
{
        CGSize result = [[UIScreen mainScreen] bounds].size;
        if (result.height == 480)
        {
            NSLog(@"iPhone/iPod Touch detected (< v5).");
        }
        if (result.height == 568)
        {
            NSLog(@"iPhone/iPod Touch detected (> v5).");
        }
}