Let's break down the code from the previous page into a series of steps.
1. Specify a set of vectors which describe a vertex (I added color information).
float[] vertices = {
     0,   0,   2,     // Vertex 1 (x, y, z)
     1,  25,  64, 1,  // Vertex 1 (r, g, b, a)
     0,   1,  -3,     // Vertex 2 (x, y, z)
    10,  60, 164, 1,  // Vertex 2 (r, g, b, a)
    -2,   2,   0,     // Vertex 3 (x, y, z)
    30, 125,  20, 1   // Vertex 3 (r, g, b, a)
};
2. Create a buffer, upload the vertex list to it, and tell OGL to use it
ByteBuffer verticesBB = ByteBuffer.allocateDirect( vertices.length * 4 );
verticesBB.order( ByteOrder.nativeOrder() );
fBuffer = verticesBB.asFloatBuffer();
fBuffer.put( vertices );
fBuffer.position( 0 );

vbo = Gdx.gl.glGenBuffer();
Gdx.gl.glBindBuffer( GL20.GL_ARRAY_BUFFER, vbo );
Gdx.gl.glBufferData( GL20.GL_ARRAY_BUFFER, vertices.length * 4, fBuffer, GL20.GL_DYNAMIC_DRAW );
3. Set up view and perspective transformation matrices
float ratio = (float) width / height;
float zNear = 0.1f;
float zFar = 1000;
float fov = 0.75f; // 0.2 to 1.0
float size = (float)( zNear * Math.tan( fov / 2 ) );

Matrix.setLookAtM( viewMatrix, 0, -13, 5, 10, 0, 0, 0, 0, 1, 0 );
Matrix.frustumM( projectionMatrix, 0, -size, size, -size / ratio, size / ratio, zNear, zFar );
Matrix.multiplyMM( MVPMatrix, 0, projectionMatrix, 0, viewMatrix, 0 );
4. Create and load shaders
int vertexShader = loadShader( GLES20.GL_VERTEX_SHADER, vertexShaderCode );
int fragmentShader = loadShader( GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode );

shaderProgram = GLES20.glCreateProgram();
GLES20.glAttachShader( shaderProgram, vertexShader );
GLES20.glAttachShader( shaderProgram, fragmentShader );
GLES20.glLinkProgram( shaderProgram );
private int loadShader( int type, String source ){
    int shader = GLES20.glCreateShader( type );
    GLES20.glShaderSource( shader, source );
    GLES20.glCompileShader( shader );
    return shader;
}
5. Get pointers to shader code variables
positionVariableLocation = GLES20.glGetAttribLocation( shaderProgram, "position" );
uMVPVariableLocation = GLES20.glGetUniformLocation( shaderProgram, "uMVP" );
6. Tell OpenGL where and how to read the data specified in steps #1 and #3
GLES20.glUseProgram( shaderProgram );
GLES20.glUniformMatrix4fv( uMVPVariableLocation, 1, false, MVPMatrix, 0 );
// each vertex is 7 floats (3 position + 4 color) of 4 bytes each, so the
// stride is 7 * 4 = 28 bytes; with the VBO from step 2 bound, the last
// argument is a byte offset into that buffer rather than a client-side buffer
GLES20.glVertexAttribPointer( positionVariableLocation, 3, GLES20.GL_FLOAT, false, 7 * 4, 0 );
GLES20.glEnableVertexAttribArray( positionVariableLocation );
GLES20.glDrawArrays( GLES20.GL_POINTS, 0, 3 );
Now let's explore each of these steps.
Step 1 - Specify a set of vectors which describe a vertex
Let's begin with just points. Everyone is familiar with the ubiquitous 'pixel' (just zoom in on any digital image to see one up close). A pixel is defined by a position (x, y) and a color (red, green, blue, alpha). Both of these properties --OGL uses the word 'attributes' instead of properties-- are defined as a group of values. OGL refers to such a group of values as a 'vector', and it generalizes the idea of a 'pixel' (a place in space with properties) to a 'vertex'. A vector can hold between 1 and 4 values (see: OGL vertex spec). In our pixel's case, the position would be a 2-value vector <x,y> and its color would be a 4-value vector <r,g,b,a>. LibGDX defines a single vertex attribute (a vector) in its VertexAttribute.java class like this:
public final class VertexAttribute {
    /** the attribute {@link Usage} **/
    public final int usage;
    /** the number of components this attribute has **/
    public final int numComponents;
    /** whether the values are normalized to either -1f and +1f (signed) or 0f and +1f (unsigned) */
    public final boolean normalized;
    /** the OpenGL type of each component, e.g. {@link GL20#GL_FLOAT} or {@link GL20#GL_UNSIGNED_BYTE} */
    public final int type;
    /** the offset of this attribute in bytes, don't change this! **/
    public int offset;
    /** the alias for the attribute used in a {@link ShaderProgram} **/
    public String alias;
    /** optional unit/index specifier, used for texture coordinates and bone weights **/
    public int unit;

    private final int usageIndex;
}
The field to notice is 'numComponents'; it is documented further down in the .java file as "the number of components of this attribute, must be between 1 and 4." This field is where libGDX maps OGL's vector definition into its own framework. We'll get to the other variables defined in the class later, though the descriptive comments should give you a good hint already ;)
Next, we need some way to group the position and color vectors together so we can collect all the attributes of a single pixel in one place for organization and rendering. OGL gives us full flexibility in how we go about this (see: VBO offset and stride). However, libGDX uses the interleaved data approach, so that's what we'll focus on. Interleaved data means we place all the attribute vectors for a single pixel back to back before we insert the data for the next pixel into our Vertex Buffer Object (VBO). We'll get to what a VBO is in the next section; for now, just think of it as a flat array of values. Visually, it looks like this:
[ { <x,y>,<r,g,b,a> } <- pixel #1, { <x,y>,<r,g,b,a> } <- pixel #2, { ... } <- pixel #3, ... and so on ]
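To make the interleaved layout concrete, here is a small sketch (my own illustration, not libGDX source) that computes the byte offsets and stride for the layout we actually use in step 1: three position floats followed by four color floats per vertex.

```java
// Sketch: byte offsets and stride for an interleaved vertex of
// 3 position floats + 4 color floats (4 bytes per float).
public class InterleavedLayout {
    static final int BYTES_PER_FLOAT = 4;

    public static void main(String[] args) {
        int positionComponents = 3; // <x, y, z>
        int colorComponents = 4;    // <r, g, b, a>

        int positionOffset = 0;                                 // position starts the vertex
        int colorOffset = positionComponents * BYTES_PER_FLOAT; // color starts 12 bytes in
        int stride = (positionComponents + colorComponents) * BYTES_PER_FLOAT; // bytes per vertex

        System.out.println("position offset = " + positionOffset); // 0
        System.out.println("color offset    = " + colorOffset);    // 12
        System.out.println("stride          = " + stride);         // 28
    }
}
```

The stride (28 bytes) is how far OGL must jump forward in the VBO to find the same attribute of the next vertex; the offsets say where each attribute begins inside one vertex.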
LibGDX lets us specify this interleaved data structure for our vertices (pixels) with the VertexAttribute.java and VertexAttributes.java classes like this:
VertexAttribute vA = new VertexAttribute( VertexAttributes.Usage.Position, 3, "a_position" );
VertexAttribute vC = new VertexAttribute( VertexAttributes.Usage.Position, 4, "a_color" );
vAs = new VertexAttributes( new VertexAttribute[]{ vA, vC } );
We define two vertex attributes, one for position and one for color (yes, we are using Usage.Position for both position and color...getting to that in a moment). We allocate three floats for position <x,y,z> and four floats for color <r,g,b,a>. The 'Usage.Position' value is used to calculate the VBO offset and stride values mentioned above. This takes place further down in VertexAttributes.java, in the 'calculateOffsets()' function:
private int calculateOffsets () {
    int count = 0;
    for (int i = 0; i < attributes.length; i++) {
        VertexAttribute attribute = attributes[i];
        attribute.offset = count;
        if (attribute.usage == VertexAttributes.Usage.ColorPacked)
            count += 4;
        else
            count += 4 * attribute.numComponents;
    }
    return count;
}
Earlier I noted that we are using Usage.Position for the color attribute as well. This is because libGDX normally uses something called 'packed color data' for storing color information; you can see it mentioned in the 'calculateOffsets()' function above. As explained here, packed color data is when we combine the four single-byte components of red (8 bits), green (8 bits), blue (8 bits), and alpha (8 bits) into the bits of one 32-bit float value. LibGDX then uses the Usage.ColorPacked flag to calculate the proper offset. To keep things clear for demonstration, I chose to specify the r, g, b, a components as separate float values and used Usage.Position so that libGDX would calculate the offsets properly. Also, to clarify, the '4' being added to the 'count' variable is the number of bytes in a 32-bit float; 'count' keeps track of how many bytes make up a single vertex.
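Here is a rough sketch of what color packing looks like, so the idea is concrete. This is my own illustration, not libGDX's exact code (libGDX does this in Color.toFloatBits(), and additionally masks the high bits so the packed value can never be collapsed as a NaN): the four channel bytes are stuffed into one 32-bit int, and that int's bit pattern is reinterpreted as a float.

```java
// Sketch of packed color data: four 0-255 channels -> one 32-bit int -> one float.
public class PackedColor {
    // Pack r, g, b, a (each 0-255) into a single float, ABGR byte order.
    static float toFloatBits(int r, int g, int b, int a) {
        int bits = (a << 24) | (b << 16) | (g << 8) | r;
        return Float.intBitsToFloat(bits);
    }

    // Recover the four channels from the packed float (raw bits preserve the pattern).
    static int[] fromFloatBits(float packed) {
        int bits = Float.floatToRawIntBits(packed);
        return new int[] { bits & 0xff, (bits >>> 8) & 0xff,
                           (bits >>> 16) & 0xff, (bits >>> 24) & 0xff };
    }

    public static void main(String[] args) {
        // Vertex 3's color from step 1, with an example alpha of 128.
        float packed = toFloatBits(30, 125, 20, 128);
        int[] rgba = fromFloatBits(packed);
        System.out.println(rgba[0] + " " + rgba[1] + " " + rgba[2] + " " + rgba[3]); // 30 125 20 128
    }
}
```

With packing, a color costs 4 bytes per vertex instead of 16, which is why calculateOffsets() adds a flat 4 for Usage.ColorPacked attributes.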
The last thing we need to pay attention to are "a_position" and "a_color" at the end of specifying our VertexAttribute objects above. These are set to the "alias" field in the VertexAttribute class outlined above. It is *critical* that these match the variable names we create within our shader code because the alias is used to create pointers to our shader code variables. We haven't covered shaders yet, but when we do I'll refer back to this alias section.
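To illustrate the alias matching, here is a hypothetical minimal vertex shader held in a Java string, as is common in GLES code. The shader body is my own example (we cover shaders properly later); the point is only that the attribute names in the GLSL source must match the "a_position" and "a_color" aliases character for character.

```java
// Sketch: the aliases passed to VertexAttribute must appear verbatim
// as attribute names in the shader source.
public class ShaderSource {
    static final String VERTEX_SHADER =
          "uniform mat4 uMVP;\n"
        + "attribute vec4 a_position;\n" // must match the "a_position" alias
        + "attribute vec4 a_color;\n"    // must match the "a_color" alias
        + "varying vec4 v_color;\n"
        + "void main() {\n"
        + "    v_color = a_color;\n"
        + "    gl_Position = uMVP * a_position;\n"
        + "}\n";

    public static void main(String[] args) {
        // A lookup like glGetAttribLocation(program, "a_position") only succeeds
        // if the name exists in the linked shader; a typo on either side returns -1.
        System.out.println(VERTEX_SHADER.contains("a_position")); // true
        System.out.println(VERTEX_SHADER.contains("a_color"));    // true
    }
}
```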
Next.