The new Mobile Line of Business Accelerator 2008 includes a new data layer that can be used in your own projects to speed up development. When, for example, the Add Item screen is displayed, the user uses a combo box filled with inventory objects to pick inventory items to add to an order. This article walks through how the data gets from the database to the UI.
The first questions developers typically ask when looking at the Mobile Line of Business Accelerator 2008 are:
- How is the application code partitioned, i.e. what is the architecture of the client?
- How does data get from the database to the UI?
The first question is answered by the diagram below. The application is divided into UI, Application, and Data layers. A layer can only interface with the layer directly above or below it, so the UI layer only interacts with the Application layer, while the Application layer interacts with both the UI and Data layers. This is typical application architecture (see my entry on the architecture for more information).
To access data the Application layer needs to initialize a data store. The concept behind the data store is that the Application layer should not care how data is actually persisted; the application logic only cares about how data is presented from the data store to the Application layer. Similarly, the UI layer expects strongly typed application objects and does not particularly care about how those objects are retrieved or persisted. The Application logic expects data from the Data layer in a row-and-column fashion. Basically it hands the Data layer requests for data and does not care how the data store handles the request. It just wants "DataRows", where a DataRow is a Dictionary<string, object> whose string key is the column name and whose object is the value of the column. The Application logic then takes the "DataRows" and transforms them into the Customer or Inventory objects that the UI is expecting.
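The handshake described above can be sketched in a few lines. The DataRow alias matches the Dictionary<string, object> shape the article describes, but the Customer class and ToCustomer method here are illustrative stand-ins, not the Accelerator's actual types:

```csharp
using System;
using System.Collections.Generic;
// Stand-in for the Accelerator's DataRow: a column-name-to-value map.
using DataRow = System.Collections.Generic.Dictionary<string, object>;

class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

static class Demo
{
    // The Application layer turns generic rows into the typed objects the UI expects.
    static Customer ToCustomer(DataRow row)
    {
        return new Customer
        {
            Id = (int)row["CustomerId"],
            Name = (string)row["Name"]
        };
    }

    static void Main()
    {
        var row = new DataRow { { "CustomerId", 7 }, { "Name", "Contoso" } };
        Customer c = ToCustomer(row);
        Console.WriteLine(c.Id + " " + c.Name);
    }
}
```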
The second question takes up the rest of the article.
Building the Connection
Before the Data layer can work it must be initialized by the Application. During application startup the Logon form calls into the Application layer to initialize it using the HardwareDistributor.Business.Services.CacheDataSourceConnection method (see Listing 1 - Initialize _dataStore variable). This method sets up the data store and sets the variable _dataStore. In this case the data store will be a SQL CE database.
Listing 1 - Initialize _dataStore variable
_dataStore = DataStore.GetDataStore(GlobalCache.Instance.ConnectionString);
_dataStore.Trigger += new EventHandler<TriggerEventArgs>(DataStore_Trigger);
DataStore is an abstract class used by the Application layer to isolate it from the actual Data layer implementation. The DataStore object has a static method called Microsoft.Mobile.Data.DataStore.GetDataStore. This method takes a connection string and constructs the correct DataStore implementation object.
GetDataStore is an overloaded method which actually calls another version of itself, supplying default values for the missing arguments. The first missing argument is the fully qualified name of the class that actually implements the DataStore, which by default is SSCEDataStore in the SqlServerCE folder of the Microsoft.Mobile.Data project. The other argument is a collection of parameters that can be used to modify certain aspects of the actual data store implementation, for example controlling the size of the command cache.
The actual implementation of the GetDataStore method checks to see if the data store already exists in its own internal cache. The internal cache is a Dictionary where the key is the connection string and the value is the DataStore for that connection string. This allows the DataStore to be shared by identifying it with its connection string. If the DataStore has not already been constructed, GetDataStore builds one, in this case an SSCEDataStore object, using the passed-in connection string.
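The connection-string-keyed cache can be sketched as follows. The DataStore class here is a trivial stand-in, not the real Microsoft.Mobile.Data class:

```csharp
using System;
using System.Collections.Generic;

class DataStore
{
    public string ConnectionString { get; private set; }
    public DataStore(string connectionString) { ConnectionString = connectionString; }

    // One DataStore per distinct connection string, shared by all callers.
    static readonly Dictionary<string, DataStore> _cache = new Dictionary<string, DataStore>();

    public static DataStore GetDataStore(string connectionString)
    {
        DataStore store;
        if (!_cache.TryGetValue(connectionString, out store))
        {
            // Would be an SSCEDataStore in the real code.
            store = new DataStore(connectionString);
            _cache[connectionString] = store;
        }
        return store;
    }
}

static class Demo
{
    static void Main()
    {
        var a = DataStore.GetDataStore("Data Source=store.sdf");
        var b = DataStore.GetDataStore("Data Source=store.sdf");
        // Same connection string, same shared DataStore instance.
        Console.WriteLine(ReferenceEquals(a, b));
    }
}
```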
SSCEDataStore is actually a subclass of another DataStore abstract class, in this case SqlDataStore (see diagram above). SQL Server CE is not the only available database for the Windows Mobile platform, so an intermediate class was made to hold the code for talking to any ADO.Net-enabled database. Only two methods are required to talk to an ADO.Net-compliant database: CreateConnection and, oddly enough, CreateParameter. CreateConnection creates the actual DBConnection implementation. CreateParameter is used to create the actual implementation of DBParameter.
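This split is the classic template-method pattern: the base class works entirely against the abstract DbConnection/DbParameter types and defers the two provider-specific factory methods to the subclass. A minimal sketch (class and member names are illustrative, not the Accelerator's exact signatures, and the subclass requires the SQL Server CE assembly):

```csharp
using System.Data.Common;
using System.Data.SqlServerCe;

abstract class SqlDataStoreSketch
{
    // The only provider-specific pieces; everything else in the base
    // class can be written against DbConnection/DbParameter.
    protected abstract DbConnection CreateConnection(string connectionString);
    protected abstract DbParameter CreateParameter(string name, object value);
}

class SSCEDataStoreSketch : SqlDataStoreSketch
{
    protected override DbConnection CreateConnection(string connectionString)
    {
        return new SqlCeConnection(connectionString);
    }

    protected override DbParameter CreateParameter(string name, object value)
    {
        return new SqlCeParameter(name, value);
    }
}
```

Supporting another ADO.Net provider then means overriding just these two methods in a new subclass.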
Going back to the construction of SSCEDataStore, the actual code is spread between DataStore and SqlDataStore. The base constructor ensures that a connection string has been passed into the constructor. SqlDataStore sets up a parameter that controls how many DBCommands are cached before creating a DBConnection using the actual implementation's CreateConnection. The last step is to open the connection.
At this point _dataStore points to an open SQL Server CE database. Taking one last look at the CacheDataSourceConnection method, an event delegate is created for the Trigger event. Subscribers to the event get a notification when anything is inserted, deleted, or updated by the data store. A related event, TriggerError, can be used to subscribe to notification of an error during the Trigger event. The Trigger event can be used like a database trigger, for example retrieving and setting an auto-incrementing key. In this case the Trigger event is listening for insertions into Orders and OrderDetails. Upon getting notified of an insert, the event method asks the database for the last identity used and sets it on either the Order or OrderDetail object being saved. This is important because the OrderDetails are inserted as part of a new order, and each OrderDetail needs to know which order it is assigned to in the database.
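The publish/subscribe shape of the Trigger event can be sketched like this. TriggerEventArgs and its members here are illustrative stand-ins; the Accelerator's real event arguments may differ:

```csharp
using System;

// Illustrative stand-in for the Accelerator's TriggerEventArgs.
class TriggerEventArgs : EventArgs
{
    public string TableName { get; set; }
    public string Action { get; set; }   // "Insert", "Update", or "Delete"
}

class DataStoreSketch
{
    public event EventHandler<TriggerEventArgs> Trigger;

    public void Insert(string table)
    {
        // ... perform the insert, then notify subscribers like a DB trigger.
        var handler = Trigger;
        if (handler != null)
            handler(this, new TriggerEventArgs { TableName = table, Action = "Insert" });
    }
}

static class Demo
{
    static void Main()
    {
        var store = new DataStoreSketch();
        store.Trigger += (sender, e) =>
        {
            // This is where the real code would fetch the last identity
            // and stamp it on the Order being saved.
            if (e.Action == "Insert" && e.TableName == "Orders")
                Console.WriteLine("Orders insert observed");
        };
        store.Insert("Orders");
    }
}
```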
The application is now ready to retrieve and save objects. So how does the AddItem form actually fill the combo box with inventory items? In the AddItem.cs file is a Load event. This event calls the BindInventoryDisplay method which holds the code to bind a collection of inventory to the combo box. The UI calls a deceptively simple method, HardwareDistributor.Business.Services.GetInventory, which returns a list of inventory objects. The method builds the collection of inventory items using a LINQ statement and the ToList method of the LINQ query.
Listing 2 - GetInventory method
// Build a LINQ query to build a list of inventory.
// You could modify this to also find only specific inventory
// with a where clause.
var inventoryQuery = from currentRow in _dataStore.Read(StoredProcs.GetInventory)
                     select CreateInventory(currentRow);

// Return inventoryQuery as a List<Inventory>
return inventoryQuery.ToList();
The first part of the method sets up a query variable, inventoryQuery, using LINQ. What this statement says is: assign the variable currentRow to each value returned as the query loops through the RowEnumerator returned from the _dataStore.Read method. The Read method takes the contents of StoredProcs.GetInventory, which is a SQL string, and for each currentRow the query calls the method CreateInventory. You could accomplish the same thing using a foreach statement.
Listing 3 - Example of building Inventory using FOREACH instead of LINQ
List<Inventory> results = new List<Inventory>();
foreach (DataRow currentRow in _dataStore.Read(StoredProcs.GetInventory))
{
    results.Add(CreateInventory(currentRow));
}
return results;
Managed Stored Procedures
The simple part of this is StoredProcs.GetInventory. In the Business folder of the HardwareDistributor project is a resource file called StoredProcs. Opening StoredProcs brings up the resource designer. To see the GetInventory file the resource designer needs to be in "File" mode. This is selected by clicking on the arrow next to the label of the first button; a list of modes is displayed, including, near the bottom, "File". Selecting "File" shows the "File" mode designer with an icon view of the files being managed by this resource manager.
One of those files is GetInventory. The magic of managed stored procedures is that the parameterized SQL is housed in these files. Visual Studio then automatically builds a class, in this case named StoredProcs, with a property for each file that returns the contents of the file. In the case of SQL files the resource manager class generator assumes the contents should be returned as strings. The contents of GetInventory are simply:
Listing 5 - GetInventory SQL
SELECT InventoryId, Name, Picture, Price, InStock
FROM Inventory
If a file is deleted or renamed the compiler will catch anywhere that file is used, causing a compiler error. Another benefit is a reduction in the threat of a SQL injection attack: the SQL is used to build parameterized queries, not concatenated strings.
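The injection resistance comes from the fact that user input travels as parameter values, never as concatenated SQL text. A sketch using the real SqlCeCommand API (the connection string, table, and the hostile input value are illustrative):

```csharp
using System.Data.SqlServerCe;

static class ParameterizedQueryDemo
{
    static void Main()
    {
        using (var connection = new SqlCeConnection("Data Source=store.sdf"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                // The SQL text is fixed; the user-supplied value rides in a
                // parameter, so it can never be interpreted as SQL.
                command.CommandText = "SELECT Name, Price FROM Inventory WHERE Name = @Name";
                command.Parameters.AddWithValue("@Name", "Hammer'; DROP TABLE Inventory; --");
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        System.Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}
```

The hostile value is simply compared against the Name column as a literal string; no table is dropped.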
Getting the Data
The SQL is passed into the Read method. The actual implementation of Read is in the SqlDataStore class. Again, Read is an overloaded method that fills in the missing arguments for the actual Read implementation. The Read method accepts a string and a collection of parameters and returns an object that implements the RowEnumerator interface. The Read method does the following:
- Takes the SQL and any parameters and uses the BuildCommand method to create a DBCommand. BuildCommand not only builds DBCommands, it also caches the most used commands to speed up frequently used commands. In addition it not only understands SQL but has a limited capability to build a table-direct command: if the contents of the string are "Table:name of the table", for example "Table:Customers", then it builds a table-direct DBCommand that returns all the rows and columns of the requested table.
- Next, the DBCommand's ExecuteReader method is called to return a DataReader.
- Finally, the returned DataReader is passed into the constructor of an EnumerateDataReader, which is returned. EnumerateDataReader implements the RowEnumerator interface.
The first two steps are straightforward, but at this point you might be wondering why the DataReader is put into an EnumerateDataReader. The main reason is to have a unified way of passing data from different kinds of data stores. The only requirement is that the data be returnable as a collection of rows and columns. RowEnumerator is the abstraction that any code calling into the data store needs to understand. The EnumerateDataReader acts as a mediator between the DataReader, which holds the actual data, and the calling code's expectation of getting a RowEnumerator, which enumerates a collection of DataRows. A DataRow is shorthand for Dictionary<string, object> where the string is the name of the column and the object is the value of that column for the current row.
Implementing a Custom Enumerator
In order to fully implement a custom enumerator two generic interfaces need to be implemented: IEnumerator<DataRow> and IEnumerable<DataRow>. This enforces that any implementation of RowEnumerator only returns DataRow objects. Custom enumerators gave .Net 1 and 2 the ability to use foreach; now, with LINQ, custom enumerators are even more powerful. To build a custom enumerator the following methods and properties need to be implemented: Current, Reset, MoveNext, and GetEnumerator. GetEnumerator just returns the right interface to interact with your enumerator. The other three are the meat of a custom enumerator.
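A stripped-down illustration of the pattern, enumerating a fixed in-memory array instead of a live DataReader so it runs anywhere; the Accelerator's EnumerateDataReader follows the same shape:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using DataRow = System.Collections.Generic.Dictionary<string, object>;

// Minimal custom enumerator over an array of rows.
class RowEnumeratorSketch : IEnumerator<DataRow>, IEnumerable<DataRow>
{
    readonly DataRow[] _rows;
    int _index = -1;

    public RowEnumeratorSketch(DataRow[] rows) { _rows = rows; }

    public DataRow Current
    {
        get { return (_index >= 0 && _index < _rows.Length) ? _rows[_index] : null; }
    }
    object IEnumerator.Current { get { return Current; } }

    public bool MoveNext() { return ++_index < _rows.Length; }
    public void Reset() { _index = -1; }
    public void Dispose() { }

    public IEnumerator<DataRow> GetEnumerator() { return this; }
    IEnumerator IEnumerable.GetEnumerator() { return this; }
}

static class Demo
{
    static void Main()
    {
        var rows = new RowEnumeratorSketch(new[]
        {
            new DataRow { { "Name", "Hammer" } },
            new DataRow { { "Name", "Saw" } }
        });
        // Works with both foreach and LINQ query syntax.
        foreach (var name in from row in rows select row["Name"])
            Console.WriteLine(name);
    }
}
```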
Current is a property that holds the latest DataRow. Current is set when MoveNext is called. When foreach or LINQ needs the next object it calls MoveNext. MoveNext gets the value of the DataReader's Read method. If Read returned true then data is available; otherwise EnumerateDataReader has come to the end of the DataReader. If data is available MoveNext builds a new current row of type DataRow, then loops through the DataReader columns, putting each column name and value into the DataRow. This was done because during maintenance a SQL statement might be changed; rather than have an order dependency, a Dictionary is used to give quick access to a column's value by the name of the column.
Listing 6 - MoveNext of EnumerateDataReader
// read a row from the data reader
result = _dataReader.Read();
// if a row was read then build a Row
if (result)
{
    // build Row current row from DataReader
    currentRow = new DataRow();
    // loop through the fields building the Columns
    // and adding them to the row
    for (int loop = 0; loop < _dataReader.FieldCount; loop++)
    {
        tempString = _dataReader.GetName(loop);
        currentRow[tempString] = _dataReader[loop];
    }
}
else
{
    // if no more rows set the currentRow to null
    currentRow = null;
}
The Reset method allows for returning the enumerator to its initial state. In this case the DataReader is forward-only, so Reset throws a not-implemented exception if called. This could be altered in a custom implementation: you could pass in the DBCommand used to build the DataReader, so that when Reset is called you close and dispose of the current DataReader before re-executing the DBCommand.
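The resettable variant the paragraph suggests might look like this, using the real DbCommand/DbDataReader types; the wrapper class itself is hypothetical:

```csharp
using System.Data.Common;

// Hypothetical resettable wrapper: instead of throwing, Reset disposes
// the forward-only reader and re-executes the command to get a fresh one.
class ResettableReaderSketch
{
    readonly DbCommand _command;
    DbDataReader _reader;

    public ResettableReaderSketch(DbCommand command)
    {
        _command = command;
        _reader = command.ExecuteReader();
    }

    public DbDataReader Reader { get { return _reader; } }

    public void Reset()
    {
        _reader.Close();
        _reader.Dispose();
        _reader = _command.ExecuteReader();
    }
}
```

The trade-off is that every Reset pays the cost of a fresh query against the database.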
The other responsibility of EnumerateDataReader is properly disposing of the DataReader it is managing. In the Dispose method the DataReader is closed and then disposed.
The benefit of this approach is that at the end of a Read from the data store you have an enumerable collection of objects regardless of the actual structure of the data. However, nothing is built unless the code actually needs it, in an on-demand fashion. The DataReader does not read anything until requested, at which point the EnumerateDataReader creates a DataRow and hands it out, almost like having a cursor set up on your query. In addition it is both foreach and LINQ friendly, so you can execute a SQL statement and then use a LINQ "where", "orderby", "groupby", "sum" clause, etc. to further manipulate and refine the data.
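The LINQ refinement step works against any IEnumerable<DataRow>, so it can be demonstrated with an in-memory stand-in for _dataStore.Read (the Read method and the rows below are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using DataRow = System.Collections.Generic.Dictionary<string, object>;

static class LinqRefinementDemo
{
    // Stand-in for _dataStore.Read(...): any IEnumerable<DataRow> works,
    // and yield return keeps it lazy, just like the EnumerateDataReader.
    static IEnumerable<DataRow> Read()
    {
        yield return new DataRow { { "Name", "Hammer" }, { "InStock", 3 } };
        yield return new DataRow { { "Name", "Saw" }, { "InStock", 0 } };
        yield return new DataRow { { "Name", "Drill" }, { "InStock", 12 } };
    }

    static void Main()
    {
        // Refine the raw rows with where/orderby before building objects.
        var inStock = from row in Read()
                      where (int)row["InStock"] > 0
                      orderby (string)row["Name"]
                      select row["Name"];

        foreach (var name in inStock)
            Console.WriteLine(name); // Drill, Hammer
    }
}
```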
Building the Inventory Items
Returning to the LINQ statement, it now knows where it will get objects (see Listing 2 - GetInventory method). When it gets a DataRow, that row is called currentRow and passed into the CreateInventory method. This method simply builds the inventory object, which is returned to the LINQ statement. One thing you might notice is the use of a generic helper method called GetValue.
Listing 7 - CreateInventory Method
return new Inventory(GetValue<Nullable<int>>(currentRow[Inventory.INVENTORYID_COLUMN]),
    GetValue<string>(currentRow[Inventory.NAME_COLUMN]),
    GetValue<byte[]>(currentRow[Inventory.PICTURE_COLUMN]),
    GetValue<Nullable<decimal>>(currentRow[Inventory.PRICE_COLUMN]),
    GetValue<Nullable<int>>(currentRow[Inventory.INSTOCK_COLUMN]));
The GetValue method simply checks that the value is not DBNull. If it is DBNull, GetValue returns the default for the specified type. Since many of the columns can at times be null, the Nullable type is used. To retrieve values, simply reference the DataRow indexer, passing in the name of the column.
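A plausible shape for the GetValue helper, based on that description; the Accelerator's actual implementation may differ:

```csharp
using System;

static class ValueHelper
{
    // Returns default(T) (null for Nullable<T> and reference types)
    // when the database handed back DBNull.
    public static T GetValue<T>(object value)
    {
        if (value == null || value is DBNull)
            return default(T);
        return (T)value;
    }

    static void Main()
    {
        Nullable<int> missing = GetValue<Nullable<int>>(DBNull.Value);
        Console.WriteLine(missing.HasValue);            // False
        Console.WriteLine(GetValue<Nullable<int>>(42)); // 42
    }
}
```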
The LINQ statement is set up at this point. The statement alone does not do much other than set up the query and build the EnumerateDataReader. In order to actually get the LINQ query to do work, an operation must be performed, in this case filling a List using the ToList method of the query. The resulting collection of inventory objects is returned to the BindInventoryDisplay method, which then uses data binding to bind the collection to the combo box.
Retrieving a Single Value
If the Application layer requires a single value the Data layer implements a GetValue method. This method is similar to the Read method. The only difference is that it uses the DBCommand's ExecuteScalar method, which returns an object that is passed back to the caller. What actually gets passed back depends on the underlying ADO.Net implementation; most implementations hand back the value of the first column of the first row.
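For reference, the raw ADO.Net scalar path that the data store's GetValue wraps looks like this (connection string and query are illustrative):

```csharp
using System.Data.SqlServerCe;

static class ScalarDemo
{
    static void Main()
    {
        using (var connection = new SqlCeConnection("Data Source=store.sdf"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "SELECT COUNT(*) FROM Inventory";
                // ExecuteScalar returns the first column of the first row
                // as an object; the caller casts it to the expected type.
                object result = command.ExecuteScalar();
                System.Console.WriteLine((int)result);
            }
        }
    }
}
```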
From the UI layer's perspective, modifying, deleting, and creating data requires only that it create, modify, or delete an object. The Mobile Line of Business Accelerator implements an object lifecycle mechanism: all the Application objects that are persisted inherit from DataObject, and DataObject contains the building blocks for the lifecycle of the object.
The object can be in one of four states:
- Delete – the object is marked to be deleted when it is persisted back to the database. At this point no changes can be made to the object.
- Modify – the object has been modified, and when it is persisted back to the database it will be updated.
- New – the object is a new, non-persisted object.
- Unchanged – when an object is retrieved from the database it starts out in an unchanged state. From here the object can stay unchanged, be modified, or be deleted.
The state is maintained in the constructors, the Delete method, and the property set methods. When an object is created with a no-argument constructor it is marked as New. When an object is created with arguments it is considered an Unchanged, freshly hydrated object. The base method Delete marks an object's state as Delete. When a property is changed, two generic helper methods can be used: ChangeKey and ChangeValue. Both do essentially the same thing: enforce the type and correctly set the object state when a property is changed. The difference is that ChangeKey enforces that once set, the property cannot change.
Listing 8 - Change Properties
/// Unique Order Id
public Nullable<Guid> OrderId
{
    get { return _orderId; }
    set
    {
        bool changable = _orderId == null;
        _orderId = ChangeKey<Nullable<Guid>>(_orderId, value, ORDERID_COLUMN);
        // push the new key down to the order details
        if (changable && (_orderId != null))
            foreach (OrderDetail currentDetail in OrderDetails)
                currentDetail.OrderId = _orderId;
    }
}

/// Current state of the order as it
/// moves through the production process
public Nullable<OrderState> OrderState
{
    get { return _orderState; }
    set
    {
        // If needed put additional editing before the change value.
        _orderState = ChangeValue<Nullable<OrderState>>(_orderState, value, ORDERSTATE_COLUMN);
    }
}
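One plausible implementation of the ChangeValue helper these properties rely on. This is a hypothetical sketch: the Accelerator's real method likely also tracks which columns changed for the eventual SQL update:

```csharp
using System;
using System.Collections.Generic;

enum ObjectState { New, Unchanged, Modify, Delete }

class DataObjectSketch
{
    public ObjectState ObjectState { get; protected set; }

    // Hypothetical: compare old and new values, and flip a persisted
    // object's state to Modify when a value actually changes.
    protected T ChangeValue<T>(T oldValue, T newValue, string columnName)
    {
        if (ObjectState == ObjectState.Delete)
            throw new InvalidOperationException("Cannot change a deleted object.");
        if (!EqualityComparer<T>.Default.Equals(oldValue, newValue)
            && ObjectState == ObjectState.Unchanged)
        {
            ObjectState = ObjectState.Modify;
        }
        return newValue;
    }
}
```

ChangeKey would follow the same shape but throw if the old value was already set to something non-null.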
When the UI is ready to save an order it calls the SaveOrder method. This method saves the order header based on the object's state, then loops through the order details, saving them based on their object states.
Listing 9 - Save Order
// Create order header
if (order.ObjectState != ObjectState.Unchanged)
{
    parameters = new Parameters();
    _currentOrder = order; // setup for trigger
    if (order.ObjectState == ObjectState.New)
    {
        // Can't just add the order signature here since we may not even have it yet
        // either that or this query needs to take null as a parameter for signature
        // parameters.Add("@Signature", order.Signature);

        //get new order guid
        order.OrderId = Guid.NewGuid();
        //get new display id.
        order.DisplayId = GetNewDisplayId();
    }
    // ... execute the insert, update, or delete of the header row ...
    if (order.ObjectState != ObjectState.Delete)
        order.ObjectState = ObjectState.Unchanged;
}

// Save the details unless the whole order was deleted
if (order.ObjectState != ObjectState.Delete)
{
    parameters = new Parameters();
    foreach (OrderDetail currentDetail in order.OrderDetails)
    {
        _currentOrderDetail = currentDetail; // setup for trigger
        if (currentDetail.ObjectState != ObjectState.Unchanged)
        {
            parameters["@InventoryId"] = currentDetail.InventoryId;
            parameters["@Quantity"] = currentDetail.Quantity;
            // ... execute the insert, update, or delete of the detail row ...
            currentDetail.ObjectState = ObjectState.Unchanged;
        }
    }
}
_currentOrder = null; // teardown for trigger
_currentOrderDetail = null; // teardown for trigger
The team did performance testing. The first test simply used ADO.Net to build the objects; the second used a foreach and the third LINQ. The test table contained over 42 thousand records. Each test did the following:
- Open the database connection
- Read the table (this is to JIT the underlying code and classes)
- Read another table (to randomize the file so the database needs to actually do the I/O rather than use cached results)
- Read the table again this time measuring the time to accomplish this
The test was done using the emulator. The interesting thing is that the difference between the three tests was within 100-150 milliseconds. The average was around 6.5 seconds to build 42,000 objects, a rate of around 6,500 objects hydrated per second. Using an index and picking a subset of records (approximately 2,000 objects) happened within half a second. Could it be faster? Always, but the performance for a version 1 framework exceeded the team's expectations.
The new data layer was designed to give developers a powerful way to access data while hiding the details of using ADO.Net. At the same time the data layer gives developers a way to use LINQ on data coming out of the data layer. Finally, the three layers of the application are isolated from each other's implementation details and can be swapped out with a different implementation, for example swapping out where information is stored.