‘Software is critical for matches between soccer robots and human teams’

Can a team of soccer robots ever beat a human team? According to the organisers of the Robot Soccer World Cup (Robocup), the answer is yes. What’s more, they think it will happen by 2050. ICT Group and Fontys hope to contribute to this goal by founding a soccer robot team of their own. Oscar Reynhout and Wim Hendriksen, consultant at ICT Group and lecturer at Fontys ICT college respectively, tell the story of how things came together and which challenges the team faces.

The Netherlands has been active in the Robocup since 1998. The TechUnited soccer robot team from Eindhoven University of Technology (TU/e) has participated regularly and was world champion in 2012 and 2014. For many years now, the team has been alone at this high level. This is partly because the competition has been thinning out, largely due to a steep entry cost of about 25,000 euros per soccer robot, but also due to the high technical complexity of the robots.

Affordable soccer robots

In order to attract more competitors, the TU/e took up the challenge of developing an affordable soccer robot a few years ago: the Turtle 5K. This redesigned version of the 2012 cup winner is available for ‘just’ 5,000 euros. The main difference with its illustrious predecessor is that the Turtle 5K does not know how to play soccer. It comes with just a bare-bones Linux-based Robot Operating System (ROS), which contains the necessary drivers for the available hardware, such as the omni-wheels the robot uses to move around. The buyer has to tweak both software and hardware to end up with a functional soccer robot. One of the buyers, photolithography systems manufacturer ASML, has achieved success in this regard, taking part in the 2015 Robocup in China.

Software development

Near the end of last year, ICT Group got involved with the project following a request from Frank Steeghs, project leader of Turtle 5K.
As a side project, ICT’s Oscar Reynhout guided sixteen students from Fontys ICT college and Avans college through the development of soccer robot software, aimed at improving the soccer robot’s performance. For ICT Group, this project is a unique opportunity to gain experience with both robotics and artificial intelligence. Oscar comments, “For my colleagues and me this is a fantastic hobby project. As a business we recognize the immense value of being able to experiment with new ideas and technologies, especially given the rise of robotics in sectors like healthcare and industry. Additionally, thanks to this project we maintain close relationships with both educational institutes and SMEs. It is very interesting, both from a business and a recruitment point of view.”

Their own soccer team

The students started developing the software in September 2015. They were divided into four teams: architecture, motion, model-driven development and ROS. In just six months, they got the robot to detect the ball, drive towards it, and kick it at the goal. Fontys’ Wim Hendriksen speaks with pride about his students: “They started from scratch, so this is really an astonishing feat.” This success got Wim and Oscar dreaming about a soccer robot team of their own. Wim elaborates, “We are now looking for SMEs with complementary abilities, in order to found a soccer robot team together. Such a partnership would allow us to purchase several robots at once, cutting down production costs. A lot of companies showed interest at the High-Tech Systems expo last March, so I fully expect the team to materialize.”

Model-driven development

According to Oscar, the new team will initially focus on beating ‘mechanical engineering’ teams from the region. “TechUnited’s and ASML’s robots, for example, are primarily built by people with a background in electronics and mechanics. We want to beat them with excellent software.” One of the biggest challenges will be avoiding as many mistakes as possible during programming. Wim explains, “Robots can’t afford logical mistakes like a deadlock (robots waiting for one another). Currently, roughly one in a hundred lines of code contains a bug. However, thanks to programming methods like model-driven development we can code at a more abstract level, reducing the chance that faults slip in.” Oscar adds, “You can compare it to Microsoft Word’s spellchecker. Currently, if you make a mistake you’ll see a red line. It would be a lot better if Word automatically corrected grammatically incorrect sentences, and in an ideal world the tool would even fix sentences with faulty content. That last example is what model-driven development allows us to do.” According to Robocup’s organisers, software is what will allow soccer bots to beat human players by 2050.
Oscar elaborates, “The difference between FC Barcelona and PSV is not physical conditioning, but technique and strategy. This is no different for robots: their technique needs to be on point to allow for skilful passing and dribbling. Ultimately, however, it is great team play that will bring the soccer bots victory. Unfortunately, I think I will be retired by the time that happens.” Reading about robot soccer is fun, but you really should see it for yourself. For more information on the cooperation between ICT and Fontys, refer to this page.

Get more out of testing automation and professional testers

According to Peter de Winter, Test Architect at ICT Group, organizations are still not fully benefitting from automated testing. In fact, testing automation can sometimes even detract from the strengths of professional testers. In this blog, Peter explains how software businesses can deliver new functionality quicker and better by making smart use of testing automation.

The digital development cycle has drastically changed time to market and the way markets work. If they could, organizations would push their software products onto the market yesterday. As a result, businesses are looking to testing automation as a way to speed up product development cycles. In theory, this is a good idea: testing automation has the potential to boost both efficiency and effectiveness. In practice, many organizations end up feeling cheated because they ignore a very important caveat to the theory: only automate when it makes sense. What typically happens is the following: once a company decides to go forward with testing automation, they tend to stick with it… a little too much. Skilled testing professionals end up spending endless hours creating automated tests, so the hours ‘saved’ are lost again and fewer bugs are found. Another unintended consequence is that human testers have far less time to look for bugs manually. Ironically, this is where human testers add the most value.

How to test thoroughly?

My advice to businesses is to carefully consider, in advance, to what extent you want to implement testing automation. Weigh the expected efficiency gains against the time invested in creating the automation, and use risk analysis to focus on the functionalities where failures have the most potential for negative consequences. This also leads nicely into regression testing.
For example, test entry fields with all possible and impossible combinations, but limit your regression test (or the regression part of a larger test) to a single combination, which you preferably vary over time. In this regard it is very important for the test data to be independent of the testing scripts: this lets you add variation while keeping the testing scripts fast and easy to maintain. You can add to these advantages by keeping automated tests small (focused on a single piece of functionality), which grants you the additional benefit of reusing them in other, more abstract automated tests. Finally, it is good practice to periodically determine whether all automated tests are still used and worth maintaining, potentially saving a lot of valuable time.

By deciding on these steps in advance, organizations can get the most out of testing automation and their testing professionals. It lets your testing professionals focus on what they are good at and passionate about: finding the bugs that fall outside the range of automated testing. Focused manual testing combined with smart testing automation results in high-quality functionality delivered on time, allowing businesses to quickly capitalize on new market opportunities. Want to learn more about smart testing automation? Get in touch with Peter de Winter, Test Architect at ICT Group.
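As a sketch of keeping test data independent of the testing script, consider a data-driven test in C# with NUnit. The class, method and data below are purely illustrative, not from an actual project; the point is that new combinations can be added (or varied) in the data source without touching the test logic:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Stand-in for the system under test.
public static class FakeAuth
{
    public static bool TryLogin(string user, string password) =>
        user == "alice" && password == "correct-password";
}

public class LoginTests
{
    // The test data lives in its own source (here a static property;
    // in practice often a CSV file or database), separate from the script.
    private static IEnumerable<object[]> EntryFieldData => new[]
    {
        new object[] { "alice", "correct-password", true  },
        new object[] { "alice", "",                 false },
        new object[] { "",      "correct-password", false },
    };

    [TestCaseSource(nameof(EntryFieldData))]
    public void Login_validates_entry_fields(string user, string password, bool expected)
    {
        Assert.AreEqual(expected, FakeAuth.TryLogin(user, password));
    }
}
```

Because the script never hard-codes the combinations, a regression run can use a single (rotating) row while a full run uses them all.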

The magic of the Internet of Things

The British science fiction author Arthur C. Clarke posited three laws while writing his vision of the future:

1. When an expert says something is possible, they are probably right. When they say something is impossible, they are probably wrong.
2. The only way to discover the limits of the possible is to cross them a little.
3. Any sufficiently advanced technology is indistinguishable from magic.

When it comes to the Internet of Things (IoT), these laws certainly seem to hold. Technology no longer limits us from making magic. The blending of physical and virtual technologies brings the world around us to life, as if we were living in a Pixar movie. I regularly get sent clips of Kickstarter projects or TED talks that showcase amazing new applications. One I recently received demonstrated Flic, the wireless smart button. In short, it is a button that can be connected to all kinds of services. For example: you can stick it on a wall in your living room, push it, and the lights go out, the TV turns on and automatically tunes in to Netflix. If my mother were to see this, she would be convinced it is magic. The difference is that at ICT we spend so much time working with technology that the “illusion” can be explained within 10 seconds, and recreated after half an hour of playing around with various components. You would think this quickly takes away the magic, yet nothing could be further from the truth. Flic shows us something beautiful: it is not the enabling technology that brings the illusion to life; it is the show around it. Or, in the case of IoT solutions: the ease of use, the aesthetic design and the seamless integration with services. So what makes the illusion reality? The integration into the end user’s life! That is where everything comes together. Users no longer have to adapt to the technology. They do not have to learn special skills, or take time to learn the ‘trick’.
The illusion is effortless and requires no thought. And this is the real power of IoT: not everything in my life has to be connected, but if it improves my quality of life, it is very welcome. The power of ICT Group is that we have spent nearly four decades building up expertise in deploying a wide range of technological solutions across several market segments. Along the way we have encountered limits, and stretched them. With our signature approach, which places security and privacy at the core and focuses on added value for the user, we are always looking for new challenges. We want to make magic: an illusion for the end user, and added value for smarter cities, smarter industries and smarter healthcare.

Beware where you are POSTing!

Recently I had the pleasure of working with Highcharts, a JavaScript library for creating dynamic charts. Recommended! The client also wanted the ability to download the data used in the charts as a CSV file. A quick browse through the documentation showed that Highcharts supports this scenario. There are multiple ways to do it, but the one I have seen most often involves POSTing your chart data to a csv.php page that resides within the Highcharts domain. That csv.php page only adds the headers needed to turn the response into a download. This means that if you use this construction, all your chart data is passed to a page that is under the control of Highcharts. To be clear, I am not claiming that Highcharts will do anything malicious with your data! On the contrary, Highcharts even advises in their documentation that you should create your own page if you do not want to expose your data, and they explicitly tell you that the page could disappear at any moment. However, a quick search on GitHub showed that a number of projects are still using the Highcharts csv.php page, meaning that all their data is posted to another party (over plain HTTP as well). So kids, whenever you start POSTing data to a third party, ask yourself whether you mind that the data is now potentially public. And create your own page to handle that download. In ASP.NET MVC it is as simple as creating the following controller method:
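A minimal sketch of such a controller, assuming the client POSTs the CSV content in a `csv` field and the desired file name in a `filename` field (both field names are illustrative, matching whatever your client-side export code sends):

```csharp
using System.Text;
using System.Web.Mvc;

public class ExportController : Controller
{
    // Receives the chart data on our own server instead of a third party.
    [HttpPost]
    public ActionResult DownloadCsv(string csv, string filename)
    {
        // Returning a FileResult sets the Content-Disposition header,
        // so the browser treats the response as a file download.
        return File(Encoding.UTF8.GetBytes(csv ?? string.Empty),
                    "text/csv",
                    string.IsNullOrEmpty(filename) ? "chart.csv" : filename);
    }
}
```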

Small tip: use QueueBackgroundWorkItem for asynchronous work in ASP.NET

This is a small tip that I am mainly publishing as a reminder to myself, but it could come in handy for someone else. Background processing tasks in ASP.NET are hard: at any time, IIS could decide to recycle the application pool. The usual solution is to farm these tasks out to a queue (for example an Azure queue) and let some other machine (such as an Azure worker role) process that queue. However, with ASP.NET 4.5.2 Microsoft introduced the QueueBackgroundWorkItem method, which makes it possible to run small background processing tasks within the application lifecycle context. See the following (extremely simple) example:
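A minimal sketch (the controller name and the simulated work are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Generate()
    {
        // The work item is tracked by ASP.NET: an app pool recycle is
        // delayed (up to 30 seconds) until the task completes.
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken token) =>
        {
            // Simulated background work; honour the token so the task
            // can be cancelled when the app domain shuts down.
            await Task.Delay(TimeSpan.FromSeconds(5), token);
        });

        // Return immediately; the work continues in the background.
        return new HttpStatusCodeResult(202);
    }
}
```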

Caveats:

  • A task started this way only delays recycling of the app pool by up to 30 seconds, so you need to complete your work within those 30 seconds. If not, the task will be killed.
  • You need ASP.NET 4.5.2.

See the following links for more detail: http://blog.simontimms.com/2014/05/23/background-tasks-in-asp-net-redux/ http://www.davidwhitney.co.uk/Blog/2014/05/09/exploring-the-queuebackgroundworkitem-in-asp-net-framework-4-5-2/ http://blogs.msdn.com/b/webdev/archive/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-long-background-process-in-asp-net.aspx

Techorama 2015

For the second year in a row, Techorama was held, on 12 and 13 May. This technology seminar is not organized by Microsoft (like TechDays, TechEd and Build) but by the .NET community in Belgium. The venue was once again the Utopolis cinema in Mechelen: big rooms, big screens and fantastic seats guaranteed! Off to Mechelen early on Tuesday morning, because the keynote started at a quarter to nine. There was a fine selection of speakers, including Hadi Hariri (keynote), Marcel de Vries, Jan Tielens, Bart de Smet and Mike Wood. Two full days (until Wednesday 9 p.m.) with a great many interesting presentations.

Tracks

All sessions are part of a track. Techorama 2015 offered the following tracks:

  • ALM & Testing
  • Buildorama
  • Cloud
  • Language & Tools
  • Mobile & Xamarin
  • Web
  • Other

For ICT and for me, a perfect opportunity to gain new knowledge and meet industry peers. See the Techorama website for information about the two-day programme; the presentations will be posted there as well. Finally, two handy links:

EntityFramework: Updating EF4.x to EF6

During the migration of an ASP.NET MVC application for a customer from an on-premise environment to Microsoft Azure, I ran into some trouble updating EntityFramework so that we could use the EF6 retry logic for Microsoft SQL Azure. This is a short guide to overcoming the problems I ran into.

1: Install the EntityFramework package

First things first: if you have not done so already, add the EntityFramework NuGet package to your project. You can do this using the tooling (right-click on the solution > Manage NuGet Packages for Solution) or, if you need a specific version, using the console (Tools > NuGet Package Manager > Package Manager Console).

2: Check assembly redirects and namespace usings

A big difference between the 6.x and earlier versions of EntityFramework is the required assembly redirects. These should be fixed automatically during the installation of the NuGet package, but make sure all references in your project(s) to System.Data.Entity are removed. If you used EF’s ObjectContext, make sure all the namespace using directives are still valid: all code regarding the ObjectContext has been moved to a new namespace (from System.Data to System.Data.Entity.Core).

3: Replace the current code generation files

The next step is to update all the code generation items (yourmodel.tt / yourmodel.context.tt) for your models. For that to work, you first have to delete all existing files (tip: before you do this, make sure you know if and which modifications have been made to the templates, because you will probably want to make these changes again in your new templates). Add the EF 6.x code generation files by right-clicking on the design surface of your model and selecting Add Code Generation Item… from the context menu. In the window that follows, select EF 6.x DbContext Generator.

4: Connection resiliency

The next thing to do is add the actual connection resiliency. Just add a new class to your project, preferably in the same project as your model, and call it something like MyConfiguration.cs. Add this content:

using System.Data.Entity; 
using System.Data.Entity.Infrastructure; 
using System.Data.Entity.SqlServer; 
 
namespace ModelProjectNamespace
{ 
    public class MyConfiguration : DbConfiguration 
    { 
        public MyConfiguration() 
        { 
            // Retry transient faults when talking to SQL Azure.
            SetExecutionStrategy("System.Data.SqlClient", () => new SqlAzureExecutionStrategy()); 
            // Connection factory for local development against LocalDB.
            SetDefaultConnectionFactory(new LocalDbConnectionFactory("v11.0")); 
        } 
    } 
}
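With this class compiled into the same assembly as your model, EF6 discovers the DbConfiguration automatically, and queries run under the retry logic without further changes. A hypothetical usage sketch (MyModelContainer and Customer are placeholder names for your own generated context and entity types):

```csharp
using System.Linq;

public static class CustomerQueries
{
    public static int CountActiveCustomers()
    {
        using (var context = new MyModelContainer())
        {
            // Executed via SqlAzureExecutionStrategy, which retries
            // the query on transient SQL Azure faults.
            return context.Customers.Count(c => c.IsActive);
        }
    }
}
```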

What’s next

Hit compile and you are on your way.

ObjectContext versus DbContext (and Microsoft SQL Azure)

I am well aware that there is an EF 6.x EntityObject Generator template available for Visual Studio 2012 and 2013. If you have not installed the template yet and do want to use the ObjectContext: while adding the code generation files, instead of selecting the EF 6.x DbContext Generator, search online for the EF 6.x EntityObject Generator template. However, I have not yet found a way to make this work with Microsoft SQL Azure and the retry logic. In short: you can consider the DbContext an extra layer around the ObjectContext class. This extra layer provides extended APIs (such as DbConfiguration) that ObjectContext does not have.