A Roadmap to the Programmable World: Software Challenges in the IoT Era

The emergence of millions of remotely programmable devices in our surroundings
will pose significant challenges for software developers. A roadmap from today’s cloud-centric, data-centric Internet of Things systems to the Programmable World highlights those challenges that haven’t received enough attention.

Author’s post-print version (pdf), which is equal in content to the fully formatted, published version available from IEEE:

Taivalsaari, Antero and Mikkonen, Tommi, 2017. A Roadmap to the Programmable World: Software Challenges in the IoT Era. Software, IEEE, 34(1), pp.72–80. doi:10.1109/MS.2017.26

 

Scientific Writing – Guide of the Empirical Software Engineering Research Group

We have written some instructions to help students with scientific writing. Learning scientific writing provides the ability to express one’s thoughts with particular clarity and to communicate them in a manner that seasoned scientists find easy to follow.

The guidelines are applicable to seminar reports, B.Sc. and M.Sc. theses, and also to writing your first scientific publication. The guide is intended for software engineering and related areas of research; it is not necessarily directly applicable to other fields, e.g., theoretical computer science.

Please read the guide before starting your thesis work: Scientific Writing – Guide of the Empirical Software Engineering Research Group

By the way, the ESE research group’s web pages also list some thesis topics of interest to the research group, some of them provided by our industrial collaborators.

Improving the Delivery Cycle

Together with researchers from the Tampere University of Technology and Aalto University, we have recently been studying the continuous deployment phenomenon from different angles. In our latest installment, we studied the toolchains and development processes of software-intensive companies based in Finland. We wanted to find out what kinds of development and deployment pipelines companies have, how well the development stages are automated with tools, and how the use of tools relates to release frequency in practice. The article Improving the delivery cycle: A multiple-case study of the toolchains in Finnish software intensive enterprises was just published in the December 2016 issue of Information and Software Technology.

A process picture drawn by a company representative in an interview.

Our case study data consists of information collected from 18 cases (17 companies) using semi-structured interviews as the data collection method. Many of the companies come from the Need for Speed research program, in which we have participated since 2014. In the interviews, we asked the companies to draw (yes, with an actual marker or a pen) their development processes and to name the concrete tools they use in each development stage. We also asked about their release and deployment practices. Using thematic analysis, we built a toolchain for each case from this data.

Looking at the results from the cases, tools in several categories can be considered a de facto standard for software development; version control and build tools fall into this category. Then again, many development stages were carried out without any automated tools at all, requiring manual work, or were omitted altogether as development activities. Fully automated toolchains were rare, and deployments were often done manually. Acceptance testing, for example, was typically missing from the companies’ toolchains as an automated activity.

The fastest companies were able to release software more or less daily, while some of the slower ones were able to release roughly once a month. The actual production release frequency differed, however, from the ability to do a release: depending on the domain, the actual release cycles were sometimes longer than a year. The domain makes quite a difference, and the gaming companies stand out in choosing not to use many automated tools such as continuous integration. Companies with more complete toolchains tended to be on the faster end when comparing release frequencies, so automated toolchains may well help increase release frequency, but the relationship is not a simple one: an internal capability to release roughly every two weeks can be achieved with relatively few tools, and release frequency can remain low even when a company has a solid toolchain in use.

We can conclude that a good, automated toolchain can be an asset if a company wishes to strive for continuous deployment or to improve its release capability in general. Cultural factors should not be overlooked, but tools might just help in the process.

If you want to read the full story behind the study, the article is available from Elsevier. Elsevier has provided us with a link that gives free access to the article until November 25th, 2016, so you have about a month to check out the article, after which the normal Elsevier subscription rules and fees apply. I hope you enjoy the article, and I would be happy to hear any comments you might have on the subject.

Simo Mäkinen, Marko Leppänen, Terhi Kilamo, Anna-Liisa Mattila, Eero Laukkanen, Max Pagels, Tomi Männistö, Improving the delivery cycle: A multiple-case study of the toolchains in Finnish software intensive enterprises, Information and Software Technology, Volume 80, December 2016, Pages 175-194, ISSN 0950-5849, http://dx.doi.org/10.1016/j.infsof.2016.09.001. Open access link http://authors.elsevier.com/a/1Tqqq3O8rCGzPw (expires 25th November 2016).

Wishing all the readers a colorful autumn,
Simo Mäkinen
University of Helsinki
Department of Computer Science

A good start for the Autumn semester – 4 papers accepted to Profes 2016

Profes, an International Conference on Product-Focused Software Process Improvement, is among the top recognized software development and process improvement conferences. Profes 2016, which will be held in Trondheim, Norway, on November 22–24, 2016, had tough competition, as the conference received close to 80 submissions. We at ESE are therefore glad to announce that four of our papers were accepted. The papers with the following titles and author lists will be made available here once they have been published:

  • Transitioning Towards Continuous Experimentation in a Large Software Product and Service Development Organization – A Case Study: Sezin Gizem Yaman, Fabian Fagerholm, Myriam Munezero, Jürgen Münch, Mika Aaltola, Christina Palmu and Tomi Männistö
  • Towards Continuous Customer Satisfaction and Experience Management: A Measurement Framework Design Case in Wireless B2B Industry: Petri Kettunen, Mikko Ämmälä, Tanja Sauvola, Susanna Teppola, Jari Partanen, Simo Rontti
  • Supporting management of hybrid OSS communities – A stakeholder analysis approach: Hanna Mäenpää, Tero Kojo, Myriam Munezero, Terhi Kilamo, Fabian Fagerholm, Mikko Nurmela, Tomi Männistö
  • DevOps Adoption Benefits and Challenges in Practice: A Case Study: Leah Riungu-Kalliosaari, Simo Mäkinen, Lucy Ellen Lwakatare, Juha Tiihonen and Tomi Männistö

Congratulations to all the authors!

Challenges faced with Continuous Experimentation

One of ESE’s research focuses and core competencies is introducing and conducting continuous experimentation with software product-intensive companies. Building on this competency, we are working on a continuous experimentation handbook to guide companies in carrying out continuous experimentation. To gather evidence of what people would like to see in the handbook, we conducted a small survey with fellow researchers and company representatives in December 2015. In the survey, people were asked to rank the importance of five common challenges of continuous experimentation. The ranking was from 1 to 5, with 1 being the biggest challenge and 5 the least. The challenges are: (A) Finding the right hypothesis, (B) Designing the right experiment, (C) Getting the right usage data, (D) Integrating experimentation and delivery, and (E) Changing the organizational culture.

In total, we received 28 responses to the survey. Based on the responses, we found that the majority of respondents consider topic E, changing the organizational culture, to be the biggest challenge.

In addition, we found that the most frequent ranking, from the biggest challenge to the least, was as follows (a sketch of how such a modal ranking can be computed is shown after the list):
E: Changing the organizational culture
A: Finding the right hypothesis
B: Designing the right experiment
C: Getting the right usage data
D: Integrating experimentation and delivery
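
To make the notion of the most frequent ranking concrete, here is a minimal sketch, in Python, of how such a modal ranking could be computed. The response data below is made up purely for illustration; the 28 individual responses are not reproduced here.

from collections import Counter

# Illustrative (made-up) responses: each tuple lists the challenges in the
# order one respondent ranked them, from biggest challenge (rank 1) to least.
responses = [
    ("E", "A", "B", "C", "D"),
    ("E", "A", "B", "C", "D"),
    ("A", "E", "B", "D", "C"),
    ("C", "D", "B", "A", "E"),
    # ... the remaining respondents
]

# The most frequent complete ranking and how many respondents gave it.
modal_ranking, count = Counter(responses).most_common(1)[0]
print(" > ".join(modal_ranking), f"({count} respondents)")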

We also computed correlations between the rankings given to the five challenges and obtained the following correlation matrix:

       A            B            C            D            E
A   1.00000000  -0.36829924   0.04056504  -0.47364549  -0.18414527
B  -0.36829924   1.00000000   0.28114367  -0.12108926  -0.48277004
C   0.04056504   0.28114367   1.00000000  -0.51615429  -0.63038456
D  -0.47364549  -0.12108926  -0.51615429   1.00000000   0.01702777
E  -0.18414527  -0.48277004  -0.63038456   0.01702777   1.00000000

If we set the cutoff at 0.6, we can see that topics C and E are negatively correlated: if a respondent ranked “Getting the right usage data” high (i.e., as a bigger challenge), they also tended to rank “Changing the organizational culture” low, and vice versa. This could be interpreted as follows: there are two groups of respondents, those who tend to focus on “organizational” concerns (for instance, managers) and those who tend to focus on “technical” concerns (for instance, developers).

On the other hand, if we lower the cutoff to 0.5, we can see that C and D are also negatively correlated: if a respondent ranked “Getting the right usage data” high, they tended to rank “Integrating experimentation and delivery” low, and vice versa. This could be interpreted as follows: in addition to those who tend to focus on “technical” concerns, there might be another group that tends to focus on process (for instance, DevOps).
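
For the curious, below is a minimal sketch, in Python, of how such a correlation matrix and the cutoff check could be reproduced. We do not go into our actual analysis scripts here, and the rank data in the sketch is made up for illustration; the sketch simply computes pairwise Pearson correlations between the rank columns (which, on rank data, amounts to a Spearman-style analysis) and prints the challenge pairs whose absolute correlation exceeds a chosen cutoff.

import numpy as np

challenges = ["A", "B", "C", "D", "E"]

# Illustrative (made-up) rank data: each row is one respondent's ranking of
# the challenges A..E, where 1 = biggest challenge and 5 = least.
responses = np.array([
    [2, 3, 4, 5, 1],
    [1, 3, 4, 5, 2],
    [3, 4, 1, 2, 5],
    [4, 5, 2, 1, 3],
    # ... the remaining respondents
])

# Pairwise correlations between the challenge columns.
corr = np.corrcoef(responses, rowvar=False)

# Report the pairs whose correlation magnitude exceeds the cutoff.
cutoff = 0.6
for i in range(len(challenges)):
    for j in range(i + 1, len(challenges)):
        if abs(corr[i, j]) >= cutoff:
            print(challenges[i], challenges[j], round(corr[i, j], 2))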

We would love to hear your thoughts! Let us know how you would rank the five challenges and/or whether there are other challenges that you have faced.

Refactoring: a Shot in the Dark?

A paper published in IEEE Software examines how practitioners view the role and importance of refactoring, and how and when they refactor. The study was based on interviews with 12 seasoned software architects and developers at nine Finnish companies.

The respondents considered refactoring to be valuable but had difficulty explaining and justifying it to management and customers and did not use measurements to quantify the need for or impact of refactoring. Refactoring often occurred in conjunction with the development of new features because it seemed to require a clear business need.


Author’s post-print version (pdf): Leppänen et al. 2015, “Refactoring-a Shot in the Dark?”

Leppänen, Marko; Mäkinen, Simo; Lahtinen, Samuel; Sievi-Korte, Outi; Tuovinen, Antti-Pekka; Männistö, Tomi, “Refactoring-a Shot in the Dark?,” in Software, IEEE, vol.32, no.6, pp.62-70, Nov.-Dec. 2015.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7310989

 

Empirical evidence on software engineering productivity

Now that even the Finnish government is striving for productivity leaps and cost-efficiencies through digitalization, it is increasingly important to understand what exactly those general terms mean in software engineering and which factors influence them. Moreover, empirical evidence is needed to justify the underlying assumptions (e.g., working hours vs. value-add).

Performance variability in Software Product Lines (Paper in Empirical Software Engineering journal)

Typically, it is features that vary in a software product line. This paper goes further, aiming to understand quality attribute variability, in this case performance. The paper is the result of close collaboration between the ESE research group at the University of Helsinki, Aalto University, and the case company Nokia.

Myllärniemi, V., Savolainen, J., Raatikainen, M. and Männistö, T., 2015. Performance variability in software product lines: proposing theories from a case study. Empirical Software Engineering (published online).
http://link.springer.com/article/10.1007%2Fs10664-014-9359-z

Empirical Software Engineering blog!

This is the blog of the Empirical Software Engineering Research Group at the University of Helsinki. We address software engineering research problems and challenges with industrial relevance or origin. We emphasise the empirical aspect of the research, in particular by applying research methods that enable us to gain a deep understanding of software development.