Following the successful completion of the DIMECC N4S program, a publicly available N4S Treasure Chest has been released.
Our story can be found in the narratives section there.
Today, most of us in the industrialised world use at least two computers – a phone and a laptop. Many of us have a third device, a tablet, and further devices such as health monitoring and entertainment systems are coming into use. Even cars are becoming powerful computing platforms that can be harnessed to serve our growing appetite for apps and data.
Unfortunately, our practices, methods and techniques are not a good fit for such a cornucopia of computers. We still treat them like pets, each requiring constant attention, rather than accepting their new role as cattle, where no individual animal plays a defining role in our life. New approaches are therefore needed to harness the true power of the computers around us. So-called liquid software is an attempt to build applications that, from the end user's perspective, flow from one computer to another, offering a seamless computing experience. The initiative is still young, with the 2nd International Workshop on Liquid Software being hosted in Rome, Italy, as part of the International Conference on Web Engineering.
In addition to the techniques that will help us design software for numerous devices, another dimension to consider is what all these computers mean for humanity. What implications the increasingly intelligent environment of the brave new programmable world will have for our behaviour and actions as humans, and to what extent we should consider computers part of our society, are interesting questions for future research. Obviously, such work should be a joint endeavour, carried out by philosophers, social scientists and software engineers together.
This blog post repeats the core message Tommi Mikkonen delivered in his inaugural presentation on May 31, 2017.
The ESE research group has authored a cookbook for continuous experimentation based on the research in the N4S research program.
The Continuous Experimentation Cookbook – An introduction to systematic experimentation for software-intensive businesses provides an introduction to continuous experimentation: a systematic way to continuously test the value of your product or service and whether your business strategy is working.
An increasing number of companies are involved in building software-intensive products and services – hence the popular slogan “every business is a software business”. Software allows companies to disrupt existing markets because of its flexibility. This creates highly dynamic and competitive environments, imposing high risks to businesses. One risk is that the product or service is of only little or no value to customers, meaning the effort to develop it is wasted. In order to reduce such risks, you can adopt an experiment-driven development approach where you validate your product ideas before spending resources on fully developing them. Experiments allow you to test assumptions about what customers really want and react if the assumptions are wrong.
With real case examples from Ericsson, Solita, Vaadin, and Bittium, the book not only gives you the concepts needed to start performing continuous experimentation, but also shows you how others have been doing it.
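To make the idea concrete, a single experiment often boils down to comparing a key metric between a control group and a treatment group. The sketch below, with entirely made-up conversion numbers (not data from the book or its case companies), shows how such a comparison could be evaluated with a standard two-proportion z-test:

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 conversions in the control variant
# versus 156/2400 in the treatment variant.
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value here would suggest the treatment genuinely changed the metric, which is the kind of validated learning continuous experimentation aims for; in practice one would also fix the sample size and significance level before running the experiment.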
The cookbook was a deliverable in the N4S program.
We have recently and successfully completed the DD-SCALE (Distributed dynamic software development work in global value networks – framework, tools and work expertise practices) joint project, with the University of Tampere and Haaga-Helia University of Applied Sciences as research partners and ABB, Comptel, Napa and Nokia as industrial partners. The Tekes-funded project ran from 9/2014 to 9/2016, with a closing seminar in February 2017.
Productivity in software-intensive product and service development has been a persistent research challenge for decades. Considering total productivity, it is essential to understand holistically the role of software and its development in organizations. In practice, it is not possible to conclusively explain all the business impacts of software development decisions. Furthermore, the net customer value provided by software is influenced by many factors that are external to the company and beyond its control.
However, the key factors affecting total productivity are knowledge and competencies, coupled with the company's ability to leverage them. The company can influence these with various decisions and activities, both positively and – possibly unintentionally – negatively. In terms of total productivity, it is imperative to understand that even single, deliberate decisions and the roles of certain individuals may have major impacts on the performance of the entire organization (e.g., software architectural solutions). In our DD-SCALE research, we have shed light on such factors and events in our industry partner cases. The resulting research publications are currently in preparation.
Further reading (in Finnish): DD-SCALE -tutkimusprojektin päätösseminaari (closing seminar of the DD-SCALE research project)
The emergence of millions of remotely programmable devices in our surroundings will pose significant challenges for software developers. A roadmap from today's cloud-centric, data-centric Internet of Things systems to the Programmable World highlights the challenges that have not yet received enough attention.
Author's post-print version (pdf), which is equal in content to the fully formatted, published version available from IEEE:
Taivalsaari, Antero and Mikkonen, Tommi, 2017. A Roadmap to the Programmable World: Software Challenges in the IoT Era. Software, IEEE, 34(1), pp.72–80. doi:10.1109/MS.2017.26
We have written some instructions to help students with scientific writing. Learning scientific writing develops the ability to express one's thoughts with particular clarity and to communicate them in a manner that seasoned scientists find easy to follow.
The guidelines are applicable to seminar reports, B.Sc. and M.Sc. theses, and also when aiming to write your first scientific publication. The guide is intended for software engineering and related areas of research; it is not necessarily directly applicable to other fields, e.g., theoretical computer science.
Please read the guide before starting your thesis work: Scientific Writing – Guide of the Empirical Software Engineering Research Group
Together with researchers from the Tampere University of Technology and Aalto University, we have recently been studying the continuous deployment phenomenon from different angles. In our latest installment, we studied the toolchains and development processes of software-intensive companies based in Finland. We wanted to find out what kinds of development and deployment pipelines companies have, how well the development stages are automated with tools, and what the relationship between tool usage and release frequency is in practice. The article Improving the delivery cycle: A multiple-case study of the toolchains in Finnish software intensive enterprises was just published in the December 2016 issue of Information and Software Technology.
Our case study data consists of information collected from 18 cases (17 companies) using semi-structured interviews as the data collection method. Many of the companies are from the Need for Speed research program, which we have been part of since 2014. In the interviews, we asked the companies to draw (yes, with an actual marker or a pen) their development processes and to name the concrete tools they use at each development stage. We also inquired about their release and deployment protocols. Using thematic analysis, we built a toolchain for each case from this data.
Looking at the results, tools from several categories could be considered the de facto standard for software development; version control and build tools fit in this category. Then again, many development stages were carried out without any automated tools at all, requiring manual intervention, or were omitted altogether as development activities. Fully automated toolchains were rare, and deployments were often done manually. Acceptance testing, for example, was missing from the companies' toolchains as an automated activity.
The fastest companies were able to release software pretty much daily, while some of the slower ones had the ability to release once a month or so. The actual production release frequency, however, differed from the ability to do a release: depending on the domain, the actual release cycles were sometimes longer than a year. The domain makes quite a difference, and the gaming companies stand out in their choice of forgoing many automated tools, such as continuous integration. Companies with more complete toolchains tended to be on the faster end when comparing release frequencies, so automated toolchains may well help in increasing the release frequency, but the relationship is not a simple one. An internal capability to release in the range of two weeks can be achieved with relatively few tools, and the release frequency can also be low even when a company has a solid toolchain in use.
We can conclude that a good, automated toolchain can be an asset if a company wishes to strive for continuous deployment or generally improve its release capability. Cultural factors should not be overlooked, but tools might just help in the process.
If you want the full story behind the study, read the article from Elsevier. Elsevier has provided us with a link that gives free access to the article until November 25th, 2016, so you have a month to check it out, after which normal Elsevier subscription rules and fees apply. I hope you enjoy reading the article, and I hope to hear any comments you might have on the subject.
Simo Mäkinen, Marko Leppänen, Terhi Kilamo, Anna-Liisa Mattila, Eero Laukkanen, Max Pagels, Tomi Männistö, Improving the delivery cycle: A multiple-case study of the toolchains in Finnish software intensive enterprises, Information and Software Technology, Volume 80, December 2016, Pages 175-194, ISSN 0950-5849, http://dx.doi.org/10.1016/j.infsof.2016.09.001. Open access link http://authors.elsevier.com/a/1Tqqq3O8rCGzPw (expires 25th November 2016).
Wishing all the readers a colorful autumn,
University of Helsinki
Department of Computer Science
PROFES, the International Conference on Product-Focused Software Process Improvement, is among the top recognized software development and process improvement conferences. PROFES 2016, to be held in Trondheim, Norway on November 22–24, 2016, saw tough competition, receiving close to 80 submissions. We at ESE are therefore glad to announce the acceptance of four papers. The accepted papers, with the following titles and author lists, will be made available here once they have been published:
Congratulations to all the authors!
One of ESE's research foci and core competencies is introducing and conducting continuous experimentation with software-intensive companies. Based on this competency, we are working on releasing a continuous experimentation handbook to guide companies in carrying out continuous experimentation. To gather evidence of what people would like to see in the handbook, we conducted a small survey with fellow researchers and company representatives in December 2015. Respondents were asked to rank the importance of five common challenges of continuous experimentation, from 1 to 5, with 1 being the biggest challenge and 5 the least. The challenges were: (A) Finding the right hypothesis, (B) Designing the right experiment, (C) Getting the right usage data, (D) Integrating experimentation and delivery, and (E) Changing the organizational culture.
In total, we received 28 responses to the survey. Based on the responses, the majority of respondents found topic E, Changing the organizational culture, to be the biggest challenge.
In addition, we found that the most frequent ranking, from the biggest to the least, was as follows:
E: Changing the organizational culture
A: Finding the right hypothesis
B: Designing the right experiment
C: Getting the right usage data
D: Integrating experimentation and delivery
We also computed pairwise correlations between the challenge rankings in the collected responses and identified the following:
If we set the cutoff at .6, we can see that topics C and E are negatively correlated: those who ranked “Getting the right usage data” high tended to rank “Changing the organizational culture” low, and vice versa. This could be interpreted as follows: there are two groups of respondents, those who tend to focus on “organizational” concerns (for instance, managers) and those who tend to focus on “technical” concerns (for instance, developers).
On the other hand, if we set the cutoff at .5, we can see that C and D are also negatively correlated: those who ranked “Getting the right usage data” high tended to rank “Integrating experimentation and delivery” low, and vice versa. This could be interpreted as follows: in addition to those who tend to focus on “technical” concerns, there might be another group, namely those who tend to focus on process (for instance, DevOps).
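For readers curious about the mechanics, such a correlation between two challenge rankings can be computed directly from the per-respondent ranks. The sketch below uses a plain Pearson correlation over hypothetical rankings (invented for illustration, not our actual survey responses):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equally long lists of ranks."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rankings (1 = biggest challenge), one dict per respondent,
# keyed by challenge A-E.
responses = [
    {"A": 2, "B": 3, "C": 5, "D": 4, "E": 1},
    {"A": 3, "B": 4, "C": 1, "D": 2, "E": 5},
    {"A": 2, "B": 4, "C": 5, "D": 3, "E": 1},
    {"A": 4, "B": 3, "C": 2, "D": 1, "E": 5},
]

topics = "ABCDE"
cutoff = 0.6
for i, t1 in enumerate(topics):
    for t2 in topics[i + 1:]:
        r = pearson([resp[t1] for resp in responses],
                    [resp[t2] for resp in responses])
        if abs(r) >= cutoff:
            print(f"{t1}-{t2}: r = {r:+.2f}")
```

With ranks as the input, this is essentially a Spearman rank correlation; in these invented responses C and E come out strongly negatively correlated, mirroring the pattern described above.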
We would love to hear your thoughts! Let us know how you would rate the five challenges and/or if there are other challenges that you have faced.
We published a paper in IEEE Software on how practitioners view the role and importance of refactoring, and on how and when they refactor. The study was based on interviews with 12 seasoned software architects and developers at nine Finnish companies.
The respondents considered refactoring to be valuable but had difficulty explaining and justifying it to management and customers and did not use measurements to quantify the need for or impact of refactoring. Refactoring often occurred in conjunction with the development of new features because it seemed to require a clear business need.
Leppänen, Marko; Mäkinen, Simo; Lahtinen, Samuel; Sievi-Korte, Outi; Tuovinen, Antti-Pekka; Männistö, Tomi, “Refactoring – A Shot in the Dark?,” Software, IEEE, vol. 32, no. 6, pp. 62–70, Nov.–Dec. 2015.