Existing data collection methods for content analysis of software features in computer science
There is no single established methodology that could be adopted for this study, which is itself indicative of how new this field is. Even within computer science, researchers call for the standardisation of web services, and for techniques and methodologies to analyse variability within a single framework (Sun, Rossing, Sinnema, Bulanov, & Aiello, 2010; Wohlin, Höst, & Henningsson, 2006), and point out the lack of established research methods for specific empirical software engineering problems (Easterbrook, Singer, Storey, & Damian, 2008). Instead, principles and practices adopted in several fields were combined to create a new quantitative approach to auditing the software features of music apps for BYOD music education, one that would generate empirical, unbiased data.
Variability modelling techniques
In computer science, studies that seek to audit software features use variability modelling techniques, but these are principally designed for engineers to trace features in software families (several software packages that share code developed by a single engineer, team, or company) (Deelstra, Sinnema, & Bosch, 2005, 2009). Nonetheless, approaches developed in methodologies such as Feature-Oriented Domain Analysis “establish parameters for measurement and analysis such as a feature model” (Lisboa, Li, Morreale, Heer, & Weiss, 2014, p. 1).
In this study, all software was analysed for feature sets and categories, which were coded as they arose. Core to the analysis was the identification of the platforms and hardware that the software would run on, but extensions around connectable hardware and related features (for example, the ability to play back audio or video, or to record audio or MIDI) were also established, coded, and tabulated under commonality or variability parameters.
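The tabulation step can be sketched as follows. This is a minimal illustration only, using hypothetical titles and feature codes rather than the study's actual dataset: features coded across all titles are classified as commonality parameters (present in every title) or variability parameters (present in only some).

```python
# Illustrative sketch only: hypothetical apps and feature codes,
# not the study's actual data.
features_by_title = {
    "AppA": {"audio_playback", "audio_recording", "midi_input"},
    "AppB": {"audio_playback", "video_playback"},
    "AppC": {"audio_playback", "midi_input"},
}

# Union of every feature coded across all titles.
all_features = set().union(*features_by_title.values())

# Commonality: features shared by every title; variability: the rest.
commonality = {f for f in all_features
               if all(f in fs for fs in features_by_title.values())}
variability = all_features - commonality

print("commonality:", sorted(commonality))
print("variability:", sorted(variability))
```

With the hypothetical data above, only audio playback is common to all titles, so it alone falls under the commonality parameter.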
Cross-Browser Compatibility (CBC) Testing
Aspects of Cross-Browser Compatibility (CBC) testing (Prasad, 2012) were adopted. While a key feature of this approach to testing browser-based software is the automation of the process, this study could not fully automate testing, since the software–hardware combinations required physical configuration. Nonetheless, key CBC approaches were adopted within the tabulated variability modelling outlined above: identifying functional consistency by analysing applications, in this case not under different browsers but under different OSes and device hardware, and formally comparing generated models for equivalence on a “pairwise-basis” to expose observed discrepancies (Prasad & Mesbah, 2012, p. 561).
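The pairwise comparison step can be sketched as below. This is an illustration under assumed data, not the study's instrument: each platform's observed feature model is compared with every other platform's, and the symmetric difference of each pair exposes the discrepancies (here applied to OSes rather than browsers, following the CBC idea). Platform names and feature codes are hypothetical.

```python
from itertools import combinations

# Hypothetical feature models generated for one application
# on three different OS/hardware combinations.
models = {
    "iOS":      {"audio_playback", "audio_recording", "midi_input"},
    "Android":  {"audio_playback", "audio_recording"},
    "ChromeOS": {"audio_playback", "midi_input"},
}

def compare_pairwise(models):
    """Return observed discrepancies for every pair of platform models."""
    discrepancies = {}
    for a, b in combinations(models, 2):
        diff = models[a] ^ models[b]  # symmetric difference of feature sets
        if diff:
            discrepancies[(a, b)] = diff
    return discrepancies

for pair, diff in compare_pairwise(models).items():
    print(pair, "differ on:", sorted(diff))
```

Each pair of platforms is checked exactly once, and only pairs whose models are not equivalent are reported, mirroring the equivalence-on-a-pairwise-basis comparison described above.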
Browser-based applications and native applications were tested on devices that would cover all likely combinations of technology in a BYOD music classroom: iPad, iPhone, Android phone, Android tablet, Windows laptop, Windows tablet, Windows phone, Chromebook, and Apple laptop, with at least two devices for each OS used to confirm the results of each test. Where software supported audio, video, or MIDI input and/or output, a range of hardware devices (built-in audio and video capabilities, as well as an external class-compliant USB microphone and a class-compliant USB MIDI keyboard) was tested for compatibility on each device.
Sampling
Identifying software to analyse was central to the research design. While the term BYOD may have become mainstream, music software manufacturers do not necessarily advertise their products under it. In this study we wished to audit the entire population rather than sample a range of software, so comprehensive identification was essential.
Approaches to identifying software included internet searches using a variety of related terms across a number of search engines; browsing the Chrome Web Store (because Chrome is a browser that runs on all of the above devices, and software developers can self-categorise their applications under “music”); and word of mouth reaching both researchers through their own teaching practice. Non-probability sampling of this kind cannot be guaranteed to be free of unintended bias, nor to have identified all relevant software. Nonetheless, 38 software titles were identified, and as this is the first study in this particular field, the population is significant enough at the very least to offer a useful starting point for further studies.