This paper addresses education and communication in hydrology and the
geosciences. Many approaches can be used, such as the well-known
seminars, modelling exercises and practical field work, but outdoor
learning is a must in our discipline, and this paper focuses on the
recent development of a new outdoor learning tool at the landscape
scale. To facilitate teaching and hands-on experience, we designed
the Studienlandschaft Schwingbachtal. Already equipped with field
instrumentation, education trails, and a geocache, it has now been
complemented with an augmented reality App, adding virtual teaching
objects to the real landscape. The App development is detailed, to
serve as a methodology for those wishing to implement such a tool.
The resulting application, namely the Schwingbachtal App, is then
presented, and its use for teaching hydrology is discussed.
Learning by doing is an old concept. At university, education includes practical work, commonly when the topic allows it, and field excursions, more seldom owing to the financial and organisational constraints of travel. Water sciences, and environmental sciences in general, are topics for which learning in the field is necessary (Nash et al., 1990): observing landscapes already tells a lot about the water cycle. Field learning also allows a better understanding of measurement devices. Keeping this in mind, we decided to create an interdisciplinary landscape-based teaching facility that includes many different learning methods (Wagener et al., 2007). A further aim was to involve science-based projects in the landscape, to facilitate a discussion between undergraduate and graduate students on the one hand, but also to allow science to go public (Nash et al., 1990; Seibert et al., 2013).
In 2008, we established the study site Studienlandschaft Schwingbachtal. The goal was to create an education tool for the students of environmental sciences of the Justus Liebig University of Giessen, Germany. The chosen site needed to be within reasonable distance of the University and easily accessible with large groups of students. It should be a headwater catchment whose size allows touring in a limited time (half a day). Human impacts such as settlement and agriculture should be present, to raise awareness of their effect on the environment, as partly targeted by the majors in environmental sciences. The Schwingbach catchment, located a 20 min drive south-west of Giessen, met the above-mentioned criteria. An existing retention basin stood out as a potential outlet, offering facilities for further instrumentation (a secured bridge above a regular, straight stream section).
The Schwingbach catchment encompasses the Schwingbach and several small
tributaries, of which the Vollnkirchener Bach is the largest. A retention
basin defines the outlet of the overall study catchment (Fig. 1). The
catchment covers a total area of about 23.5
The Studienlandschaft Schwingbachtal project has included various educational
approaches from the start, all based on the principle of learning by
doing.
Excursions with large groups of students, subdivided into smaller
groups, are organised as part of lectures: many monitoring techniques,
described orally with supporting pictures during the seminars, are turned into
gestures. For instance, students measure discharge using the salt dilution
method, the speed of the water, or flow through flumes (RBC, Eijkelkamp, NL). Three
educational trails (Fig. 1), consisting of informative boards on hydrology,
ecology and soils, are installed in the catchment. Both students and locals
can read them autonomously (though some locals commented that some boards
were too technical). Thus, the trails are also a communication tool, informing
locals of the University's involvement in the catchment. Since summer 2014, the trail hosts a geocache (

To deal with the collected data, ensure coherence, and avoid any loss of
data, a database with an online interface was created (Institute for Landscape
Ecology and Resource Management, 2012). The web frontend of the database is
designed to assist students, researchers and technical assistants, from
planning maintenance tasks, through data collection in the field, to quality
assurance and data analysis. Users can upload manually collected data,
such as groundwater levels from electric contact gauges and discharge from
salt dilution gaugings, as well as lab results, using spreadsheet templates.
Data collected by automatic loggers can be imported directly into the
database and calibrated against the manual measurements from the same site;
invalid values due to instrument failures are masked by the database manager.
Users can export the time series, stored in a structured database, using
filters for sites, instruments and dates. Time series can also be
plotted and transformed within the web application. Specific data sets, such
as the online available meteorological data, can be viewed by the public.

The Studienlandschaft Schwingbachtal also hosts many thesis projects,
from Bachelor to Doctorate. Since 2008, over 30 theses have been conducted in the
study area, requiring students and PhD candidates to install measuring
stations according to their research questions. As a result, the study site
database counts over a hundred sampling points, e.g. for discharge,
groundwater level, surface- and groundwater chemistry, trace gas emissions,
meteorology, etc. While several dynamic parameters are measured regularly
(e.g. discharge, groundwater table depth, in-stream nitrate concentrations),
other parameters are measured only once, to improve the spatial coverage of
information (e.g. infiltration, saturated conductivity). Student jobs and
internships are the backbone of the routine measurements. They are also a
means for students to explore their interest in academic work (helping them
orientate towards their best interests).
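The salt dilution gaugings mentioned above can be illustrated with a short calculation: for a gulp injection, discharge equals the injected salt mass divided by the time integral of the excess concentration measured downstream. The sketch below is illustrative only (made-up numbers, trapezoidal integration, conductivity already converted to concentration); it is not the routine used in the Schwingbachtal database.

```python
# Sketch of a salt dilution (gulp injection) discharge calculation.
# Assumes the conductivity probe readings have already been converted
# to salt concentrations (kg/m^3); all values below are made up.

def salt_dilution_discharge(mass_kg, times_s, conc_kg_m3, background_kg_m3=0.0):
    """Discharge Q = M / integral of (c(t) - c0) dt, trapezoidal rule."""
    integral = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        excess_prev = conc_kg_m3[i - 1] - background_kg_m3
        excess_curr = conc_kg_m3[i] - background_kg_m3
        integral += 0.5 * (excess_prev + excess_curr) * dt
    return mass_kg / integral  # m^3/s

# Example: 0.5 kg of salt and a simple triangular breakthrough curve
times = [0, 30, 60, 90, 120]          # s after injection
conc = [0.0, 0.02, 0.05, 0.02, 0.0]   # kg/m^3 above background
q = salt_dilution_discharge(0.5, times, conc)
print(f"Q = {q:.3f} m^3/s")
```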
All these ways of learning are well known and nowadays commonly used, and student feedback on such approaches to hydrology is positive. However, technology is continuously developing and is part of our daily life (e.g. smartphones and the set of sensors they contain), and students presently entering universities belong to the so-called Generation Y (Macky et al., 2008), defined as “technologically sophisticated”; new approaches therefore need to be implemented. One of the most promising is augmented reality.
Augmented reality (AR) can be defined as any system that fulfils the
three following requirements: it “combines real and virtual [world],
is interactive in real time and is registered in three dimensions”, as
stated by Azuma (1997). Milgram and Colquhoun (1999) refine this
definition, placing AR within a reality–virtuality continuum spanning
from the fully real to the fully virtual environment.
Since Azuma's review (1999), outdoor AR has improved (for instance, Langlotz et al., 2011; Veas et al., 2012) and some native AR Apps are nowadays available on the market, such as enhanced trails in natural parks or guided touristic city tours (Guttentag, 2010; Jung et al., 2015). Native Apps are those developed for use on a particular platform or device, using the smartphone's specific hardware and software. They often combine multiple tracking techniques to improve robustness, i.e. Global Positioning System receivers, dead-reckoning trackers (rate gyroscopes), passive optical trackers (video sensors, line of sight), electronic compasses, and tilt sensors (inclinometers).
The present paper focuses on the development of an AR App for the Studienlandschaft Schwingbachtal. First, the concept and technical aspects of the development are described, as a methodology for developing such a tool; then the resulting application is presented. Finally, the use of such an AR tool for teaching hydrology is discussed.
“Mobile learning” means learning with mobile media (e.g. smartphone, phablet, tablet) (Kingston et al., 2012), i.e. learning in any situation and at any time. Thus, mobile learning is not a substitute for more classic forms of learning, but an extension. The benefit is to allow spontaneous learning, when a stimulus provokes an urge to learn, in particular to learn something small in a short time. The Schwingbachtal application is a restricted form of mobile learning, as it is meant to be site-dependent. However, the content should be accessible without a predefined order, granting as much flexibility and spontaneity as the user wishes. The technical aspects of using AR should not be a barrier to learners and should thus be intuitive.
The target audience needs to be defined, as it conditions the level of prior knowledge and the scientific concepts to explain. In the case of the Schwingbachtal App, the audience is meant to be the same as for the educational trail and the outdoor lectures, i.e. undergraduate students. However, there might be other interested visitors (general public, local people, school classes) who wander along the trail and wish to learn more. Thus, the new App should take into account that the educational trail was reported to be not very accessible for locals, i.e. its content should be made more accessible.
The points of interest (POIs), where knowledge is provided, also need to be defined. We decided that the App would mostly be a technological update of the existing static written information of the educational trail. It should keep the interdisciplinary theme “Soil, Ecology and Water”, adding video and audio content, as well as interactive content to enter the domain of active learning: (i) real-time picture recognition (AR component) to guide visitors along the trail (visual tracking), (ii) seeing “inside” devices (inside a tipping bucket, for instance), (iii) games such as memory cards for plant recognition, complementing the ecology panels, and (iv) flow calculation after reading the water level on the scale of the RBC flume, complementing the discharge measurement panel (see the results for a complete description of the App content).
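The flow calculation of point (iv) can be sketched as a short conversion from the level read on the flume scale to discharge. A generic power-law rating Q = a·h^b is assumed here; the coefficients are illustrative placeholders, not the calibrated rating of the RBC flume installed in the Schwingbachtal.

```python
# Sketch of the in-App discharge calculator for the flume panel.
# A generic power-law rating Q = a * h**b is assumed; the coefficient
# values below are illustrative placeholders, NOT the calibrated
# rating of the installed RBC flume.

def flume_discharge_l_s(head_cm, a=0.054, b=1.84):
    """Convert a water level read on the flume scale (cm) to discharge (L/s)."""
    if head_cm < 0:
        raise ValueError("water level must be non-negative")
    return a * head_cm ** b

# A user reads 12 cm on the scale:
print(f"{flume_discharge_l_s(12.0):.1f} L/s")
```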
Every AR App needs to provide a common set of features: (i) detecting the reality to be augmented, e.g. reading the absolute position in space using the GPS and gyro sensors or interpreting the current camera picture, and (ii) displaying content that augments the reality. While specialized technical knowledge is needed to provide the infrastructure, domain knowledge is needed to provide the content. So-called AR-browsers are native Apps which provide the location and recognition techniques needed to show content, just as web browsers provide the ability to open and display web sites. Publishing content for an AR-browser is thus very similar to creating web sites. The user only needs to download the AR-browser from a store (e.g. Play Store, App Store) and install it on the smartphone. At the time of development of the Schwingbachtal App, three different AR-browsers were available on the market: Layar, Junaio and Wikitude. To our understanding, the Wikitude App, the most used worldwide, offers advanced image and location recognition, and full assistance for site-specific services and image recognition. In addition, a wide range of content types, including super-imposed videos, is available. Also, we decided to build an App whose code is accessible to the community (see Data availability) and which is free to download. Thus, we chose the Wikitude App (Wikitude GmbH, Salzburg, Austria) as AR-browser. Within this App, sites – so-called “worlds” – are stored, including our Schwingbachtal App. When the world is loaded in the App, the user browses the augmentation of reality not by entering web site names and using search engines, but by moving through real space and past real objects. Since the content is loaded live from a web server, a constant mobile internet connection needs to be available in the area of interest. For rich content, like video and audio services, a 3G connection or better is needed. We therefore checked the 3G and 4G reception in the Schwingbachtal.
Considering the four mobile network providers in Germany, the study site is in general well covered with 3G and 4G reception, apart from local exceptions.
The augmented reality App offers location-based services. Depending on the current position of the user, points of interest (POIs) are displayed on the basis of predefined geo-coordinates. All information of a POI (coordinates, descriptive text, title, images, links) is stored on a server. POI locations are marked at their geographical coordinates (latitude and longitude). Clicking on a POI makes its information appear on the right-hand side of the image, together with a link. This way, before getting to the scientific content, the user can decide to access this information or walk to the next POI. The implementation used in this work originates from a template provided by Wikitude (2015a).
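The location-based service can be sketched as a POI feed served as JSON plus a client-side selection of the POIs to display, based on the distance from the user. Field names, coordinates and URLs below are illustrative, not the actual schema of the Schwingbachtal App.

```python
import json
from math import radians, sin, cos, asin, sqrt

# Sketch of a server-side POI feed and the client-side selection of POIs
# to display. The schema and values are illustrative placeholders.
poi_feed = json.loads("""[
  {"id": 1, "title": "RBC flume", "lat": 50.500, "lon": 8.550,
   "text": "Discharge measurement", "link": "https://example.org/poi/1"},
  {"id": 2, "title": "Climate station", "lat": 50.510, "lon": 8.560,
   "text": "Live weather data", "link": "https://example.org/poi/2"}
]""")

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points in metres."""
    r = 6371000.0  # mean Earth radius
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def visible_pois(user_lat, user_lon, pois, range_m):
    """Return the POIs within the user-selected range, nearest first."""
    tagged = [(haversine_m(user_lat, user_lon, p["lat"], p["lon"]), p) for p in pois]
    tagged.sort(key=lambda t: t[0])
    return [p for d, p in tagged if d <= range_m]

for poi in visible_pois(50.501, 8.551, poi_feed, range_m=500):
    print(poi["title"])
```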
To help the user orientate, a radar – a small compass showing the locations of the POIs –, a map view, and a help button were added. The radar adjusts the visible POIs according to the user's location and distance to them. The radar and range restriction are implemented using a Wikitude template (2015a). The map view, based on Google Maps, shows the POIs (defined in JavaScript and stored in an HTML page). It also marks the current user location (using an HTML script). The help button is integrated as a continuously displayed i-button at the bottom of the App; linking it to the help page is done via integration in the index HTML.
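The radar behaviour can be illustrated by its underlying geometry: each POI dot is placed at the bearing to the POI relative to the device's compass heading. This is a sketch of the principle only, not the actual Wikitude radar implementation.

```python
from math import radians, degrees, sin, cos, atan2

# Sketch of how a radar widget can place POI dots: each dot's angle is
# the bearing to the POI relative to the device's compass heading.
# (Illustrative geometry; not the Wikitude template code.)

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in 0-360 deg."""
    la1, la2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    y = sin(dlon) * cos(la2)
    x = cos(la1) * sin(la2) - sin(la1) * cos(la2) * cos(dlon)
    return (degrees(atan2(y, x)) + 360.0) % 360.0

def radar_angle(user_lat, user_lon, heading_deg, poi_lat, poi_lon):
    """Angle of the POI dot on the radar, clockwise from 'up' (= heading)."""
    return (bearing_deg(user_lat, user_lon, poi_lat, poi_lon) - heading_deg) % 360.0

# A POI due east of the user, device pointing north -> dot at ~90 deg
print(radar_angle(50.5, 8.55, 0.0, 50.5, 8.56))
```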
In addition to location-based tracking, the AR App offers image recognition. This requires tracking visual objects, i.e. recognizing images, which can include, for instance, images displayed on the boards or QR codes (Denso Wave Incorporated, Kariya, Japan). For image recognition in Wikitude, a tracking file (.wtc) is needed. The images to be recognized are uploaded to the Wikitude server as .wtc files in a “target collection”. The tracker code continuously examines the camera image from the smartphone and compares it with the images stored in the target collection specified in the program code (in our case the Studienlandschaft Schwingbachtal target collection, JavaScript code).
Once the image is recognized, i.e. visual tracking is performed, a linked virtual object can be displayed on the screen, augmenting reality. This act of adding an image to the real world is called image overlay. In Wikitude (2015b), this is done with the target manager tool. Each image of the “target collection” is linked with the content to overlay, i.e. image, audio or video files, for image, audio and video overlays, respectively. When the image is recognized, the linked content is allocated to a virtual placeholder. Both the camera image and the placeholder are then combined into a Trackable2DObject, which results in what is displayed on the smartphone screen. In the Schwingbachtal App, a quiz on self-propagating plants in agricultural fields, a discharge calculator and the access to subpages are realized using image overlays.
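The linking of recognition targets to overlay content can be sketched as a simple lookup: when the tracker reports a recognized target, the matching overlay is retrieved and rendered. Target and file names below are hypothetical, and the lookup stands in for the Trackable2DObject mechanics described above.

```python
# Sketch of linking recognition targets to overlay content, mirroring
# the "target collection" idea: once the (hypothetical) tracker reports
# a target name, the matching overlay is looked up and rendered.
# All target and file names are made up for illustration.

OVERLAYS = {
    "plant_quiz_board": {"type": "image", "file": "quiz_cards.png"},
    "flume_scale":      {"type": "html",  "file": "discharge_calculator.html"},
    "tipping_bucket":   {"type": "video", "file": "inside_tipping_bucket.mp4"},
}

def on_target_recognized(target_name):
    """Callback fired when a camera frame matches a target in the collection."""
    overlay = OVERLAYS.get(target_name)
    if overlay is None:
        return None  # unknown target: keep showing the plain camera image
    return f"render {overlay['type']} overlay: {overlay['file']}"

print(on_target_recognized("flume_scale"))
```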
Video overlay works similarly to image overlay: it is started when the
triggering visual object is recognized and stops as soon as the object
leaves the camera picture. Videos for the Schwingbachtal App have
Audio-only overlays are not supported by the browser. However, a completely transparent video without any visual content can be stored with a video codec, making this a valid alternative where audible information is needed. In fact, owing to the modern audio compression in current video codecs, the resulting file size is usually smaller than that of an MP3 file of the same audio quality.
HTML overlay works similarly to image overlay. The HTML information is given in JavaScript code, together with previously created images and link buttons. These elements are combined into a Trackable2DObject by the tracker code. In the Schwingbachtal App, the current climate data of the weather station, transferred by the University server as a JSON file, are retrieved by the HTML overlay.
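The HTML overlay's handling of the climate data can be sketched as parsing the JSON payload into the strings shown on screen. The payload layout below is an assumption; the actual JSON served by the University server may differ.

```python
import json

# Sketch of turning the climate station's JSON payload into the display
# strings of the HTML overlay. The payload layout is an assumption, not
# the actual format served by the University server.

payload = json.loads("""{
  "station": "Schwingbach climate station",
  "timestamp": "2016-05-01T12:10:00Z",
  "air_temperature_c": 14.2,
  "precipitation_mm": 0.4,
  "relative_humidity_pct": 71
}""")

def format_overlay(data):
    """Build the lines shown by the overlay from the parsed payload."""
    return [
        f"{data['station']} ({data['timestamp']})",
        f"Air temperature: {data['air_temperature_c']:.1f} \u00b0C",
        f"Precipitation: {data['precipitation_mm']:.1f} mm",
        f"Humidity: {data['relative_humidity_pct']:d} %",
    ]

for line in format_overlay(payload):
    print(line)
```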
The user is now able to connect from his or her smartphone to the AR-browser. The App forwards the user's request and location information to the Wikitude server. The Wikitude server then forwards the request to the university IT infrastructure, where the content of the App is stored. The institute server sends the data back to the Wikitude server, which forwards it to the user: the POI appears on the user's mobile with a proper style sheet.
Twenty-five POIs compose the Schwingbachtal App (Table 1), among which three types can be identified: static information POIs (providing HTML links), live data POIs (continuously providing freshly measured data) and rich information POIs (including video and audio content, as well as games). The trail is now composed of the landscape elements and information boards (reality), complemented with the virtual objects from the AR App. Two sets of videos were shot and cropped for the AR (video comments), and 25 web pages were written (textual comments and links). The POIs are presented in Table 1 and snapshots of the App in Fig. 2. A video is available as an online Supplement. The points can be reached in any order, as they provide independent information.
Different navigation options are available to the App user. First, one can use the map representation. The overview map shows the locations of all 25 POIs. The map can be zoomed in and out, and also used for navigation, as the current position is marked. Second, when using the AR-browser view, the POIs are marked by blue labels on the viewed landscape. These labels are defined by their latitude and longitude and are downloaded once, at the start of the App (via a JSON file). Navigation is supported by a radar view, a small compass in the upper left corner of the screen, which indicates the limit of sight and the POIs as small dots. Via a “range” button, the user can decide how far away the displayed POIs may be, thus controlling the number of POIs in sight and the coverage. The navigation mode can be changed at any time.
We would like to emphasize that explaining complex concepts within the limits of a small smartphone display was challenging. For future similar projects, we recommend focusing on larger screens such as tablets or phablets. However, one needs to be aware of the trade-off between the comfort of the user and the size of the App, i.e. download speed. Offering a fluid experience is a topical concern for AR Apps (Tönnis, 2010): the propagation delays between the tracking system, the conversion of the data, and the rendering explain the lack of smoothness. This could be improved either by reducing the data size of the content (for instance of the videos) or by creating an offline App. Our current solution is optimised for smartphones to facilitate downloads, using a visual quality not suitable for bigger screens. At the moment, a good in situ connection is necessary: it is currently not possible to pre-download the content and recall it in the field. This latter option would also be questionable: as a user is not expected to read all the POIs, pre-downloading everything would just consume memory.
During tests, mistakes in the localisation of POIs occurred. This is most likely due to shadowing of the signal and the rather low-quality sensors built into smartphones (Langlotz et al., 2011); location and visual tracking are still research topics (Veas et al., 2012). In our case, thanks to the preview (map view), the user can recognize a location tracking mistake and then look for the right location by changing the navigation mode. As such location tracking mistakes were expected, we decided to offer many POIs linked to image recognition. However, reliable visual tracking requires images meeting certain prerequisites (i.e. high contrast, large structured areas and angular elements; Wikitude, 2015a), which was not always the case, as the trail pre-existed the App. Light effects and the mirror effect of the Plexiglas are further reasons for incorrect image tracking (Tönnis, 2010).
A last point to be discussed is that, as such, the Schwingbachtal App is energy-consuming: all sensors need to be continuously activated for tracking (location and image) and for data download. Incorporating scripts for the activation and deactivation of the sensors into the App would optimize the energy use and allow longer exploration.
A group of 12 students (B.Sc. and M.Sc. students majoring in
environmental and resource management, biology, nutrition and
medicine, aged 20–30
Other benefits of learning with the Schwingbachtal App can be discussed. First, it is an intermodal tool, using image, audio and textual content. Intermodal learning helps to retain knowledge for a longer time and is more efficient (Barron, 2003; Bitter and Corral, 2014). Second, it is a kind of active learning and thus develops self-training skills, and it was observed that the autonomous search for POIs raises the curiosity of the user. Third, we think that the Schwingbachtal App offers intuitive navigation and object manipulation, thus presenting low learning costs and limiting cognitive load. Cognitive load arises when activities require working memory and thus reduce the learning potential (Radu et al., 2010).
The AR tour is a supplementary tool that complements the existing ones. It was developed following the concept of “micro-teaching” modules (Brall and Hees, 2007), aiming at keeping the users stimulated and actively thinking. The Schwingbachtal App mostly addresses students who have to learn, but is also expected to meet the curiosity of strollers eager to learn. Adapting the content to the target audience significantly reduces the displeasure of using the App (Herber, 2012). To check whether the App is adapted to our scholar target, a thorough evaluation of the App will be carried out in the summer semester of 2016: is the App an effective complementary learning tool? In addition, the App conveys the understanding that processes happen in the field, at any moment the user is standing there, as well as before (for instance by providing instantaneous measurements but also whole time series). It therefore enhances the local and temporal aspects of the processes, which can be hard to explain otherwise (Herber, 2012).
Of course, a limiting point of the use of the Schwingbachtal App is the necessity to own a smartphone – but as it is an optional resource, we do not consider it discriminating. Another limit worth mentioning is the workload of keeping the content up to date. The future use and success of the App rely on good communication and advertising of the tool from the University's side, but also in the local media, a task which remains to be done.
As a whole, we consider the App a clever way to use a smartphone for teaching purposes. As stated by Teacher et al. (2013), “the phone in our pocket deserves better application as a collector and handler of data, while its inbuilt links to the internet reveal the potential for real-time information flow between collectors, databases, interpreters, and users of information. These technologies provide an unusual opportunity to link science with society […]”. We are likewise thinking about how to use smartphones not only as a “top-down” link between science and society, but also as a bottom-up link, where any user could actively participate, e.g. as data collector and provider – in other words, how to embed the use of smartphones in the Schwingbachtal in a citizen science project.
The Studienlandschaft Schwingbachtal has been an out-door, full-scale study site since 2008. It approaches hydrology in an interdisciplinary way and fosters active learning by various means (field monitoring, education trails and a geocache). In order to adapt to changing student habits and to serve better as a communication tool for the locals, it is newly equipped with augmented reality, which adds virtual objects to the real landscape. Such an App is useful for communication and education purposes, making learning pleasant and offering personalized options.
The source code of Schwingbachtal App can be downloaded from the
following server:
A video showing the use of the Schwingbachtal App is available as an online supplement.
Themes, titles and media of the points of interest (POIs) of the AR tour in the Studienlandschaft Schwingbachtal.
Map of Studienlandschaft Schwingbachtal showing land use, digital elevation model, points of interest (POIs) along educational trails.
A few snapshots of the Schwingbachtal App: