Mobile Computing (521147S)


This course focuses on one of the core demands of the industry today: a deep understanding of mobile interaction, mobile computing constraints and mobile development. After this class, students will have the:

– ability to design and prototype a mobile user interface, taking into account the usability aspects of interaction on smaller displays

– ability to explain and leverage the fundamental concepts of context awareness using smartphone hardware, software and human sensors

– ability to understand and implement from scratch a mobile application that leverages both usability and context to create engaging mobile experiences

The course is aimed at master's and doctoral students. Upon completing the course, the student is able to implement mobile user interfaces, interface with online social network applications, explain the fundamental concepts of context awareness, and access information on the go.

This course is taught only in English, and the class material is available in Google Classroom (contact the course staff for the access code, or check Optima).

Topics & Timetable

Period 3

Week 1:
Lecture: Introduction to MobiSocial & Android
Lab: Getting started with Android Studio, emulator and “Hello World”

Week 2:
Lecture: Interacting with the user
Lab: Prototyping your first Android application
LAB 1 (in class)/HW 1 (non-attending): Designing an interface according to specs

Week 3:
Lecture: Sensing the world
Lab: Hardware, Software sensors
LAB 2 (in class)/HW 2 (non-attending): Designing a novel sensor

Week 4:
Lecture: Multitasking on the go
Lab: Data storage: SharedPreferences, ContentProviders
LAB 3 (in class)/HW 3 (non-attending): Working in the background (Weather App)

Week 5:
== Project team members decided ==
Lecture: Context-aware Mobile Services
Lab: Maps, Geocoding
LAB 4 (in class)/HW 4 (non-attending): Location-aware reminder

Week 6:
Lecture: Crowdsourcing and Workers
Lab: Twitter
LAB 5 (in class)/HW 5 (non-attending): Twitter social sensor

Week 7:
Lecture: Multimodal interaction: voice, touch, haptic, vision
Lab: Text-to-Speech, Voice Recognition, Camera, Vision API
LAB 6 (in class)/HW 6 (non-attending): Smiley camera


Period 4: Independent Team Projects (7 weeks)

Week 5: Midway presentations (5 minutes, to teaching staff)
Week 6: Submit 1-minute video promo of project
Week 7: Peer-evaluation of projects
Week 8: Final presentations (8 minutes, public)


There are two options for passing this course.
== For attending students*, there are no exams. The final grade is calculated as:
– 20% lecture attendance. Skipping 1 lecture incurs a 10% penalty, skipping 2 a 20% penalty, if no HW is submitted.
– 20% laboratory attendance. Skipping 1 lab incurs a 10% penalty, skipping 2 a 20% penalty, if no HW is submitted.
– 20% average of the lab exercises (LAB 1-6)**, done and submitted individually in class. A student unable to complete a LAB in class has until the corresponding HW submission deadline to submit it.
– 40% team project **

== For non-attending students, there is 1 exam. The final grade is calculated as:
– 40% individual assessments (HW 1-6)**
– 20% midterm exam (end period 3)(1-5, 0 is fail/no show)
– 40% team project **

The team project is peer-evaluated (peer assessment counts for 30% of the project grade). The project grade is on a scale of 1-5 (0 is fail) and is based on:
– 50% implementation and complexity
– 30% peer-assessment average
– 20% quality of video demo (max. 1-minute long)

 *All LAB/HW, exams and project require a passing grade.
** A student can skip up to 2 classes. Beyond that, the student is considered non-attending and must start submitting HW and take the midterm exam.
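As an illustration, the weightings above can be sketched as simple weighted averages. This is only a sketch: the function names are hypothetical, and it assumes each component has already been graded on the course's 0-5 scale (penalties and pass/fail requirements are not modelled).

```python
# Illustrative sketch of the grade weightings described above.
# Assumes every component is already a grade on the 0-5 scale.

def attending_grade(lectures: float, labs: float, lab_avg: float, project: float) -> float:
    """20% lecture attendance + 20% lab attendance + 20% LAB 1-6 average + 40% project."""
    return 0.2 * lectures + 0.2 * labs + 0.2 * lab_avg + 0.4 * project

def non_attending_grade(hw_avg: float, midterm: float, project: float) -> float:
    """40% HW 1-6 average + 20% midterm exam + 40% project."""
    return 0.4 * hw_avg + 0.2 * midterm + 0.4 * project

def project_grade(implementation: float, peer_avg: float, video: float) -> float:
    """50% implementation and complexity + 30% peer-assessment average + 20% video demo."""
    return 0.5 * implementation + 0.3 * peer_avg + 0.2 * video
```

For example, a non-attending student with an HW average of 4, a midterm grade of 3 and a project grade of 5 would receive a weighted grade of 4.2 before rounding.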

Showcase from previous years:

2016: Challenge: Family coordination.

2017: Challenge: Open-ended.

2018: Challenge: Quantified-self.
