It was that time of year again: the Oneflare Ignite Hackathon! Everyone was excited about sharing their ideas and recruiting the talent needed to bring those ideas to life. As a mobile developer at Oneflare, I'd always had the idea of adding AR features to our apps. Since a hackathon is the best place for crazy experiments, I submitted the idea, and that's where our story began.
The first big challenge was the idea itself. AR is a concept only recently introduced to the mobile app market, so implementing this new reality in our existing app looked scary at the beginning. On top of that, our team consisted of two front-end developers with no mobile development experience, one full-stack developer with a few experimental mobile projects, and me as the only mobile developer. In other words, almost everyone on the team was new to mobile development and AR concepts.
In addition, there was significant ambiguity around the scope of the project: what needed to be achieved, and the work required to deliver it. It was a two-day hackathon, so the initial planning phase was key to managing our time effectively.
Also, because AR experiences depend heavily on lighting conditions and surface texture, plenty of testing sessions were required to make sure everything would behave as expected during the showcase.
Apart from the challenges mentioned above, there were some time-consuming technical requirements, such as sourcing custom 3D content for the application and choosing the best UI/UX practices for presenting information in the AR world.
We allocated half of our first day to planning and scoping the project. During this phase it was my job, as the leader of the team, to provide as many details as possible to make both the idea and the subject of AR clear to everyone. Then we had a brief introduction to Swift and Xcode to make sure everyone was familiar with the main concepts of mobile application development and had enough basic Swift knowledge to start working on the project. Next, as a team, we listed all possible functionalities of the app, identified the main features, and kept the "nice to haves" for the end. We then came up with a list of tasks required to deliver those features and assigned them to team members, taking into account both their interests and their experience. Since we were building this project for demo purposes, it was essential to decide what was achievable in two days and what could wait. Beyond the initial planning phase, we held a few other planning sessions during the hackathon to rescope our remaining work and avoid running over time.
The rest of our time was spent mainly on design and programming. Since we did not have a designer on our team, we kept the user interface as simple as possible. As for other technical challenges, we approached them through group discussions: suggesting different ideas, evaluating them, and choosing the best solution.
Finally, we created a slideshow for the presentation and tested our final product a few times, under almost the same conditions as the presentation, to make sure nothing would go wrong during the demo.
What we did well
- We won the hackathon!
- We delivered a working example of AR in an existing app, in just two days
- We successfully broke the work into smaller parts with achievable milestones
- Despite coming from different squads and backgrounds, everyone was able to work together harmoniously
- We had so much fun during the hackathon: we stayed back, watched movies together, and wore costumes to the showcase. We simply made coding fun!
As we only had two days to create a working demo, we made a lot of compromises. With more time, there are several parts of our app we would improve.
- Some AR concepts, like saving and restoring the world map with anchors from the AR session, were more complicated than we first expected. Once achieved, a world map can be shared between different users to give them exactly the same AR experience
- Currently our app only works on horizontal surfaces. A next step could be adding support for vertical plane detection and object anchoring
- AR objects remained visible no matter how far away you got from them. It would be strange to still see objects that were placed in another room after leaving it, so deciding how to account for obstacles and walls in the app would be a critical challenge
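To give a sense of how the first two improvements might look, here is a minimal ARKit sketch combining them: enabling vertical plane detection alongside horizontal, and capturing the session's `ARWorldMap` so it could be archived and shared with another user. This is an illustrative sketch, not our hackathon code; the `ARPlacementController` class and its method names are hypothetical, and the code assumes an `ARSCNView`-based screen running on a real device.

```swift
import ARKit

// Hypothetical controller wrapping an AR session; names are illustrative.
final class ARPlacementController {
    let sceneView = ARSCNView()

    // Start (or restart) the session. Passing a previously saved world map
    // restores its anchors, so another user can see the same AR scene.
    func startSession(with initialWorldMap: ARWorldMap? = nil) {
        let configuration = ARWorldTrackingConfiguration()
        // Detect vertical surfaces (walls) as well as horizontal ones (floors, tables).
        configuration.planeDetection = [.horizontal, .vertical]
        configuration.initialWorldMap = initialWorldMap
        sceneView.session.run(configuration,
                              options: [.resetTracking, .removeExistingAnchors])
    }

    // Capture the current world map, including its anchors, and archive it
    // to Data so it can be persisted or sent to another device.
    func saveWorldMap(completion: @escaping (Data?) -> Void) {
        sceneView.session.getCurrentWorldMap { worldMap, _ in
            guard let map = worldMap else {
                completion(nil)  // mapping not yet ready, or unsupported
                return
            }
            let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                         requiringSecureCoding: true)
            completion(data)
        }
    }
}
```

Note that `getCurrentWorldMap` only succeeds once ARKit has mapped enough of the environment, which is part of why this turned out to be more complicated than we expected.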