
How I developed an app with SwiftUI in 4 days! *Bro had never developed an app with SwiftUI before :)


My background is mostly Android development, and I had played around with Flutter before. Diving into SwiftUI was like stepping into a whole new world. The reason I had to learn SwiftUI quickly was that I was faced with a pretty interesting project: I had to create an iOS app with an image recognition feature. Since I'd never done that before, I had to dive in and figure it out on the go. Here I'll share my journey. I won't write down every single thing I did, but I'll give you the highlights.

My Knowledge Base at the Start of This Journey

How Did I Learn?

Out of all the things I needed to learn, which included CoreML, AVFoundation, SwiftUI, and SwiftData, what did I learn first? That's right, Swift! Because without Swift, I couldn't do anything.

Since I was too lazy to read all the documentation on the web, I just googled “Swift for Dart Developer” or “Swift for Java Developer”, because I was already familiar with Dart and Java, which made it easier to understand Swift. Here's an example article. From there, I began to understand Swift: how to create variables, functions, and so on. That's when I started feeling confident coding in Swift (turns out it's easy bruhhh!!). Just read The Swift Programming Language (5.10); it's pretty straightforward.
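To give you an idea, here's roughly the kind of basic Swift that clicked for me coming from Dart and Java (the names here are made up for illustration, not from my actual project):

// Roughly the Swift basics that felt familiar coming from Dart/Java.
let appName = "MyScannerApp"   // constant, like `final` in Dart or Java
var scanCount = 0              // mutable variable, type inferred as Int

// Functions use labeled parameters by default.
func greet(name: String) -> String {
  return "Hello, \(name)!"
}

// Optionals replace manual null checks: a value is either present or nil.
var nickname: String? = nil
print(greet(name: nickname ?? "stranger"))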

Once I got comfortable coding in Swift, the next thing I did was, you guessed it, learn CoreML. Why CoreML first and not SwiftUI? Good question. I learned CoreML first because, well, the app design wasn't ready yet 👀 So I worked on what I could, which was CoreML. Initially I was confused about CoreML, but after studying it I understood: basically, it's used to run ML models on Apple platforms. Let me explain how I learned CoreML.

Learning CoreML and CreateML

The app I developed needed to identify household items like rice cookers, pans, etc. I searched for existing models like YOLO, ResNet, and MobileNet, but none met my requirements. So I decided to create my own model. Crazy, right? Well, not really. I used a tool in Xcode called Create ML, which lets you create custom models using your own data. In my case, I needed object detection. Initially I scraped images from Bing using Python, but then I found a website called Roboflow with ready-to-use datasets, so I used that. Long story short, I created the model and integrated it into the app via CoreML. Now, how do I capture images and feed them into the ML model I made? That's where AVFoundation comes in, which I'll explain next.
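To give a rough idea of the CoreML side (this is a sketch, not my exact code, and HouseholdItemDetector is just a placeholder for whatever name Xcode generates for your .mlmodel), running a Create ML object detection model through Vision looks something like this:

import CoreML
import Vision

// Sketch: run a Create ML object detection model with Vision.
func detectItems(in image: CGImage) throws {
  let mlModel = try HouseholdItemDetector(configuration: MLModelConfiguration()).model
  let visionModel = try VNCoreMLModel(for: mlModel)

  let request = VNCoreMLRequest(model: visionModel) { request, _ in
    let observations = request.results as? [VNRecognizedObjectObservation] ?? []
    for observation in observations {
      // Each observation carries candidate labels plus a bounding box.
      let label = observation.labels.first?.identifier ?? "unknown"
      print("\(label): \(observation.boundingBox)")
    }
  }

  try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}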

Learning AVFoundation

AVFoundation is an Apple framework for handling audio and video tasks, including access to the iPhone camera. Since I needed photos of household items for detection, I needed this framework. Here are the problems I encountered:

I started with the AVFoundation documentation, but it didn't have clear examples. So I turned to Medium tutorials, but most articles were behind a paywall. Finally, I found a decent YouTube tutorial by a German creator. I followed along and tried to understand each function he wrote. Surprisingly, it didn't work in my project at first. After debugging, I realized the problem was that the capture session needed to be set up and started on a background queue:

// Set up and start the capture session off the main thread so the UI doesn't freeze.
DispatchQueue.global(qos: .background).async {
  self?.setupCamera(completion: completion)
}

This tiny problem took me hours to figure out; well, I'm a beginner after all. After fixing that, I was able to take pictures, and okay, I was a little bit happy at this point. Because the camera preview still uses UIKit, you are required to create a UIViewRepresentable to wrap the UIKit view for use in SwiftUI. Initially this was confusing, as I thought I could just use it directly. The last issue was the unclear error messages, which made it hard to diagnose problems since searching for the error codes often yielded no results. But through trial and error, I managed to get AVFoundation working smoothly, though I still struggle sometimes.
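As a rough sketch (the type names here are mine, not from the tutorial or my project), wrapping the UIKit camera preview for SwiftUI looks something like this:

import SwiftUI
import UIKit
import AVFoundation

// Sketch: a UIViewRepresentable that hosts an AVCaptureVideoPreviewLayer
// so the UIKit camera preview can live inside a SwiftUI view tree.
struct CameraPreview: UIViewRepresentable {
  let session: AVCaptureSession

  func makeUIView(context: Context) -> UIView {
    let view = UIView()
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = .resizeAspectFill
    previewLayer.frame = view.bounds
    view.layer.addSublayer(previewLayer)
    return view
  }

  func updateUIView(_ uiView: UIView, context: Context) {
    // Keep the preview layer matched to whatever size SwiftUI gives the view.
    uiView.layer.sublayers?.first?.frame = uiView.bounds
  }
}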

Learning Layout in SwiftUI

Layout in SwiftUI was straightforward because I had developed with Flutter before. Concepts like HStack, VStack, and ZStack were quick to grasp since they are similar. GeometryReader and other components were easy to understand for the same reason. However, the modifiers in SwiftUI confused me at first because, unlike Flutter, you don't really specify them as initializer parameters. But once I understood that everything in SwiftUI is basically a struct, and that a view is just a struct that takes a closure as a parameter, I quickly caught on. I started creating layouts, slicing UIs, integrating them, and more. Everything was coming together until I faced an issue with navigation in SwiftUI.
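Here's a tiny sketch of what I mean about modifiers (the view itself is just an example, not from the app): instead of passing styling through the constructor like in Flutter, you chain modifiers, and each one returns a new view struct.

import SwiftUI

struct ItemCard: View {
  let title: String

  var body: some View {
    VStack(alignment: .leading, spacing: 8) {
      Text(title)
        .font(.headline)               // a modifier, not an init parameter
      HStack {
        Image(systemName: "camera")
        Text("Tap to scan")
      }
    }
    .padding()
    .background(Color.gray.opacity(0.15))   // each modifier wraps the view in a new struct
    .clipShape(RoundedRectangle(cornerRadius: 12))
  }
}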

In SwiftUI, navigation is handled using NavigationStack and NavigationLink. The problem is that NavigationLink is essentially a ready-to-use button, which makes custom programmatic navigation complicated. You need a shared state object to manage the navigation stack path manually, which is quite tedious. I needed this because there were flows in the app requiring navigation to specific pages based on certain logic, like showing a success page after a successful ML scan or displaying a different page otherwise. This couldn't be achieved easily with just NavigationLink. I eventually found a routing package on GitHub that wraps NavigationStack to handle navigation more easily, and I used that to solve my problem. Here's the logic I mentioned:

CameraView(cameraService: cameraService) { result in
  viewModel.handleCaptureResult(result: result) { status in
    switch status {
    case .congratulation:
      router.navigate(
        to: .successScanScreen(storyId: storyId, questId: questId, mission: viewModel.mission))
    case .moveCloser:
      router.navigate(
        to: .successScanScreen(storyId: storyId, questId: questId, mission: viewModel.mission))
      showingMoveCloserAlert = true
    case .failed:
      showingRetryAlert = true
    }
  }
}
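The package handles the plumbing, but under the hood the idea is roughly this (a simplified sketch of a path-based router, not the actual package I used):

import SwiftUI

// Sketch: keep the NavigationStack path in an observable router so any view
// can push a screen programmatically based on logic.
enum Route: Hashable {
  case successScanScreen(storyId: String, questId: String)
  case retryScreen
}

final class Router: ObservableObject {
  @Published var path = NavigationPath()

  func navigate(to route: Route) {
    path.append(route)
  }
}

struct RootView: View {
  @StateObject private var router = Router()

  var body: some View {
    NavigationStack(path: $router.path) {
      Button("Simulate a successful scan") {
        router.navigate(to: .successScanScreen(storyId: "1", questId: "7"))
      }
      .navigationDestination(for: Route.self) { route in
        switch route {
        case .successScanScreen(let storyId, let questId):
          Text("Success! story \(storyId), quest \(questId)")
        case .retryScreen:
          Text("Please try again")
        }
      }
    }
    .environmentObject(router) // child views can grab the router and navigate
  }
}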

Learning SwiftData

SwiftData is used for local storage, and surprisingly I had no major issues with it because there are plenty of tutorials available. However, I found some unique aspects of SwiftData. For instance, updating data in SwiftData is as simple as altering the model's properties directly, without calling any specific function, like whatt??? Initially this seemed very straightforward, but I wondered about debugging and error catching. For example, if you have a Person model with height and weight, you can update height by just fetching the object and changing the property directly, and it's updated in the database almost immediately. This felt magical, but it also gave me trust issues because I can't really handle the errors; well, I don't know yet how to do that.
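Here's a minimal sketch of what I mean (a Person model with height and weight is just an example, not the app's real model):

import SwiftData

// Sketch: with SwiftData, mutating a fetched model's property is enough
// to update the store; no explicit "update" call is required.
@Model
final class Person {
  var height: Double
  var weight: Double

  init(height: Double, weight: Double) {
    self.height = height
    self.weight = weight
  }
}

func updateHeight(of person: Person, to newHeight: Double, in context: ModelContext) {
  person.height = newHeight   // just change the property...
  try? context.save()         // ...saving explicitly is optional; autosave usually handles it
}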

Conclusion

I was able to understand SwiftUI quickly because of my prior app development knowledge and experience with Flutter, which helped a lot. The official SwiftUI documentation can be unclear, but there are plenty of tutorials available. The SwiftUI and Xcode ecosystem was new to me, and I had to shift my coding mindset to adapt to this simplified yet restricted environment. Despite these challenges, SwiftUI makes development faster and cooler. I plan to write more about my learning journey in the future, possibly covering how to create models in Create ML and how to use them. Thank you for reading!

/swift/ /swiftui/ /coreml/ /createml/