Mastering Image Handling in SwiftUI: From Assets to User Uploads and Beyond
A Practical Guide to Loading, Transforming, and Uploading Images in Real-World iOS Apps
You're building a real-world iOS application.
That means your images won't come only from your asset catalog.
Your users will provide images from the camera, the photo library, or the network.
Often you'll need to compress, transform, or upload those images, or render them from formats such as base64.
SwiftUI can work with all of these sources, but it won't do any of it for you by default.
This guide is your all-in-one reference for handling each of these scenarios.
Real APIs. Real data. Real SwiftUI.
Whether you're building a hobby app or preparing an App Store submission, understanding image handling is essential.
Start simple: Your assets
When your images are in your app bundle, you can use:
Image("demo")
When you call this, SwiftUI looks up demo in your asset catalog, regardless of raster format (PNG, JPEG, etc.).
This is the quickest way to load your static images while maintaining device-level rendering consistency.
Just keep in mind that packing too many high-resolution images into your asset catalog increases your app's binary size.
Limit this approach to app images such as icons, logos, or placeholder graphics that you absolutely need.
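In a view, you'll typically pair it with a couple of modifiers; for example:
// Display a bundled asset, scaled to fit a fixed width.
Image("demo")
    .resizable()
    .scaledToFit()
    .frame(width: 120)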
UIImage
The Image(uiImage:) initializer bridges UIKit images into your SwiftUI interface.
This is your go-to method when you have images that came directly from user input:
Image(uiImage: yourUIImage)
It is very flexible, and it is the method you need when dealing with images taken or selected during runtime, too.
However, you will rarely start with a UIImage.
The system hands you raw data or picker items instead, so you'll have to cross that bridge yourself.
Importing images from the photo library
In iOS 16 and later, SwiftUI offers a native photo picker called PhotosPicker.
When a user picks a photo, you receive an associated PhotosPickerItem, which cannot be displayed as-is; it first needs to be converted.
Step 1: Convert PhotosPickerItem to Data
let data = try await item.loadTransferable(type: Data.self)
Note that this is an async, throwing call that loads the photo's raw data.
Step 2: Convert Data to UIImage
let image = UIImage(data: data)
At this point, you have an image and can call Image(uiImage:) in SwiftUI to show the image in your view hierarchy.
We can do this in a cleaner, more reusable fashion using extension methods:
import PhotosUI
import UIKit

extension PhotosPickerItem {
    /// Loads the picked item's raw image data.
    func convert() async -> Data? {
        try? await self.loadTransferable(type: Data.self)
    }

    /// Loads the picked item and converts it to a UIImage.
    func convertUIImage() async -> UIImage? {
        if let data = await convert() {
            return UIImage(data: data)
        }
        return nil
    }
}
With helpers like these, the logic around photo picker results stays readable and concise.
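Here's a minimal sketch of a view that presents the picker and displays the selection using the convertUIImage() helper (the view and property names are illustrative):
import PhotosUI
import SwiftUI

struct PhotoPickerDemoView: View {
    @State private var selectedItem: PhotosPickerItem?
    @State private var selectedImage: UIImage?

    var body: some View {
        VStack {
            if let selectedImage {
                Image(uiImage: selectedImage)
                    .resizable()
                    .scaledToFit()
            }

            PhotosPicker("Choose a photo", selection: $selectedItem, matching: .images)
        }
        .onChange(of: selectedItem) { newItem in
            Task {
                // Convert the picked item asynchronously and update view state.
                selectedImage = await newItem?.convertUIImage()
            }
        }
    }
}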
Taking photos with the camera
SwiftUI doesn't yet have a native camera view, but UIKit does in the form of UIImagePickerController.
You can wrap it using UIViewControllerRepresentable so you can use it in your SwiftUI views:
import SwiftUI
import UIKit

struct CameraView: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.dismiss) var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    // The coordinator receives UIKit delegate callbacks and forwards the result to SwiftUI.
    class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
        let parent: CameraView

        init(_ parent: CameraView) {
            self.parent = parent
        }

        func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.image = info[.originalImage] as? UIImage
            parent.dismiss()
        }

        func imagePickerControllerDidCancel(_: UIImagePickerController) {
            parent.dismiss()
        }
    }
}
With this wrapper in place, the captured photo lands in your @Binding and is immediately accessible in SwiftUI.
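Presenting it is just a sheet away. Here's a minimal sketch (the view and state names are illustrative; remember that the camera requires a real device and an NSCameraUsageDescription entry in Info.plist):
struct CameraDemoView: View {
    @State private var capturedImage: UIImage?
    @State private var isShowingCamera = false

    var body: some View {
        VStack {
            if let capturedImage {
                Image(uiImage: capturedImage)
                    .resizable()
                    .scaledToFit()
            }

            Button("Take photo") {
                isShowingCamera = true
            }
        }
        .sheet(isPresented: $isShowingCamera) {
            // Present the UIKit-backed camera and write the result into capturedImage.
            CameraView(image: $capturedImage)
        }
    }
}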
Uploading: transform UIImage to Data
Most APIs expect images to be sent as Data. Once you have a UIImage, you can convert it to data for upload:
let data = image.jpegData(compressionQuality: 0.8)
Reduce compressionQuality to lower the file size.
Alternatively, if you want lossless compression that preserves full quality:
let data = image.pngData()
Once you have the Data, you can upload the image to your server.
Here's a very basic example using URLSession to upload an image to an endpoint:
func uploadImage(data: Data, to url: URL) async throws {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = data

    let (responseData, response) = try await URLSession.shared.data(for: request)
    guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    // You can do something with responseData here if you need to.
    print("Upload completed, response from server: \(responseData)")
}
In a production app, you may also need to send authentication headers or wrap the image in a multipart form-data request, depending on what your server expects when receiving files.
Still, this captures the essence of sending image bytes to a backend; a multipart sketch follows below.
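Here's a hedged sketch of that multipart variant. The field name "file", the filename, and the JPEG content type are assumptions; adjust them to whatever your backend expects.
// A multipart/form-data upload with a single file part.
func uploadImageMultipart(data: Data, to url: URL) async throws {
    let boundary = "Boundary-\(UUID().uuidString)"

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

    // Build the body: one file part followed by the closing boundary.
    var body = Data()
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"file\"; filename=\"photo.jpg\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: image/jpeg\r\n\r\n".data(using: .utf8)!)
    body.append(data)
    body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body

    let (_, response) = try await URLSession.shared.data(for: request)
    guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
}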
Downloading: Data to UIImage
If your backend directly sends image bytes, you can create a displayable image from those bytes with:
let image = UIImage(data: downloadedData)
Here's a more detailed example of downloading image data from a URL and creating a UIImage from it:
func downloadImage(from url: URL) async throws -> UIImage? {
    let (data, response) = try await URLSession.shared.data(from: url)
    guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return UIImage(data: data)
}
Once you have a UIImage, you can use Image(uiImage:) in SwiftUI.
This pattern makes it easy to pull in dynamic content: remote avatars, thumbnails, or other resources hosted on CDNs.
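For example, here's a minimal sketch of a view that loads and displays such an image with the downloadImage(from:) helper (RemoteImageView and avatarURL are illustrative names):
struct RemoteImageView: View {
    let avatarURL: URL
    @State private var image: UIImage?

    var body: some View {
        Group {
            if let image {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFit()
            } else {
                ProgressView()
            }
        }
        .task {
            // Errors are swallowed here for brevity; handle them in real code.
            image = try? await downloadImage(from: avatarURL)
        }
    }
}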
Base64 Encoding and Decoding
If you're working with base64 strings for your API (which is quite common), you can convert to and from like this:
// Encode
let base64String = imageData.base64EncodedString()
// Decode
if let data = Data(base64Encoded: base64String) {
    let image = UIImage(data: data)
    // Display the image or however else you want to use it
}
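For convenience, you can also go straight from a UIImage to a base64 string ready to embed in a JSON payload. A small sketch; the helper name and the 0.8 quality default are arbitrary choices:
// Encode a UIImage as a base64 JPEG string.
func base64String(from image: UIImage, quality: CGFloat = 0.8) -> String? {
    image.jpegData(compressionQuality: quality)?.base64EncodedString()
}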
If you want to upload base64 to a server as part of a JSON payload, you can do this:
func uploadBase64Image(base64String: String, to url: URL) async throws {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: String] = ["image": base64String]
    request.httpBody = try JSONEncoder().encode(body)

    let (_, response) = try await URLSession.shared.data(for: request)
    guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
}
If you want to download and decode a base64 image string from the server, you can do something like this:
func downloadBase64Image(from url: URL) async throws -> UIImage? {
    let (data, response) = try await URLSession.shared.data(from: url)
    guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }

    let decoded = try JSONDecoder().decode([String: String].self, from: data)
    if let base64 = decoded["image"], let imageData = Data(base64Encoded: base64) {
        return UIImage(data: imageData)
    }
    return nil
}
You often see base64 in APIs that return images as JSON blobs.
Base64 is a useful way to upload images when you cannot do binary transfer, but it's important to remember that base64 adds ~33% overhead and will increase your memory usage.
Compressing Images Under a Size Limit
If you need to make sure files are under a certain size (say, under 5MB), you can use an iterative compression algorithm:
extension Data {
    func compressTo(maxSizeKB: Int, compressionStep: CGFloat = 0.1) -> Data? {
        guard let image = UIImage(data: self) else { return nil }
        var quality: CGFloat = 1.0
        var compressed = image.jpegData(compressionQuality: quality)
        while let data = compressed, data.count > maxSizeKB * 1024, quality > 0 {
            quality -= compressionStep
            compressed = image.jpegData(compressionQuality: quality)
        }
        return compressed
    }
}
How It Works:
The function starts by converting the Data object to a UIImage.
It then applies JPEG compression in increments, decreasing quality by a small amount (default: 0.1) until the data is smaller than the target size.
The loop stops either when the image is small enough or when the quality reaches zero.
This approach gives you a knob to tune fidelity versus size, which is essential when your uploads have file size limits.
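Usage is a one-liner. The 5 MB limit is just an example, and image is assumed to be a UIImage you already have (for instance from the photo picker):
// Keep the upload under roughly 5 MB.
if let original = image.jpegData(compressionQuality: 1.0),
   let compressed = original.compressTo(maxSizeKB: 5 * 1024) {
    print("Compressed from \(original.count) to \(compressed.count) bytes")
}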
CIImage
CIImage is Core Image's representation of an image, built for processing through a high-performance pipeline.
You don't use it for rendering directly.
You use it to apply filters, chain effects, or build your own processing pipeline.
To create a CIImage from a UIImage:
let ciImage = CIImage(image: uiImage)
To apply a Core Image filter:
let filter = CIFilter(name: "CISepiaTone")!
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setValue(0.8, forKey: kCIInputIntensityKey)
let output = filter.outputImage
To convert a CIImage back to a UIImage (for rendering in SwiftUI):
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    let resultImage = UIImage(cgImage: cgImage)
}
In this example, CIContext is responsible for rendering the CIImage into a CGImage.
Core Image itself is lazy; it defines image operations but doesn't execute them until rendering.
A CGImage is a concrete representation of the pixel data, available once the conversion finishes.
The UIImage then wraps it back into something UIKit and SwiftUI can display.
Use this when you're building real-time effects, image transformations, or anything that processes frames straight from the camera.
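Putting the pieces together, here's a hedged sketch of a helper that applies the sepia filter to a UIImage and returns a displayable result (the function name is illustrative):
import CoreImage
import UIKit

// Apply a sepia tone filter to a UIImage; returns nil if any pipeline step fails.
func applySepia(to image: UIImage, intensity: Double = 0.8) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }

    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage else { return nil }

    // Rendering happens here; Core Image is lazy until this point.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}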
CGImage
CGImage is a low-level, pixel-based image representation used primarily in Core Graphics.
Unlike CIImage, which is more abstract and geared toward processing, CGImage gives you direct access to the pixels.
Obtain a CGImage from a UIImage:
let cgImage = uiImage.cgImage
Convert a CGImage back to a UIImage:
let uiImage = UIImage(cgImage: cgImage)
Convert a CIImage to a CGImage:
let context = CIContext()
let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
Use a CGImage in SwiftUI:
if let cgImage = cgImage {
    let image = Image(decorative: cgImage, scale: 1.0)
}
CGImage is useful when you need fine control over performance and direct pixel access, such as generating thumbnails or performing transforms without higher-level abstractions.
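As an example of that kind of low-level work, here's a hedged sketch of thumbnail generation using ImageIO, which decodes straight to a downsampled CGImage (the function name and the 200-pixel default are arbitrary choices):
import ImageIO
import UIKit

// Decode image data into a downsampled CGImage without loading the full-size image.
func makeThumbnail(from data: Data, maxPixelSize: Int = 200) -> UIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,   // respect EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]

    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}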
AsyncImage
AsyncImage is SwiftUI's native view, available since iOS 15, for downloading and displaying remote images.
Basic use:
AsyncImage(url: URL(string: "https://example.com/avatar.png"))
You can customize its appearance with loading and error states:
AsyncImage(url: imageURL) { phase in
    switch phase {
    case .empty:
        ProgressView()
    case .success(let image):
        image.resizable()
    case .failure:
        Image(systemName: "photo")
    @unknown default:
        EmptyView()
    }
}
AsyncImage is the simplest way to download and display an image when your requirements are modest, for example profile images, thumbnails, or assets from a CDN, and you want to write the least amount of code.
When your application needs to do more than fetch and display an image, though, AsyncImage starts to feel quite limiting.
Here’s the reality check:
No built-in image caching! Each reload is another network call.
Don't even think about customizing request headers, cache policies, or retries; AsyncImage doesn't do any of that.
Customizing placeholders/loading states? The options are minimal.
If your project requires real-world flexibility—intelligent caching, robust networking, rich image processing—AsyncImage isn’t going to cut it.
Enter Kingfisher or SDWebImage.
Both are great libraries for developers who need control: full caching, customizable networking, and a toolkit for everything from placeholders to progressive loading.
Or, if you really want control, build your own loader using URLSession.
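Here's a minimal sketch of such a loader with a simple in-memory cache (ImageLoader and CachedRemoteImage are illustrative names, not a library API):
import SwiftUI
import UIKit

// A small URLSession-based loader with an NSCache-backed in-memory cache.
final class ImageLoader {
    static let shared = ImageLoader()
    private let cache = NSCache<NSURL, UIImage>()

    func image(for url: URL) async throws -> UIImage? {
        // Return a cached image if we already downloaded it.
        if let cached = cache.object(forKey: url as NSURL) {
            return cached
        }
        let (data, _) = try await URLSession.shared.data(from: url)
        guard let image = UIImage(data: data) else { return nil }
        cache.setObject(image, forKey: url as NSURL)
        return image
    }
}

struct CachedRemoteImage: View {
    let url: URL
    @State private var image: UIImage?

    var body: some View {
        Group {
            if let image {
                Image(uiImage: image).resizable().scaledToFit()
            } else {
                ProgressView()
            }
        }
        .task {
            // Errors are ignored here for brevity.
            image = try? await ImageLoader.shared.image(for: url)
        }
    }
}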
Whether you’re just getting started or optimizing code in production, we’re all learning and improving together.
This guide is meant to help you solve real problems—and to be a starting point for sharing, discussion, and growth as a community.
Let’s explore, learn, and build better products together—so every user benefits from what we create.