Set Up Your Own S3-Compatible MinIO Server in Under 5 Minutes with Docker (No AWS Bills)
If you've ever worked with AWS S3, you know how powerful object storage can be. But what if you could run the same thing on your own hardware? That's exactly what MinIO does.
MinIO is an open-source object storage server that speaks the same language as Amazon S3. This means any application built for S3 will work with MinIO without changing a single line of code. You get all the benefits of S3's API while keeping your data exactly where you want it.
Why MinIO Matters
The real power of MinIO lies in its S3 compatibility. Think about it: you can develop against MinIO locally, test your application with real object storage operations, and then seamlessly switch to AWS S3 in production. Or maybe you want to keep everything on-premises for compliance reasons. MinIO gives you that flexibility.
It's also incredibly lightweight. Unlike traditional storage solutions that require complex setup and configuration, MinIO is a single binary. You can have it running in under a minute.
What You'll Need
Before we start, make sure you have:
- A Linux machine, Mac, or Windows with WSL
- Docker and Docker Compose installed (Docker Desktop for Mac/Windows includes this)
- A terminal and about 5 minutes
Setting Up MinIO
Let's get MinIO running with Docker Compose. This is the cleanest and most repeatable way to manage the service.
Create a file named docker-compose.yml and paste the following content into it:
services:
  minio:
    image: quay.io/minio/minio
    container_name: minio
    ports:
      - "9000:9000" # For the S3 API
      - "9001:9001" # For the MinIO Console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin123
    volumes:
      - ~/minio/data:/data
    command: server /data --console-address ":9001"
Here's what this file does:
- services: Defines the applications we want to run.
- image: Pulls the official MinIO image.
- ports: Maps port 9000 (for the S3 API) and 9001 (for the web console) from the container to your host machine.
- environment: Sets your root login credentials. You should change these for any serious use.
- volumes: Mounts a local directory on your computer (~/minio/data) into the container's /data directory. This is crucial: it ensures your data persists even if you stop or remove the container.
- command: Tells MinIO to start the server, use the /data directory for storage, and make the console available on port 9001.
Now, open your terminal in the same directory as your docker-compose.yml file and run:
docker compose up -d
Note: Modern Docker Compose uses the docker compose command, but docker-compose also works if you have the older version.
Docker Compose will pull the image and start the container in the background. Wait a few seconds for the container to start, then open your browser and go to http://localhost:9001. You'll see the MinIO login screen. Use the credentials we set:
- Username: minioadmin
- Password: minioadmin123
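If the login page doesn't load, check that the container is actually running:

docker compose ps
docker compose logs minio

The logs should show MinIO announcing its API endpoint (port 9000) and console address (port 9001) once it's ready.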
Creating Your First Bucket
Once you're logged in, you'll see a clean interface. Click "Create Bucket" and name it something like test-bucket. That's it. You now have an S3-compatible bucket ready to use.
Using MinIO with the AWS CLI
Here's where things get interesting. You can use the AWS CLI to interact with MinIO just like you would with real S3.
First, configure the AWS CLI to point to your MinIO instance:
aws configure --profile minio
When prompted, enter:
- AWS Access Key ID: minioadmin
- AWS Secret Access Key: minioadmin123
- Default region: us-east-1
- Default output format: json
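By the way, if you skipped creating the bucket in the console, you can create it from the CLI instead:

aws --profile minio --endpoint-url http://localhost:9000 s3 mb s3://test-bucket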
Now let's test it. Upload a file:
echo "Hello MinIO" > test.txt
aws --profile minio --endpoint-url http://localhost:9000 s3 cp test.txt s3://test-bucket/
List your bucket contents:
aws --profile minio --endpoint-url http://localhost:9000 s3 ls s3://test-bucket/
You should see your file listed. You just used the exact same commands you'd use with AWS S3, but everything stayed on your machine.
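To complete the round trip, pull the file back down (downloaded.txt is just an arbitrary local name):

aws --profile minio --endpoint-url http://localhost:9000 s3 cp s3://test-bucket/test.txt downloaded.txt
cat downloaded.txt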
Using MinIO with Code
Let's say you're building a Node.js application. Here's how you'd connect using the AWS SDK for JavaScript v2 (the aws-sdk package):
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  endpoint: 'http://localhost:9000',
  accessKeyId: 'minioadmin',
  secretAccessKey: 'minioadmin123',
  s3ForcePathStyle: true, // Required: MinIO serves buckets at path-style URLs, not bucket subdomains
  signatureVersion: 'v4'
});

// Upload a file
s3.putObject({
  Bucket: 'test-bucket',
  Key: 'myfile.txt',
  Body: 'Hello from Node.js'
}, (err, data) => {
  if (err) console.log(err);
  else console.log('Upload successful');
});
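Reading the object back follows the same pattern. Here's a minimal sketch using the same client:

// Download the object we just uploaded
s3.getObject({
  Bucket: 'test-bucket',
  Key: 'myfile.txt'
}, (err, data) => {
  if (err) console.log(err);
  else console.log('Contents:', data.Body.toString()); // Body is a Buffer
});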
The same pattern works in Python, Go, Java, or any language with an S3 SDK. Just point it to your MinIO endpoint instead of AWS.
Production Considerations
For playing around, what we've set up is perfect. But if you're thinking about using MinIO seriously, here are a few things to know:
- Change those default credentials immediately. Set strong values for MINIO_ROOT_USER and MINIO_ROOT_PASSWORD.
- MinIO supports distributed mode, where you run multiple servers for high availability. Your data gets distributed and protected against hardware failures.
- You can enable TLS by providing certificate files. This is crucial if you're exposing MinIO beyond localhost.
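As a rough sketch of what the first and third points look like in a compose file (the credentials are placeholders, and the certs path assumes the official image's default location, ~/.minio/certs under the container's root home; MinIO also accepts a --certs-dir flag if you keep them elsewhere):

services:
  minio:
    image: quay.io/minio/minio
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      # Use long, randomly generated secrets here
      MINIO_ROOT_USER: your-admin-user
      MINIO_ROOT_PASSWORD: your-long-random-password
    volumes:
      - ~/minio/data:/data
      # MinIO looks for public.crt and private.key in its certs directory
      - ~/minio/certs:/root/.minio/certs
    command: server /data --console-address ":9001"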
What You've Learned
You now have a working S3-compatible object storage system running locally. You can:
- Create buckets
- Upload files through the web interface or CLI
- Integrate it with any application that supports S3
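And because the data lives in ~/minio/data on the host, the container itself is disposable:

docker compose down   # stops and removes the container
docker compose up -d  # recreates it; your buckets and objects are still there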
Your development workflow just got a lot smoother. No more mocking S3 or racking up AWS bills during development. You have real object storage that behaves exactly like S3, running wherever you need it.
Try uploading some files, experiment with the API, and see how your applications can leverage object storage without depending on cloud services. MinIO makes it all possible.
Next: Administer MinIO with the mc CLI
Now that your local S3-compatible MinIO server is up and running, the next post will dive into administering it with the MinIO Client (mc).
You’ll learn how to:
- Install and configure mc to connect to your local or remote MinIO instance
- Create and manage access policies for fine-grained control
- Add new users with custom permissions
- Generate service accounts (like IAM users) with scoped credentials
- Manage buckets: versioning, lifecycle rules, replication, and encryption
- Mirror buckets, sync data, and automate backups with mc mirror
- Monitor usage and set up event notifications
Stay tuned and watch this space for the next article.