Get Started
Prerequisites
Create an Amazon S3 Bucket
In this series of tutorials, AWS S3 is used as the storage backend whether WeSQL runs on a local machine or on AWS EKS. Therefore, before getting started, the first step is to create an S3 bucket in AWS, which will serve as the object storage for the cluster data. Refer to the Amazon S3 Create Bucket Guide. You can also apply for an S3 bucket for testing purposes by clicking here.
Use the following command to create a bucket named wesql-storage in the us-west-1 region:
aws s3api create-bucket --bucket wesql-storage --region us-west-1 --create-bucket-configuration LocationConstraint=us-west-1
Verify that the newly created bucket is accessible by listing the contents of the bucket:
aws s3 ls s3://wesql-storage
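If the listing succeeds and you also want to confirm write access, a quick round-trip check works (a sketch; the object key and the local temp file path are arbitrary choices, not required names):

```shell
# Upload a small test object, confirm it is listed, then clean it up.
echo "wesql connectivity check" > /tmp/wesql-check.txt
aws s3 cp /tmp/wesql-check.txt s3://wesql-storage/wesql-check.txt
aws s3 ls s3://wesql-storage/wesql-check.txt
aws s3 rm s3://wesql-storage/wesql-check.txt
```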
Create an IAM User Access Key (AK) and Secret Key (SK) with Minimal Permissions to Access the S3 Bucket (Optional)
Some tutorials require accessing S3 from outside the AWS environment. Although you can use the root account's Access Key (AK) and Secret Key (SK) to access the S3 bucket, for security reasons, we recommend creating an AK and SK with minimal permissions. Here's how to create an IAM user with the minimal permissions required to read and write to a specific S3 bucket:
Step 1: Create an IAM User
- Log in to the AWS Management Console and navigate to the IAM console.
- In the left sidebar, select Users, then click Add user (see Create an IAM user).
- Enter a username (e.g., wesql-user).
- Click Next: Permissions to proceed.
Step 2: Attach a Minimal Permissions S3 Policy
- In the Set permissions step, click Attach policies directly.
- Click Create policy to create a custom policy for the user.
Create a Custom Policy
- In the policy creation screen, select the JSON tab.
- Use the following JSON template to grant the user read and write permissions to a specific S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::wesql-storage"
    },
    {
      "Sid": "ReadWriteObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::wesql-storage/*"
    }
  ]
}
- Replace wesql-storage with your actual bucket name.
This policy allows:
- PutObject, GetObject, DeleteObject: Upload, download, and delete objects in the bucket.
- ListBucket and GetBucketLocation: List the objects in the bucket and look up its region.
- ListBucketMultipartUploads, ListMultipartUploadParts, AbortMultipartUpload: Manage multipart uploads for large objects.
- Once done, click Review policy.
- Give the policy a name (e.g., S3ReadWriteBucketWeSQLStoragePolicy).
- Click Create policy.
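Instead of pasting the JSON into the console, you can generate the policy document from the CLI. The sketch below templates the bucket name into the JSON above; BUCKET and the output path are placeholders to adjust for your setup:

```shell
# Generate the minimal-permissions policy JSON for a given bucket name.
BUCKET=wesql-storage
cat > /tmp/wesql-s3-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::${BUCKET}"
    },
    {
      "Sid": "ReadWriteObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF
```

You can then create the policy with `aws iam create-policy --policy-name S3ReadWriteBucketWeSQLStoragePolicy --policy-document file:///tmp/wesql-s3-policy.json` rather than using the console.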
Step 3: Attach the Policy to the User
- Go back to the user creation process. On the Attach permissions policies page, find and select the policy you just created (S3ReadWriteBucketWeSQLStoragePolicy).
- Click Next: Tags, then Next: Review.
- On the Review page, confirm the settings and click Create user.
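The console steps above can also be performed from the CLI. A sketch, assuming the policy from Step 2 already exists (the account ID is looked up dynamically here):

```shell
# Create the IAM user and attach the custom policy created earlier.
aws iam create-user --user-name wesql-user
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws iam attach-user-policy \
  --user-name wesql-user \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/S3ReadWriteBucketWeSQLStoragePolicy"
```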
Step 4: Retrieve Access Credentials
- After successfully creating the user, navigate to the Users page and select the IAM user you just created (e.g., wesql-user). Then, click on Create access key.
- Under Use case, choose Application running outside AWS.
- Click Create access key. Once the access key is generated, make sure to securely store both the Access Key ID and Secret Access Key, as the secret key will not be displayed again.
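The access key can also be created from the CLI; a sketch:

```shell
# Create an access key for the user and print the key pair.
# The secret is returned only once -- store it securely.
aws iam create-access-key --user-name wesql-user \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text
```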
The AK and SK created here are limited to the S3 bucket s3://wesql-storage created in the previous step. If you change the bucket name, you will need to generate a new set of AK and SK with minimal permissions for the new bucket.
For more details on how to manage access keys, refer to Manage IAM User Access Keys.
Step 5: Set Environment Variables (Optional)
If you start WeSQL directly from the binary, set the environment variables for your S3 access keys on each server node:
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_DEFAULT_REGION=us-west-1
Ensure that these keys have the appropriate permissions to read and write to the S3 bucket.
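After exporting the variables, you can confirm the restricted credentials work before starting WeSQL; a sketch (the test object key is arbitrary):

```shell
# Confirm the exported credentials resolve to the intended IAM user...
aws sts get-caller-identity
# ...and that they can write to and delete from the bucket.
echo "credential check" | aws s3 cp - s3://wesql-storage/credential-check.txt
aws s3 rm s3://wesql-storage/credential-check.txt
```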