

How to install and manage

Preface

Red Hat OpenShift Container Storage 4.5 supports deployment on existing Red Hat OpenShift Container Platform (OCP) Google Cloud clusters.

Only internal OpenShift Container Storage clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements.

To deploy OpenShift Container Storage in internal mode, follow the deployment process Deploying OpenShift Container Storage on Google Cloud.

Chapter 1. Deploying OpenShift Container Storage on Google Cloud

Deploying OpenShift Container Storage on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure (IPI) enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications.

Only internal OpenShift Container Storage clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements.

1.1. Installing Red Hat OpenShift Container Storage Operator

You can install Red Hat OpenShift Container Storage Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment.

When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace:
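A sketch of that command, assuming the openshift-storage namespace already exists; it sets an empty openshift.io/node-selector annotation so that the cluster-wide default node selector is not applied to this namespace:

  oc annotate namespace openshift-storage openshift.io/node-selector=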

  1. Click Operators → OperatorHub in the left pane of the OpenShift Web Console.

    Screenshot of list of operators in the Operator Hub of the OpenShift Web Console.

  2. You can use the text box or the filter list to search for OpenShift Container Storage from the list of operators.

  3. On the page, ensure the following options are selected:

    1. Select the Approval Strategy as Automatic or Manual. The Approval Strategy is set to Automatic by default.

      • When you select the Approval Strategy as Automatic, approval is not required either during fresh installation or when updating to the latest version of OpenShift Container Storage.

      • When you select the Approval Strategy as Manual, approval is required during fresh installation or when updating to the latest version of OpenShift Container Storage.

1.2. Creating an OpenShift Container Storage Cluster Service in internal mode

Use this procedure to create an OpenShift Container Storage Cluster Service after you install the OpenShift Container Storage operator.

  • Be aware that the default storage class of Google Cloud uses hard disk drives (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class that uses pd-ssd, as shown in the following ssd-storageclass.yaml example:
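    A minimal sketch of such an ssd-storageclass.yaml, assuming the standard Google Compute Engine persistent disk provisioner; the storage class name faster is only an example:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: faster                        # example name, choose your own
      provisioner: kubernetes.io/gce-pd
      parameters:
        type: pd-ssd                        # SSD-backed persistent disks
      volumeBindingMode: WaitForFirstConsumer
      reclaimPolicy: Delete

    You can create the storage class with oc create -f ssd-storageclass.yaml and select it later when adding capacity.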

  1. Screenshot of OpenShift Container Storage operator dashboard.

  2. On the page, perform either of the following to create a Storage Cluster Service.

    1. On the , click .

      Screenshot of Operator Details Page.

    2. Alternatively, select the tab and click .

      Screenshot of Storage Cluster tab on OpenShift Container Storage Operator dashboard.

  3. On the page, ensure that the following options are selected:

    Screenshot of Create Cluster Service page where you can select mode of deployment.

    1. In the section, select a minimum of three worker nodes (or a multiple of three) from the available list for use by the OpenShift Container Storage service.

      For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.

      To find specific worker nodes in the cluster, you can filter nodes on the basis of Name or Label.

      For minimum starting node requirements, see Resource requirements section in Planning guide.

    2. Once you select the initial storage capacity, cluster expansion is performed only in increments of the selected usable capacity (three times the raw storage).

  4. The button is enabled only after selecting a minimum of three worker nodes.

    Upon successful deployment, a storage cluster with three storage devices gets created. These devices get distributed across three of the selected nodes. The configuration uses a replication factor of 3. To scale the initial cluster, see Scaling storage nodes.

1.3. Creating a new backing store

This procedure is not mandatory. However, it is recommended to perform this procedure.

When you install OpenShift Container Storage on Google Cloud platform, noobaa-default-bucket-class places data on noobaa-default-backing-store instead of Google Cloud storage. Hence, to use OpenShift Container Storage Multicloud Object Gateway (MCG) managed object storage backed by Google Cloud storage, you need to perform the following procedure.

  1. Create a Google Cloud storage bucket for MCG to store object data, as described in the Creating storage buckets documentation. Make sure you have a service account with the Storage Admin role. (A command-line sketch follows this step.)

    It is recommended to use a separate Google Cloud project to prevent this service account from accessing other data.
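    A rough sketch of the bucket and service account setup using the gcloud and gsutil CLIs; all names here (my-mcg-project, mcg-backing-sa, my-mcg-target-bucket) are hypothetical examples:

      gcloud iam service-accounts create mcg-backing-sa --project my-mcg-project
      gsutil mb -p my-mcg-project -l us-east1 gs://my-mcg-target-bucket
      gsutil iam ch serviceAccount:mcg-backing-sa@my-mcg-project.iam.gserviceaccount.com:roles/storage.admin gs://my-mcg-target-bucket
      gcloud iam service-accounts keys create key.json --iam-account mcg-backing-sa@my-mcg-project.iam.gserviceaccount.com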

  1. On the OpenShift Container Storage Operator page, scroll right and click the Backing Store tab.

    Figure 1.6. OpenShift Container Storage Operator page with backing store tab

    Screenshot of OpenShift Container Storage operator page with backing store tab.

  2. Screenshot of create new backing store page.

  1. Run the following command by using the MCG command line tool noobaa (from the mcg rpm package) to verify that the Google Cloud storage backing store that you created is in the Ready state (see the sketch after these steps).

  2. Verify that the output shows the default bucket class in the Ready state and that it uses the expected backing store.
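  A possible form of these checks, assuming the new backing store is named gcp-backingstore (a hypothetical name) and the MCG CLI is installed as noobaa; the exact output format can vary between CLI versions:

    noobaa backingstore status gcp-backingstore -n openshift-storage
    noobaa bucketclass status noobaa-default-bucket-class -n openshift-storage

  Both commands should report a Ready phase, and the bucket class output should list the backing store you created.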

Chapter 2. Verifying OpenShift Container Storage deployment

Use this section to verify that OpenShift Container Storage is deployed correctly.

2.1. Verifying the state of the pods

To determine whether OpenShift Container Storage is deployed successfully, you can verify that the pods are in the Running state.

  1. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Container storage cluster”.

  2. Verify that the following pods are in running and completed state by clicking on the and the tabs:

2.2. Verifying the OpenShift Container Storage cluster is healthy

You can verify the health of the OpenShift Container Storage cluster using the persistent storage dashboard. For more information, see Monitoring OpenShift Container Storage.

  • In the , verify that has a green tick mark as shown in the following image:

    Figure 2.1. Health status card in Persistent Storage Overview Dashboard

    Screenshot of Health card in persistent storage dashboard

  • In the , verify that the cluster information is displayed appropriately as follows:

    Screenshot of Details card in persistent storage dashboard

2.3. Verifying the Multicloud Object Gateway is healthy

You can verify the health of the OpenShift Container Storage cluster using the object service dashboard. For more information, see Monitoring OpenShift Container Storage.

  • In the , verify that the Multicloud Object Gateway (MCG) storage displays a green tick icon as shown in the following image:

    Figure 2.3. Health status card in Object Service Overview Dashboard

    Screenshot of Health card in object service dashboard

  • In the , verify that the MCG information is displayed appropriately as follows:

    Screenshot of Details card in object service dashboard

2.4. Verifying that the OpenShift Container Storage specific storage classes exist

  • Verify that the following storage classes are created with the OpenShift Container Storage cluster creation:

Chapter 3. Uninstalling OpenShift Container Storage

3.1. Uninstalling OpenShift Container Storage in internal mode

Use the steps in this section to uninstall OpenShift Container Storage instead of using the Uninstall option in the user interface.

  1. Query for PVCs and OBCs that use the OpenShift Container Storage based storage class provisioners. (A command-line sketch of the main steps in this procedure follows the final step.)

  2. Follow these instructions to ensure that the PVCs and OBCs listed in the previous step are deleted.

    If you have created PVCs as a part of configuring the monitoring stack, cluster logging operator, or image registry, then you must perform the clean up steps provided in the following sections as required:

    • Section 3.4, “Removing the cluster logging operator from OpenShift Container Storage”

      For each of the remaining PVCs or OBCs, follow the steps mentioned below:

      1. Identify the controlling API object such as a Deployment, StatefulSet, DaemonSet, Job, or a custom controller.

        Each API object has a metadata field known as OwnerReference. This is a list of associated objects. The OwnerReference with the controller field set to true points to controlling objects such as ReplicaSet, StatefulSet, DaemonSet, and so on.

      2. Ensure that the API object is not consuming PVC or OBC provided by OpenShift Container Storage. Either the object should be deleted or the storage should be replaced. Ask the owner of the project to make sure that it is safe to delete or modify the object.

      3. If you have created any custom Multicloud Object Gateway backingstores, delete them.

        • Delete each of the backingstores listed above and confirm that the dependent resources also get deleted.

        • If any of the backingstores listed above were based on the pv-pool, ensure that the corresponding pod and PVC are also deleted.

  3. Delete the StorageCluster object and wait for the removal of the associated resources.

  4. Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project.

    1. Switch to another namespace if openshift-storage is the active namespace.

    2. Wait for approximately five minutes and confirm that the project is deleted successfully.

      While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in the article Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.

  5. Ensure that the output shows the /var/lib/rook directory as removed.

  6. You can ignore the warnings that are displayed for the unlabeled nodes.

  7. Confirm that all PVs are deleted. If any PV is left in the Released state, delete it.

  8. To ensure that OpenShift Container Storage is uninstalled completely, check the OpenShift Container Platform Web Console.
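  A command-line sketch of the main uninstall steps, assuming an internal-mode cluster in the openshift-storage namespace; placeholder values such as <pv-name> must be replaced with your own:

    # Step 1: query PVCs and OBCs that use OpenShift Container Storage storage classes
    oc get pvc --all-namespaces | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io'
    oc get obc --all-namespaces

    # Step 3: delete the StorageCluster object and wait for its resources to be removed
    oc delete -n openshift-storage storagecluster --all --wait=true

    # Step 4: delete the namespace (switch away from it first) and wait for the deletion
    oc project default
    oc delete project openshift-storage --wait=true --timeout=5m

    # Step 7: confirm that all PVs are deleted; delete any PV left in the Released state
    oc get pv
    oc delete pv <pv-name>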

3.2. Removing monitoring stack from OpenShift Container Storage

Use this section to clean up the monitoring stack from OpenShift Container Storage.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.

  1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.

  2. Remove any config sections that reference the OpenShift Container Storage storage classes, as shown in the example after these steps, and save the file.

    In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs.

  3. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
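  A rough sketch of these steps; the PVC name is a placeholder, and the config map fragment only illustrates the kind of section to remove:

    # Step 1: list the pods and PVCs in the openshift-monitoring namespace
    oc get pod,pvc -n openshift-monitoring

    # Step 2: edit the monitoring config map and remove the storage sections
    oc -n openshift-monitoring edit configmap cluster-monitoring-config

  The sections to remove look similar to this fragment of the config map's config.yaml, for both alertmanagerMain and prometheusK8s:

    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 40Gi

  Finally, delete the PVCs that consume the OpenShift Container Storage storage classes:

    oc delete pvc <pvc-name> -n openshift-monitoring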

3.3. Removing OpenShift Container Platform registry from OpenShift Container Storage

Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure alternative storage, see the image registry documentation.

The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace.

  1. Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section (see the sketch after this step).

    In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.
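    A possible form of this step; the PVC name matches the example above, and the storage fragment shown is the section to remove:

      oc edit configs.imageregistry.operator.openshift.io

    Remove a section similar to the following from spec: (the registry still needs some storage backend, for example emptyDir: {}, if you are not configuring an alternative):

      storage:
        pvc:
          claim: registry-cephfs-rwx-pvc

    After the registry pods roll over to the new configuration, delete the PVC:

      oc delete pvc registry-cephfs-rwx-pvc -n openshift-image-registry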

3.4. Removing the cluster logging operator from OpenShift Container Storage

Use this section to clean up the cluster logging operator from OpenShift Container Storage.

The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.

  1. The PVCs in the openshift-logging namespace are now safe to delete.
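  A possible sequence, assuming the ClusterLogging custom resource is named instance (the default); removing it first stops the Elasticsearch pods that use the PVCs:

    oc delete clusterlogging instance -n openshift-logging --wait=true
    oc get pvc -n openshift-logging
    oc delete pvc <pvc-name> -n openshift-logging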

Chapter 4. Configure storage for OpenShift Container Platform services

You can use OpenShift Container Storage to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging.

The process for configuring storage for these services depends on the infrastructure used in your OpenShift Container Storage deployment.

Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover.

Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details.

If you do run out of storage space for these services, contact Red Hat Customer Support.

4.1. Configuring Image Registry to use OpenShift Container Storage

OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster.

Follow the instructions in this section to configure OpenShift Container Storage as storage for the Container Image Registry. On Google Cloud, it is not required to change the storage for the registry.

This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete.


  1. Create a Persistent Volume Claim for the Image Registry to use (see the sketch after this procedure).

      1. Wait until the status of the new Persistent Volume Claim is listed as Bound.


  2. Configure the cluster’s Image Registry to use the new Persistent Volume Claim.

    1. Add the new Persistent Volume Claim as persistent storage for the Image Registry.

      1. Add the following under spec:, replacing the existing storage: section if necessary.
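        A minimal sketch of both steps, assuming a CephFS-backed RWX PVC; the PVC name ocs4registry and the size 100Gi are example values:

          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: ocs4registry
            namespace: openshift-image-registry
          spec:
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 100Gi
            storageClassName: ocs-storagecluster-cephfs

        Then edit the registry configuration (oc edit configs.imageregistry.operator.openshift.io) and add the claim under spec:, replacing any existing storage: section:

          storage:
            pvc:
              claim: ocs4registry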

4.2. Configuring monitoring to use OpenShift Container Storage

OpenShift Container Storage provides a monitoring stack that is comprised of Prometheus and AlertManager.

Follow the instructions in this section to configure OpenShift Container Storage as storage for the monitoring stack.

Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring.

Red Hat recommends configuring a short retention interval for this service. See the relevant subsection of Configuring persistent storage in the OpenShift Container Platform documentation for details.

  1. Define a new cluster-monitoring-config Config Map using the following example.

    Replace the content in angle brackets (<, >) with your own values, for example, retention: 24h or storage: 40Gi.

    Replace the storage class name with the name of a storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd.
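    A sketch of such a config map, using the example values mentioned above (retention: 24h, storage: 40Gi, and the ocs-storagecluster-ceph-rbd storage class); the claim names match those referenced in the verification steps:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusK8s:
            retention: 24h
            volumeClaimTemplate:
              metadata:
                name: ocs-prometheus-claim
              spec:
                storageClassName: ocs-storagecluster-ceph-rbd
                resources:
                  requests:
                    storage: 40Gi
          alertmanagerMain:
            volumeClaimTemplate:
              metadata:
                name: ocs-alertmanager-claim
              spec:
                storageClassName: ocs-storagecluster-ceph-rbd
                resources:
                  requests:
                    storage: 40Gi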

  1. Verify that the Persistent Volume Claims are bound to the pods.

    1. Verify that 5 Persistent Volume Claims are visible with a state of Bound, attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods.

  2. Verify that the new alertmanager-main-* pods appear with a state of Running.

    1. Scroll down and verify that the volume has a claim named ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0.

  3. Verify that the new prometheus-k8s-* pods appear with a state of Running.

    1. Scroll down and verify that the volume has a claim named ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0.

4.3. Cluster logging for OpenShift Container Storage

You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging.

Upon initial OpenShift Container Platform deployment, OpenShift Container Storage is not configured by default, and the OpenShift Container Platform cluster relies solely on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Container Storage.

Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover.

Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details.

If you run out of storage space for these services, contact Red Hat Customer Support.

4.3.1. Configuring persistent storage

You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example:
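A sketch of the relevant part of the ClusterLogging custom resource, using the values described below (three data nodes, 200 GiB of ocs-storagecluster-ceph-rbd storage per node, single redundancy); the rest of the resource is omitted:

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  # ...
  spec:
    logStore:
      type: elasticsearch
      elasticsearch:
        nodeCount: 3
        storage:
          storageClassName: ocs-storagecluster-ceph-rbd
          size: 200G
        redundancyPolicy: SingleRedundancy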

This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. A copy of the shard is replicated across the nodes and is always available; because of the single redundancy policy, the copy can be recovered as long as at least two nodes exist. For information about Elasticsearch replication policies, see About deploying and configuring cluster logging.

If you omit the storage block, the deployment falls back to default storage. For example:
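A sketch of the same logStore section without a storage block; here Elasticsearch falls back to the default (typically ephemeral) storage:

  spec:
    logStore:
      type: elasticsearch
      elasticsearch:
        nodeCount: 3
        storage: {}
        redundancyPolicy: SingleRedundancy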

4.3.2. Configuring cluster logging to use OpenShift Container Storage

Follow the instructions in this section to configure OpenShift Container Storage as storage for the OpenShift cluster logging.

You can obtain all the logs when you configure logging for the first time in OpenShift Container Storage. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed.

  1. You might have to refresh the page to load the data.

  2. In the YAML, replace the storage class with one that uses the provisioner openshift-storage.rbd.csi.ceph.com; in this guide the example storage class is ocs-storagecluster-ceph-rbd (see the example in Section 4.3.1, “Configuring persistent storage”).

  1. Verify that the Persistent Volume Claims are bound to the elasticsearch pods.

    1. Verify that Persistent Volume Claims are visible with a state of Bound, attached to elasticsearch-* pods.

      Screenshot of Persistent Volume Claims with a bound state attached to elasticsearch pods

Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods.

You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the default index data retention to 5 days.

To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Container Storage in the uninstall chapter of the respective deployment guide.

Chapter 5. Backing OpenShift Container Platform applications with OpenShift Container Storage

You cannot directly install OpenShift Container Storage during the OpenShift Container Platform installation. However, you can install OpenShift Container Storage on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Container Storage.

    • In the Deployments page, you can do one of the following:

    • In the Deployment Configs page, you can do one of the following:

  1. In the Add Storage page, you can choose one of the following options:

      1. You cannot resize the storage capacity after the creation of Persistent Volume Claim.

Chapter 6. Scaling storage nodes

To scale the storage capacity of OpenShift Container Storage, you can do either of the following:

6.1. Requirements for scaling storage nodes

Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Container Storage instance:

If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover.

Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space.

If you do run out of storage space completely, contact Red Hat Customer Support.

6.2. Scaling up storage by adding capacity to your OpenShift Container Storage nodes on Google Cloud infrastructure

Use this procedure to add storage capacity and performance to your configured Red Hat OpenShift Container Storage worker nodes.

  1. ocs installed operators

  2. In the top navigation bar, scroll right and click the Storage Cluster tab.

    OCS Storage Cluster overview

  3. OCS add capacity dialog gcp

    From this dialog box, set the requested additional capacity and the storage class. The dialog shows the capacity that was selected at the time of installation, and capacity can be added only in this increment. Set the storage class to standard if you are using the default storage class that uses HDD. However, if you created a storage class to use SSD-based disks for better performance, select that storage class.

    The effectively provisioned capacity will be three times as much as what you see in the field because OpenShift Container Storage uses a replica count of 3.

  1. Navigate to → tab, then check the card.

    ocs add capacity expansion verification capacity card aws

As of OpenShift Container Storage 4.2, cluster reduction, whether by reducing OSDs or nodes, is not supported.

6.3. Scaling out storage capacity by adding new nodes

To scale out storage capacity, you need to perform the following:

6.3.1. Adding a node on Google Cloud installer-provisioned infrastructure

It is recommended to add 3 nodes, each in a different zone. You must add 3 nodes and perform this procedure for all of them.

To verify that the new node is added, see Section 6.3.2, “Verifying the addition of a new node”.

6.3.2. Verifying the addition of a new node

  1. Execute the following command and verify that the new node is present in the output (see the sketch after this list):

  2. Click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:
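  A possible form of these checks; <new-node-name> is a placeholder for the node you just added:

    oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
    oc get pods -n openshift-storage -o wide | grep <new-node-name>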

6.3.3. Scaling up storage capacity

After you add a new node to OpenShift Container Storage, you must scale up the storage capacity as described in Scaling up storage by adding capacity.

Chapter 7. Multicloud Object Gateway

7.1. About the Multicloud Object Gateway

The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage.

7.2. Accessing the Multicloud Object Gateway with your applications

You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the MCG endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information.

You can access the relevant endpoint, access key, and secret access key in two ways:

7.2.1. Accessing the Multicloud Object Gateway from the terminal

Run the describe command to view information about the MCG endpoint, including its access key (AWS_ACCESS_KEY_ID value) and secret access key (AWS_SECRET_ACCESS_KEY value):
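For example, assuming OpenShift Container Storage is installed in the default openshift-storage namespace:

  oc describe noobaa -n openshift-storage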

The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour.

7.2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface

Run the status command to access the endpoint, access key, and secret access key:
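For example, assuming the MCG command-line tool is installed as noobaa and OpenShift Container Storage runs in the openshift-storage namespace:

  noobaa status -n openshift-storage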

You now have the relevant endpoint, access key, and secret access key in order to connect to your applications.

If the AWS S3 CLI is the application, the following command lists the buckets in OCS:
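A sketch of that call; the endpoint and credentials are the values retrieved in the previous steps:

  AWS_ACCESS_KEY_ID=<your-access-key> AWS_SECRET_ACCESS_KEY=<your-secret-key> \
    aws --endpoint-url <mcg-s3-endpoint> --no-verify-ssl s3 ls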

7.3. Adding storage resources for hybrid or Multicloud

7.3.1. Adding storage resources for hybrid or Multicloud using the MCG command line interface

The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters.

To do so, add a backing storage that can be used by the MCG.

    1. Replace the target bucket with an existing AWS bucket name. This argument tells NooBaa which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
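      A possible form of the command, assuming the MCG CLI is installed as noobaa; the backing store name aws-resource and the credential placeholders are examples:

        noobaa backingstore create aws-s3 aws-resource \
          --access-key=<AWS_ACCESS_KEY_ID> --secret-key=<AWS_SECRET_ACCESS_KEY> \
          --target-bucket <existing-aws-bucket-name> -n openshift-storage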

7.3.2. Creating an s3 compatible Multicloud Object Gateway backingstore

The Multicloud Object Gateway can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage’s RADOS Gateway (RGW). The following procedure shows how to create an S3 compatible Multicloud Object Gateway backing store for Red Hat Ceph Storage’s RADOS Gateway. Note that when RGW is deployed, the OpenShift Container Storage operator creates an S3 compatible backingstore for the Multicloud Object Gateway automatically.

  1. From the Multicloud Object Gateway (MCG) command-line interface, run the following NooBaa command (see the sketch after these steps):

    1. To get the access key and secret key, run the following command using your RGW user secret name:

    2. Replace the target bucket with an existing RGW bucket name. This argument tells the Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

  1. Create a CephObjectStore user. This also creates a secret containing the RGW credentials:
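    A rough sketch of these steps; all resource names are examples, and the RGW endpoint shown assumes the default internal CephObjectStore service in the openshift-storage namespace:

      # Read the RGW access key and secret key from the user secret
      oc get secret <rgw-user-secret-name> -n openshift-storage -o yaml

      # Create the S3 compatible backing store that points at the RGW endpoint
      noobaa backingstore create s3-compatible rgw-resource \
        --access-key=<RGW_ACCESS_KEY> --secret-key=<RGW_SECRET_KEY> \
        --target-bucket=<existing-rgw-bucket-name> \
        --endpoint=http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local:80 \
        -n openshift-storage

    A CephObjectStore user can be created with a custom resource similar to the following, which also produces the credentials secret:

      apiVersion: ceph.rook.io/v1
      kind: CephObjectStoreUser
      metadata:
        name: noobaa-rgw-user                      # example name
        namespace: openshift-storage
      spec:
        store: ocs-storagecluster-cephobjectstore
        displayName: MCG RGW user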

7.3.3. Adding storage resources for hybrid and Multicloud using the user interface

  1. In your OpenShift Storage console, navigate to → → select the link:

    MCG object service noobaa link

  2. Select the tab in the left, highlighted below. From the list that populates, select :

    MCG add cloud resource

  3. MCG add new connection

  4. Select the relevant native cloud provider or S3 compatible option and fill in the details:

    MCG add cloud connection

  5. Select the newly created connection and map it to the existing bucket:

    MCG map to existing bucket

Resources created in the NooBaa UI cannot be used by the OpenShift UI or the MCG CLI.

7.3.4. Creating a new bucket class

Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Claim (OBC).

Use this procedure to create a bucket class in OpenShift Container Storage.

  1. On the OpenShift Container Storage Operator page, scroll right and click the Bucket Class tab.

    Figure 7.1. OpenShift Container Storage Operator page with Bucket Class tab

    Screenshot of OpenShift Container Storage operator page with Bucket Class tab.

    1. Screenshot of create new bucket class page.

    2. In Placement Policy, select and click . You can choose either one of the options as per your requirements.

      • Screenshot of Tier 1 - Policy Type selection tab.

    3. Select at least one resource from the available list if you have selected the Tier 1 - Policy Type as Spread, and click . Alternatively, you can also create a new backing store.

      Screenshot of Tier 1 - Backing Store selection tab.

You need to select at least 2 backing stores when you select the Policy Type as Mirror in the previous step.

  1. Screenshot of bucket class settings review tab.

7.3.5. Creating a new backing store

Use this procedure to create a new backing store in OpenShift Container Storage.

  1. On the OpenShift Container Storage Operator page, scroll right and click the Backing Store tab.

    Figure 7.6. OpenShift Container Storage Operator page with backing store tab

    Screenshot of OpenShift Container Storage operator page with backing store tab.

  2. Screenshot of create new backing store page.

    1. Select a secret from the drop-down list, or create your own secret. Optionally, you can switch to the view that lets you fill in the required secrets.

      For more information on creating an OCP secret, see the section Creating the secret in the OpenShift Container Platform documentation.

      Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 7.3.1, “Adding storage resources for hybrid or Multicloud using the MCG command line interface” and follow the procedure for the addition of storage resources using a YAML.

      This menu is relevant for all providers except Google Cloud and local PVC.

7.4. Mirroring data for hybrid and Multicloud buckets

The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters.

Then you create a bucket class that reflects the data management policy, mirroring.

7.4.1. Creating bucket classes to mirror data using the MCG command-line-interface

  1. From the MCG command-line interface, run the following command to create a bucket class with a mirroring policy (see the sketch after these steps):

  2. Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations:
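  A possible form of these commands; the exact subcommand syntax can vary slightly between MCG CLI versions, and the backing store and bucket names here are examples:

    noobaa bucketclass create mirror-to-aws \
      --backingstores=azure-resource,aws-resource \
      --placement Mirror -n openshift-storage

    noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws -n openshift-storage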

7.4.2. Creating bucket classes to mirror data using a YAML

  1. Apply a YAML similar to the example after these steps. The example is a hybrid one that mirrors data between local Ceph storage and AWS:

  2. Add the additionalConfig lines shown at the end of the example to your standard Object Bucket Claim (OBC):

    For more information about OBCs, see Section 7.6, “Object Bucket Claim”.
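    A sketch of both pieces; the bucket class name and the two backing store names are examples:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        name: hybrid-mirror-bucket-class
        namespace: openshift-storage
        labels:
          app: noobaa
      spec:
        placementPolicy:
          tiers:
            - backingStores:
                - rgw-resource              # local Ceph (RGW) backing store
                - aws-resource              # AWS S3 backing store
              placement: Mirror

    And the lines to add under spec: in the Object Bucket Claim so that it uses this bucket class:

      additionalConfig:
        bucketclass: hybrid-mirror-bucket-class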

7.4.3. Configuring buckets to mirror data using the user interface

  1. In your OpenShift Storage console, navigate to → → select the link:

    MCG object service noobaa link

  2. Click the buckets icon on the left side. You will see a list of your buckets:

    MCG noobaa bucket icon

  3. MCG edit tier 1 resources

  4. Select and check the relevant resources you want to use for this bucket. In the following example, we mirror data between an on-premises Ceph RGW and AWS:

    MCG mirror relevant resources

Resources created in the NooBaa UI cannot be used by the OpenShift UI or the MCG CLI.

7.5. Bucket policies in the Multicloud Object Gateway

OpenShift Container Storage supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them.

7.5.1. About bucket policies

Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview.

7.5.2. Using bucket policies

  1. Create the bucket policy in JSON format (see the example after these steps):

    There are many available elements for bucket policies. For details on these elements and examples of how they can be used, see AWS Access Policy Language Overview.

    For more examples of bucket policies, see AWS Bucket Policy Examples.

    Instructions for creating S3 users can be found in Section 7.5.3, “Creating an AWS S3 user in the Multicloud Object Gateway”.

  2. Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket:

    Add --no-verify-ssl if you are using the default self-signed certificates.

    For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy.
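    A sketch of a policy file and of applying it; the account, bucket, and endpoint values are examples, and the principal must be a NooBaa account:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "ExampleRead",
            "Effect": "Allow",
            "Principal": ["john.doe@example.com"],
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"]
          }
        ]
      }

    Apply it with the put-bucket-policy command:

      aws --endpoint-url <mcg-s3-endpoint> --no-verify-ssl \
        s3api put-bucket-policy --bucket example-bucket --policy file://bucket-policy.json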

The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.@noobaa.io.

7.5.3. Creating an AWS S3 user in the Multicloud Object Gateway

  1. In your OpenShift Storage console, navigate to → → select the link:

    MCG object service noobaa link

  2. MCG accounts create account button

  3. Select , provide the account name (an email address, for example, john.doe@example.com), and click Next:

    MCG create account s3 user

  4. Select , for example, noobaa-default-backing-store. Select . A specific bucket or all buckets can be selected. Click :

    MCG create account s3 user2

7.6. Object Bucket Claim

An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads.

An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can’t create new buckets by default.

7.6.1. Dynamic Object Bucket Claim

Similar to Persistent Volumes, you can add the details of the Object Bucket claim to your application’s YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application.

  1. You can add more lines to the YAML file to automate the use of the Object Bucket Claim. The example below shows the mapping between the bucket claim result, which is a configuration map with the bucket data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account.

    1. Replace obc-name with the name of your Object Bucket Claim.
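    A sketch of such a job, assuming the Object Bucket Claim is named obc-name as described above; the container image is a placeholder:

      apiVersion: batch/v1
      kind: Job
      metadata:
        name: testjob
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
              - name: app
                image: <your-application-image>
                env:
                  - name: BUCKET_NAME
                    valueFrom:
                      configMapKeyRef:
                        name: obc-name          # config map created for the OBC
                        key: BUCKET_NAME
                  - name: BUCKET_HOST
                    valueFrom:
                      configMapKeyRef:
                        name: obc-name
                        key: BUCKET_HOST
                  - name: BUCKET_PORT
                    valueFrom:
                      configMapKeyRef:
                        name: obc-name
                        key: BUCKET_PORT
                  - name: AWS_ACCESS_KEY_ID
                    valueFrom:
                      secretKeyRef:
                        name: obc-name          # secret created for the OBC
                        key: AWS_ACCESS_KEY_ID
                  - name: AWS_SECRET_ACCESS_KEY
                    valueFrom:
                      secretKeyRef:
                        name: obc-name
                        key: AWS_SECRET_ACCESS_KEY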

7.6.2. Creating an Object Bucket Claim using the command line interface

When creating an Object Bucket Claim using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service.

  1. Use the command-line interface to generate the details of a new bucket and credentials. Run the following command (see the sketch after these steps):

    Replace the bucket claim name with a unique Object Bucket Claim name, for example, myappobc.

    Additionally, you can use the --app-namespace option to specify the namespace where the Object Bucket Claim configuration map and secret will be created, for example, myapp-namespace.

    The MCG command-line interface has created the necessary configuration and has informed OpenShift about the new OBC.

  2. Run the following command to view the YAML file for the new Object Bucket Claim:

  3. Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this Object Bucket Claim. The CM and the secret have the same name as the Object Bucket Claim. To view the secret:

  4. The configuration map contains the S3 endpoint information for your application.
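  A possible form of these commands, assuming the OBC is named myappobc and was created with --app-namespace myapp-namespace; adjust the namespace to wherever the configuration map and secret were created:

    noobaa obc create myappobc -n openshift-storage --app-namespace myapp-namespace
    oc get obc myappobc -n myapp-namespace -o yaml
    oc get cm myappobc -n myapp-namespace -o yaml
    oc get secret myappobc -n myapp-namespace -o yaml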

7.6.3. Creating an Object Bucket Claim using the OpenShift Web Console

You can create an Object Bucket Claim (OBC) using the OpenShift Web Console.

  1. Create Object Bucket Claims page

  2. Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu:

    Create Object Bucket Claim wizard

    The following storage classes, which were created after deployment, are available for use:


    • The RGW OBC storage class is only available with fresh installations of OpenShift Container Storage version 4.5. It does not apply to clusters upgraded from previous OpenShift Container Storage releases.

  3. Once you create the OBC, you are redirected to its detail page:

    Object Bucket Claim Details page

7.7. Scaling Multicloud Object Gateway performance by adding endpoints

The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints.

The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default:

7.7.1. S3 endpoints in the Multicloud Object Gateway

The S3 endpoint is a service that every Multicloud Object Gateway provides by default that handles the heavy lifting data digestion in the Multicloud Object Gateway. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the Multicloud Object Gateway.

7.7.2. Scaling with storage nodes

A storage node in the Multicloud Object Gateway is a NooBaa daemon container attached to one or more Persistent Volumes and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods.

  1. In the Multicloud Object Gateway user interface, from the page, click :

    MCG add storage resources button

  2. MCG deploy kubernetes pool

  3. In the step, create the target pool for the nodes that will be installed.

    MCG deploy kubernetes pool create pool

  4. In the step, configure the number of requested pods and the size of each PV. For each new pod, one PV is created.

    MCG deploy kubernetes pool configure

  5. All nodes will be assigned to the pool you chose in the first step, and can be found under → → :

    MCG storage resources overview

Chapter 8. Managing persistent volume claims

Expanding PVCs is not supported for PVCs backed by OpenShift Container Storage.

8.1. Configuring application pods to use OpenShift Container Storage

Follow the instructions in this section to configure OpenShift Container Storage as storage for an application pod.


  1. Create a Persistent Volume Claim (PVC) for the application to use (see the sketch after this procedure).


  2. Configure a new or existing application pod to use the new PVC.

      1. Under the spec: section, add a volumes: section to add the new PVC as a volume for the application pod.

      1. Under the spec: section, add a volumes: section to add the new PVC as a volume for the application pod, and click Save.
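  A minimal sketch of a PVC and an application pod that mounts it; the names, size, and image are examples, and the storage class assumes the RBD-backed class created by OpenShift Container Storage:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myapp-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
    spec:
      containers:
        - name: app
          image: <your-application-image>
          volumeMounts:
            - mountPath: /data
              name: mypd
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: myapp-pvc              # the PVC created above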

8.2. Viewing Persistent Volume Claim request status

Use this procedure to view the status of a PVC request.
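For example, assuming a PVC named myapp-pvc in the application's namespace:

  oc get pvc myapp-pvc -n <application-namespace>

The STATUS column shows whether the request is Pending or Bound.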

8.3. Reviewing Persistent Volume Claim request events

Use this procedure to review and address Persistent Volume Claim (PVC) request events.
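A possible way to review the events for a claim; the PVC name and namespace are placeholders:

  oc describe pvc <pvc-name> -n <application-namespace>
  oc get events -n <application-namespace> --sort-by=.metadata.creationTimestamp

The Events section of the describe output lists provisioning problems such as a missing storage class or exhausted capacity.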

8.4. Dynamic provisioning

8.4.1. About dynamic provisioning

The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources.

The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs.

8.4.2. Dynamic provisioning in OpenShift Container Storage

Red Hat OpenShift Container Storage is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers.

Version 4.5 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview).

In OpenShift Container Storage 4.5, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options:

The choice of which driver (RBD or CephFS) to use is based on the provisioner entry in the storageclass.yaml file.

8.4.3. Available dynamic provisioning plug-ins

OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:

For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.

Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.

The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.

In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.

Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.

Chapter 9. Replacing storage nodes

You can choose one of the following procedures to replace storage nodes:

9.1. Replacing operational nodes on Google Cloud installer-provisioned infrastructure

Use this procedure to replace an operational node on Google Cloud installer-provisioned infrastructure (IPI).

  1. This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  2. Wait for the new machine to start and transition into the Running state.

    This activity may take at least 5-10 minutes or more.

  3. Apply the OpenShift Container Storage label to the new node using any one of the following:

    • Execute the following command to apply the OpenShift Container Storage label to the new node:
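      For example, where <new-node-name> is the name of the newly created node:

        oc label node <new-node-name> cluster.ocs.openshift.io/openshift-storage=""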

  1. Execute the following command and verify that the new node is present in the output:

  2. Click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:

9.2. Replacing failed nodes on Google Cloud installer-provisioned infrastructure

Perform this procedure to replace a failed node which is not operational on Google Cloud installer-provisioned infrastructure (IPI) for OpenShift Container Storage.

  1. A new machine is automatically created; wait for the new machine to start.

    This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.

  2. Apply the OpenShift Container Storage label to the new node using any one of the following:

    • Execute the following command to apply the OpenShift Container Storage label to the new node:

  1. Execute the following command and verify that the new node is present in the output:

  2. Click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state:

Chapter 10. Replacing storage devices

10.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure

When you need to replace a device in a dynamically created storage cluster on a Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: