Commit 4bfe74c

chore: Move static assets to /static (#1777)
1 parent d3b1ff7 commit 4bfe74c

319 files changed: 162 additions & 166 deletions
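The change is mechanical across the affected files: Docusaurus serves anything under `website/static` at the site root, so relative `./assets/` references become absolute `/docs/...` URLs in Markdown images, and `require('./assets/...')` calls in MDX become `require('@site/static/docs/...')`. A rough sketch of the rewrite as a hypothetical helper (the function name and approach are assumptions for illustration, not the tool actually used in this commit):

```python
import re

def rewrite_asset_ref(line: str, doc_dir: str) -> str:
    """Rewrite relative ./assets/ image references to their /static/ equivalents.

    doc_dir is the document's directory under website/docs, e.g. "aiml/chatbot".
    Hypothetical helper illustrating the pattern applied by this commit.
    """
    # Markdown images: ![alt](./assets/foo.webp) -> ![alt](/docs/<doc_dir>/foo.webp)
    # (also handles references written without the leading "./")
    line = re.sub(r"\((?:\./)?assets/([^)]+)\)", rf"(/docs/{doc_dir}/\1)", line)
    # MDX requires: require('./assets/foo.webp') -> require('@site/static/docs/<doc_dir>/foo.webp')
    line = re.sub(r"require\('\./assets/([^']+)'\)",
                  rf"require('@site/static/docs/{doc_dir}/\1')", line)
    return line
```

Each diff below is an instance of this one substitution, with `doc_dir` matching the file's location under `website/docs`.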


website/docs/aiml/chatbot/testing.md

Lines changed: 2 additions & 2 deletions
@@ -68,13 +68,13 @@ http://k8s-ui-uinlb-647e781087-6717c5049aa96bd9.elb.us-west-2.amazonaws.com
 A "Chat" button will be visible in the bottom-right corner of the screen:

 <Browser url="http://k8s-ui-uinlb-647e781087-6717c5049aa96bd9.elb.us-west-2.amazonaws.com">
-<img src={require('./assets/home-chat.webp').default}/>
+<img src={require('@site/static/docs/aiml/chatbot/home-chat.webp').default}/>
 </Browser>

 Clicking this button will display a chat window which you can use to send messages to the retail store assistant:

 <Browser url="http://k8s-ui-uinlb-647e781087-6717c5049aa96bd9.elb.us-west-2.amazonaws.com">
-<img src={require('./assets/chat-bot.webp').default}/>
+<img src={require('@site/static/docs/aiml/chatbot/chat-bot.webp').default}/>
 </Browser>

 ## Conclusion

website/docs/aiml/inferentia/wrapup.md

Lines changed: 2 additions & 2 deletions
@@ -7,8 +7,8 @@ In the previous sections we've seen how we can use Amazon EKS to train models fo

 For training the model we would want to use the DLC container as our base image and add our Python code to it. We would then store this container image in our container repository like Amazon ECR. We would use a Kubernetes Job to run this container image on EKS and store the generated model to S3.

-![Build Model](./assets/CreateModel.webp)
+![Build Model](/docs/aiml/inferentia/CreateModel.webp)

 For running inference against our model we would want to modify our code to allow other applications or users to retrieve the classification results from the model. This could be done by creating a REST API that we can call and responds with our classification results. We would run this application as a Kubernetes Deployment within our cluster using the AWS Inferentia resource requirement: `aws.amazon.com/neuron`.

-![Inference Model](./assets/Inference.webp)
+![Inference Model](/docs/aiml/inferentia/Inference.webp)

website/docs/aiml/q-cli/q-cli-setup.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ To see the tools offered by the EKS MCP server, run:

 You should see output similar to this:

-![list-mcp-tools](./assets/list-mcp-tools.jpg)
+![list-mcp-tools](/docs/aiml/q-cli/list-mcp-tools.jpg)

 The output shows:

website/docs/automation/continuousdelivery/codepipeline/deploy_application.md

Lines changed: 3 additions & 3 deletions
@@ -27,7 +27,7 @@ $ while [[ "$(aws codepipeline list-pipeline-executions --pipeline-name ${EKS_CL

 Once complete the pipeline will show the stages have succeeded.

-![Pipeline complete](./assets/pipeline-complete.webp)
+![Pipeline complete](/docs/automation/continuousdelivery/codepipeline/pipeline-complete.webp)

 Now we can review the changes that have been made by the pipeline. First we can check the ECR repository:

@@ -39,7 +39,7 @@ Now we can review the changes that have been made by the pipeline. First we can

 Open the repository `retail-store-sample-ui` and inspect the image that has been pushed.

-![Image 1](assets/ecr_image.webp)
+![Image 1](/docs/automation/continuousdelivery/codepipeline/ecr_image.webp)

 We can also verify that a Helm release has been installed in the cluster:

@@ -76,6 +76,6 @@ $ kubectl get deployment -n ui ui -o json | jq -r '.spec.template.spec.container

 We can also click the `deploy_eks` action to view more details such as the logs:

-![Pipeline deploy detail](assets/pipeline-deploy-detail.webp)
+![Pipeline deploy detail](/docs/automation/continuousdelivery/codepipeline/pipeline-deploy-detail.webp)

 With that we have successfully create a pipeline that builds our application container image and deploys it to an EKS cluster using a Helm chart.

website/docs/automation/continuousdelivery/codepipeline/pipeline_setup.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ $ aws eks associate-access-policy --cluster-name ${EKS_CLUSTER_NAME} \

 Let's explore the CodePipeline that was set up for us, and refer to the CloudFormation that was used to create it.

-![Pipeline overview](./assets/pipeline.webp)
+![Pipeline overview](/docs/automation/continuousdelivery/codepipeline/pipeline.webp)

 You can use the button below to navigate to the pipeline in the console:

website/docs/automation/controlplanes/ack/index.md

Lines changed: 1 addition & 1 deletion
@@ -32,4 +32,4 @@ In this lab, we'll use ACK to provision these services and create secrets and co

 For learning purposes, we're using helm to install the ACK controller. Another option is to use Terraform that allows for rapid deployment of AWS Service Controllers to your cluster. For more information, see the [ACK Terraform module documentation](https://registry.terraform.io/modules/aws-ia/eks-ack-addons/aws/latest#module_dynamodb).

-![EKS with DynamoDB](./assets/eks-workshop-ddb.webp)
+![EKS with DynamoDB](/docs/automation/controlplanes/ack/eks-workshop-ddb.webp)

website/docs/automation/controlplanes/ack/provision-resources.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ sidebar_position: 5

 By default, the **Carts** component in the sample application uses a DynamoDB local instance running as a pod in the EKS cluster called `carts-dynamodb`. In this section of the lab, we'll provision an Amazon DynamoDB cloud-based table for our application using Kubernetes custom resources and configure the **Carts** deployment to use this newly provisioned DynamoDB table instead of the local copy.

-![ACK reconciler concept](./assets/ack-desired-current-ddb.webp)
+![ACK reconciler concept](/docs/automation/controlplanes/ack/ack-desired-current-ddb.webp)

 Let's examine how we can create the DynamoDB Table using a Kubernetes manifest:

website/docs/automation/controlplanes/crossplane/compositions/claims.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ eks-workshop-carts-crossplane-bt28w-lnb4r True True eks-workshop-carts-

 Now, let's understand how the DynamoDB table is deployed using this claim:

-![Crossplane reconciler concept](../assets/ddb-claim-architecture.webp)
+![Crossplane reconciler concept](/docs/automation/controlplanes/crossplane/ddb-claim-architecture.webp)

 When querying the claim `DynamoDBTable` deployed in the carts namespace, we can observe that it points to and creates a Composite Resource (XR) `XDynamoDBTable`:

website/docs/automation/controlplanes/crossplane/how-it-works.md

Lines changed: 1 addition & 1 deletion
@@ -23,6 +23,6 @@ Here, `upbound-provider-family-aws` represents the Crossplane provider for Amazo

 Crossplane simplifies the process for developers to request infrastructure resources using Kubernetes manifests called claims. As illustrated in the diagram below, claims are the only namespace-scoped Crossplane resources, serving as the developer interface and abstracting implementation details. When a claim is deployed to the cluster, it creates a Composite Resource (XR), a Kubernetes custom resource representing one or more cloud resources defined through templates called Compositions. The Composite Resource then creates one or more Managed Resources, which interact with the AWS API to request the creation of the desired infrastructure resources.

-![Crossplane claim](./assets/claim-architecture-drawing.webp)
+![Crossplane claim](/docs/automation/controlplanes/crossplane/claim-architecture-drawing.webp)

 This architecture allows for a clear separation of concerns between developers, who work with high-level abstractions (claims), and platform teams, who define the underlying infrastructure implementations (Compositions and Managed Resources).

website/docs/automation/controlplanes/crossplane/index.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ You can view the Terraform that applies these changes [here](https://github.com/

 Crossplane extends your Kubernetes cluster to support orchestrating any infrastructure or managed service. It allows you to compose Crossplane's granular resources into higher-level abstractions that can be versioned, managed, deployed, and consumed using your favorite tools and existing processes.

-![EKS with Dynamodb](./assets/eks-workshop-crossplane.webp)
+![EKS with Dynamodb](/docs/automation/controlplanes/crossplane/eks-workshop-crossplane.webp)

 With Crossplane, you can:
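After a bulk move like this, it is worth checking that every rewritten reference still resolves to a file under `website/static`, since a missed rename would silently break an image. A hypothetical check (function name and directory layout are assumptions based on the paths in this commit, not part of it):

```python
import re
from pathlib import Path

def find_broken_static_refs(docs_dir: str, static_dir: str) -> list[str]:
    """Return absolute /docs/... image references with no matching file under static/.

    Hypothetical helper; assumes the layout this commit produces, where docs
    reference images as /docs/<path>/<name>.<ext> stored at static/docs/<path>/.
    """
    ref_pattern = re.compile(r"\((/docs/[^)]+\.(?:webp|jpg|png))\)")
    broken = []
    for md in sorted(Path(docs_dir).rglob("*.md")):
        for ref in ref_pattern.findall(md.read_text(encoding="utf-8")):
            # A /docs/... URL maps to static/docs/... on disk
            if not (Path(static_dir) / ref.lstrip("/")).is_file():
                broken.append(ref)
    return broken
```

Running such a check over `website/docs` against `website/static` would confirm the 319 files in this commit stay consistent.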
