Add ci #11

Open · wants to merge 4 commits into base: vpc-endpoint
38 changes: 38 additions & 0 deletions .github/workflows/create-aws-test-infrastructure.yml
@@ -0,0 +1,38 @@
name: Create AWS test infrastructure
on:
  workflow_dispatch:
    inputs:
      self_hosted_runner_name:
        description: 'Self-hosted runner name'
        required: true

env:
  AWS_ACCESS_KEY_ID: ${{ secrets[format('{0}_AWS_ACCESS_KEY_ID', github.triggering_actor)] }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets[format('{0}_AWS_SECRET_ACCESS_KEY', github.triggering_actor)] }}

jobs:
  create:
    name: Create resources
    runs-on: [self-hosted, "${{ github.event.inputs.self_hosted_runner_name }}"]

    steps:
      - name: Checkout repo
        uses: actions/checkout@v3
        with:
          clean: false

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.9.2
      - name: Terraform init
        id: init
        run: terraform init
        working-directory: ./terraform/aws
      - name: Terraform plan
        id: plan
        run: terraform plan -out=tfplan
        working-directory: ./terraform/aws
      - name: Terraform Apply
        run: terraform apply -auto-approve tfplan
        working-directory: ./terraform/aws
32 changes: 32 additions & 0 deletions .github/workflows/destroy-aws-test-infrastructure.yml
@@ -0,0 +1,32 @@
name: Destroy AWS test infrastructure
on:
  workflow_dispatch:
    inputs:
      self_hosted_runner_name:
        description: 'Self-hosted runner name'
        required: true

env:
  AWS_ACCESS_KEY_ID: ${{ secrets[format('{0}_AWS_ACCESS_KEY_ID', github.triggering_actor)] }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets[format('{0}_AWS_SECRET_ACCESS_KEY', github.triggering_actor)] }}

jobs:
  destroy:
    name: Destroy resources
    runs-on: [self-hosted, "${{ github.event.inputs.self_hosted_runner_name }}"]
    steps:
      - name: Checkout repo
        uses: actions/checkout@v3
        with:
          clean: false

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.9.2
      - name: Initialize Terraform
        run: terraform init
        working-directory: ./terraform/aws
      - name: Destroy Terraform configuration
        run: terraform destroy -auto-approve
        working-directory: ./terraform/aws
25 changes: 25 additions & 0 deletions .github/workflows/test-aws-infrastructure.yml
@@ -0,0 +1,25 @@
name: Test using AWS infrastructure
on:
  workflow_dispatch:
    inputs:
      self_hosted_runner_name:
        description: 'Self-hosted runner name'
        required: true

jobs:
  test:
    name: Infrastructure tests
    runs-on: [self-hosted, "${{ github.event.inputs.self_hosted_runner_name }}"]
    steps:
      - name: Checkout repo
        uses: actions/checkout@v3
        with:
          clean: false

      - name: Check if gRPC server is ready
        run: ./test_grpc_server.sh
        working-directory: ./grpc/tests

      - name: Run tests
        run: ./run_tests.sh
        working-directory: ./grpc/tests
9 changes: 9 additions & 0 deletions .gitignore
@@ -9,3 +9,12 @@ bin/*
connector/bin/*

connector/demo/*.pem
**/.terraform
**/.terraform.lock.hcl
**/terraform.tfstate
**/terraform.tfstate.backup
.github/.secrets
.github/artifacts
.github/workflow_config.yaml
grpc/tests/logs

67 changes: 67 additions & 0 deletions CI.md
@@ -0,0 +1,67 @@
# CI Pipeline
To streamline work on the project, a CI pipeline is available to developers. It uses GitHub Actions, Terraform, and grpc_cli, among other tools, and allows all the tests to be run locally using self-hosted runners.

The pipeline consists of 3 workflows, each specified under the [.github/workflows](.github/workflows) directory. Each workflow provides a separate piece of functionality: resource creation, testing, and deletion.

## Self-hosted runner
A self-hosted runner is required to run any workflow. To create one locally, please follow [GitHub's documentation](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/adding-self-hosted-runners). A label should also be added to the runner; it will later be used to select the runner on which workflows are executed. Example label:
```
<github_handle>-<runner_name>
```

Additionally, the GitHub CLI (``gh``) is required to execute workflows from a local machine. Please follow [GitHub's documentation](https://github.com/cli/cli#installation) to install it.

## Workflow execution
Once the runner is created and running, a workflow can be executed with the following command:
```
gh workflow run <workflow_filename> --ref <branch_name> -f self_hosted_runner_name=<self_hosted_runner_label>
```
- ``workflow_filename`` - the file containing the instructions that GitHub Actions executes
- ``branch_name`` - the branch on which to execute the workflow
- ``self_hosted_runner_label`` - the previously specified label, used to choose the runner on which the workflow will be executed

## AWS access
Please specify access credentials for an AWS account using [GitHub Secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository). Two secrets are required: ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``. Please prepend both names with the appropriate GitHub handle, ``<GITHUB_HANDLE>_``.
Multiple sets of credentials can be defined in a repository to allow the use of multiple AWS accounts; the prepended GitHub handle is used to automatically select credentials based on the user that triggered the workflow.

The following secrets should be specified before any workflow is executed:
```
<GITHUB_HANDLE>_AWS_ACCESS_KEY_ID
<GITHUB_HANDLE>_AWS_SECRET_ACCESS_KEY
```

## Create workflow
Uses Terraform to spin up a sample infrastructure on AWS. The created resources can later be used for testing purposes.

Please run the following command to execute the create workflow:
```
gh workflow run create-aws-test-infrastructure.yml --ref <branch_name> -f self_hosted_runner_name=<self_hosted_runner_label>
```

Please note that the ``.tfstate`` file created during the workflow run is kept locally under the ``terraform/aws/backend`` directory of the repository checkout on the machine hosting the self-hosted runner, e.g. ``~/actions-runner/_work/awi-infra-guard/awi-infra-guard/terraform/aws/backend``.

## Test workflow
Runs checks on the created infrastructure, ensuring that the required fields are present and accessible to awi-infra-guard.

An instance of awi-infra-guard accepting gRPC calls must be running locally, on the same machine as the self-hosted runner; the tests are executed against that instance. Please refer to [README.md](README.md) for more information on running the app.

Please run the following command to execute the test workflow:
```
gh workflow run test-aws-infrastructure.yml --ref <branch_name> -f self_hosted_runner_name=<self_hosted_runner_label>
```

## Destroy workflow
Deletes the sample infrastructure created by the [Create workflow](#create-workflow).

Please run the following command to execute the destroy workflow:
```
gh workflow run destroy-aws-test-infrastructure.yml --ref <branch_name> -f self_hosted_runner_name=<self_hosted_runner_label>
```

## Workflow results
All the results and logs from the workflow runs are available on the GitHub page of the repository, under the Actions tab.

Optionally, they can be accessed using the following command:
```
gh run view
```
175 changes: 175 additions & 0 deletions List-New-Resource.md
@@ -0,0 +1,175 @@

This document serves as a comprehensive guide for adding support for listing a new cloud resource with the awi-infra-guard software. It outlines the required modifications across various components of the application. The example shows how VPCEndpoint support was added, but it can be used as a template for adding any other resource.


## Prerequisites

- Familiarity with Go, gRPC, and Protocol Buffers.
- A complete development environment set up for the awi-infra-guard project.

## File Modifications

**1. Proto File**

Update proto/cloud.proto and proto/types.proto to include definitions and service methods for VPCEndpoints.

```
// Update to proto/types.proto
message VPCEndpoint {
  string id = 1;
  string name = 2;
  string provider = 3;
  string account_id = 4;
  string vpc_id = 5;
  string region = 6;
  string state = 7;
  string type = 8;
  string service_name = 9;
  repeated string route_table_ids = 10;
  repeated string subnet_ids = 11;
  map<string, string> labels = 12;
  google.protobuf.Timestamp created_at = 13;
  google.protobuf.Timestamp updated_at = 14;
  string last_sync_time = 15;
}

// Update proto/cloud.proto and add request/response messages for the resource you are fetching
message ListVPCEndpointsRequest {
  string provider = 1;
  string vpc_id = 2;
  string region = 3;
  string account_id = 4;
}

message ListVPCEndpointsResponse {
  repeated VPCEndpoint veps = 1;
  string last_sync_time = 2;
}

// Update the CloudProvider service in proto/cloud.proto with the List RPC (method)
rpc ListVPCEndpoints (ListVPCEndpointsRequest) returns (ListVPCEndpointsResponse) {}
```
Run `make generate` in the repository root directory to regenerate the language-specific protobuf files.

**2. Type Definitions**

Add or update VPCEndpoint struct definitions in type/types.go.

```
// type/types.go

const VPCEndpointType = "VPCEndpoint"

// Fields mirror the VPCEndpoint proto message; only those used below are shown.
type VPCEndpoint struct {
	ID           string
	Provider     string
	LastSyncTime string
	// ... remaining fields from the proto message
}

func (v *VPCEndpoint) DbId() string {
	return CloudID(v.Provider, v.ID)
}

func (v *VPCEndpoint) SetSyncTime(time string) {
	v.LastSyncTime = time
}

func (v *VPCEndpoint) GetProvider() string {
	return v.Provider
}
```

**3. Server Implementation**

Implement the gRPC server methods in server/server.go and update server/translate.go to handle data translation.

```
// server/server.go
func (s *Server) ListVPCEndpoints(ctx context.Context, in *infrapb.ListVPCEndpointsRequest) (*infrapb.ListVPCEndpointsResponse, error) {
	// Add your implementation here
}

// server/translate.go
func typesVPCEndpointsToGrpc(in []types.VPCEndpoint) []*infrapb.VPCEndpoint {
	// Add your translation logic here
}
```

**4. Synchronization Logic**

Define synchronization logic for VPCEndpoints in sync/sync.go.

```
// sync/sync.go
func (s *Syncer) syncVPCEndpoints() {
	genericCloudSync[*types.VPCEndpoint](s, types.VPCEndpointType, func(ctx context.Context, cloudProvider provider.CloudProvider, accountID string) ([]types.VPCEndpoint, error) {
		return cloudProvider.ListVPCEndpoints(ctx, &infrapb.ListVPCEndpointsRequest{AccountId: accountID})
	}, s.logger, s.dbClient.ListVPCEndpoints, s.dbClient.PutVPCEndpoint, s.dbClient.DeleteVPCEndpoint)
}
```

**5. Provider Interface**

Ensure the CloudProvider interface in provider/provider.go supports the ListVPCEndpoints method.

```
// provider/provider.go
ListVPCEndpoints(ctx context.Context, input *infrapb.ListVPCEndpointsRequest) ([]types.VPCEndpoint, error)
```

**6. Database Layer**

Implement methods for VPCEndpoints in db/db.go and db/db_strategy.go.

```
// db/db.go
type Client interface {
	ListVPCEndpoints() ([]*types.VPCEndpoint, error)
	PutVPCEndpoint(*types.VPCEndpoint) error
	GetVPCEndpoint(string) (*types.VPCEndpoint, error)
	DeleteVPCEndpoint(string) error
}

// db/db_strategy.go
func (p *providerWithDB) ListVPCEndpoints(ctx context.Context, params *infrapb.ListVPCEndpointsRequest) ([]types.VPCEndpoint, error) {
	// Implement interaction logic here
}
```

**7. BoltDB Client Implementation**

Add methods to manage VPCEndpoints in BoltDB within boltdb/bolt_client.go.

```
// boltdb/db.go
// Add a table name for your resource.
const vpcEndpointTable = "vpcEndpoints"

var tableNames = []string{
	// ...
	vpcEndpointTable,
	// ...
}

// boltdb/bolt_client.go
func (client *boltClient) PutVPCEndpoint(vpce *types.VPCEndpoint) error {
	// Implement put logic here
}
```

**8. Cloud Provider Specific Logic**

Implement cloud-specific logic to list VPCEndpoints in aws/listVPCEndpoint.go, azure.go, and gcp.go. The CloudProvider interface is not satisfied unless the method is defined for every provider, so even when you do not have an implementation for a provider, add a stub definition to avoid compilation errors.

```
func (c *Client) ListVPCEndpoints(ctx context.Context, params *infrapb.ListVPCEndpointsRequest) ([]types.VPCEndpoint, error) {
	// Provider-specific listing logic; a stub provider can simply return an error.
	return nil, fmt.Errorf("not implemented")
}
```
1 change: 1 addition & 0 deletions aws/aws.go
@@ -40,6 +40,7 @@ const providerName = "AWS"
type Client struct {
	defaultRegion    string
	defaultAccountID string
	accountID        string
	profiles         []types.Account
	clients          map[string]awsRegionalClientSet
