## AWS

### Create an EC2 instance

* Select the eu-west-2 (London) region from the top right of the navigation bar
* Click on "Launch instance"
* Choose the Amazon Linux 2 AMI (HVM), Kernel 5.10, 64-bit (x86) machine and click "Select"
* Choose t2.2xlarge and click "Next: Configure Instance Details"
* Choose the default subnet eu-west-2c
* For the IAM role, choose the existing trainings-ec2-dev role and click "Next: Add Storage"
* 8 GB is fine; click "Next: Add Tags"
* Add the following tags:
    * Name: [Unique Instance name]
    * Tenable: FA
    * ServiceOwner: [firstname.lastname]
    * ServiceCode: PABCLT
* Add a security group: select the existing security group IAStrainings-ec2-mo
* Click "Review and Launch", then select "Launch"
* You will be prompted to set a key pair (to allow SSH). Create a new key pair and download it.

This will create the instance. To see it running, go to Instances; the instance state should be "Running".

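The console steps above can also be scripted. The sketch below is a hedged boto3 equivalent, not part of the official instructions: the AMI ID is a placeholder and the subnet/role values are account-specific.

```python
def build_tags(name, owner):
    """Assemble the tag set required above; ServiceCode and Tenable are fixed."""
    return [
        {"Key": "Name", "Value": name},
        {"Key": "Tenable", "Value": "FA"},
        {"Key": "ServiceOwner", "Value": owner},
        {"Key": "ServiceCode", "Value": "PABCLT"},
    ]


def launch_instance(name, owner, key_name):
    """Launch a t2.2xlarge in eu-west-2 with the training IAM role and tags."""
    import boto3  # imported here so build_tags works even without boto3 installed

    ec2 = boto3.client("ec2", region_name="eu-west-2")
    return ec2.run_instances(
        ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder: Amazon Linux 2 AMI ID
        InstanceType="t2.2xlarge",
        KeyName=key_name,
        MinCount=1,
        MaxCount=1,
        IamInstanceProfile={"Name": "trainings-ec2-dev"},
        TagSpecifications=[
            {"ResourceType": "instance", "Tags": build_tags(name, owner)},
        ],
    )
```

Launching via the console as described above is equally fine; the script is only useful if you need to recreate instances repeatedly.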
### SSH to the instance from VDI

* Save the key (.pem) to ~/.ssh and set its permissions: chmod 0400 ~/.ssh/your_key.pem
* Open ~/.ssh/config and add the following:

```
Host ec2-*.eu-west-2.compute.amazonaws.com
    IdentityFile ~/.ssh/your_key.pem
    User ec2-user
```

* Find the public IPv4 DNS and SSH in using it: ssh ec2-<ip address>.eu-west-2.compute.amazonaws.com. The public IPv4 DNS is shown in the instance details on AWS: click on your instance and it will open the details.

* Remember to shut down the instance when you are not using it; this saves cost.
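Shutting down can also be scripted. A minimal boto3 sketch, assuming you know the instance ID (shown in the instance details):

```python
def stop_instance(instance_id, region="eu-west-2"):
    """Stop (not terminate) an EC2 instance so it can be restarted later."""
    import boto3  # imported here so the module loads even without boto3 installed

    ec2 = boto3.client("ec2", region_name=region)
    ec2.stop_instances(InstanceIds=[instance_id])
```

Note that stop_instances keeps the instance and its EBS volume around for later restarts; only terminating it deletes the instance entirely.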
### Create an S3 bucket

* Go to the S3 service and press "Create bucket"
* Name the bucket
* Set the region to EU (London) eu-west-2
* Add tags:
    * Name: [name of bucket or any unique name]
    * ServiceOwner: [your-name]
    * ServiceCode: PABCLT
    * Tenable: FA
* Click on "Create bucket"

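For repeatability, the same bucket can be created with boto3. This is a hedged sketch (the bucket name and owner are placeholders), not part of the official steps:

```python
def bucket_tagging(bucket_name, owner):
    """Tag set matching the console steps; ServiceCode and Tenable are fixed."""
    return {"TagSet": [
        {"Key": "Name", "Value": bucket_name},
        {"Key": "ServiceOwner", "Value": owner},
        {"Key": "ServiceCode", "Value": "PABCLT"},
        {"Key": "Tenable", "Value": "FA"},
    ]}


def create_bucket(bucket_name, owner):
    """Create a bucket in eu-west-2 and apply the required tags."""
    import boto3  # imported here so bucket_tagging works without boto3 installed

    s3 = boto3.client("s3", region_name="eu-west-2")
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
    )
    s3.put_bucket_tagging(
        Bucket=bucket_name,
        Tagging=bucket_tagging(bucket_name, owner),
    )
```

Outside us-east-1, create_bucket requires the LocationConstraint shown above, which is why the region appears twice.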
### Key configuration

The AWS scripts run only when the config files contain up-to-date keys. To update the keys:

* Go to AB climate training dev --> Administrator access --> command line or programmatic access
* Copy the keys under "Option 1: Set AWS environment variables"
* In VDI, paste these keys into ~/.aws/config (replacing any existing ones)
* Add [default] as the first line
* Copy the keys under "Option 2: Add a profile to your AWS credentials file"
* In VDI, paste the keys into the credentials file ~/.aws/credentials (remove the first copied line, which looks something like: [198477955030_AdministratorAccess])
* Add [default] as the first line

The config and credentials files should look like this (with your own keys):

```
[default]
export AWS_ACCESS_KEY_ID="ASIAS4NRVH7LD2RRGSFB"
export AWS_SECRET_ACCESS_KEY="rpI/dxzQWhCul8ZHd18n1VW1FWjc0LxoKeGO50oM"
export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjEGkaCWV1LXdlc3QtMiJH"
```

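A quick stdlib check that a file gained the [default] section as its first step describes — a small sketch with illustrative (not real) key values:

```python
import configparser


def has_default_profile(text):
    """Return True if AWS config/credentials file text defines a [default] section."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return parser.has_section("default")


# Illustrative file contents in the same shape as above (placeholder values)
example = '''[default]
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="secretEXAMPLE"
'''
```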
### Loading data into an S3 bucket from VDI (using boto3)

To upload file(s) to S3, use: /aws-scripts/s3_file_upload.py
To upload directory(s) to S3, use: /aws-scripts/s3_bulk_data_upload.py

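The referenced scripts are not reproduced here, but a bulk upload of this kind typically looks like the following sketch (function names, bucket, and paths are placeholders of my own, not the actual script):

```python
from pathlib import Path


def iter_upload_pairs(local_dir, prefix=""):
    """Yield (local_path, s3_key) for every file under local_dir."""
    root = Path(local_dir)
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        yield path, prefix + path.relative_to(root).as_posix()


def upload_directory(local_dir, bucket, prefix=""):
    """Upload a directory tree to S3, preserving relative paths as keys."""
    import boto3  # imported here so iter_upload_pairs works without boto3 installed

    s3 = boto3.client("s3")
    for path, key in iter_upload_pairs(local_dir, prefix):
        s3.upload_file(str(path), bucket, key)
```

upload_file handles multipart uploads for large files automatically, which is why the real scripts can stay short.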
### AWS Elastic Container Registry

The following instructions cover creating an image repository on ECR and uploading a container image.

* SSH to the previously created EC2 instance and make an empty Git repo:

```
sudo yum install -y git
git init
```

* On VDI, run the following command to push the PyPrecis repo containing the Dockerfile to the EC2 instance:

```
git push <ec2 host name>:~
```

* Now check out the branch on EC2: git checkout [branch-name]
* Install Docker and start the Docker service:

```
sudo amazon-linux-extras install docker
sudo service docker start
```

* Build the Docker image:

```
sudo docker build .
```

* Go to the AWS ECR console, click "Create repository", make it private and name it

* Once it is created, press "View push commands"

* Copy the commands and run them on the EC2 instance; this pushes the container image to the repository. If you get a "permission denied" error, add "sudo" before "docker" in the commands.

### AWS SageMaker: run a notebook using a custom kernel

The instructions below follow this tutorial:
https://aws.amazon.com/blogs/machine-learning/bringing-your-own-custom-container-image-to-amazon-sagemaker-studio-notebooks/

* Go to SageMaker and "Open SageMaker domain"
* Add a user
    * Name it and select the AmazonSageMaker execution role (the default one)

* Once the user is created, go to "Attach image"
* Select "New image" and add the image URI (copy it from the image repo)
* Give the new image a name and display name, select the SageMaker execution role, add tags, and attach the image
* Add a kernel name and display name (both can be the same)
* Now launch App -> Studio, and it will open the notebook dashboard.
* Select a Python notebook and add your custom-named kernel