
Migrating an Existing WordPress Site to AWS Lightsail

Introduction

In this guide we will walk through the steps to migrate an existing WordPress website to a new AWS Lightsail instance.

Create a New Lightsail Instance

Log into the AWS Management Console
Click Create Instance
Choose the AWS Region, Platform, and Blueprint

As this is a WordPress example, we have chosen London as the Region, Linux as the Platform, and the Apps + OS Blueprint with WordPress.

If necessary generate a new SSH key pair and be sure to download the private key PEM file.
Choose your Instance plan
Give your instance a name
Click Create Instance

Setup your Lightsail Instance – Networking

In Lightsail > Instances, click your instance
Here you can connect to the instance via SSH

Click Connect using SSH

If you are keeping this instance, you should create a static IP address for it so that things like DNS can point to the server.

Click Networking > Create Static IP

Select the instance to attach this static IP address to and give your static IP a name
Click Create
The IP address is added to the Lightsail instance instantly. If you had any SSH sessions open previously, you will need to reconnect.

Note that you can have up to five static IP addresses per account, and they are free only while attached to running instances.
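If you prefer to script this step, the same allocate-and-attach flow is available via the Lightsail API. Here is a minimal boto3 sketch (the region, static IP name and instance name are placeholders for your own values):

import boto3

lightsail = boto3.client("lightsail", region_name="eu-west-2")

# Allocate a static IP, then attach it to the instance
lightsail.allocate_static_ip(staticIpName="wordpress-static-ip")
lightsail.attach_static_ip(
    staticIpName="wordpress-static-ip",
    instanceName="my-wordpress-instance",
)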

Setup your Lightsail Instance – Operating System

In the Lightsail console home

Click your instance

Click connect using SSH

In the console, obtain the password of the WordPress ‘user’ account generated by Bitnami

Command:

cat bitnami_application_password

Copy the password locally and keep it safe for later use

Currently the Bitnami WordPress image limits uploads to 40MB. You can edit the php.ini file and increase this, as per this Bitnami support doc

Browse to /opt/bitnami/php/etc/php.ini

STEP 1:

In php.ini find:

; Maximum size of POST data that PHP will accept.

post_max_size = 40M

And change it to:

post_max_size = 512M

STEP 2:

In php.ini find:

; Maximum allowed size for uploaded files.

upload_max_filesize = 40M

And change it to:

upload_max_filesize = 512M
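If you prefer to script the two edits, here is a minimal sketch in Python (assuming the php.ini path above and whatever the current values are; run it with sufficient privileges to write the file):

import re

# Bitnami default php.ini path from this guide
PHP_INI = "/opt/bitnami/php/etc/php.ini"

with open(PHP_INI) as f:
    config = f.read()

# Raise both limits to 512M, whatever their current values
config = re.sub(r"(?m)^post_max_size\s*=.*$", "post_max_size = 512M", config)
config = re.sub(r"(?m)^upload_max_filesize\s*=.*$", "upload_max_filesize = 512M", config)

with open(PHP_INI, "w") as f:
    f.write(config)

Then restart Apache as below.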

Restart Apache

Command:

sudo /opt/bitnami/ctlscript.sh restart

Optional

Update the All-in-One WP Migration plugin with its file extension add-on so that file uploads work correctly.

Known error: without this, uploading a .wpress file from the local device would fail.

In the console, browse to the plugins directory

Command:

cd /opt/bitnami/apps/wordpress/htdocs/wp-content/plugins

Download the file extension add-on

Command:

wget https://import.wp-migration.com/all-in-one-wp-migration-file-extension.zip

Unzip the file

Command:

unzip all-in-one-wp-migration-file-extension.zip

Log out of the console

 

Setup your Lightsail Instance – WordPress Settings

Use your internet browser to connect to the static IP address of your instance

http://3.9.218.163/wp-admin

User: user

Password: the password you obtained in the previous step from the bitnami_application_password file in the console

Perform all WordPress updates – if available
Perform all plugin updates – if available
Update all themes – if available
Activate the All-in-One WP Migration plugin
Click the Migration Import option in the left sidebar
Note the upload maximum is now 512M (and not 40M)
Upload your .wpress file from a saved copy of your existing WordPress instance

(If you don't have one, you can install the same WP Migration plugin into your existing WordPress instance and ‘export’ it first, so you can then import it into this new instance)

Warning: You must ALSO know the existing WordPress instance's username and password, as the new instance's credentials will be overwritten by them when the import completes.

Click Proceed if happy to continue

Once complete, be sure to click ‘save permalinks structure’ and then click Finish
Update your permalinks to your preferred format for your blog posts: choose the format, then click Save

Site import complete

Final Steps

You should update the Lightsail instance(s) with

sudo apt-get update && sudo apt-get upgrade

You should update any additionally imported WordPress themes or plugins and test that the website is functioning correctly.

This especially applies to things like contact forms, reauthorising plugins, and moving Jetpack stats to this new site.

You may also need to import other things, like an ads.txt file or SSL certificates.

See this article on how to create and import an SSL Cert into your Lightsail instance.

You should go back to the Lightsail console and take snapshots of your instance(s)

Click Create Snapshot
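Snapshots can also be scripted; a minimal boto3 sketch (the instance and snapshot names are placeholders):

import boto3

lightsail = boto3.client("lightsail", region_name="eu-west-2")

# Take a point-in-time snapshot of the instance
lightsail.create_instance_snapshot(
    instanceName="my-wordpress-instance",
    instanceSnapshotName="my-wordpress-snapshot-1",
)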

You should change your DNS settings to point to the static IP address of your new Lightsail instance BEFORE you terminate your old website.

DNS time-to-live (TTL) settings can be long, so some people may have cached the DNS record for your website with the old IP address for hours or even days.
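To check what a given client currently resolves, a one-liner with Python's standard library is enough (substitute your own domain, and compare the result against your new static IP):

import socket

print(socket.gethostbyname("www.example.com"))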

 

Lessons Learned From Enabling AWS GuardDuty Across Multiple Accounts and All Regions

Summary

AWS GuardDuty offers enhanced security analysis of your AWS services, with findings on how to better protect them based on your usage patterns and well-known vulnerabilities.

General Observations

  1. GuardDuty analyses VPC Flow Logs, AWS CloudTrail events and AWS DNS logs.
  2. Console notification for new findings – as they are analysed and discovered.
  3. Notification frequency of new findings (to CloudWatch) is every 5 minutes.
  4. Configurable notification frequency of 15 mins, 60 mins or 6 hours (to CloudWatch) for updates to existing findings (counts for the same finding / vulnerability).
  5. 1000 member accounts supported.
  6. Definitions for trusted IPs and the CloudWatch notification time / period are set and enforced at the master only, and flow down to sub accounts.
  7. Max 2000 trusted IPs in a single list.
  8. Max 250,000 threat IPs in a single list.
  9. Master accounts cannot be a member of any other account.
  10. Sub accounts can view findings related to their own account, but are unable to archive them. The master account can view all findings for all accounts and archive them, which removes them from the sub accounts' findings view also.
  11. GuardDuty can be suspended on the master and on sub accounts (from the master); sub accounts can manage and re-enable suspension on themselves only.
  12. GuardDuty detectors are region specific and need to be enabled on a per-region basis, including any regional master/sub collector services.
    • Regional master accounts are required for aggregation from sub accounts in the same region.
      • e.g. you cannot have one master GuardDuty collector in EU-WEST-1 and have AWS EU-WEST-2 regions send their GuardDuty findings to it.
      • You must have a GuardDuty master enabled in the EU-WEST-2 region and invite the sub account again for every region in which you want to enable GuardDuty.
  13. In a master/sub-account setup, the trusted and threat lists are applied at the master as a single list only.
  14. Log data from CloudTrail, VPC Flow Logs and DNS is encrypted in transit to GuardDuty; after analysis the logs are discarded.
  15. Analysis of flow logs starts immediately from the service being enabled; it consumes events directly as a duplicate stream of flow logs. This does not modify any existing flow log configurations.
  16. DNS analysis will only work if using AWS DNS resolvers. Other DNS services will not be ‘captured’ or analysed.

 

Rollout

AWS documentation points to a CloudFormation template for enabling GuardDuty. It's only available as a template in the us-east-1 region, so be sure to select this region.

CloudFormation

If you just want to set up GuardDuty services in your accounts via an AWS CloudFormation StackSet, use the AWS-provided CF template ‘Enable Amazon GuardDuty’

https://console.aws.amazon.com/cloudformation/stacksets/home?region=us-east-1#/stacksets/new

Caveats

If you don't specify a master ID, the CF template simply enables GuardDuty in the stackset accounts and regions only.

You have to manually invite all your ‘sub accounts’ from all regions first before this stackset will work with the ‘masterId’ (as each sub account has to have an invitation waiting from the master)

CF Message:

The Amazon GuardDuty master account ID. If you specify the master account ID, this stack set creates a GuardDuty detector in each specified account and accepts the GuardDuty membership invitation sent to each of the specified accounts by this master account. If this value is specified, before you can create this stack set, all accounts in all regions to which this stack set template is to be applied must already have an invitation from this master GuardDuty account and must NOT have a detector already created.

Python

If you want a full cross-account, multi-region master/subscriber setup from scratch, use the AWS-provided Python script.

It creates a master detector in your specified master account, subscribes each sub account (and even every sub account region) to the master, accepts the invite and links the accounts.

The script can even be run from an unrelated ‘build or deploy’ account. Perfect!

https://github.com/aws-samples/amazon-guardduty-multiaccount-scripts

AWS Support recommended using the python script as the primary deployment mechanism.
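Under the hood the script drives a handful of GuardDuty API calls. As a rough single-region illustration of the same flow, a boto3 sketch might look like this (the account ID and email are placeholders, and the second client is assumed to run with sub account credentials):

import boto3

# Run with credentials for the master account
master = boto3.client("guardduty", region_name="eu-west-1")

# 1. Create a detector in the master account (one per region)
detector_id = master.create_detector(Enable=True)["DetectorId"]

# 2. Register the sub account as a member and invite it
master.create_members(
    DetectorId=detector_id,
    AccountDetails=[{"AccountId": "111111111111", "Email": "root@example.com"}],
)
master.invite_members(DetectorId=detector_id, AccountIds=["111111111111"])

# 3. With credentials for the sub account: create its detector and accept the invite
member = boto3.client("guardduty", region_name="eu-west-1")
member_detector_id = member.create_detector(Enable=True)["DetectorId"]
invite = member.list_invitations()["Invitations"][0]
member.accept_invitation(
    DetectorId=member_detector_id,
    MasterId=invite["AccountId"],
    InvitationId=invite["InvitationId"],
)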

There are currently no Ansible modules for AWS GuardDuty

Python Script Notes

The disable script breaks a little, but a fix is provided in the issues list.

The enable script emails the root account address for each account, and for EVERY region that is listed to be ‘enabled’, so for a large-scale deployment a great many emails may be delivered!

How to configure an AWS IoT Button

Cloud Video Walk Through – AWS IoT Button

In this video series mastersof.cloud walks through the steps of creating your own ‘personal alarm’ button: it details how to configure an AWS IoT Button, join it to your wireless network, connect it to an Amazon account, and assign it to trigger a Lambda function which finally sends a notification via SNS (an SMS message in this example).
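The Lambda side of this pattern is tiny. Here is a minimal sketch (the phone number is a placeholder, and SNS SMS must be available in your chosen region, as noted in point 1 below):

import boto3

sns = boto3.client("sns")

def lambda_handler(event, context):
    # The IoT button includes clickType (SINGLE, DOUBLE or LONG) in the event payload
    click = event.get("clickType", "SINGLE")
    sns.publish(
        PhoneNumber="+447700900000",  # placeholder number
        Message=f"Personal alarm button pressed ({click})",
    )
    return {"status": "sent"}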

 

Important Points to note

  1. Take note of your region when you register your button, as not all SNS features are available in all regions (SMS capability is not available in AWS SNS in London eu-west-2, for example)
  2. This configuration differs from the AWS IoT 1-Click system, so be sure you have a ‘blue’ AWS IoT Button and not something else.
  3. The button can only use a 2.4GHz Wi-Fi network
  4. There are reports of the buttons having a relatively limited lifetime (~2000 clicks, 90 days), so it may be best to program this for something that doesn't need to be clicked every 2 minutes 🙂

They have more videos here on their YouTube channel

Enjoy!

Script a custom AWS AppStream image

Scenario

Let's script a custom AWS AppStream image.

A customer wanted to set up AWS AppStream 2.0 image automation for their AppStream image creation, especially as there were situations where two or more images were potentially required, plus monthly updates (at minimum), so they needed a repeatable, consistent solution.

 

Solution

You can only automate this so far within the current AppStream 2.0 limitations.

You can script the creation of the image builders, the builder image itself, and the creation of the fleets and stacks then based on this image.

Short of joining your image builder to a domain that launches a script at computer startup, there is no immediate way to call a ‘zero touch build’ for AppStream images, and no current way to automate the Image Builder test and optimise wizard (the wizard you run to seal and snapshot the image).

Things to Consider Scripting / Adding

  • IE ESC (Internet Explorer Enhanced Security Configuration) disable for all users
  • Local timezone and regional settings (particularly if outside the US and your regional settings are not available for selection from the end user interface), for example UK English and timezone:
Set-Culture en-GB
Set-WinSystemLocale en-GB
Set-TimeZone "GMT Standard Time"
  • If your images won't be domain joined then
    • Create a login script to apply user settings at ‘login’
    • If you manipulate local Group Policy (gpedit.msc), use the Microsoft tool LGPO.exe to back up and restore the settings easily
    • You can publish Windows Explorer in the Image Assistant via a batch file with content
cd "%userprofile%\My Files\Temporary Files"
start .

Examples

You can automate the image builder application injection using sqlite.exe, per below.

example.sql file to pass into C:\ProgramData\Amazon\Photon\PhotonAppCatalog.sqlite

INSERT INTO Applications (Name, AbsolutePath, DisplayName, IconFilePath, LaunchParameters) VALUES ('My Intranet Website', 'C:\Program Files (x86)\internet explorer\iexplore.exe', 'Intranet', 'C:\ProgramData\Amazon\Photon\AppCatalogHelper\AppIcons\ie.png', 'https://www.myintranet.org.uk')
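If you would rather avoid shelling out to a sqlite client, here is a minimal sketch of the same insert using Python's built-in sqlite3 module (the database path and column names are taken from the example above):

import sqlite3

# AppStream application catalog database, path as per the example above
DB = r"C:\ProgramData\Amazon\Photon\PhotonAppCatalog.sqlite"

conn = sqlite3.connect(DB)
conn.execute(
    "INSERT INTO Applications "
    "(Name, AbsolutePath, DisplayName, IconFilePath, LaunchParameters) "
    "VALUES (?, ?, ?, ?, ?)",
    (
        "My Intranet Website",
        r"C:\Program Files (x86)\internet explorer\iexplore.exe",
        "Intranet",
        r"C:\ProgramData\Amazon\Photon\AppCatalogHelper\AppIcons\ie.png",
        "https://www.myintranet.org.uk",
    ),
)
conn.commit()
conn.close()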

Any questions or comments, get in touch using the social media links at the top of the website and we will do our best to help! 😉

 

Cloud Hosting With Multiple Proxy Servers

Scenario

A customer had a requirement for cloud hosting with multiple proxy servers: they wanted to send some traffic direct to the internet (a host or URL whitelist), some hosts or URLs to one proxy in their cloud hosting, and some traffic via another proxy in another peered network in their cloud hosting.

Solution

Our solution in the end was simple, but it does require endpoint configuration (the browser needs to point to the PAC file in order for this to work; this was configured via AD GPO for the AppStream instances in Amazon Web Services, as the AppStream instances were domain joined).

This is also supported on Windows and Mac Endpoints via the proxy autoconfiguration file.

This means we can whitelist traffic to the internet, we can send other URL- or host-specific matches to various internal proxy servers, and for all else we can return a proxy server that doesn't exist; if it points to 127.0.0.1 it gives a very quick ‘failure’ response.

The response message to the clients is not perfect (users receive ‘The Proxy Server is not responding’) but as a simple working solution this was considered tolerable.

Windows > Configure it in Internet Explorer

Internet Explorer PAC file configuration

Mac > Configure it in Network Settings

Mac automatic proxy configuration

PAC File Configuration

function FindProxyForURL(url, host) {

// If the hostname matches, send direct.
if (shExpMatch(host, "*.microsoft.com") ||
shExpMatch(host, "*.google.com"))
return "DIRECT";

// If the hostname matches, send via the first internal proxy.
if (shExpMatch(host, "*.myotherwebsite.com") ||
shExpMatch(host, "*.myotherwebsite2.com"))
return "PROXY internal.squid.proxy:3128";

// If the hostname matches, send via the second internal proxy.
if (shExpMatch(host, "*.myotherwebsite3.com") ||
shExpMatch(host, "*.myotherwebsite4.com"))
return "PROXY internal.squid.proxy2:3128";


// DEFAULT RULE: all other traffic gets a deliberately unreachable proxy, for a fast failure.
return "PROXY 127.0.0.1:8081";

}

Scripts for AWS S3 PowerShell Upload and Download of Folders and Subfolders

UPLOAD LOCAL FOLDER and SUBFILES to S3

#Load AWS PowerShell extensions
Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"

#Set AWS creds to connect to S3. The S3 user should have a specific IAM policy to lock them down to this specific bucket only. See here for an example S3 policy
Set-AWSCredentials -AccessKey <BUCKETUSERACCESSKEY> -SecretKey <SECRETKEY> -StoreAs default

#Upload a folder to the S3 bucket using AWS PowerShell Tools
#Usage example: Write-S3Object -BucketName <BUCKETNAME> -Folder <LOCALPATH> -KeyPrefix <REMOTE> -Recurse
Write-S3Object -BucketName mys3bucket -Folder d:\folder1\ -KeyPrefix folder1\ -Recurse

#Remove AWS credentials
Remove-AWSCredentialProfile -ProfileName default -Force

 

DOWNLOAD S3 FOLDER and SUBFILES to LOCAL

#Load AWS PowerShell extensions
Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"

#Set AWS creds to connect to S3. The S3 user should have a specific IAM policy to lock them down to this specific bucket only. See here for an example S3 policy
Set-AWSCredentials -AccessKey <BUCKETUSERACCESSKEY> -SecretKey <SECRETKEY> -StoreAs default

#Download the S3 bucket folder called 'Build' using AWS PowerShell Tools
#Usage example: Read-S3Object -BucketName <MYBUCKETNAME> -Folder <LOCALPATH> -KeyPrefix <REMOTE>
Read-S3Object -BucketName <MYBUCKETNAME> -Folder c:\Build\ -KeyPrefix Build

#Remove AWS credentials
Remove-AWSCredentialProfile -ProfileName default -Force

AWS AppStream 2.0 What's New?

AWS AppStream 2.0 What's New for June 2018?

AWS has updated AppStream 2.0 to introduce some fantastic new features in the May and June 2018 releases.

Google Drive support has been added (selectable at stack creation). It only supports G Suite Enterprise and must be enabled in G Suite to function, but it also has support for multiple G Suite domains.

This means clients can avoid the clumsy upload and download of files from the local device to the remote and simply log into Google Drive and have immediate access to their files within the AppStream session.

Google Drive integration for AWS AppStream 2.0

Google Drive integration within an AppStream 2.0 session

Here is a screenshot of the Windows Explorer integration, which conveniently shows my free space as approx 8000 petabytes! Good to know!

Google Drive AppStream 2.0 Windows Explorer integration

Support for administrative controls has also been added (again selectable at stack creation), giving the administrator greater control and flexibility in the solution they deploy to users, for things like local device copy and paste, file upload or download (or upload only, download only, or disabled), and local print options.

Selective administrative controls for AWS AppStream 2.0
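Both features surface as stack-level settings in the AppStream 2.0 API. Here is a minimal boto3 sketch of configuring them on an existing stack (the stack name and G Suite domain are placeholders):

import boto3

appstream = boto3.client("appstream")

appstream.update_stack(
    Name="my-stack",
    # Google Drive connector, limited to a named G Suite domain
    StorageConnectors=[
        {"ConnectorType": "GOOGLE_DRIVE", "Domains": ["example.org"]},
    ],
    # Selective administrative controls: allow copy to local and upload, block the rest
    UserSettings=[
        {"Action": "CLIPBOARD_COPY_TO_LOCAL_DEVICE", "Permission": "ENABLED"},
        {"Action": "CLIPBOARD_COPY_FROM_LOCAL_DEVICE", "Permission": "DISABLED"},
        {"Action": "FILE_UPLOAD", "Permission": "ENABLED"},
        {"Action": "FILE_DOWNLOAD", "Permission": "DISABLED"},
        {"Action": "PRINTING_TO_LOCAL_DEVICE", "Permission": "DISABLED"},
    ],
)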

Happy Clouding!

AWS Workspaces Error – This OS/platform is not authorized to access your Workspace

Situation

Recently a customer received the message ‘This OS/platform is not authorized to access your Workspace’ when connecting to a newly built AWS WorkSpaces instance via ‘Web Access’ at https://clients.amazonworkspaces.com/

This OS/platform is not authorized to access your Workspace
If the problem persists please contact your Workspaces Administrator.
ERR_DEVICE_ACCESS_DENIED

Solution

Web Access needs to be explicitly enabled. As these were relatively new WorkSpaces (May 2018), they also didn't have to be rebuilt to allow web connectivity, contrary to the AWS documentation.

 

Open the AWS Console

Select Workspaces

Expand Directories

Select your Directory and click Actions then Update Details

Expand the fourth section, Access Control Options

Tick Web Access

Scroll to the bottom of the update details page and click Update and Exit
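The same change can be scripted with boto3; a minimal sketch (the directory ID is a placeholder):

import boto3

workspaces = boto3.client("workspaces")

# Allow Web Access for every WorkSpace registered to this directory
workspaces.modify_workspace_access_properties(
    ResourceId="d-1234567890",
    WorkspaceAccessProperties={"DeviceTypeWeb": "ALLOW"},
)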

 

 

 

AWS AppStream 2.0 Image Builder X Drive not being created

Problem

This month, on creation of a new image builder in AWS AppStream, we noticed that the AWS AppStream 2.0 Image Builder X drive was not being created.

The X drive is the temporary drive for uploading and downloading files to and from the AppStream instance, and usually where we house deployment scripts, build scripts, GPOs and installation files.

Solution

As of AppStream Image builder version Base-Image-Builder-05-02-2018 this is by design.

You should update any scripts or pointers

from the “X:\Temporary Files” drive

to “C:\Users\ImageBuilderAdmin\My Files\Temporary Files”

#aws #appstream2.0