
Category: KBArticles

Lessons Learned From Enabling AWS GuardDuty Across Multiple Accounts and All Regions

Summary

AWS GuardDuty offers enhanced security analysis of your AWS accounts and services, helping you protect them based on your usage patterns and well-known threats.

General Observations

  1. GuardDuty analyses VPC Flow Logs, AWS CloudTrail events and AWS DNS logs.
  2. Console notifications appear for new findings as they are analysed and discovered.
  3. New findings are notified (to CloudWatch) every 5 minutes.
  4. Updates to existing findings (repeat counts for the same finding / vulnerability) are notified (to CloudWatch) at a configurable frequency of 15 mins, 60 mins or 6 hours.
  5. Up to 1,000 member accounts are supported.
  6. Definitions for trusted IPs and the CloudWatch notification period are set and enforced at the master only and flow down to sub accounts.
  7. Max 2,000 trusted IPs in a single list.
  8. Max 250,000 threat IPs in a single list.
  9. Master accounts cannot be a member of any other account.
  10. Sub accounts can view findings related to their own account, but are unable to archive them. The master account can view all findings for all accounts and archive them, which also removes them from the sub accounts' findings view.
  11. GuardDuty can be suspended on the master and on sub accounts (from the master); sub accounts can manage and re-enable suspension on themselves only.
  12. GuardDuty detectors are region specific and need to be enabled on a per-region basis, including any regional master/sub collector services (see the sketch after this list).
    • Regional master accounts are required for aggregation from sub accounts in the same region.
      • e.g. you cannot have one master GuardDuty collector in EU-WEST-1 and have AWS EU-WEST-2 regions send their GuardDuty findings to it.
      • You must have a GuardDuty master enabled in the EU-WEST-2 region and invite the sub account again for every region in which you want to enable GuardDuty.
  13. In a master/sub-account setup, the trusted and threat lists are applied at the master as a single list only.
  14. Log data from CloudTrail, VPC Flow Logs and DNS is encrypted in transit to GuardDuty; after analysis the logs are discarded.
  15. Analysis of flow logs starts immediately when the service is enabled; GuardDuty consumes events directly as a duplicate stream of flow logs, so it does not modify any existing flow log configurations.
  16. DNS analysis will only work if you use the AWS DNS resolvers. Other DNS services will not be 'captured' or analysed.
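Point 12 in practice: a minimal boto3 sketch (the region list is an illustrative assumption) showing that a detector has to be created in every region you want covered:

import boto3

# Detectors are region specific, so loop over every region to be covered.
for region in ["eu-west-1", "eu-west-2", "us-east-1"]:
    gd = boto3.client("guardduty", region_name=region)
    detector_id = gd.create_detector(Enable=True)["DetectorId"]
    print(f"{region}: detector {detector_id} enabled")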


Rollout

AWS documentation points to a CloudFormation template for enabling GuardDuty. It's only available as a template in the us-east-1 region, so be sure to select this region.

CloudFormation

If you just want to set up the GuardDuty services in your accounts via an AWS CloudFormation StackSet, use the AWS-provided CloudFormation template 'Enable Amazon GuardDuty':

https://console.aws.amazon.com/cloudformation/stacksets/home?region=us-east-1#/stacksets/new

Caveats

If you don't specify a master ID, the CF template simply enables GuardDuty in the stack set accounts and regions only.

You have to manually invite all your 'sub accounts' from all regions first before this stack set will work with the 'masterId' (each sub account has to have an invitation waiting from the master); see the sketch after the CF message below.

CF Message:

The Amazon GuardDuty master account ID. If you specify the master account ID, this stack set creates a GuardDuty detector in each specified account and accepts the GuardDuty membership invitation sent to each of the specified accounts by this master account. If this value is specified, before you can create this stack set, all accounts in all regions to which this stack set template is to be applied must already have an invitation from this master GuardDuty account and must NOT have a detector already created.
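That pre-invitation step can be scripted from the master account. A rough boto3 sketch (the account ID, email and region list are placeholder assumptions):

import boto3

MEMBERS = [{"AccountId": "222222222222", "Email": "aws-root@example.com"}]

for region in ["eu-west-1", "eu-west-2"]:
    gd = boto3.client("guardduty", region_name=region)
    # Assumes the master already has a detector in this region.
    detector_id = gd.list_detectors()["DetectorIds"][0]
    gd.create_members(DetectorId=detector_id, AccountDetails=MEMBERS)
    gd.invite_members(DetectorId=detector_id,
                      AccountIds=[m["AccountId"] for m in MEMBERS])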

Python

If you want a full cross-account, multi-region master/subscriber setup from scratch, use the AWS-provided Python script.

It creates a master detector in your specified master account, subscribes each sub account (and every sub account region) to the master, accepts the invitation and links the accounts.

The script can even be run from an unrelated ‘build or deploy’ account. Perfect!
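The member-side half of what the script automates looks roughly like this (a sketch; it assumes the boto3 client is built with credentials for the sub account, e.g. via an assumed role):

import boto3

# Assumes this client carries credentials for the sub account.
gd = boto3.client("guardduty", region_name="eu-west-1")

# A detector must exist in the member account before it can accept.
detector_id = gd.create_detector(Enable=True)["DetectorId"]

# Find the pending invitation from the master and accept it.
invitation = gd.list_invitations()["Invitations"][0]
gd.accept_invitation(DetectorId=detector_id,
                     MasterId=invitation["AccountId"],
                     InvitationId=invitation["InvitationId"])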

https://github.com/aws-samples/amazon-guardduty-multiaccount-scripts

AWS Support recommended using the python script as the primary deployment mechanism.

There are currently no Ansible modules for AWS GuardDuty.

Python Script Notes

The disable script breaks a little, but a fix is provided in the repository's issues list.

The enable script emails the root account address for each account and for EVERY region that is listed to be 'enabled', so for a large-scale deployment a great many emails may be delivered!
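If that is a concern, the InviteMembers API takes a DisableEmailNotification flag, so a variant of the script could suppress those emails. A sketch (the account ID is a placeholder; this flag is not used by the AWS script as published):

import boto3

gd = boto3.client("guardduty", region_name="eu-west-1")
detector_id = gd.list_detectors()["DetectorIds"][0]
gd.invite_members(DetectorId=detector_id,
                  AccountIds=["222222222222"],
                  DisableEmailNotification=True)  # suppress the root-account email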

Apple Mac OSX Security and Privacy Allow Button is not working

Problem

From time to time there seem to be ongoing issues where the Apple Mac OSX Security and Privacy 'Allow' button does not work.

No matter how many times you click 'Allow', it simply doesn't function (yet the button highlights blue as if it has been pressed), and this happens regardless of whether you unlock the padlock to make changes.

It seems to be something to do with actually clicking the 'Allow' button when you are attempting to unblock system extensions via the Security and Privacy system preferences.

This also doesn't seem to be resolved by removing mouse/trackpad preferences, as some sites have suggested.


Workaround

Effectively, we are going to script a click on the button using AppleScript instead of a manual mouse click, using the screenshot tool to get the coordinates of the 'Allow' button while the System Preferences window is open.

1. Open the AppleScript editor (Applications > Utilities > Script Editor)

2. Enter the following

tell application "System Events"
	-- Replace x and y with the screen coordinates of the 'Allow'
	-- button, found in step 3 below, before running the script.
	click at {x, y}
end tell

3. Press Command + Shift + 4 (the Mac OSX built-in selective screenshot tool) and hover the cursor over your 'Allow' button; this will give you the x and y coordinates for the 'click'.

Cursor when the selective screenshot is active (use the displayed numbers for your X and Y coordinates)


4. Enter the correct coordinates in the script, then press the Play button.

The 'Allow' button should then be pressed as expected, giving you access to the list of blocked extensions so you can selectively enable them and click OK.

Hope this helps

JSCS

Cloud Hosting With Multiple Proxy Servers

Scenario

A customer had a requirement for cloud hosting with multiple proxy servers: they wanted to send some traffic direct to the internet (a host or URL whitelist), some hosts or URLs to one proxy in their cloud hosting, and some traffic via another proxy in a peered network in their cloud hosting.

Solution

Our solution in the end was simple, but it does require endpoint configuration (the browsers need to point to the PAC file for this to work; this was configured via AD GPO for the AppStream instances in Amazon Web Services, as the AppStream instances were domain joined).

This is also supported on Windows and Mac endpoints via the proxy auto-configuration (PAC) file.

This means we can whitelist traffic direct to the internet, send other URL- or host-specific matches to various internal proxy servers, and for everything else return a proxy server that doesn't exist; if it points to 127.0.0.1 the result is a very quick 'failure' response.

The response message to the clients is not perfect (users receive ‘The Proxy Server is not responding’) but as a simple working solution this was considered tolerable.

Windows > Configure it in Internet Explorer

Internet Explorer PAC file configuration

Mac > Configure it in Network Settings

Mac automatic proxy configuration

PAC File Configuration

function FindProxyForURL(url, host) {

    // If the hostname matches the whitelist, send direct to the internet.
    if (shExpMatch(host, "*.microsoft.com") ||
        shExpMatch(host, "*.google.com"))
        return "DIRECT";

    // If the hostname matches, send via the first internal proxy.
    if (shExpMatch(host, "*.myotherwebsite.com") ||
        shExpMatch(host, "*.myotherwebsite2.com"))
        return "PROXY internal.squid.proxy:3128";

    // If the hostname matches, send via the second internal proxy.
    if (shExpMatch(host, "*.myotherwebsite3.com") ||
        shExpMatch(host, "*.myotherwebsite4.com"))
        return "PROXY internal.squid.proxy2:3128";

    // DEFAULT RULE: all other traffic gets a non-existent local proxy,
    // which fails fast and effectively blocks the request.
    return "PROXY 127.0.0.1:8081";
}

AWS Workspaces Error – This OS/platform is not authorized to access your Workspace

Situation

Recently a customer received the following message, 'This OS/platform is not authorized to access your Workspace', when connecting to a newly built AWS WorkSpaces instance whilst attempting to connect via 'Web Access' (https://clients.amazonworkspaces.com/):

This OS/platform is not authorized to access your Workspace
If the problem persists please contact your Workspaces Administrator.
ERR_DEVICE_ACCESS_DENIED

Solution

Web Access needs to be explicitly enabled. As these were relatively new WorkSpaces (May 2018), they also didn't have to be rebuilt to allow web connectivity, contrary to the AWS documentation.


1. Open the AWS Console.
2. Select WorkSpaces.
3. Expand Directories.
4. Select your Directory and click Actions, then Update Details.
5. Expand the fourth section, Access Control Options.
6. Tick Web Access.
7. Scroll to the bottom of the Update Details page and click Update and Exit.
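The same change can also be made programmatically. A minimal boto3 sketch (assuming a recent boto3; the directory ID is a placeholder):

import boto3

ws = boto3.client("workspaces", region_name="eu-west-1")

# Enable Web Access on the WorkSpaces directory (hypothetical directory ID).
ws.modify_workspace_access_properties(
    ResourceId="d-1234567890",
    WorkspaceAccessProperties={"DeviceTypeWeb": "ALLOW"},
)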


AWS AppStream 2.0 Image Builder X Drive not being created

Problem

This month, on creation of a new image builder in AWS AppStream, we noticed that the AWS AppStream 2.0 Image Builder X drive was not being created.

The X drive is the temporary drive for uploading and downloading files to and from the AppStream instance, and is usually where we house deployment scripts, build scripts, GPOs and installation files.

Solution

As of AppStream image builder version Base-Image-Builder-05-02-2018, this is by design.

You should update any scripts or pointers from the "X:\Temporary Files" drive to "C:\Users\ImageBuilderAdmin\My Files\Temporary Files".
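For build scripts that need to run on both old and new image builders, a small fallback check is one option (a sketch; the two paths come from this article, the function name is ours):

import os

def temp_files_dir():
    """Return whichever AppStream temporary-files path exists on this builder."""
    candidates = [
        r"X:\Temporary Files",  # pre Base-Image-Builder-05-02-2018 builders
        r"C:\Users\ImageBuilderAdmin\My Files\Temporary Files",  # current builders
    ]
    for path in candidates:
        if os.path.isdir(path):
            return path
    raise FileNotFoundError("No AppStream temporary files directory found")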

#aws #appstream2.0


How to find an AWS AppStream 2.0 users homedrive path

Scenario

AWS AppStream 2.0 generates a SHA-256 hash of the user's NameID for their home drive folder when using SAML (aka federated) authentication. This can make it difficult to find a user's home share when browsing AWS S3, or for support teams when supporting users or uploading documents to a user's 'home drive'.

Example

In this document is an example of a federated user's home drive, auto-created in S3 after the user has accessed AppStream 2.0 for the first time.

The script below creates a function in Windows PowerShell that generates the SHA-256 hash of the NameID so you can discover the user's home path.

# Hash a string with the chosen .NET hash algorithm (defaults to MD5).
Function Get-StringHash([String] $String, $HashName = "MD5")
{
    $StringBuilder = New-Object System.Text.StringBuilder
    # Hash the UTF-8 bytes of the string and append each byte as lowercase hex.
    [System.Security.Cryptography.HashAlgorithm]::Create($HashName).ComputeHash([System.Text.Encoding]::UTF8.GetBytes($String)) | % {
        [Void]$StringBuilder.Append($_.ToString("x2"))
    }
    $StringBuilder.ToString()
}

# Prompt for the NameID and print its SHA-256 hash.
$myvar = Read-Host -Prompt 'Enter string to hash'
Get-StringHash $myvar "SHA256"
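If PowerShell isn't to hand, the same hash can be generated in Python (a sketch; the example NameID is hypothetical):

import hashlib

name_id = "user@example.com"  # the SAML NameID passed into the AppStream session
print(hashlib.sha256(name_id.encode("utf-8")).hexdigest())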

Result

As we know the user's NameID being passed into the AppStream session (in this instance it's actually my email address), hashing it gives us the folder name of the user's home drive in S3.

AWS IAM CERTIFICATE_VERIFY_FAILED

Situation

When attempting to call AWS CLI commands we were receiving a CERTIFICATE_VERIFY_FAILED error message. We were using a proxy service; in this specific instance we were connecting to AWS IAM via Zscaler Internet Access (ZIA).

Example

We were running a simple:

aws iam get-role --role-name vmimport


Workaround

Include --no-verify-ssl to bypass the SSL verification:

aws iam get-role --role-name vmimport --no-verify-ssl
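A less blunt workaround, if you can export the proxy's root CA certificate, is to point the AWS CLI at it rather than disabling verification (a sketch; the certificate path is a placeholder assumption):

aws configure set default.ca_bundle /path/to/zscaler-root-ca.pem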

Solution

Drop or whitelist iam.amazonaws.com from SSL inspection on the proxy server.

Citrix Cloud MCS Connection to Azure Unable to See Image or Template vhd files

Situation

The Citrix Cloud MCS connection to Microsoft Azure is unable to provision any Machine Catalog, as it's unable to find any images, disks, VHDs or servers to base the Machine Catalog on.

  1. Citrix Cloud set up with Hosting connected directly to Microsoft Azure RM (and working, as it can connect and see resources etc.)
  2. 1 x DC, 2 x Citrix Cloud Connectors and 1 x Windows server template with the VDA installed as the 'master image'
  3. All Windows 2016
  4. All servers built with Azure managed disks (where the servers are not placed into any storage account)

Hosting Connection

Machine Catalog Creation

Citrix Cloud sees the Resource Group and storage, but it's basically looking in the wrong areas and not finding the image or VHD files.


Solution

The VDA template must be created within a storage account and not built with Azure managed disks.


Azure managed disks support is now available in preview (apparently), but only for the machines you deploy via your Machine Catalog. It seems you still need a VHD and storage account for your base image/template when creating the Machine Catalog, but you can then enable Azure managed disks for your new MCS-managed VMs.

Citrix Storefront 3.9 passthrough authentication issues

Situation:

After a customer recently upgraded to StoreFront 3.9, some users complained of having to authenticate twice when using various browsers: once in StoreFront, and once again in a Windows login prompt when they launched their selected application.

This seems to be related to the way StoreFront runs Receiver detection: if a compatible Receiver is detected, users are prompted and asked if they want to 'Log On' with their local computer credentials (see the prompt in Workaround 1 below).

Previously we had only ever used 'username and password' authentication, but this process seems to negate/bypass the authentication configured in StoreFront.

Workaround #1:

The users are prompted each time to pass through their local Windows credentials by clicking 'Log On'.

The users can skip the pass-through and simply click 'switch to user name and password'.

To use the account you used to sign on the computer, click Log On.

Workaround #2

If you have more than one store in StoreFront, separate the authentication methods so they are not shared between the stores (pass-through detection continued to happen regardless of the authentication method selected while the methods were shared between stores).


Resolution:

In relation to the references section on setting up a good Receiver configuration, this customer had broken the majority of the rules, for good reason, so adhering to the Citrix best practices was not possible and Workaround 2 became their resolution. Other requirements drove this: not all users are domain joined; not all connecting devices are managed by the customer (many belong to third parties over which they have no control); and users have little or no local access to upgrade, install or modify Receiver configurations. The list goes on.

After the upgrade, the authentication methods of the two different stores were merged and shared authentication was enabled. Regardless of the settings we selected or applied in the browser, pass-through continued to haunt users and attempted to log them in with their local credentials.

Once we split the authentication so it could be controlled separately between the two stores, the issues went away and we had more granular control.

There were a number of things the customer was not doing, like configuring the Receiver clients locally and configuring the local Receivers to support http://; with a large number of non-domain-joined users, this prevented a 'one size fits all' approach to deploying Receiver and StoreFront internally. Our final suggestion was to look at replacing this entirely with NetScaler and HTML5 Receiver instead.

References

https://docs.citrix.com/en-us/receiver/windows/4-7/secure-connections/receiver-windows-configure-passthrough.html