Sunday, January 22, 2017

Distributed Document Management Systems (DDMS) on AWS - Disruption!

There are two killer features AWS Elastic File System (EFS) needs to deliver:
  1. AWS NFS 4.1 Encryption support
  2. An AWS NFS 4.1 client integrated with Windows Explorer, or something similar
With these two there will be true distributed file system support. This is the last milestone needed to eliminate disaster recovery (DR) data centers and the complicated once-or-twice-a-year DR tests. Clients' RPO and RTO will be exactly the same as AWS's. A true disruption that will save billions across the world as Distributed Document Management Systems (am I really the first to coin DDMS?) take over.

This is my official comment on "Mount NFSv4.1 with Windows". I had to comment here since my "account is not ready for posting messages yet" and I do not want to "try again later".

Friday, January 13, 2017

On Security: Is your site still DROWN-ing?

If you are using the same certificate on several servers and just one of them happens to have SSLv2 enabled, then all of your servers are vulnerable to the DROWN attack. Do not be misled by results from tools like nmap or sslyze run against a single host. Better to not share keys across servers and, of course, to make sure SSLv2 is not allowed on any of your servers.
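For a quick check of a single host, something along these lines can be used (a sketch; the host names are placeholders, and remember you must scan every server that shares the certificate or key, not just the public-facing one):
# Does the host still accept SSLv2? (repeat for every server sharing the certificate/key)
nmap -p 443 --script sslv2 www.example.com
# Broader protocol and cipher enumeration
nmap -p 443 --script ssl-enum-ciphers www.example.com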

Thursday, January 05, 2017

Monitoring Linux Resources in AWS and beyond with Plain Old Bash (POB)

As I already posted, Amazon Web Services (AWS) does not provide an out-of-the-box solution for something as simple as monitoring HDD, memory and CPU usage for Windows. The situation is the same for Linux. That is the reason I went ahead and created a quick plain old bash (POB) script for this. I recommend reading the README.md for straight and quick instructions to get it working in less than a minute. The code is in GitHub but for convenience find the linked content below:
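In case the linked content does not load, here is only a minimal sketch of the kind of checks the script performs (the threshold, filesystem, email address and mail command below are assumptions you would adapt, not the actual repository code):

#!/bin/bash
# Minimal resource check sketch: alert when disk or memory usage passes a threshold
THRESHOLD=90
HOST=$(hostname)

# Disk usage (%) of the root filesystem
DISK=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
# Memory usage (%)
MEM=$(free | awk '/Mem:/ {printf "%d", $3/$2*100}')
# 1-minute load average as a rough CPU proxy
LOAD=$(awk '{print $1}' /proc/loadavg)

if [ "$DISK" -gt "$THRESHOLD" ] || [ "$MEM" -gt "$THRESHOLD" ]; then
  echo "Resource alert on $HOST: disk=${DISK}% mem=${MEM}% load=${LOAD}" \
    | mail -s "Resource alert on $HOST" admin@example.com   # mail command assumed available
fi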

Wednesday, January 04, 2017

Monitoring Windows Resources in AWS and beyond with PowerShell

Amazon Web Services (AWS) does not provide an out-of-the-box solution for something as simple as monitoring HDD, memory and CPU usage. CloudWatch still demands some coding and customization for something that IMO should be provided out of the box. We have Monit for Linux but nothing simple enough for Windows. Prometheus is great but probably overkill for a small startup. That is the reason I went ahead and created a quick PowerShell script for this matter:

Tuesday, January 03, 2017

Install telnet client in Windows

Heads up: it might take a minute for the command to become available after installing it through the command below:
pkgmgr /iu:"TelnetClient"
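Once the client is available, usage is simply telnet host port; for example (host and port below are just placeholders):
telnet smtp.example.com 25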

Friday, December 30, 2016

AWS S3 Data transfer - Kill the Colo ;-)

Transferring data from your premises or colo infrastructure to the AWS cloud is no longer as difficult as it used to be ;-)

Besides dedicated links and physical transport of files, Amazon provides a pure internet solution (S3 Transfer Acceleration) to transfer files to Amazon Simple Storage Service (S3), which might be enough for your needs. I will describe here my experience using this method from a manual perspective (no scripting this time), which should be enough for cases when, for example, you are moving those big on-premises or colo files to the cloud.

Start by performing a test to see how much faster this method will be in comparison with the direct upload method: http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html

I had to transfer 1 TB of data to S3 from Windows 2008 servers. Here is how I did it.

To transfer your files with S3 Transfer Acceleration, select your bucket from the AWS console | Properties | Transfer Acceleration | Enable, then get the accelerated endpoint, which works just like the regular endpoint; for example mys3bucketname.s3-accelerate.amazonaws.com.
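If you prefer the CLI, the same setting can be toggled and verified along these lines (using the example bucket name from above):
C:\> aws s3api put-bucket-accelerate-configuration --bucket mys3bucketname --accelerate-configuration Status=Enabled

C:\> aws s3api get-bucket-accelerate-configuration --bucket mys3bucketname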

You can use the AWS CLI on Windows just as well as on *nix systems (http://docs.aws.amazon.com/cli/latest/userguide/installing.html#install-msi-on-windows) to upload or download files.

See below for an example that shows how I configure the AWS CLI, list what I have in S3, set the CLI to use the accelerated endpoint and finally copy the data to the bucket.
C:\> aws configure
AWS Access Key ID [****************HTAQ]:
AWS Secret Access Key [****************J+aE]:
Default region name [None]: us-east-1
Default output format [None]: json

C:\> aws s3 ls
2016-12-24 06:25:01 mys3bucket1
2016-12-03 15:15:37 mys3bucket2

C:\> aws configure set default.s3.use_accelerate_endpoint true

C:\> aws s3 sync E:\BigData\ s3://mys3bucket2

I got 12 MB/sec versus 1.2 MB/sec at regular speed and was able to transfer 1 TB in around 16 hours. The good thing is that the command behaves like rsync, meaning that after that first pass only new files or additions to existing files will be transferred. This is good news when you are planning to move the infrastructure to the cloud, as it minimizes the possible business disruption timeframe.

On the destination EC2 Windows server, enable the accelerated endpoint again and sync in the opposite direction to download the data:
C:\> aws configure set default.s3.use_accelerate_endpoint true

C:\> aws s3 sync s3://mys3bucket2 D:\SmallToBeBigDATA\

WARNING: S3 Transfer Acceleration has an associated cost. All I can say is: it is worth it. It cost us $8.28 to transfer over 1 TB of data from an on-premises Windows Server to an EC2-hosted Windows Server via S3.

AWS Inventory - Audit the Cloud Infrastructure

How difficult is it to audit your AWS Cloud Infrastructure?

Instances, volumes, snapshots, RDS, security groups, Elastic IPs and beyond: a single report that gathers all the invaluable information you need to make quick, critical decisions.

The guys from powerupcloud shared an initial script on their blog, which they put on GitHub. I forked it and, after some tweaks, found it so useful that I decided to send the author a pull request. The new Lambda function:
* Adds support for environment variables
* Adds security groups listing
* Removes hardcoded and non-generic names
* Corrects some comments
* Retrieves the ownerId instead of hardcoding it
* Adds the description for volumes for clearer identification
* Lists the Elastic IPs with the instanceId they are assigned to for clearer identification
* Has a TODO ;-) : naming conventions should be established
Here is a quick start:
  1. Create IAM role | Name: Inventory; Role Type: AWSLambda; Policy: EC2ReadOnly, AmazonS3FullAccess, AmazonRDSReadOnlyAccess
  2. Create S3 bucket | Name: YourInventoryS3Name
  3. Create Lambda | Name: YourInventoryLambda; Description: Extracts the AWS Inventory; Runtime: Python; Role: Inventory; Timeout: 3 min; environment variables: SES_SMTP_USER, SES_SMTP_PASSWORD, S3_INVENTORY_BUCKET, MAIL_FROM, MAIL_TO
  4. Schedule the Lambda: select the Lambda | Trigger | Add Trigger | CloudWatch Events - Schedule | Rule Name: InventorySchedulerRule; Rule Description: Inventory Scheduler Rule; Schedule Expression: cron(0 14 ? * FRI *) if you want it to run every Friday at 9 AM ET | Enable Trigger | Submit. The same schedule can be wired up from the CLI, as sketched after this list.
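For those who prefer the AWS CLI over the console, a sketch of the equivalent scheduling; the rule and function names come from the steps above, while the region and account id in the ARNs are placeholders you would replace:
C:\> aws events put-rule --name InventorySchedulerRule --description "Inventory Scheduler Rule" --schedule-expression "cron(0 14 ? * FRI *)"

C:\> aws lambda add-permission --function-name YourInventoryLambda --statement-id InventorySchedulerRule --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:123456789012:rule/InventorySchedulerRule

C:\> aws events put-targets --rule InventorySchedulerRule --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:YourInventoryLambda"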
To be able to send emails from AWS you need to allow Amazon Simple Email Service (SES) to send emails on behalf of addresses in your domain (the CLI equivalents are sketched after this list):
  1. Verify the domain: SES | Manage Entities | Verify a New Domain | yourdomain.com
  2. Follow the instructions. To complete the verification you must add several TXT/CNAME records to your domain's DNS settings
  3. Verify email addresses: SES | select the domain | Verify a New Email Address | confirm from the email
  4. After getting confirmation go to SES | SMTP Settings | Create my SMTP Credentials | provide a username like ‘anyusername’ and Amazon will show you the credentials (SMTP Username and SMTP Password). Keep them in a safe place. You can also download them. This is the last time AWS will share them with you.
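The verification steps can also be kicked off from the AWS CLI; a quick sketch, with yourdomain.com and the email address below as placeholders (you still need to add the returned token as a TXT record and click the confirmation link in the email):
C:\> aws ses verify-domain-identity --domain yourdomain.com

C:\> aws ses verify-email-identity --email-address inventory@yourdomain.com
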
You can clone/fetch/checkout my latest version from my fork. Really useful stuff. Thanks, PowerUpCloud, for the quick start!
