CloudWatch Logs is a new AWS feature, introduced as an extension of CloudWatch at the AWS Summit in New York. Until now, CloudWatch only monitored resource utilization; monitoring application-level logs required third-party tools. With CloudWatch Logs, you can upload and monitor various kinds of log files and even filter them for particular patterns, which can help resolve production issues such as an invalid user trying to log in to your application, a 404 page-not-found error, or a bot attempting a denial-of-service attack.

CloudWatch itself also monitors other AWS services such as EBS, EC2, and RDS, while CloudWatch Logs can store and monitor application logs, system logs, and web server logs. Alarms can be set on these metrics so that you are notified of any application or web server issue and can take the necessary action quickly.

Third-party tools such as Loggly, Splunk, and Logstash already monitor logs and provide detailed reports. CloudWatch Logs is quite basic at the moment, but it would not be surprising if Amazon adds more features in the future.

So what makes CloudWatch Logs attractive compared to third-party tools? CloudWatch is the only platform that monitors both resource usage and logs in one place. CloudWatch Logs pricing also works on a pay-as-you-use model, which can be more affordable than third-party tools that use a per-node license model: you pay only for log storage and the bandwidth used to upload files.

CloudWatch Logs pricing: $0.50 per gigabyte ingested, plus $0.03 per gigabyte archived per month.

Ingested data: the log data (log files) that is uploaded to CloudWatch.

Archived data: all log events uploaded to CloudWatch are retained, and you can choose how long to keep them. Archived data is compressed with gzip (level 6) before being stored, and you are charged for the storage space it uses.

Let's review the basic terminology used in CloudWatch Logs.

Log Agent: a Python script that ships your logs to CloudWatch.
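To make the pay-as-you-use model concrete, here is a small sketch that estimates a monthly bill from the two published rates. The ingested and archived volumes below are made-up example numbers, not figures from any real account.

```python
# Published CloudWatch Logs rates at launch (US dollars per gigabyte)
INGEST_RATE = 0.50   # per GB ingested
ARCHIVE_RATE = 0.03  # per GB archived, per month

def monthly_cost(ingested_gb, archived_gb):
    """Estimate the monthly CloudWatch Logs bill in dollars."""
    return ingested_gb * INGEST_RATE + archived_gb * ARCHIVE_RATE

# Hypothetical example: 20 GB of logs ingested this month,
# 100 GB sitting in the archive under the retention policy
print(round(monthly_cost(20, 100), 2))  # 13.0
```

Because archived storage is an order of magnitude cheaper than ingestion, the retention period you choose matters much less to the bill than the raw volume of logs you ship.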
Log Event: a record of an activity together with a timestamp, read from a log file. Only plain-text logs are supported; other formats are reported as errors in the agent's own log file (located at /var/log/awslogs.log).

Log Stream: a collection of log events reported by a single source. Apache's access log is a good example: it contains multiple events from one source, the Apache web server.

Log Group: a collection of log streams from multiple resources. For example, a log group named WebServerAccessLog could collect the Apache access logs of three identical instances. Metric filters and retention policies are applied at the log group level.

Metric Filter: a metric filter tells CloudWatch how to extract metrics from ingested log events and turn them into CloudWatch metrics. For example, we can create a metric filter called "404_Error" that scans log events for 404 access errors; an alarm can then be set up to monitor 404 errors across different servers/instances.

Retention Policy: a retention policy determines how long log events are kept. Policies are assigned to log groups and apply to all log streams within the group. You can set the retention period anywhere from 1 day to 10 years, or choose to have logs never expire.

How to install and configure the Log Agent on a Linux machine:

Steps:

1. SSH into the instance and switch to root.
2. Run the following command in the terminal:

wget https://s3.amazonaws.com/aws-cloudwatch/downloads/awslogs-agent-setup-v1.0.py

3.
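To see what a metric filter such as "404_Error" actually does, the sketch below reproduces the idea locally: it scans Apache-style access-log events for the 404 status code and produces the count that CloudWatch would publish as a metric data point. The sample log lines are invented for illustration; the real filtering happens server-side in CloudWatch, not in your code.

```python
import re

# Invented sample log events in Apache common log format
log_events = [
    '10.0.0.1 - - [27/Jul/2014:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512',
    '10.0.0.2 - - [27/Jul/2014:10:00:05 +0000] "GET /missing.png HTTP/1.1" 404 209',
    '10.0.0.3 - - [27/Jul/2014:10:00:09 +0000] "POST /login HTTP/1.1" 200 64',
    '10.0.0.4 - - [27/Jul/2014:10:00:12 +0000] "GET /old-page HTTP/1.1" 404 209',
]

def count_404(events):
    """Count events whose HTTP status code is 404 -- the value a
    '404_Error' metric filter would emit for this batch of events."""
    # The status code sits between the quoted request and the byte count
    return sum(1 for line in events if re.search(r'" 404 ', line))

print(count_404(log_events))  # 2
```

An alarm on the resulting metric then fires whenever the 404 count crosses a threshold you set, regardless of which instance in the log group produced the errors.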
Create an IAM user for the instance using the AWS console and attach the following p