Grand Central logging for Cloud Services to Splunk

amiracle, updated 2022-01-22 06:53:07

Grand Central App for Splunk

Manage and Monitor your Cloud Data Providers in Splunk from one centralized data platform.

This Splunk-based app relies on the work done by Project Trumpet and the AWS Organizations model.

Grand Central User's Guide:

  • Version 3.0.1

Getting Started

Requirements

Grand Central works with the AWS Organizations framework and does not require either Landing Zone or Control Tower. Once the organization is set up with multiple accounts, Grand Central can discover those accounts and add them into management within Splunk. Please refer to the Amazon Web Services documentation on how to get started with Organizations.
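
As a quick sanity check that Organizations is set up and reachable, you can list the accounts in your organization from the Master Account with boto3. This is a minimal sketch, not part of the app; the profile name is a placeholder:

```python
import boto3

# Assumes credentials for the Organizations Master (management) account,
# e.g. via a named profile; "master" is a placeholder.
session = boto3.Session(profile_name="master")
org = session.client("organizations")

# ListAccounts is paginated; iterate over all pages.
paginator = org.get_paginator("list_accounts")
for page in paginator.paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])
```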



Before Deploying Grand Central

You will need to be able to create an IAM user in the Master Account and in the sub accounts that will be added into management under Splunk. By default, two IAM policies will be created: one to list all the accounts in the Organization, and a second deployment policy.

IAM Role Creation Shortcut (Simplified)

  1. Log into your Master AWS Account.

  2. Click on Grand Central Create.

  3. Copy the Access Key / Secret Key from the CloudFormation Outputs.

  4. Download the grand_central_300.spl file from the S3 bucket and install it on your Splunk instance. Splunk Cloud customers should request to have the app installed.

CloudFormation Template for IAM User Creation

Here is the CloudFormation template used for this IAM role creation:

Here is the policy which it will deploy for reference:

IAM Policy - GCPolicy_Rev2.json

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CFTemplateSSInstance",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStackInstances",
        "cloudformation:DeleteStackInstances"
      ],
      "Resource": [
        "arn:aws:cloudformation:*:*:stackset-target/grandcentral*",
        "arn:aws:cloudformation:*:*:stackset/grandcentral*",
        "arn:aws:cloudformation:*::type/resource/AWS-IAM-Role",
        "arn:aws:cloudformation:*::type/resource/AWS-S3-Bucket",
        "arn:aws:cloudformation:*::type/resource/AWS-Lambda-Function",
        "arn:aws:cloudformation:*::type/resource/AWS-KinesisFirehose-DeliveryStream",
        "arn:aws:cloudformation:*::type/resource/AWS-IAM-Policy",
        "arn:aws:cloudformation:*::type/resource/AWS-Events-Rule"
      ]
    },
    {
      "Sid": "CFTemplateCreateStackSet",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStackSet",
        "cloudformation:DeleteStackSet"
      ],
      "Resource": [
        "arn:aws:cloudformation:*:*:stackset/grandcentral*",
        "arn:aws:cloudformation:*::type/resource/AWS-IAM-Role",
        "arn:aws:cloudformation:*::type/resource/AWS-S3-Bucket",
        "arn:aws:cloudformation:*::type/resource/AWS-Lambda-Function",
        "arn:aws:cloudformation:*::type/resource/AWS-KinesisFirehose-DeliveryStream",
        "arn:aws:cloudformation:*::type/resource/AWS-IAM-Policy",
        "arn:aws:cloudformation:*::type/resource/AWS-Events-Rule"
      ]
    },
    {
      "Sid": "OrganizationList",
      "Effect": "Allow",
      "Action": [
        "organizations:List*",
        "organizations:DescribeOrganization",
        "organizations:DescribeOrganizationalUnit",
        "organizations:EnableAWSServiceAccess"
      ],
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringLike": {
          "organizations:ServicePrincipal": [
            "cloudformation.*",
            "cloudformation-fips.*",
            "cloudtrail.*",
            "cloudtrail-fips.*",
            "config.*",
            "config-fips.*",
            "events.*",
            "events-fips.*",
            "logs.*",
            "logs-fips.*",
            "",
            "s3.*",
            "organizations.*"
          ]
        }
      }
    }
  ]
}
```

IAM Policy - Grand_Central_IAM_Policy.json

Grand Central Policy for Master Account:

  • [GCPolicy_Rev2.json] Use this policy to set up and deploy Grand Central in a Control Tower / AWS Organization deployment. This will use StackSets to deploy configuration changes to sub accounts in AWS.

Individual Account IAM Policy (Optional)

If you are going to use individual accounts and policies in each account, use this IAM policy. Grand Central Policy for Individual AWS Accounts: Grand_Central_IAM_Policy.json
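
If you go the per-account route, the user and access keys can be created with a few IAM calls. A minimal boto3 sketch, assuming the policy file is saved locally; the user name here is a placeholder:

```python
import boto3

iam = boto3.client("iam")

# "grand-central-splunk" is a placeholder user name; Grand Central only
# needs programmatic access (no console login).
iam.create_user(UserName="grand-central-splunk")

# Attach the policy from Grand_Central_IAM_Policy.json as an inline policy.
with open("Grand_Central_IAM_Policy.json") as f:
    policy_doc = f.read()

iam.put_user_policy(
    UserName="grand-central-splunk",
    PolicyName="GrandCentralPolicy",
    PolicyDocument=policy_doc,
)

# Create the access key / secret key pair that will be entered into Splunk.
key = iam.create_access_key(UserName="grand-central-splunk")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```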

Setting up Grand Central

Adding Master Account

Log into the Grand Central App and navigate to the Accounts section (Configure Data Sources -> Amazon Web Services -> Grand Central Accounts).

Click on the “New Organization Master Account” button:


Add the Master Account ID (must be a number) and the access key / secret key to Splunk:


The Master Account should now show up in Splunk:


Once the Master Account has been added, you should be able to view the accounts in the organization. Under Actions, select List All Accounts in the dropdown:

List of all the accounts:

To add the discovered accounts into Splunk, select "Add Accounts in Organization to Grand Central" in the Actions dropdown:

Click on the Add Button:


All the accounts should now show up under management in Splunk:

Splunk Endpoints

Now add the destination where you will be sending your data. This is typically a Firehose endpoint on your Splunk Cloud deployment. Here is an example of how you should fill out the fields. Note that if you are using Splunk Cloud, the Firehose HEC URL typically takes the form https://http-inputs-firehose-<your stack name>.splunkcloud.com:443.
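
Before deploying, it can help to confirm that the endpoint is reachable. A small sketch using HEC's health endpoint; the hostname below is a placeholder:

```python
import requests

# Placeholder Splunk Cloud Firehose endpoint; substitute your stack name.
HEC_URL = "https://http-inputs-firehose-mystack.splunkcloud.com:443"

# HEC exposes an unauthenticated health endpoint; HTTP 200 means the
# endpoint is reachable and HEC is enabled.
resp = requests.get(f"{HEC_URL}/services/collector/health", timeout=10)
print(resp.status_code, resp.text)
```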

BYOL Cloud Deployments

AWS Stacksets

Define your AWS StackSet here in Splunk. As new accounts are vended into Organizational Units (OUs), these configurations will automatically be deployed to the newly created AWS accounts. Additionally, if an existing account is moved into an OU, it will be configured with these settings.

Example: Account 987654321 is in the Core OU after being provisioned. When it is moved into the DevOps OU, the AWS StackSet "grandcentralCloudTrail" will deploy in that account and set up the CloudTrail data collection.
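
To confirm that a moved account actually received the stack instance, you can query the StackSet from the Master Account. A minimal boto3 sketch; the StackSet name comes from the example above, while the account ID and region are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Check whether the moved account received the grandcentralCloudTrail
# stack instance; the account id is a placeholder.
resp = cfn.list_stack_instances(
    StackSetName="grandcentralCloudTrail",
    StackInstanceAccount="123456789012",
)
for inst in resp["Summaries"]:
    print(inst["Account"], inst["Region"], inst["Status"])
```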

Deploy an AWS Stackset


Your Splunk Accounts should look like this when you're done:

Pro Tip: For now, create a Splunk HEC token for each sourcetype, e.g. aws:cloudtrail for CloudTrail, aws:config for Config, and aws:cloudwatchlogs:vpcflow for VPC Flow Logs.
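
Tokens can be created in the Splunk UI (Settings -> Data Inputs -> HTTP Event Collector) or scripted against the management port. A rough sketch via the REST API; the hostname, credentials, and endpoint path here are assumptions and may differ on Splunk Cloud:

```python
import requests

SPLUNK_MGMT = "https://splunk.example.com:8089"  # placeholder management endpoint
AUTH = ("admin", "changeme")                     # placeholder credentials

# One HEC token per sourcetype, matching the tip above.
tokens = [
    ("aws-cloudtrail", "aws:cloudtrail"),
    ("aws-config", "aws:config"),
    ("aws-vpcflow", "aws:cloudwatchlogs:vpcflow"),
]

for name, sourcetype in tokens:
    resp = requests.post(
        f"{SPLUNK_MGMT}/servicesNS/nobody/splunk_httpinput/data/inputs/http",
        auth=AUTH,
        data={"name": name, "sourcetype": sourcetype},
        verify=False,  # only for lab / self-signed certificates
    )
    resp.raise_for_status()
    print(f"created HEC token {name} for sourcetype {sourcetype}")
```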

The legacy Bulk Data Deployment system still works if you do not want to use AWS Stacksets and would rather use an Access Key / Secret Key for each of your accounts. Just follow the steps for the Bulk Credential Upload or set up each account individually.

To deploy data collection to all these accounts, click on the Bulk Deployment button and select your accounts, regions, destination and data source. Click Deploy. Splunk will communicate with AWS and begin deploying the CloudFormation templates in all the accounts and regions you've selected. In the Observation Deck dashboard you will see the successfully deployed Accounts and Regions.

Binary File Declaration

Terraform

  • Binary and Checksum:
  • Source Code:

Alternative deployments

Under the hood, Grand Central simply stores account configurations and deploys StackSets/CloudFormation templates to those accounts; the state and persistence of the StackSet/CloudFormation stacks themselves are stored within the AWS services. It is therefore possible to deploy the same architecture deployed by Grand Central using the native StackSet/CloudFormation consoles and CLI tools, to better match any existing infrastructure-as-code workflows you may already be using.

This section will briefly describe how to manually deploy the same architecture as Grand Central. This is especially relevant for users with concerns about long term storage of account credentials. Note that this section will be describing how to manually deploy to an AWS Organization using Stacksets, rather than individual accounts using Cloudformation. To deploy customized data collection Cloudformation templates to individual AWS accounts without Grand Central, please see Project Trumpet.

There are two main options for deploying the data collection infrastructure set up by Grand Central without relying on the Grand Central app for management of the stacks.

  • Follow all the same steps as deploying Grand Central to an AWS Organization (Create and provide a Master Account credential, add accounts in the organization to Grand Central, then deploy one or more Stacksets to the relevant Organizational Units (OUs) and regions).

    • Once this is complete, you can safely remove the Master Account from Grand Central, delete the Master Account user/credential, and even uninstall the Grand Central app. The data collection deployment created by Grand Central will still exist after the app is uninstalled or the Master account credential is deleted.
    • Be sure to note the name of the Stackset deployment.
    • To edit or roll back the deployment, you will need to interact with the Stackset in the AWS console.
    • In this approach, Grand Central is used as a quickstart for deployment, and long term management/maintenance is done within native AWS services.
  • Instead of deploying the Stackset through Grand Central, you can manually deploy the same Stackset within the AWS Console or through the AWS CLI. This is a straightforward process, described in more detail below (a minimal sketch follows this list).

    • The first step is to create a template using Project Trumpet. This template should be deployable to all accounts and regions that you plan to deploy to, meaning that if CloudWatch Logs or VPC Flow Log log groups are provided, they must exist (with the same name) in each account/region that the Stackset will deploy the template to. Services like CloudTrail exist in all regions/accounts and can be included.

    • The next step is to deploy the template using AWS Stacksets in the Master Account with trusted access enabled. From here, you can select which Organizational Units (OUs) you would like the template to apply to, as well as regions.

    • Note that Grand Central deploys using the assumption that the Master Account has trusted access enabled; this allows the Master Account to deploy the Stackset to accounts within the organization without having to configure trust policies for each account. See more about this approach here.
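
A minimal boto3 sketch of the manual path described above, assuming a Trumpet-generated template saved locally and trusted access already enabled; the StackSet name, template filename, OU ID, and regions are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the StackSet in the Organization's management (Master) account.
# SERVICE_MANAGED permissions require trusted access between CloudFormation
# StackSets and AWS Organizations to be enabled.
with open("trumpet_template.json") as f:  # placeholder Trumpet-generated template
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="grandcentralCloudTrail",  # placeholder StackSet name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy stack instances to every account in the chosen OUs and regions.
cfn.create_stack_instances(
    StackSetName="grandcentralCloudTrail",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-xxxx-11111111"]},  # placeholder OU id
    Regions=["us-east-1", "us-west-2"],
)
```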


Issue adding AWS account

opened on 2022-04-27 09:41:20 by Shubham191297

I am facing the below handler issue when trying to add accounts in the Grand Central app. I have correctly installed the app on the correct Splunk version but am still facing this error.

updated version

opened on 2022-01-22 06:53:07 by gayatrisingh31

Python version

opened on 2022-01-18 18:50:45 by ghost

We've recently updated our Grand Central and it seems that it still uses Python 2.7, while AWS no longer supports this. Can this change be added in?

Resource handler returned message: "The runtime parameter of python2.7 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (python3.9) while creating or updating functions. (Service: Lambda, Status Code: 400, Request ID: 08294c07-c42b-4907-ac74-8445de19b7d5, Extended Request ID: null)" (RequestToken: af517b70-865c-3130-c7e1-4cb5acf669d2, HandlerErrorCode: InvalidRequest)

Copy of generic Cloud Formation template generated by Grand Central to compare to Trumpet templates

opened on 2021-10-14 16:11:34 by faleon


My team is interested in using Grand Central but we'd like to compare the templates generated by Grand Central to Trumpet to see if there are any significant differences before making the switch. We haven't been able to get the app to work when importing it to our search head, so I'm hoping you can provide something generic.

We are trying to capture the following sources if we use Grand Central:

  • CloudTrail
  • Config Notifications
  • GuardDuty
  • VPC Flow Logs

If you need any more information, please let me know.

Best, Faleon

Vetting fails for grand_central_300.spl

opened on 2020-11-19 12:12:48 by stuart-king-vm

Currently I am unable to upload the app to our Splunk cloud instance. It fails on the following vetting.

Auto-Generated FAILED vetting comment: [Cloud Vetting][Liverpool] #Custom 3.0.0 - grand_central Review fails vetting and cannot be installed. This is a preliminary report. More issues may be found upon further review. Thank you for your app install request. Your app did not meet security and functionality requirements for Splunk Cloud for the following reasons:

Failed and warning manual checks:

  • JSON file standards [failure] - Check that all JSON files are well formed. Malformed JSON file found. Error: Expecting value: line 1 column 1 (char 0). File: aws_artifacts/IAM_Policy/GCPolicy_Rev2a.json
  • Python file standards [failure] - Check for UDP network communication. Please check for inbound or outbound UDP network communications. Any programmatic UDP network communication is prohibited due to security risks in Splunk Cloud and App Certification. The use or instruction to configure an app using Settings -> Data Inputs -> UDP within Splunk is permitted. (Note: UDP configuration options are not available in Splunk Cloud and as such do not impose a security risk.) File: bin/3rdparty/botocore/ Line Number: 207

Creating Cloudwatch Logs Grand Central config

opened on 2020-07-10 19:32:29 by grainger-ryanm

I tried to create a Cloudwatch Logs Grand Central config for one of my accounts and it did not work. Couldn’t really tell if it was the permissions like before or something to do with the logging subscriptions to the firehose or what. Here are some screenshots with what I was trying to accomplish as well as the CFN stack that was produced.



I think it only assigned the 1 log group to the firehose? This may be a problem with firehoses, i.e. only 1 log group can be assigned to 1 firehose.


This is the preview of the Lambda code. I'm not sure that the included zip file "" is being read correctly, or at all. Also, the import statements are highlighted here because I know AWS removed a lot of built-in packages that used to be standard (our headache came specifically from requests being removed; though it has not been taken down yet, we are working on a solution to add it back in via Lambda layers).

The Lambda function threw the same error each time; screenshot below.


```
Not a gzipped file: IOError
Traceback (most recent call last):
  File "/var/task/", line 191, in handler
    records = list(processRecords(event['records']))
  File "/var/task/", line 78, in processRecords
    data = json.loads(
  File "/usr/lib64/python2.7/", line 260, in read
    self._read(readsize)
  File "/usr/lib64/python2.7/", line 302, in _read
    self._read_gzip_header()
  File "/usr/lib64/python2.7/", line 196, in _read_gzip_header
    raise IOError, 'Not a gzipped file'
IOError: Not a gzipped file
```

Not sure if this is running an older version of the lambda code (written in python2.7) or if the code itself won’t run because of the missing dependencies.
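
For context on the error itself: CloudWatch Logs records delivered through Firehose arrive base64-encoded and gzip-compressed, so the processor's gzip step fails if a record is not in that format. A small illustrative sketch (not the app's actual processor code):

```python
import base64
import gzip
import json

# CloudWatch Logs subscription records delivered through Firehose are
# base64-encoded, gzip-compressed JSON; a "Not a gzipped file" error usually
# means a record that is not in this format reached the processor.
def decode_cloudwatch_record(record):
    payload = base64.b64decode(record["data"])
    return json.loads(gzip.decompress(payload))
```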

Things I have tried:

  • Changing IAM permissions (changing resources to *, adding all permissions from s3, lambda, processor and delivery role all to the processor role)
  • Updating the firehose's Lambda function with the newest Splunk blueprint for the firehose processor (gave it the same IAM role as the one updated above)
  • Manually subscribing the second log group to the firehose, which complained that the firehose was not active (though I confirmed that it was)



Any insight you can offer would be appreciated.
