1. Do I need to set up the TruSTAR configuration in the TruSTAR add-on or in the main TruSTAR app?
The configuration belongs to the TruSTAR Add-on; the add-on pulls data from our API and indexes it in Splunk.
2. Will our technology add-on work with a Universal Forwarder?
Our Technology Add-on requires a heavy forwarder if you are deploying in a cluster setup. The Universal Forwarder does not come bundled with Python or a user interface, both of which are required to set up our Technology Add-on. Splunk needs Python to connect to our REST API and do some pre-processing on the response; Universal Forwarders cannot connect to a REST API or process data before it is indexed.
Splunk documentation for upgrading to a Heavy Forwarder in case that's of interest: https://docs.splunk.com/Documentation/Forwarder/7.0.0/Forwarder/Upgradeauniversalforwardertoaheavyforwarder
3. Since we have a clustered environment, I installed the TruSTAR add-on on the cluster master node, but it didn't get installed on the cluster peers. Do I need to install it manually in the cluster master's $SPLUNK_HOME/etc/master-apps and then apply the bundle, or do I need to install it manually on every cluster indexer?
The TruSTAR Add-on is not installed automatically on cluster peers. Perform the following steps on the Cluster Master:
- Copy the extracted TA_trustar folder into $SPLUNK_HOME/etc/master-apps.
- Verify the index stanza in $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf; the stanza name should match the index selected in the modular input on the heavy forwarder:
[trustar]
coldPath = $SPLUNK_DB/trustar/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/trustar/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/trustar/thaweddb
repFactor = auto
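After copying the add-on and verifying the index stanza, the bundle still needs to be validated and pushed from the Cluster Master to the peers. A minimal shell sketch, assuming a default install location of /opt/splunk; the helper echoes each command when no local Splunk binary is found, so the sketch is safe to dry-run:

```shell
# Minimal sketch; assumes Splunk Enterprise at /opt/splunk (hypothetical default).
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
SPLUNK_BIN="$SPLUNK_HOME/bin/splunk"

# Echo the command instead of executing it when no local Splunk binary exists.
run() { if [ -x "$SPLUNK_BIN" ]; then "$SPLUNK_BIN" "$@"; else echo "DRY-RUN: splunk $*"; fi; }

run validate cluster-bundle --check-restart   # sanity-check the bundle first
run apply cluster-bundle --answer-yes         # push the bundle to all peers
run show cluster-bundle-status                # watch the rollout progress
```

The peers restart automatically if the bundle requires it, so schedule the push accordingly.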
4. Do I need to install the TruSTAR app on all search heads?
Yes, you need the TruSTAR app on all search heads, but you do not have to install it manually on each one. Follow the steps below on the Search Head Deployer:
- Copy the extracted folders of TA_trustar and the Trustar App into $SPLUNK_HOME/etc/shcluster/apps.
- If you selected an index other than the default in the modular input configuration on the heavy forwarder, modify the macro definition of "trustar_get_index" in $SPLUNK_HOME/etc/shcluster/apps/Trustar/default/macros.conf; otherwise skip to the next step.
- Push the app to the search head cluster using the command below:
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target <SH_URI>:<management_port> -auth <username>:<password>
Note: The target should be the node currently elected as captain. You can find the captain by running a cluster status command.
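The captain can be identified from any cluster member with `splunk show shcluster-status`. A minimal sketch, assuming Splunk lives at /opt/splunk; the credentials are placeholders, and the helper echoes the command when no local Splunk binary exists:

```shell
# Minimal sketch; credentials and paths are placeholders.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
SPLUNK_BIN="$SPLUNK_HOME/bin/splunk"

# Echo the command instead of executing it when no local Splunk binary exists.
run() { if [ -x "$SPLUNK_BIN" ]; then "$SPLUNK_BIN" "$@"; else echo "DRY-RUN: splunk $*"; fi; }

# Run on any search head cluster member; the output lists the current
# captain's label and management URI.
run show shcluster-status -auth 'admin:<password>'
```

Use the captain's management URI as the <SH_URI>:<management_port> target when applying the shcluster bundle.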
5. Am I able to configure custom fields for matches? For example, can I match only on TruSTAR's IPs or URLs, not everything?
This is currently not possible with TruSTAR's Splunk V1 app; however, users can run custom search queries to achieve this. The new release, TruSTAR Splunk App V2, will be more configurable, and users will have the ability to select the indexes the app searches against.
6. Does this app store data into Splunk, i.e. will it consume license and storage?
The TruSTAR app will consume storage; this is standard for threat intelligence applications that let you match intel against your local Splunk instance. The TruSTAR platform differentiates itself from the competition by focusing on high-value IOCs that have been submitted by other analysts, which means users will not be receiving large volumes of data.
7. How do I find out if TruSTAR app search is consuming a lot of Splunk memory?
This Splunk knowledge base article explains how to identify your top memory-consuming searches: http://docs.splunk.com/Documentation/Splunk/7.0.0/Troubleshooting/Troubleshootmemoryusage
8. The “export PDF” function on the TruSTAR app on Splunk doesn’t appear to work. How can I export a report?
The Splunk platform cannot export a full PDF of the report in the layout our app uses. As a workaround, use your browser to save the report; this ensures the full report is downloaded with a similar appearance.
9. I receive the authentication error "Authentication Failed ! Please verify URL, API key, and Secret Key of TruSTAR to Connect." when configuring the TruSTAR App.
- Check to make sure all your credentials are entered correctly.
- Verify that you have write and read access for the enclave you have selected.
- Check whether your firewall is blocking traffic to or from TruSTAR.
10. If I have Splunk ES, can I incorporate data from the TruSTAR app into ES?
Currently the TruSTAR app does not have a native integration with Splunk ES. Version 2 of the TruSTAR app will have this capability.
Splunk V2 FAQs
- A user has an older version of the TruSTAR Splunk app. Do they need to do a clean install of the new V2 app?
Yes, to get the full functionality of the new V2 app, users need to perform a clean install.
- How do I perform a clean install?
Steps to install the new TA and App:
- Delete the old app and add-on from the backend: go to $SPLUNK_HOME/etc/apps/ and remove TA-trustar and Trustar.
- Install the latest builds of both the TA and the App, either from the UI or from the backend.
- Install from the UI: go to Apps -> Manage Apps, click the "Install App from file" button, select the latest build of the TA, and install. Repeat the same steps for the main App.
- Install from the backend: copy both builds under $SPLUNK_HOME/etc/apps/ and extract them.
Note: After successful installation, follow the TA configuration section in the TruSTAR knowledge base. Create a new index and assign it in the Modular Input, and also update the macro as mentioned in the documentation.
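The backend steps above can be sketched as a short script. Everything here is a dry-run by default (set DRY_RUN=0 to actually execute), and the archive file names are hypothetical:

```shell
# Dry-run by default: commands are echoed, not executed. Set DRY_RUN=0 to run them.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
APPS="$SPLUNK_HOME/etc/apps"
run() { if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; else echo "DRY-RUN: $*"; fi; }

# 1. Remove the old add-on and app (back them up first if in doubt).
run rm -rf "$APPS/TA-trustar" "$APPS/Trustar"

# 2. Extract the new builds into the apps directory.
#    The archive names are hypothetical; use the builds you downloaded.
run tar -xzf TA-trustar-latest.tgz -C "$APPS"
run tar -xzf Trustar-latest.tgz -C "$APPS"

# 3. Restart Splunk so the new apps are picked up.
run "$SPLUNK_HOME/bin/splunk" restart
```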
- How will this affect the old data already stored? Will that need to be deleted?
This does not affect old data that is already stored; users can keep the old index along with its data. However, the user has to select a newly created index in the new setup when creating the Modular Input, so all new data will be collected in the new index. The app will show only data collected in the new index. Users must update the macro "trustar_get_index" to point at the new index (e.g. index="<new_index_name>") so that only new data is considered in the dashboards for the latest App.
What's the expected search load?
This is difficult to quantify on an individual basis without testing (not done in a POV). It can all be tailored by the user: the end user has access to all the levers they need to manage this load. Customers who are well-versed in the Splunk query language can also optimize the queries and update them as they see appropriate.
What type of data does this pull in?
The Splunk app imports two core data types: Technical Indicator of Compromise objects (e.g. IP addresses, email addresses, URLs, file hashes, file names etc.) and Report objects (which have human readable context) from our platform.
What does the polling interval affect? If we do it less frequently or more frequently what is the cost-benefit?
The larger the polling interval, the less real-time the Splunk instance will be relative to the TruSTAR enclaves the customer has elected to import from our platform. A smaller polling interval keeps the data more real-time but consumes more API calls and increases the processing load on the Splunk instance. Our best-practice recommendation is to keep the default polling intervals. Note that keeping the data sync close to real time only makes sense in very specific use cases; we work with our customers to fine-tune the polling interval based on their use case.
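For illustration, the polling interval is typically an `interval` attribute on the modular input's stanza in a local inputs.conf. The stanza name and values below are hypothetical, not the add-on's exact schema:

```
# $SPLUNK_HOME/etc/apps/TA-trustar/local/inputs.conf (illustrative sketch)
[trustar://my_trustar_input]
# Seconds between polls: larger values mean fewer API calls but less real-time data.
interval = 300
index = trustar
```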
What would you expect the daily ingest to be in MB/GB?
It depends on the number of enclaves that data is being pulled from. One of our customers had a Splunk instance running for weeks in the cloud, importing from ~15+ enclaves, and the index's disk size was less than 100 MB. In most normal scenarios the quantity of data ingested is likely to be on the order of single-digit MBs per day.
How many scheduled searches should we expect to be run? How does that work for the TA and the App?
By default our Splunk App has 30 scheduled searches that run 1-4 times per hour. Our best-practice recommendation is to search TruSTAR data against the last 24-72 hours of log data, which can be fine-tuned based on the volume of the log data. Users can customize the frequency and timing of these searches, and some of the default searches can be disabled based on the use case. For example, users may not consider some of our IOC types (e.g. bitcoin addresses, registry keys, CVEs) to be threats they monitor for, so the searches that scan the customer's log data for those IOC types could be deactivated altogether, or set to run once a day or once a week. This customization is possible, but it requires the customer to have advanced Splunk expertise. We can work with our customers to fine-tune these operations.
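As a sketch, an individual default search can be disabled or rescheduled with a local savedsearches.conf override. The search names below are hypothetical; use the actual names from the app's Searches, reports, and alerts page:

```
# $SPLUNK_HOME/etc/apps/Trustar/local/savedsearches.conf (illustrative sketch)
[TruSTAR - Bitcoin Address Matches]
# Turn this search off entirely.
disabled = 1

[TruSTAR - CVE Matches]
# Keep this search enabled, but run it once a day at 02:00 instead of hourly.
cron_schedule = 0 2 * * *
```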
How does the ES App work differently?
Our ES app works in tandem with our TruSTAR app. Notable events are created in ES when indicators from the TruSTAR platform match against logs in Splunk.