Logs¶
When AMI runs, 3 different types of log file are generated. The detail and verbosity of these logs can be configured in your local.properties file; see here for an overview of the log files and properties. This page explains how to interpret and visualize those logs to troubleshoot performance issues in AMI.
Overview¶
AMI generates 3 different log files in each session:
- AmiLog.log
- AmiLog.amilog
- AmiMessages.log

By default, these are stored in the amione/log directory of the AMI installation being run, but this location can be changed in local.properties by setting the f1.logs.dir property.
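For example, to redirect the logs to a custom location, you might set the property in local.properties like this (the path below is illustrative, not a default):

```
# local.properties: write AMI log files to a custom directory (example path)
f1.logs.dir=/var/log/ami
```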
Both AmiLog.log and AmiLog.amilog contain the same information; however, AmiLog.log is intended to be more human-readable and is the log file you will typically refer to when encountering issues in AMI.
AmiLog.amilog files, on the other hand, can be used in conjunction with the AMI log viewer to visualize information about the session. This can be useful for troubleshooting performance issues, providing insight into memory usage, messages, run time, etc.
The log viewer provides charts detailing the following areas:
- Memory
- Messages
- Threads
- Events
- Objects
- Web
- Web Balancer
- Timers
- Triggers
- Procedures
To understand individual charts and graphs, please see the Interpreting the Logs section below.
AMI Log Viewer File¶
Download the layout here, or contact support@3forge.com.
Setup¶
Launch an AMI instance and import the layout by going to File -> Import and pasting the layout JSON into the window.
The log files will be the datasource that the log viewer builds its visualizations on.
To add them, navigate to the Data Modeler under Dashboard -> Data Modeler. You will see an undefined datasource titled "AmiLogFile."
Right-click on this datasource and click "Edit Datamodel."
Then navigate to the "Flat File Reader" datasource adapter option and enter the path to the directory containing the log files of interest (by default, /amione/log).
Choose Log File¶
For general diagnostics, use AmiOne.amilog files.
Under Datasource, select "AmiLogFile" and, from the "File Name" dropdown menu, select the log file to analyze. You may need to click the "Refresh" button if you see no entries.
AmiOne.log¶
You can also use the viewer to load and view AmiOne.log files for evaluating query performance in AMI.
- Go to Windows -> AmiQueriesPerformance.
- Search for a .log file, e.g. AmiOne.log.
- Click "Run Analytics" on the right side of the field.
- Go to the "Performance Charts" tab.
- You can change the legend grouping in the top-left panel and filter the chart dots with the bottom-left panel. Feel free to sort the table in the bottom panel to suit your needs.
Interpreting the Logs¶
For most users, the log viewer is primarily intended for identifying potential memory (crashes) and performance (sluggishness) issues.
Below are some of the common graphs and information on how to interpret them.
Memory Details¶
The memory graph in the log viewer contains 3 key pieces of information:
- How much memory is being used (the green area)
- The memory the OS has allocated to AMI (the blue line)
- The maximum memory that AMI can request (the red line)
This shows how the JVM requests and allocates memory for AMI. The JVM continually performs garbage collection, but may periodically perform large collections (see 8:36 in the image) if memory use approaches the allocated threshold. At that point, the JVM will request more memory from the OS, provided the maximum memory threshold has not been exceeded.
The JVM allocates and requests memory from the OS automatically. If the JVM requires more memory but the OS cannot release it to the JVM, AMI can crash.
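If you want to correlate the memory chart with the JVM's own garbage-collection activity, one option is to enable standard GC logging; this assumes you can pass extra flags to the JVM that runs AMI:

```
# Java 9+: unified GC logging to a file
-Xlog:gc*:file=gc.log

# Java 8 and earlier: legacy GC logging flags
-verbose:gc -XX:+PrintGCDetails
```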
To avoid dynamic JVM behavior, you can change the initial memory to match the maximum allocated memory. Add the following options to your Java VM Properties, replacing <VALUE> with the amount of memory you wish to allocate:
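A minimal sketch using the standard JVM heap flags, assuming your AMI launch configuration passes these options through to the JVM; -Xms sets the initial heap size and -Xmx the maximum:

```
-Xms<VALUE>
-Xmx<VALUE>
```

For example, -Xms8g -Xmx8g fixes the heap at 8 GB, so the JVM never needs to request additional memory from the OS mid-session.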
Note
Peaks in AMI memory usage where used memory approaches the allocated memory are not inherently a cause for concern, provided that levels fall after garbage collection. However, if memory use is consistently high, it may be necessary to increase memory capacity and/or allocation.
OOM Killer (Linux)¶
On Linux machines, as memory reaches capacity, the Linux OOM killer may kill non-essential processes to free up memory for the OS, prioritizing the largest (most memory-consuming) non-vital process. When AMI is killed this way, it is logged in AmiOne.log with a process ID denoting that AMI was killed by the OOM killer.
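To confirm an OOM kill independently of AMI's own logs, you can check the kernel log; these are standard Linux commands, not AMI-specific tooling:

```
# Search the kernel ring buffer for OOM-killer activity
dmesg | grep -i "out of memory"

# On systemd-based systems, the kernel journal works too
journalctl -k | grep -i "killed process"
```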
The dynamic behavior of the JVM can trigger this, so we suggest setting the initial memory value to the maximum (as described above) to prevent the OS from overcommitting memory.
Warning
For sensitive use-cases, you may want to consider changing the OOM score of AMI, or disabling the OOM killer entirely (though this is potentially risky).
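As a sketch, on most Linux systems the OOM score can be adjusted per process through /proc (requires root; the PID 12345 below is hypothetical):

```
# Exempt the AMI process (hypothetical PID 12345) from the OOM killer;
# -1000 is the minimum oom_score_adj and disables OOM selection entirely
echo -1000 > /proc/12345/oom_score_adj
```

Exempting a large process means the kernel must kill other processes instead when memory runs out, so use this with care.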
Objects¶
Shows the row counts for tables in the AMI center.
Msgs Queued¶
This graph shows how many messages are being queued in AMI. If these values are high, this is not an immediate cause for concern provided the messages are being cleared (a steady stream of messages being processed).
If the number of queued messages increases over time (a steep gradient), messages aren't being cleared as fast as new ones arrive. This could indicate issues with timers and triggers and may result in AMI behaving slowly.
Msgs Per Second¶
Messages per second offers insight into the level of activity of messages being received, which can be helpful in identifying trends such as peak data influx times.
Msgs Processed¶
This is a cumulative graph of how many messages have been processed over time and is rarely a point of concern.
Events¶
Event graphs are useful for identifying issues unrelated to AMI.
Observing peaks here usually means the external feeds that your application subscribes to are experiencing a high volume of data traffic or issues on their end.
Web¶
Active¶
The number of users connected to a session.
HTTP Services¶
The number of messages being sent between the server and a user's browser/session.
Rows¶
This graph is particularly important for understanding memory usage. Generally speaking, the more rows, the more memory is consumed.
- Cached rows (green legend)
    - For datamodels that run on subscription.
    - As new rows are fetched from the datasource, they are sent to a cache in the web.
- User rows (red legend)
    - Rows in static datamodels.
- User hidden rows (mustard legend)
    - Rows hidden in the web but present in datamodels.
These charts can be used to identify redundant or data-intensive rows.
Timers, Triggers, and Procedures¶
If your AMI session is sluggish and slow to respond, these are typically the primary culprits.
A steep gradient on either timers, triggers, or procedures indicates that there may be issues with logic or implementation. This is especially the case if you have one trigger that is drastically more expensive than the others.
The run time charts can help determine which individual trigger, timer, or procedure is exhibiting sub-optimal behavior.
Additional Help¶
The log viewer is a useful tool to help diagnose issues with your AMI runtime. If you require more help interpreting your logs, or advice on how to implement solutions, please do not hesitate to contact us at support@3forge.com.