Logs¶
When AMI runs, 3 different log file types are generated. The detail and verbosity of these logs can be configured in your local.properties file.
This page contains information on how to interpret and visualize the log files to troubleshoot performance issues in AMI.
Overview¶
AMI generates 3 different log files in each session:
AmiOne.log
AmiOne.amilog
AmiMessages.log
By default, these are stored in the amione/log
directory of the AMI installation being run, but this location can be changed in local.properties
by setting the f1.logs.dir property.
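For example, a minimal local.properties entry might look like this (the directory shown is purely illustrative):

```
f1.logs.dir=/opt/ami/amione/log
```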
Both AmiOne.log and AmiOne.amilog contain the same information, but AmiOne.log is more verbose and intended to be more human-readable. This is helpful if you are looking for specific information to diagnose issues with AMI performance.
The level of detail and information recorded in the log files can be tuned in your local.properties file. See this section of the common properties to configure the level of detail you require from the logs.
AmiOne.amilog files can be used in conjunction with the AMI log viewer to visualize information about the session. If you are experiencing performance issues, these files can be used for troubleshooting by providing insight into memory usage, messages, run time, etc.
The log viewer provides charts detailing the following areas:
- Memory
- Messages
- Threads
- Events
- Objects
- Web
- Web Balancer
- Timers
- Triggers
- Procedures
To understand individual charts and graphs, please see this section of the document. Otherwise, to read and interpret the information contained in the file directly, see here.
AMI Log Viewer File¶
Download the layout here, or contact support@3forge.com.
Setup¶
Launch an AMI instance and import the layout into AMI by going to File -> Import and pasting the layout JSON into the window.
The log files will be the datasource that the log viewer builds its visualizations on.
To add them, navigate to the Data Modeler under Dashboard -> Data Modeler. You will see an undefined datasource titled "AmiLogFile."
Right-click on this datasource and click "Edit Datamodel."
Then navigate to the "Flat File Reader" datasource adapter option and enter the path to the directory containing the log files of interest (by default, this is /amione/log).
Choose Log File¶
For general diagnostics, use AmiOne.amilog files.
Under Datasource, select "AmiLogFile" and from the dropdown menu of "File Name", select the log file to analyze. You may need to click the "Refresh" button if you see no entries.
AmiMessages.log¶
You can also use the viewer to load and view Messages.log
files for evaluating query performance in AMI.
- Go to Windows -> AmiQueriesPerformance.
- Search for a .log file, e.g. Messages.log.
- Click "Run Analytics" on the right side of the field.
- Go to the "Performance Charts" tab.
- You can change the legend grouping in the top-left panel and filter the chart dots with the bottom-left panel. Feel free to sort the table in the bottom panel to suit your needs.
Interpreting the Log Viewer¶
For most users, the log viewer is primarily intended for identifying potential memory (crashes) and performance (sluggishness) issues.
Below are some of the common graphs and information on how to interpret them.
Memory Details¶
The memory graph in the log viewer contains three key pieces of information:
- How much memory is being used (the green area)
- The memory the OS has allocated to AMI (the blue line)
- The maximum memory that AMI can request (the red line)
This shows how the JVM requests and allocates memory for AMI. The JVM continually performs garbage collection, but may periodically do large garbage collections (see 8:36 in the image) if memory use approaches the allocated threshold. At this point, the JVM requests more memory from the OS, provided the maximum memory threshold has not been exceeded.
The JVM allocates and requests memory from the OS automatically. In cases where the JVM requires more memory but the OS cannot release that memory to the JVM, this can cause crashes.
To avoid dynamic JVM behavior, you can change the initial memory to match the maximum allocated memory. Add the following options to your Java VM Properties, replacing <VALUE>
with the amount of memory you wish to allocate:
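Assuming the standard JVM heap flags are what is being set here (check your own AMI startup configuration), the options would look like this:

```
# Initial heap size -- set equal to -Xmx so the JVM claims all of its memory up front
-Xms<VALUE>
# Maximum heap size
-Xmx<VALUE>
```

For example, -Xms8g -Xmx8g allocates a fixed 8 GB heap.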
Note
Peaks in AMI memory usage where the memory used approaches the allocated memory are not inherently a cause for concern, provided that levels drop after garbage collection. However, if memory use is consistently high, it may be necessary to increase memory capacity and/or allocation.
OOM Killer (Linux)¶
For Linux machines, as memory reaches capacity, the Linux OOM killer may kill non-essential processes to free up memory for the OS. It prioritizes the largest (most memory-consuming) non-vital process. In AMI, this is logged in AmiOne.log with a process ID denoting that AMI was killed by the OOM killer.
The dynamic behavior of the JVM can trigger this, so we suggest setting the initial memory value to the maximum to prevent the OS from overcommitting memory.
Warning
For sensitive use-cases, you may want to consider changing the OOM score of AMI, or disabling the OOM killer entirely (though this is potentially risky).
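As a sketch of what this might look like on a typical Linux host (the PID is a placeholder, and the exact policy should be agreed with your system administrators):

```
# Check whether the kernel OOM killer terminated the AMI process
dmesg -T | grep -i "killed process"

# Make the AMI JVM much less likely to be chosen by the OOM killer
# (<AMI_PID> is a placeholder for the AMI process ID)
echo -1000 | sudo tee /proc/<AMI_PID>/oom_score_adj
```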
Objects¶
Gives the row counts for tables in the center and shows when new rows are added.
Messages¶
Graph showing messages being sent to and from AMI Center when actions are performed.
Msgs Queued¶
This graph shows how many messages are being queued in AMI. If these values are high, this is not an immediate cause for concern provided the messages are being cleared (a steady stream of messages being processed).
If the number of these messages increases over time (a sharp gradient), then messages aren't being cleared fast enough as new ones arrive. This could indicate issues with timers and triggers and may result in AMI behaving slowly.
Msgs Per Second¶
Messages per second offers insight into the level of activity of messages being received, which can be helpful in identifying trends such as peak data influx times.
Msgs Processed¶
This is a cumulative graph of how many messages have been processed over time and is rarely a point of concern.
Events¶
Event graphs are useful for identifying issues unrelated to AMI.
Observing peaks here usually means the external feeds that your application subscribes to are experiencing a high volume of data traffic or issues on their end.
Web¶
Graphs related to the web server and HTTP connection to the web session.
Active¶
The number of users connected to a session.
HTTP Services¶
The number of messages being sent between the server and a user's browser/session.
Rows¶
This graph is particularly important for understanding memory usage. Generally speaking, the more rows, the more memory is consumed.
- Cached rows (green legend)
  - For datamodels that run on subscription.
  - As new rows are fetched from the datasource, they are sent to a cache in the web.
- User rows (red legend)
  - Rows in static datamodels.
- User hidden rows (mustard legend)
  - Rows hidden in the web but present in datamodels.
These charts can be used to identify redundant or data-intensive rows.
Timers, Triggers, and Procedures¶
If your AMI session is sluggish and slow to respond, these are typically the primary culprits.
A steep gradient on the timers, triggers, or procedures charts indicates that there may be issues with logic or implementation. This is especially the case if one trigger is drastically more expensive than the others.
The run time charts can help determine which individual trigger, timer, or procedure is exhibiting sub-optimal behavior.
Additional Help¶
The log viewer is a useful tool to help diagnose issues with your AMI runtime. If you require more help interpreting your logs, or advice on how to implement solutions, please do not hesitate to contact us at support@3forge.com.
Interpreting Log Files¶
For quick diagnostics, it is still helpful to read through the AmiOne.amilog files. For further detail, it is generally better to read through the AmiOne.log files.
Below is a guide on how to interpret the different message types in the AMI log files without necessarily using the log viewer.
Overview¶
Information about the state of an AMI instance is recorded in the AmiOne.log files. Broadly, logs can be categorized as web logs, center logs, or relay logs. AMI instances that use all three components will contain all three log types.
These are the primary log types in the AmiOne.log files:
- Memory
- Partition
- Topic
- AmiCenterEvents
See the relevant sections below for a guide to interpreting each log type and the information conveyed.
Memory¶
Memory messages can be broken down primarily into two types: Memory and Process.
Below is an excerpt of what these messages might look like.
This block shows a series of individual memory messages and their respective garbage collectors (the messages with T="Memory").
Following a series of memory messages, AMI also logs a process message, which provides a snapshot of the overall memory state at that point in time.
For understanding memory usage of the system, you will generally only need to interpret the process messages:
- now: The time the message was logged (in UTC format)
- freeMem: The amount of free/available memory in the system
- maxMem: The maximum memory AMI can use (set by the user in their Java options)
- totMem: The total memory available to use (assigned by the JVM)
You can then calculate the used memory from the process messages:
- Used memory = totMem - freeMem
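For example, with purely illustrative numbers: if a process message reports totMem of 4096 MB and freeMem of 1024 MB, then AMI is currently using 4096 - 1024 = 3072 MB.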
The memory graph in the log viewer is built on the process messages and can be used to view the total memory usage and consumption by AMI.
Partition¶
In AMI, tasks are broken down into a series of "partitions," where each partition can be accessed by at most one thread at a given time. There are two message types: Partition and Dispatcher.
In the logs, partition messages look like the following:
A partition is created for a number of different processes, for example:
- for each historical table
- for each user session
- for each datasource added
and more. Each partition message gives a snapshot of that specific partition. Note the inThreadPool field, which denotes whether that partition is being worked on in that particular snapshot.
Similar to memory and process messages, dispatcher messages are logged after the partition messages. They provide an overall snapshot of the total number of partitions and how many are being processed at that given moment. The dispatcher is essentially the partition manager.
Topic¶
Topics are essentially bidirectional subscriptions/feeds from one portion of AMI to another. For example:
This message is a source-to-target topic such that messages are going “in” the direction of the web from the center. Conversely, there is an outward (dir = “out”) message acting as the corresponding recipient.
This is particularly useful in identifying if messages are being received promptly, or if they are being queued up (100 messages out but 1 received means 99 are hovering somewhere in a processing queue).
AMI Center Objects¶
These are a group of log messages that represent the state of different objects in the center:
- AmiCenterTables
- AmiCenterTriggers
- AmiCenterStoredProcs
There is an additional message type, AmiCenterEvents, which provides overall insight into all the objects and events in the center at that time. For more information on AmiCenterEvents, see this section of the document.
Issues with the different center objects, such as triggers, can cause lags in performance. To interpret each individual message type, see the sections below.
AmiCenterTables¶
Each AmiCenterTables message gives the current state of a named table in AMI Center. These can be system tables, prefixed with two underscores (__), or user-created tables.
Here is an example of what this looks like:
The information contained in these messages is:
- type: Name of the table (corresponds to AmiCenterObjects in the log viewer)
- count: Number of rows
AmiCenterTriggers¶
These messages describe the user-created triggers that live in the center and how often they have executed. In the log files, they might look something like this:
Where:
- name: Trigger name assigned by the user
- count: How many times the trigger has run (cumulatively) by that point
- millis: How long the trigger took, in ms
- errors: Number of errors thrown during the trigger. These typically occur on AmiScript triggers due to a user error.
AmiCenterStoredProcs¶
Procedures stored in AMI, both those created by the user and those from the center. Center procedures are listed first and prefixed with double underscores (__AMI_PROCEDURE).
Message format and meaning are as follows:
- name: Name of the procedure
- count: Number of times the procedure has run
- millis: Amount of time in ms that the procedure took to run
- errors: Number of errors thrown during a procedure
AmiCenterEvents¶
AmiCenterEvents messages are logged after the various center objects have been recorded and provide an overall snapshot of the general state of the center and the events that have occurred by that point.
AmiCenterEvents messages follow this format:
Information on both relay and center events is contained in these messages. Visually, this is represented in the "Events" graph in the log viewer.
These messages are also intrinsically linked to "Topics" messages, since the topic will give an indication of the direction of messages being received and processed.
There are two "events" terms in the messages: events and relayEvents:
- relayEvents
  - Relay events can be thought of as packets or bundles of incoming events from a relay. For example:
    - 5 incoming messages arriving at a given time will be packed as 1 relay event.
    - This individual relay event then gets routed where it needs to go, e.g. the center.
- events
  - Total number of events being processed in the center.
A big difference between relay events and events likely indicates a lot of incoming data (a lot of data being packaged before arriving at the center).
Web Server Logs¶
The overall status of the web component in AMI is stored in an AmiWebHttpServer message in the logs. This will only be logged if "web" is in the list of included components in local.properties.
An example of an AmiWebHttpServer message:
This message contains information on the connection state of users to a web session at a given time. It also provides information on the rows (tables) that are visible and accessible by users.
Rows¶
The row fields themselves indicate which rows are visible:
- userRows: Total number of rows visible across all users
- userHiddenRows: Rows hidden by a data filter (where the HIDE option is enabled)
- userHiddenAlwaysRows: Rows permanently hidden by a data filter
- cachedRows: How many rows are cached on the web-server end (visualization) relative to the datasources
Cached rows are a realtime feature only.
Essentially, when any user starts a realtime visualization, that datasource gets cached such that a snapshot of that instance lives on the web for fast loading. This will be shared across all sessions for one dashboard.
It is up to individual user discretion how much data should be loaded onto the front end, but AMI automatically caches tables when web servers spin up. This can be configured in local.properties.