Changes in 0.14.14
- Node will now log an error and exit when writes to RocksDB fail. Previously, it would log the message and continue running, which could lead to data loss.
- Fix off-by-one error in internal metric data storage struct that could cause crashes.
- Added support for FlatBuffer requests to the /graphite/tags/find endpoint, which greatly improves performance for users of Graphite 1.1.
- Fix license expiration date display bug on GUI.
- libmtev 1.6.2
Changes in 0.14.13
- Fix stats and dashboard for NNTBS data
- Enhance snowthsurrogatecontrol to dump all fields, as well as reverse or deleted records.
- Fix various bugs that could result in crashes or deadlocks.
- Various performance improvements.
- Improvements to Graphite tag search - respect Graphite name hierarchy in search results.
- libmtev 1.6.1
Changes in 0.14.12
- Fix proxy bug in the /find API where certain proxy calls were being truncated, leading to incomplete results.
- Added each:sub(x) and each:exp(x) operators to CAQL.
- Performance improvements to full metric delete.
- Deduplicate surrogate IDs from the database on startup.
Changes in 0.14.11
- Fix bug where tagged metrics were not being loaded into the surrogate cache at startup correctly.
- Tune the surrogate asynch update journal settings to improve performance.
Changes in 0.14.10
- Eliminate raw delete timeout.
- Fix bugs in surrogate DB serialization and add additional key validation on deserialization.
Changes in 0.14.9
- Two related bug fixes in the surrogate DB that manifest with metrics whose total stream tag length is more than 127 characters. Metrics with such tag sets could appear to be missing from search results. Metrics that do not have any stream tags, or whose total tag set is less than 127 characters, are not affected.
- Performance improvements to full delete.
- Fix a bug that could cause crashes during reconstitute.
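The 127-character boundary in the tag-length bug above is characteristic of a one-byte, 7-bit length prefix. As a purely illustrative sketch (IRONdb's actual surrogate encoding is not documented here, and this function is an assumption), a varint-style prefix shows why tag sets longer than 127 bytes take a different code path:

```python
# Illustrative only: a common varint length prefix uses 7 bits per byte,
# with the high bit marking "more bytes follow". 127 is then exactly the
# boundary between the one-byte and multi-byte encodings.

def encode_length(n: int) -> bytes:
    """Encode a length as a varint: 7 data bits per byte, high bit = more."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)

# Tag sets up to 127 bytes fit in a single prefix byte...
assert len(encode_length(127)) == 1
# ...while anything longer needs a second byte, i.e. a different code path.
assert len(encode_length(128)) == 2
```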
Changes in 0.14.8
- Add optional metric delete debugging.
- Fix bug that caused hanging when trying to delete certain metrics.
- Fix occasional crash related to reading NNTBS data.
Changes in 0.14.7
- Fix a bug where reconstitute process could get deadlocked and not make progress.
- Fix a potential crash that could occur when reconstituting surrogate data.
- Fix a bug where deleting a metric on a system would not remove the surrogate entry if the metric was not local to the node.
Changes in 0.14.6
- Fix bug where text and histogram data transfer could get hung during reconstitute.
- libmtev 1.5.28
Changes in 0.14.5
- Reclassify an error message as a debug message; the message occurs in a situation that is not a malfunction and can fill the logs.
Changes in 0.14.4
- Fix crash in metric serialization.
Changes in 0.14.3
- Several memory leaks fixed.
- Fix reconstitute bug edge case where certain metric names would cause the reconstitute to spin/cease progress.
- Fix bug where certain HTTP requests could hang.
- Change default raw db conflict resolver to allow overriding old data with flatbuffer data from a higher generation.
- Documentation: Add configuration section describing the surrogate database and its options.
- Documentation: Mark the /readnumeric API as deprecated. The rollup API should be used instead.
- libmtev 1.5.26
Changes in 0.14.2
- Several memory leaks fixed.
- Improved memory utilization.
- Performance improvements.
- Increased speed of surrogate cache loading at startup.
- Added the snowthsurrogatecontrol tool, which allows offline review and modification of the surrogate database.
Changes in 0.14.1
- Improvements to raw-to-NNTBS rollup speeds.
- Fix error messages that were printing an uninitialized variable.
- Handle escaped Graphite expansions that are leaves.
- Performance improvements via smarter use of locking.
- More aggressive memory reclamation.
- libmtev 1.5.23
Changes in 0.14.0
- Change some internal HTTP response codes to be more REST compliant/accurate.
- Improve error checking when opening NNTBS timeshards.
- Improve surrogate DB startup informational logging.
- Various memory usage optimizations to reduce the amount of memory needed for snowthd to operate.
- Remove global variables from Backtrace.io traces.
- Add ability to delete surrogates from the system that are no longer used.
- Remove temporary files used during reconstitute - there were a handful of files staying on disk and taking up space unnecessarily.
- Increase timeout for pulling raw data during reconstitutes.
- Move duplicate startup message to debug log - not actually an error, so should not be reported as one.
- Adopt multi-level hash strategy for Graphite searches, making them faster and more memory-efficient, with the primary focus on memory efficiency.
- Fix logging bug where long lines could end up running together.
- Fix crash bug in histogram fetching API.
- libmtev 1.5.19
Changes in 0.13.9
- Installer and startup wrapper will update ownership of /opt/circonus/etc/irondb.conf to allow for automatic updating of the topology configuration during rebalance operations.
- Performance improvements to parsing surrogate database at startup.
- Fix some potential crashes.
- Disable saving ptrace stdout output files in the default circonus-watchdog.conf file.
Changes in 0.13.8
- Expose more jobq modification via console.
- Fix wildcard/regex queries inside tag categories.
- Fix issue where certain job queues could have concurrency of zero, causing deadlock.
- Add activity ranges to tag_cats/vals.
- Add category param to tag_vals.
- libmtev 1.5.12
Changes in 0.13.7
- Documentation: fix missing rebalance state.
- Add log deduplication to avoid spamming errorlog with identical messages.
- Fix potential deadlock that could be triggered when forking off a process to be monitored by the watchdog.
- Fix some potential crashes/memory leaks.
- When loading a new topology, return 200 status instead of 500 if the topology is already loaded.
- Support tag removal.
- Performance/stability improvements for activity list operations.
- libmtev 1.5.11
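The log-deduplication entry in 0.13.7 above can be sketched as follows. This is an illustrative design, not IRONdb's implementation (the class name, window, and "repeated N times" summary are assumptions): identical messages within a time window are suppressed, and the drop count is reported when the window expires.

```python
import time

class LogDeduper:
    """Suppress repeats of an identical log message within a time window,
    reporting how many copies were dropped once the window expires."""

    def __init__(self, window_secs=60.0, clock=time.monotonic):
        self.window = window_secs
        self.clock = clock
        self.seen = {}  # message -> [window_start_ts, suppressed_count]

    def should_log(self, msg):
        """Return (emit?, text). Text includes a repeat summary if any."""
        now = self.clock()
        entry = self.seen.get(msg)
        if entry is None or now - entry[0] >= self.window:
            suppressed = entry[1] if entry else 0
            self.seen[msg] = [now, 0]  # start a new window for this message
            if suppressed:
                return True, f"{msg} (repeated {suppressed} times)"
            return True, msg
        entry[1] += 1                  # within window: swallow the repeat
        return False, None
```

A fake clock makes the behavior easy to see: the first "disk error" is emitted, the immediate repeat is swallowed, and after the window the summary line appears.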
Changes in 0.13.6
- Move Zipkin setup messages out of the error log and into the debug log.
- Skip unparseable metric_locators during replication.
- Turn off sync writes in tagged surrogate writer.
- Fix potential crashes when check_name is NULL.
Changes in 0.13.5
- Disable asynch core dumps by default.
- Use the metric source for incoming metrics instead of hardcoding to RECONNOITER.
- Fix some potential use-after-free crashes.
- Fixed a crash where we would erroneously assume null termination.
- Performance and correctness fixes to internal locking mechanism.
- Fix some instances where we would potentially attempt to access a null metric name.
Changes in 0.13.4
- An installer bug introduced in 0.13.1 set incorrect ZFS properties on some datasets.
New installs of 0.13.1 or later may need to run the following commands to
restore the correct property values. Existing deployments that upgraded from
version 0.13 or earlier were not affected.
zfs inherit -r quota <poolname>/irondb/data
zfs inherit -r quota <poolname>/irondb/nntbs
zfs inherit -r quota <poolname>/irondb/hist
zfs inherit -r quota <poolname>/irondb/localstate
zfs inherit -r quota <poolname>/irondb/logs
zfs inherit -r quota <poolname>/irondb/lua
zfs inherit -r quota <poolname>/irondb/metric_name_db
zfs inherit -r logbias <poolname>/irondb/redo
zfs inherit -r logbias <poolname>/irondb/text
- Fix memory leaks and invalid access errors that could potentially lead to crashes.
- libmtev 1.5.7
Changes in 0.13.3
- Fix hashing function for the reverse surrogate cache.
- Fix loading of metrics db index when iterating surrogate entries on startup.
- Improve logging for surrogate db when there are ID collisions.
- Accept check name and source in /surrogate/put - do not allow duplicate surrogate ids in the cache.
- Performance improvements to inter-node gossip and NNTBS data writing.
- Allow purging metrics from in-memory cache.
- Fix some potential crashes on unexpected data.
- Allow using tag search to define retention period for metrics.
Changes in 0.13.2
- Fixes for journal surrogate puts and activity rebuilds.
- Fix bug where software would loop forever if journal writes were in the future.
Changes in 0.13.1
- Various performance improvements.
- Use progressive locks in surrogate DB.
- Documentation: fix incorrect header name for raw data submission with Flatbuffer.
- Allow deleting metrics by tag.
- Allow deleting all metrics in a check.
- Allow deleting metrics based on a wildcard for NNT, text, or histogram data.
- Allow 4096 chars for metric name ingestion.
- New CAQL function package: group_by:* provides functions to aggregate metrics by tags.
- libmtev 1.5.5
Changes in 0.13
- Service config change for EL7: We now ship a native systemd service unit configuration, rather than a traditional init script. The unit name remains the same, but any configuration management or other scripting that uses service commands should be updated to use systemctl.
- Installer: better validation of user input.
- Config option to disable Activity Tracking, which can cause write latency spikes at higher ingest volumes. A fix for this behavior will be coming in a future release.
- Add an attribute to irondb.conf to disable tracking.
- Note that certain search parameters that depend on activity tracking will not work while tracking is disabled, and may not be accurate if tracking is reenabled after some time. Any search query that uses activity_end_secs will not work when tracking is disabled.
- Memory leak fixes in Graphite result handling.
- New CAQL functions:
- libmtev 1.4.5
Changes in 0.12.5
- Crash fix on unparseable metric names
- Journal fix in pre_commit mmap space
Changes in 0.12.4
- More memory leak fixes
- Fixes for graphite tag support
- Fix for greedy name matching in graphite queries
- Support blank tag values
- CAQL if statements and negation operators
- CAQL optimizations
- Support for building/rebuilding higher level rollups from lower level rollups
- Rebalance adds a new completion state to fix races when finishing rebalance ops
Changes in 0.12.3
- More memory leak fixes in name searches
- Rebalance fixes
- Embed a default license if one isn't provided
- Support for raw deletes
- Add raw delete API
Changes in 0.12.2
- Fix memory leak in name searches
Changes in 0.12.1 (unreleased)
- Enable heap profiling
Changes in 0.12
This release brings several major new features and represents months of hard work by our Engineering and Operations teams.
- New feature: Stream Tags
- These are tags that affect the name of a metric stream. They are category:value pairs, and are searchable.
- Each unique combination of metric name and tag list counts as a new metric stream for licensing purposes.
- New feature: Activity Tracking
- Quickly determine time ranges when a given metric or group of metrics was being collected.
- New feature: Configurable rollup retention for numeric data.
- Retention is per rollup period defined in configuration.
- Operations: There is a one-time operation on the first startup when upgrading to this version.
- As part of Stream Tags support, the metric_name_database has been combined with another internal index and is no longer stored separately on disk.
- The metric name database was always read into memory at startup. After the one-time conversion, its information will be extracted from the other index on subsequent startups. The time to complete the conversion includes the same amount of time to read the existing metric name database as well as to write out an updated index entry for each record encountered. Therefore, it is proportional to the number of unique metric streams stored on this node.
- Operations: The raw_database option rollup_strategy now defaults to raw_iterator if not specified.
- If upgrading with a config that does not specify a rollup_strategy, an active rollup operation will start over on the timeshard it was processing.
- Operations: Add the ability to cancel a sweep delete operation.
- Operations: Remove the reconstitute-reset option (-E) and replace it with a more complete solution in the form of a script, reset_reconstitute, that will enable the operator to remove all local data and start a fresh rebuild.
- CAQL: add methods
- Installer: use default ZFS recordsize (128K) for NNT data. This has been
shown experimentally to yield significantly better compression ratios.
Existing installations will not see any change. To immediately effect these
changes on an existing install, issue the following two commands:
zfs inherit -r recordsize <pool>/irondb/data
zfs inherit -r recordsize <pool>/irondb/nntbs
where <pool> is the zpool name. Users of versions < 0.11.1 can omit the second command (this dataset will not be present). The recordsize change only affects new writes; existing data remains at the previous recordsize. If the full benefit of the change is desired, a node rebuild may be performed.
- Documentation: Document the already-required X-Snowth-Datapoints header in the Raw Submission API.
- Documentation: Update the Text and Histogram deletion API docs, which were out of date.
- Documentation: Update formatting on API pages, which were auto-converted from a previous format.
- Performance and stability fixes too numerous to list here.
- Converted UUID handling from libuuid to libmtev's faster implementation.
- Optimized replication speed.
Changes in 0.11.18
- Fix a bug causing unnecessary duplicated work during sweep deletes
Changes in 0.11.17
- Fix for HTTP header parsing edge case
Changes in 0.11.16
- Allow control over max ingest age for graphite data via config
- Optionally provide graphite find and series queries as flatbuffer data
- Fix epoch metadata fetch for NNTBS data
- Reconstitute state saving bug fixes
- Fix cleanup of journal data post replication
- Add hardware selection advice and system profiles
- Correct color rules for latency summaries
- Various small doc fixes
Changes in 0.11.15
- Fix potential use-after-free in raw numeric fetch path.
- Various fixes to NNTBS batch conversion.
- Crash fixes when dealing with NNTBS shards.
- UI changes for Replication Latency display:
- Initially all remote node latencies are hidden, with just the heading displayed. Click on a heading to expand the remote node listing.
- A node's average replication latency is now displayed at the right end of the heading, and color-coded.
- Disable Lua modules when in reconstitute mode.
- Don't hold on to NNT filehandles after converting them to NNTBS.
- Include files and Lua modules.
- New UI replication tab display.
Changes in 0.11.14
- Fix bug in NNT reconstitution
Changes in 0.11.13 (unreleased)
- Fix for throttling during reconstitute operations
- Several small fixes and cleanups
Changes in 0.11.12
- Add an offline NNT to NNTBS conversion mode.
- Default conversion is "lazy", as NNT metrics are read.
- For read-heavy environments this may produce too much load, so the offline option can be used to take one node at a time out of the cluster and batch-convert all its NNT files to NNTBS block storage.
- Performance improvements to gossip replication, avoids watchdog timeout in some configurations.
- Fix several crash bugs in reconstitute, NNTBS, and journaling.
- Silence noisy error printing during NNTBS conversion.
- Formatting fix to a gossip error message (missing newline).
- Add NNTBS dataset to reconstitute procedure.
- New NNTBS conversion-only operations mode.
- Clarify that in split clusters, write copies are distributed as evenly as possible across both sides.
- Show the gossip age values that lead to green/yellow/red display in the Replication Latency UI tab.
Changes in 0.11.11
- Final deadlock fixes for timeshard management
- Protect against unparseable json coming back from proxy calls
Changes in 0.11.10
- More deadlock fixes for timeshard management
- Note the lazy migration strategy for NNT to NNTBS conversion.
Changes in 0.11.9
- Fix deadlock that can be hit when attempting to delete a shard during heavy read activity.
- Use new libmtev max_backlog API to shed load under extreme conditions.
- Internal RocksDB tuning to reduce memory footprint, reduce file reads and improve performance.
- Add a tool to repair the raw DB if it gets corrupted, as with an unexpected system shutdown.
- Add a "startup" log to shift certain initialization logs out of the error log.
- Reduces clutter and makes it easier to see when your instance is up and running.
- New installs will have this log enabled by default, written to /irondb/logs/startuplog and rotated on the same policy as the other logs.
- To enable on an existing installation, add this line to /opt/circonus/etc/irondb.conf, in the <logs> stanza (on a single line):
<log name="notice/startup" type="file" path="/irondb/logs/startuplog" timestamps="on" rotate_seconds="86400" retain_seconds="604800"/>
- Appendix with cluster sizing recommendations.
- GET method for
Changes in 0.11.8
- Minor fix to reduce error logging
Changes in 0.11.7
- Minor fixes for histogram database migration
- Add new section on
Changes in 0.11.6
- NNTBS timesharded implementation
- Changes for supporting very large reconstitution
- Do raw database reconstitution in parallel for speed
- Add new section on the sweep_delete API, useful for implementing retention policies
- Add new section on migrating to a new cluster from an existing one.
- Add page documenting
Changes in 0.11.5
- Yield during reconstitute/rebalance inside NNTBS to prevent starvation of other ops
Changes in 0.11.4
- Fix for iterator re-use in error edge case
Changes in 0.11.3
- Safety fix for rollup code
- Corruption fix on hard shutdown or power loss
Changes in 0.11.2
- Crash fix for rollup code
- Lock fix for conversion code
- Changes for new installations - new installations will have different defaults:
granularity goes from 1 day to 1 week.
min_delete_age goes from 3 days to 4 weeks.
delete_after_quiescent_age goes from 12 hours to 2 hours.
rollup_strategy was added. It is fine to mix new nodes installed with these settings with older nodes that have the older settings. It is not fine to change these settings on an existing installation.
Changes in 0.11.1
- Fixes for NNTBS
- Add NNTBS stats to admin UI
- Various smaller fixes
Changes in 0.11
- Store rollup data in a new format yielding better performance on insert and rollup (NNTBS)
- Performance improvements for lua extensions
- Reduce logging to error sink
- Many smaller fixes and improvements
- Dropped support for OmniOS (RIP)
Changes in 0.10.19
- Improve rollup speed by iterating in a more natural DB order, with additional parallelization.
- The setup-irondb script will now log its output, in addition to stdout. It will log to /var/log/irondb-setup.log and, if run multiple times, will keep up to five (5) previous logs.
- The snowthimport tool will now fail with an error if the topology input file contains any node IDs with uppercase letters.
- Note that all supplied UUIDs during initial setup and cluster configuration should be lowercase. If uppercase UUIDs are supplied, they will be lowercased and a warning logged by setup.
Changes in 0.10.18
- Fix crash in fair queueing
- Finish moving rollups to their own jobq
Changes in 0.10.17
- Restore fdatasync behavior from rocksdb 4.5.1 release
- Move rollups to their own jobq so as to not interfere with normal reads
- Implement fair job queueing for reads so large read jobs cannot starve out other smaller reads
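The fair job queueing entry above can be sketched with per-requester FIFOs serviced round-robin. This is an illustrative design under assumed names, not IRONdb's actual jobq code: a requester that enqueues thousands of read jobs cannot starve one that enqueues a single job.

```python
from collections import deque

class FairReadQueue:
    """One FIFO per requester, serviced round-robin, so a large batch of
    read jobs cannot starve smaller reads from other requesters."""

    def __init__(self):
        self.queues = {}      # requester -> deque of pending jobs
        self.order = deque()  # round-robin rotation of requesters

    def submit(self, requester, job):
        q = self.queues.get(requester)
        if q is None:
            q = self.queues[requester] = deque()
            self.order.append(requester)  # new requester joins the rotation
        q.append(job)

    def next_job(self):
        if not self.order:
            return None
        requester = self.order.popleft()
        q = self.queues[requester]
        job = q.popleft()
        if q:
            self.order.append(requester)  # still has work: back of the line
        else:
            del self.queues[requester]
        return job
```

With three jobs from "big" and one from "small", the small requester's job is served second rather than last.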
Changes in 0.10.16
- New rocksdb library version 5.8.6
Changes in 0.10.15
- More aggressively load shed by forcing local data fetch jobs to obey timeouts
Changes in 0.10.14
- Allow config driven control over the concurrency of the data_read_jobq
- Short circuit local data read jobs if the timeout has elapsed
- Add all hidden stats to internal UI tab
Changes in 0.10.13
- Fix potential double free crash upon query cache expiry
Changes in 0.10.12
- Lock free cache for topology hashes
- Fix graphite response when we have no data for a known metric name
Changes in 0.10.11
- Disable cache for topology hashes due to live lock
Changes in 0.10.10
- Validate incoming /metrics/find queries are well formed
- Move query cache to an LFU
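The LFU change above differs from an LRU in what it evicts on overflow. A minimal sketch (assumed class, not IRONdb's cache): the least-frequently-used entry is evicted, so a hot query stays cached even after a burst of one-off queries.

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU cache sketch: on overflow, evict the entry with the
    lowest access count (an LRU would instead evict the least recently
    used entry, letting a burst of one-off queries flush a hot one)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = defaultdict(int)  # key -> access count

    def get(self, key, default=None):
        if key in self.data:
            self.freq[key] += 1
            return self.data[key]
        return default

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1
```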
Changes in 0.10.9
- Fix for crash on extremely long /metrics/find queries
Changes in 0.10.8
- IRONdb now supports listening via the Pickle protocol.
- --writecount argument for limiting the number of data points submitted per request
- Submit to the primary owning node for a given metric
- Disable HTTP keepalive
- --find_closest_name parameter. This is needed for sites that do name manipulation via modules in the metric_name_db and submit metrics with one name but search on them with another name. For example, a metric would get submitted that resembles foo.bar_avg and returned from the metric_name_db as foo.bar. Ingestion of whisper data has to use the foo.bar_avg name, but whisper files on disk do not follow this format. To combat this, a new switch uses the /graphite/metrics/find URL to look up an already-ingested name based on the whisper name as a prefix, and uses that name for metric submission under NNT.
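The --find_closest_name lookup described above can be sketched as a prefix match over already-ingested names. The function name and the "shortest match wins" tie-break here are assumptions for illustration, not whisper2nnt's actual code:

```python
def find_closest_name(whisper_name, ingested_names):
    """Pick the already-ingested metric name that the whisper-derived name
    is a prefix of (e.g. foo.bar -> foo.bar_avg), so submission reuses the
    ingested spelling. Falls back to the whisper name if nothing matches."""
    candidates = [n for n in ingested_names if n.startswith(whisper_name)]
    if not candidates:
        return whisper_name          # nothing ingested yet: keep as-is
    return min(candidates, key=len)  # assume the shortest match is closest
```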
Changes in 0.10.7
- Prevent OOM conditions when there are large chunks of new metric_name_db values
- Pre-populate the metric_name_db cache on startup
- Replace usage of fnmatch with PCRE, fixing some cases where fnmatch fails
- Allow proxied metrics/find queries to utilize the cache
Changes in 0.10.6
- Increased parallelism in metric_name_db maintenance
- whisper2nnt: include in submission those archives with a period coarser than the minimum
- whisper2nnt: re-raise exception after two consecutive submission failures
- Better error handling for topology loading failures
- Several memory-related bug fixes
- The IRONdb Relay installer no longer insists on ZFS, and creates directories instead.
- Explicitly document that cluster resize/rebalance does not support changes to "sidedness". A new cluster and full reconstitute is required for changing to/from a sided cluster.
Changes in 0.10.5
- Eliminate lock contention on a hot path when debugging is not enabled.
- Correct a logic error in choosing the most up-to-date node when proxying.
- Fix escaped wildcard queries when proxy-querying leaf nodes.
- Log-and-skip rather than crash on flatbuffer read errors.
- Crash fix for stack underflow.
- Several whisper2nnt fixes:
- Retry submissions when a connection to IRONdb is reset.
- Sort output before submitting to IRONdb, avoids rewinding epoch on numeric data files.
- New arguments to help with debugging:
- Includes libmtev fix for a startup issue with file permissions.
Changes in 0.10.4
- Fixes for reconstitute status handling.
- Fix use-after-free in graphite GET path.
- Add documentation for irondb-relay, a cluster-aware carbon-relay/carbon-c-relay replacement.
- Merge content for deleting numeric metrics and entire checks.
Changes in 0.10.3
- Ensure metrics injected via whisper2nnt tool are visible.
Changes in 0.10.2
- Another late-breaking fix to speed up writes to the metric_name_db.
Changes in 0.10.1
- Late-breaking optimization to avoid sending /metrics/find requests to down nodes.
Changes in 0.10.0
- New replication protocol format, utilizing Google FlatBuffers. This is a backward-incompatible change. A typical rolling upgrade should be performed, but nodes will not send replication data until they detect FlatBuffer support on the other end. As a result, there may be increased replication latency until all nodes are upgraded.
- Improved error handling during reconstitute.
- New page documenting cluster resizing procedures.
- Add system tuning suggestions to the Installation page.
Changes in 0.9.11
- Reconstitute fixes.
- Fix a bug that prevents a graphite listener from running properly with SSL/TLS on.
Changes in 0.9.10
- Fix bugs in proxying graphite requests where unnecessary work was being triggered.
- Generated JSON was badly formatted when mixing remote and local results.
- Add internal timeout support for graphite fetches.
- Optimize JSON construction for proxy requests.
- Enable gzip compression on reconstitute requests.
- New page documenting the configuration files.
Changes in 0.9.9
- Split graphite metric fetches into separate threads for node-local vs. remote to improve read latency
- Provide a configuration option for toggling LZ4 compression on journal sends (WAL replay to other cluster nodes). The default is on (use compression) and is best for most users.
- To disable compression on journal sends, set an attribute
- Added instructions for rebuilding failed or damaged nodes
Changes in 0.9.8
- Optimize JSON processing on metrics_find responses.
- Additional fixes to timeouts to prevent cascading congestion on metrics_find queries.
Changes in 0.9.7
- Fix for potential thundering herd on metrics_find queries
Changes in 0.9.6
- Fix a performance regression from 0.9.5 in topology placement calculations
- Various minor fixes
Changes in 0.9.5
- Fix lookup key for topology in flatbuffer-based ingestion. Flatbuffer ingestion format is currently only used by the experimental irondb-relay.
- Update to new libmtev config API
Changes in 0.9.4
- Various fixes
Changes in 0.9.3
- Fix race condition on Linux with dlopen() of libzfs
- Crash fix: skip blank metric names during rollup
- Return the first level of metrics_db properly on certain wildcard queries
- More efficient Graphite metric parsing
Changes in 0.9.2
- Improve query read speed when synthesizing rollups from raw data
- Fix double-free crash in handling of series_multi requests
Changes in 0.9.1
- Fix crash in topology handling for clusters of more than 10 nodes
- Check topology configuration more carefully on initial import
- Various stability fixes
- Document network ports and protocols required for operation
Changes in 0.9.0
- Support for parallelizing rollups, which can be activated by adding a "rollup" element to irondb.conf, with a "concurrency" attribute:
<pools>
  ...
  <rollup concurrency="N"/>
</pools>
where N is an integer in the range from 1 up to the value of nnt_put concurrency, but not greater than 16. If not specified, rollups will remain serialized (concurrency of 1). A value of 4 has been shown to provide the most improvement over serialized rollups.
- Fix for watchdog-panic when fetching large volumes of data via graphite endpoints.
- Stop stripping NULLs from beginning and end of graphite responses.
- Do not return graphite metric data from before the start of collection for that metric.
- Optimization for graphite fetches through the storage finder plugin.
- Changes to support data ingestion from new irondb-relay.
Changes in 0.8.35
- Add an option to not use database rollup logic when responding to graphite queries
Changes in 0.8.34
- Throughput optimizations
Changes in 0.8.33
- Fix a bug in database comparator introduced in 0.8.30
Changes in 0.8.32
- Fix a bug with ZFS on Linux integration in the admin UI that caused a segfault on startup.
Changes in 0.8.31
Changes in 0.8.30
- Optimizations for raw data ingestion.
- Better internal defaults for raw metrics database, to reduce compaction stalls, improving throughput.
- Cache SHA256 hashes in topology-handling code to reduce CPU consumption.
- Fix memory-usage errors in LRU cache for Graphite queries.
- Fix memory leaks relating to replication journals.
- Fix for failed deletes due to filename-too-long errors.
Changes in 0.8.29
Changes in 0.8.28
Changes in 0.8.27
- Fix a bug that caused contention between reads and writes during rollup.
- Reduce contention in the raw database write path.
Changes in 0.8.26
- Fix LRU-cache bug for metric queries.
Changes in 0.8.25
- Graphite request proxying preserves original start/end timestamps.
- Increase replication performance by bulk-reading from the write-ahead log.
- Improve reconstitute performance.
- Fix several memory leaks.
- Note: 0.8.24 was an unreleased internal version. Its changes are included here.
Changes in 0.8.23
- Cache /metrics/find queries.
- Improved journaling performance.
- Additional bug fixes.
Changes in 0.8.22
- Efficiency improvement in Graphite queries; we now strip NULLs from both ends of the returned response.
- Fix a bug in Graphite query that would return a closely related metric instead of the requested one.
- Fix a bug that caused us to request millisecond resolution when zoomed out far enough that 1-day resolution would be better.
- First draft of a progress UI for reconstitute.
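The zoom bug above is at heart a rollup-selection problem. A hedged sketch of one way such selection can work (the period table and point threshold here are invented for illustration, not IRONdb's actual rollup configuration):

```python
# Choose the coarsest rollup period that still yields at least min_points
# samples over the requested span, so a widely zoomed-out query uses 1-day
# rollups instead of needlessly fine resolution.

ROLLUP_PERIODS_SECS = [60, 300, 1800, 10800, 86400]  # 1m .. 1d (assumed)

def pick_rollup(span_secs, min_points=300):
    best = ROLLUP_PERIODS_SECS[0]
    for period in ROLLUP_PERIODS_SECS:  # ascending: coarser as we go
        if span_secs // period >= min_points:
            best = period  # coarser period still has enough points
        else:
            break
    return best
```

A one-year span selects the 1-day rollup, while a one-hour span stays at 1-minute resolution.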
Changes in 0.8.21
- Inspect and repair write-ahead journal on open.
- Add a statistic for total_put_tuples, covering all metric types.
- (libmtev) Use locks to protect against cross-thread releases.
Changes in 0.8.20
- Fix for brace expansion in Graphite metric name queries.
- Resume in-progress rollups after application restart.
- Improved reconstitute handling.
- Minor UI fix for displaying sub-minute rollups.
- Crash and memory leak fixes.
Changes in 0.8.19
- Lower default batch size for replication log processing from 500K to 50K messages. Can still be tuned higher if necessary.
- Improve ingestion performance in the Graphite listener.
Changes in 0.8.18
- Fix potential races in replication.
- Speed up metric querying.
Changes in 0.8.17
- (libmtev) Crash fix in HTTP request handling.
- Disable watchdog timer during long-running operations at startup.
- Limit writing metrics forward into new time shards.
- Add multi-threaded replication.
Changes in 0.8.16
- Support brace expansion and escaped queries for Graphite requests.
- Faster reconstituting of raw data.
- Fix metric name handling during reconstitute.
Changes in 0.8.15
- Move Graphite listener connection processing off the main thread to avoid blocking.
Changes in 0.8.14
- Improve replicate_journal message handling.
- Speed up journal processing.
- Increase write buffer and block size in raw database to reduce write stalls.
Changes in 0.8.13
- Reduce CPU usage on journal_reader threads.
- Fix crash during rollup when rewinding the epoch of a data file.
- Increase default read buffer size for Graphite listener.
- Use proper libcurl error defines in replication code.
Changes in 0.8.12
- Remove problematic usage of alloca().
- Add lz4f support to reconstitute.
Changes in 0.8.11
- Speed up reconstitute through parallel processing.
Changes in 0.8.10
- Improve throughput via socket and send-buffer tuning fixes.
- Fix watchdog timeouts when reloading large metric databases.
Changes in 0.8.9
- Preserve null termination in metric names for proper duplicate detection.
Changes in 0.8.8
- Turn off gzip in reconstitute, as testing shows throughput is better without it.
- Avoid performing rollups or deletions on a reconstituting node.
- Memory leak fixes.
Changes in 0.8.7
- Performance fixes for reconstitute.
- Memory leak fixes.
Changes in 0.8.6
- Fix internal wildcard queries, and limit Graphite metric names to 256 levels.
Changes in 0.8.5
- Build Graphite responses using mtev_json instead of custom strings.
Changes in 0.8.4
- Set a maximum metric name length on ingestion.
Changes in 0.8.3
- Various replication fixes.
- Fixes for parsing errors and startup crashes.
Changes in 0.8.2
- Reject Graphite metrics with an encoded length greater than 255.
Changes in 0.8.1
- Internal testing fixes.
Changes in 0.8
- De-duplicate proxied requests.
- Deal with unparseably large number strings.
Changes in 0.7
- Add raw ingestion.
- Stricter Graphite record parsing.
- Memory leak and header-parsing fixes.
Changes in 0.6
- Better handling of JSON parse errors during reconstitute.
- Accept Accept-Encoding: gzip, and compress outgoing replication POSTs with lz4f.
- Optimize UUID comparison to speed up reconstitute.
Changes in 0.5
- Fix crash from Graphite listener connection handling.
- Refactor text metric processing in preparation for raw database.
Changes in 0.4
- Fix rollup span calculation for Graphite fetches.
- Support getting the topology configuration from an included config file.
Changes in 0.3
- Allow reconstituting of individual data types.
- UI fixes for displaying licenses.
- Memory leak, crash and hang fixes.
Changes in 0.2
- Don't recalculate counter_stddev when counter in
Changes in 0.1
- Add Graphite support.
Changes in 0.0.2
- Fix issues with various inputs being
Changes in 0.0.1
- Initial version. Start of "IRONdb" branding of Circonus's internal TSDB implementation.