Dataset columns: question (string, length 11–28.2k), answer (string, length 26–27.7k), tag (string, 130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k)
Eclipse is driving me nuts right now. It's probably something trivial but I just don't get it. Whenever I try to add a breakpoint, the regular icons are crossed out in the editor and in the Breakpoints view. As you might have guessed, this isn't strictly a graphical problem ;) The breakpoints are simply ignored while debugging. The breakpoint's properties aren't helpful either. Any hint is very much appreciated! EDIT: I've tested different JDKs without success, and I've successfully debugged projects in another workspace. So it's not about the JDK or the installed plugins; it seems to be workspace related. Anything I could try?
It seems you have the Skip All Breakpoints option enabled in the Breakpoints view.
Helios
3,187,805
14
First, Eclipse is not my native IDE -- I'm barely a n00b with it. I set up a project in a workspace that was actually in the directory of another client's project (I didn't really follow the whole workspace/project thing) and, in fact, now I can't even find the Eclipse workspace file to open it. What I'd like to do is: Open my eclipse project (/workspace?) -- I know where all the files are on disk, just not what to open in order to see them in Eclipse -- and Move my project to a new workspace, which I guess I will put in a generic Eclipse-y place, and have that one workspace reference all my Eclipse projects. (Is that the right way to do it? Does Eclipse dislike me being a one-project == one-workspace kind of guy?) Please educate me regarding The Eclipse Way so that I can get back to work writing code. Thanks!
Roughly, a workspace (which is a directory) in Eclipse contains: configuration (installed JREs, server runtimes, code formatting rules, ...) and one or more projects. You can of course have as many workspaces as you want (but only one can be open at a time), and a project can also be part of different workspaces. If you know where your sources are and want to move them to a new workspace, here is a possible solution: Start Eclipse and, when prompted for a workspace, choose where you want the workspace to be created (if the directory doesn't exist it will be created), for example C:/Dev/Workspace/. If you are not prompted, go to File->Switch Workspace->Other. Once you are in your workspace, import your existing project with File->Import, then General->Existing Projects into Workspace. Navigate to the folder containing your project sources, select your project and click Finish. I don't know if it's a best practice or not, but what I usually do is the following: I have one workspace for each of my customers (workspace_cust1, workspace_cust2). Each workspace references my commons library projects and adds the client-specific projects. This way, each time I change my commons library it's up to date in every workspace.
Helios
7,180,474
14
I have created an Xtext plugin in eclipse. Every time I launch it as an 'Eclipse Application' via the context menu, I get a few moments grace before the new Eclipse instance crashes. I switch back to the original instance and in the console window I see Root exception: java.lang.OutOfMemoryError: PermGen space I have looked back at some solutions in the forums but a lot relate to tomcat. Can someone give me a few suggestions as to how I could fix this? I am using Eclipse helios. My 'eclipse.ini' file looks like: -startup plugins/org.eclipse.equinox.launcher_1.1.1.R36x_v20101122_1400.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.2.R36x_v20101222 -product org.eclipse.epp.package.java.product --launcher.defaultAction openFile --launcher.XXMaxPermSize 256M -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile -vmargs -Dosgi.requiredJavaVersion=1.5 -Xms40m -Xmx384m The machine I am running eclipse on has just about 4GB of RAM ====================================================================== Update: I hope this is helpful to anyone who may have the same problem. I followed the instructions here and tried setting the -XX:MaxPermSize=256m in my eclipse.ini file. This did not work. Eventually, I had to uninstall java sdk (I was using the latest jdk1.6.0_26) and I installed an older version (jdk1.6.0_20) from here. I then set -XX:MaxPermSize=256m in my eclipse.ini and it now looks like the following: -startup plugins/org.eclipse.equinox.launcher_1.1.1.R36x_v20101122_1400.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.2.R36x_v20101222 -product org.eclipse.epp.package.java.product --launcher.defaultAction openFile -showsplash org.eclipse.platform --launcher.defaultAction openFile -vmargs -Xms40m -Xmx1024m -XX:MaxPermSize=256m I hope this helps out someone in the same situation. This problem was happening when I'd launch my Xtext plugin.
Please add the following to the VM arguments in the launch configuration (Run -> Run Configurations): -XX:MaxPermSize=128m That should help.
Helios
6,537,217
12
I notice this question has been asked a few times but I don't really like the solution with two Eclipses in parallel. I just want my Galileo upgraded to Helios with preservation of all settings, plugins and workspaces, without the mumbo-jumbo like copying plugins manually and stuff. I've got the Android plugin, the C/C++ plugin, the PyDev plugin and what not more. Is there a quick and sure way to upgrade Eclipse like this? I've found some instructions on the Eclipse wiki, but it doesn't seem to work with my system (Ubuntu 10.04; I add the Helios site and then Check for Updates, but it doesn't take Helios as an update for Eclipse). Has someone found a solution for this? UPDATE: The way described in the wiki seems to work on my Windows-installed Eclipse, Check For Updates lists "Eclipse SDK v.3.6.0" as an available update.
You cannot. You have to download and uncompress Helios yourself and run it outside the Ubuntu repository; in that scenario you can upgrade without problems ;-) If you want to get Helios through the Ubuntu repository you have to wait beyond Maverick (10.10), as you can see here: https://launchpad.net/ubuntu/+source/eclipse
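As a rough sketch of the manual route on Ubuntu (the download URL and archive name are placeholders; pick the Helios package you want from eclipse.org):

cd ~/Downloads
wget "http://download.eclipse.org/path/to/eclipse-java-helios-linux-gtk.tar.gz"   # placeholder URL
mkdir -p ~/opt
tar -xzf eclipse-java-helios-linux-gtk.tar.gz -C ~/opt/
# run the extracted Eclipse; select your existing workspace at startup and
# re-install plugins via Help -> Install New Software
~/opt/eclipse/eclipse &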
Helios
3,680,250
11
When I use Find/Replace in Eclipse (Helios Service Release 2), it does not find the words above my current location even when it reaches the end of the file, and I have to set Direction to Backward. I remember that Eclipse (maybe Indigo) used to return to the beginning of the file when it reached the end. Is there any setting for this?
As you have already found the answer, I'm just adding the official page which describes this feature in Eclipse. There is a Wrap Search option for this. From the docs: Select the Wrap Search checkbox if you want the dialog to continue from the top of the file once the bottom is reached (if you find/replace in Selected Lines only, then this option applies to those lines only). Below is a screenshot of the same:
Helios
18,737,158
11
I have tried executing this docker command to setup Jaeger Agent and jaeger collector with elasticsearch. sudo docker run \ -p 5775:5775/udp \ -p 6831:6831/udp \ -p 6832:6832/udp \ -p 5778:5778 \ -p 16686:16686 \ -p 14268:14268 \ -e SPAN_STORAGE_TYPE=elasticsearch \ --name=jaeger \ jaegertracing/all-in-one:latest but this command gives the below error. How to configure Jaeger with ElasticSearch? "msg":"Failed to init storage factory","error":"health check timeout: no Elasticsearch node available","errorVerbose":"no Elasticsearch node available\
After searching a solution for some time, I found a docker-compose.yml file which had the Jaeger Query,Agent,collector and Elasticsearch configurations. docker-compose.yml version: "3" services: elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1 networks: - elastic-jaeger ports: - "127.0.0.1:9200:9200" - "127.0.0.1:9300:9300" restart: on-failure environment: - cluster.name=jaeger-cluster - discovery.type=single-node - http.host=0.0.0.0 - transport.host=127.0.0.1 - ES_JAVA_OPTS=-Xms512m -Xmx512m - xpack.security.enabled=false volumes: - esdata:/usr/share/elasticsearch/data jaeger-collector: image: jaegertracing/jaeger-collector ports: - "14269:14269" - "14268:14268" - "14267:14267" - "9411:9411" networks: - elastic-jaeger restart: on-failure environment: - SPAN_STORAGE_TYPE=elasticsearch command: [ "--es.server-urls=http://elasticsearch:9200", "--es.num-shards=1", "--es.num-replicas=0", "--log-level=error" ] depends_on: - elasticsearch jaeger-agent: image: jaegertracing/jaeger-agent hostname: jaeger-agent command: ["--collector.host-port=jaeger-collector:14267"] ports: - "5775:5775/udp" - "6831:6831/udp" - "6832:6832/udp" - "5778:5778" networks: - elastic-jaeger restart: on-failure environment: - SPAN_STORAGE_TYPE=elasticsearch depends_on: - jaeger-collector jaeger-query: image: jaegertracing/jaeger-query environment: - SPAN_STORAGE_TYPE=elasticsearch - no_proxy=localhost ports: - "16686:16686" - "16687:16687" networks: - elastic-jaeger restart: on-failure command: [ "--es.server-urls=http://elasticsearch:9200", "--span-storage.type=elasticsearch", "--log-level=debug" ] depends_on: - jaeger-agent volumes: esdata: driver: local networks: elastic-jaeger: driver: bridge The docker-compose.yml file installs the elasticsearch, Jaeger collector,query and agent. Install docker and docker compose first https://docs.docker.com/compose/install/#install-compose Then, execute these commands in order 1. sudo docker-compose up -d elasticsearch 2. sudo docker-compose up -d 3. sudo docker ps -a start all the docker containers - Jaeger agent,collector,query and elasticsearch. sudo docker start container-id access -> http://localhost:16686/
Jaeger
51,785,812
18
I'm trying to use OpenTracing.Contrib.NetCore with Serilog. I need to send to Jaeger my custom logs. Now, it works only when I use default logger factory Microsoft.Extensions.Logging.ILoggerFactory My Startup: public void ConfigureServices(IServiceCollection services) { services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2); services.AddSingleton<ITracer>(sp => { var loggerFactory = sp.GetRequiredService<ILoggerFactory>(); string serviceName = sp.GetRequiredService<IHostingEnvironment>().ApplicationName; var samplerConfiguration = new Configuration.SamplerConfiguration(loggerFactory) .WithType(ConstSampler.Type) .WithParam(1); var senderConfiguration = new Configuration.SenderConfiguration(loggerFactory) .WithAgentHost("localhost") .WithAgentPort(6831); var reporterConfiguration = new Configuration.ReporterConfiguration(loggerFactory) .WithLogSpans(true) .WithSender(senderConfiguration); var tracer = (Tracer)new Configuration(serviceName, loggerFactory) .WithSampler(samplerConfiguration) .WithReporter(reporterConfiguration) .GetTracer(); //GlobalTracer.Register(tracer); return tracer; }); services.AddOpenTracing(); } and somewhere in controller: [Route("api/[controller]")] public class ValuesController : ControllerBase { private readonly ILogger<ValuesController> _logger; public ValuesController(ILogger<ValuesController> logger) { _logger = logger; } [HttpGet("{id}")] public ActionResult<string> Get(int id) { _logger.LogWarning("Get values by id: {valueId}", id); return "value"; } } in a result I will able to see that log in Jaeger UI But when I use Serilog, there are no any custom logs. I've added UseSerilog() to WebHostBuilder, and all custom logs I can see in console but not in Jaeger. There is open issue in github. Could you please suggest how I can use Serilog with OpenTracing?
This is a limitation in the Serilog logger factory implementation; in particular, Serilog currently ignores added providers and assumes that Serilog Sinks will replace them instead. So, the solutions is implementaion a simple WriteTo.OpenTracing() method to connect Serilog directly to OpenTracing public class OpenTracingSink : ILogEventSink { private readonly ITracer _tracer; private readonly IFormatProvider _formatProvider; public OpenTracingSink(ITracer tracer, IFormatProvider formatProvider) { _tracer = tracer; _formatProvider = formatProvider; } public void Emit(LogEvent logEvent) { ISpan span = _tracer.ActiveSpan; if (span == null) { // Creating a new span for a log message seems brutal so we ignore messages if we can't attach it to an active span. return; } var fields = new Dictionary<string, object> { { "component", logEvent.Properties["SourceContext"] }, { "level", logEvent.Level.ToString() } }; fields[LogFields.Event] = "log"; try { fields[LogFields.Message] = logEvent.RenderMessage(_formatProvider); fields["message.template"] = logEvent.MessageTemplate.Text; if (logEvent.Exception != null) { fields[LogFields.ErrorKind] = logEvent.Exception.GetType().FullName; fields[LogFields.ErrorObject] = logEvent.Exception; } if (logEvent.Properties != null) { foreach (var property in logEvent.Properties) { fields[property.Key] = property.Value; } } } catch (Exception logException) { fields["mbv.common.logging.error"] = logException.ToString(); } span.Log(fields); } } public static class OpenTracingSinkExtensions { public static LoggerConfiguration OpenTracing( this LoggerSinkConfiguration loggerConfiguration, IFormatProvider formatProvider = null) { return loggerConfiguration.Sink(new OpenTracingSink(GlobalTracer.Instance, formatProvider)); } }
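For completeness, wiring the sink in at startup could look roughly like the sketch below. It assumes the OpenTracingSink and OpenTracingSinkExtensions classes from above, the Serilog.AspNetCore and Serilog.Sinks.Console packages, and that the ITracer has been registered with GlobalTracer (e.g. by un-commenting GlobalTracer.Register(tracer) in the Startup from the question):

// Program.cs (ASP.NET Core 2.x style) -- a minimal sketch, not an official API
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .Enrich.FromLogContext()
            .WriteTo.Console()         // keep normal console output
            .WriteTo.OpenTracing()     // forward log events to the active span
            .CreateLogger();

        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .UseSerilog()              // replace the default logger factory with Serilog
            .Build()
            .Run();
    }
}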
Jaeger
56,156,809
18
I instrumented a simple Spring-Boot application with Jaeger, but when I run the application within a Docker container with docker-compose, I can't see any traces in the Jaeger frontend. I'm creating the tracer configuration by reading the properties from environment variables that I set in the docker-compose file. This is how I create the tracer: Configuration config = Configuration.fromEnv(); return config.getTracer(); And this is my docker-compose file: version: '2' services: demo: build: opentracing_demo/. ports: - "8080:8080" environment: - JAEGER_SERVICE_NAME=hello_service - JAEGER_AGENT_HOST=jaeger - JAEGER_AGENT_PORT=6831 jaeger: image: jaegertracing/all-in-one:latest ports: - "5775:5775/udp" - "6831:6831/udp" - "6832:6832/udp" - "5778:5778" - "16686:16686" - "14268:14268" - "9411:9411" You can also find my project on GitHub. What am I doing wrong?
I found the solution to my problem, in case anybody is facing similar issues. I was missing the environment variable JAEGER_SAMPLER_MANAGER_HOST_PORT, which is necessary if the (default) remote controlled sampler is used for tracing. This is the working docker-compose file: version: '2' services: demo: build: opentracing_demo/. ports: - "8080:8080" environment: - JAEGER_SERVICE_NAME=hello_service - JAEGER_AGENT_HOST=jaeger - JAEGER_AGENT_PORT=6831 - JAEGER_SAMPLER_MANAGER_HOST_PORT=jaeger:5778 jaeger: image: jaegertracing/all-in-one:latest ports: - "5775:5775/udp" - "6831:6831/udp" - "6832:6832/udp" - "5778:5778" - "16686:16686" - "14268:14268" - "9411:9411"
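If you don't actually need the remote-controlled sampler, another option is to switch the client to a constant sampler via its standard environment variables, which avoids the sampling-manager lookup entirely; a sketch of the demo service with that change (only the two extra variables are new):

demo:
  build: opentracing_demo/.
  ports:
    - "8080:8080"
  environment:
    - JAEGER_SERVICE_NAME=hello_service
    - JAEGER_AGENT_HOST=jaeger
    - JAEGER_AGENT_PORT=6831
    - JAEGER_SAMPLER_TYPE=const
    - JAEGER_SAMPLER_PARAM=1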
Jaeger
50,173,643
11
There is an existing Spring Boot app which is using SLF4J logger. I decided to add the support of distributed tracing via standard opentracing API with Jaeger as the tracer. It is really amazing how easy the initial setup is - all that is required is just adding two dependencies to the pom.xml: <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-spring-web-autoconfigure</artifactId> <version>${io.opentracing.version}</version> </dependency> <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-core</artifactId> <version>${jaegerVersion}</version> </dependency> and providing the Tracer bean with the configuration: @Bean public io.opentracing.Tracer getTracer() throws ConfigurationException { return new new io.jaegertracing.Tracer.Builder("my-spring-boot-app").build(); } All works like a charm - app requests are processed by Jaeger and spans are created: However, in the span Logs there are only preHandle & afterCompletion events with info about the class / method that were called during request execution (no logs produced by slf4j logger are collected) : The question is if it is possible to configure the Tracer to pickup the logs produced by the app logger (slf4j in my case) so that all the application logs done via: LOG.info / LOG.warn / LOG.error etc. would be also reflected in Jaeger NOTE: I have figured out how to log to span manually via opentracing API e.g.: Scope scope = tracer.scopeManager().active(); if (scope != null) { scope.span().log("..."); } And do some manual manipulations with the ERROR tag for exception processing in filters e.g. } catch(Exception ex) { Tags.ERROR.set(span, true); span.log(Map.of(Fields.EVENT, "error", Fields.ERROR_OBJECT, ex, Fields.MESSAGE, ex.getMessage())); throw ex } But, I'm still wondering if it is possible to configure the tracer to pickup the application logs automatically: LOG.info -> tracer adds new log to the active span LOG.error -> tracer adds new log to the active span plus adds ERROR tag UPDATE: I was able to add the application logs to the tracer by adding wrapper for the logger e.g. public void error(String message, Exception e) { Scope scope = tracer.scopeManager().active(); if (scope != null) { Span span = scope.span(); Tags.ERROR.set(span, true); span.log(Map.of(Fields.EVENT, "error", Fields.ERROR_OBJECT, e, Fields.MESSAGE, e.getMessage())); } LOG.error(message, e); } However, so far I was not able to find opentracing configuration options that would allow to add the application logs to the tracer automatically by default. Basically, it seems that it is expected that dev would add extra logs to tracer programmatically if needed. Also, after investigating tracing more it appeared to be that normally logging and tracing are handled separately and adding all the application logs to the tracer is not a good idea (tracer should mainly include sample data and tags for request identification) https://github.com/openzipkin/zipkin/issues/1453 https://peter.bourgon.org/blog/2016/02/07/logging-v-instrumentation.html
https://github.com/opentracing-contrib/java-spring-cloud project automatically sends standard logging to the active span. Just add the following dependency to your pom.xml <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-spring-cloud-starter</artifactId> </dependency> Or use this https://github.com/opentracing-contrib/java-spring-cloud/tree/master/instrument-starters/opentracing-spring-cloud-core starter if you want only logging integration.
Jaeger
50,855,480
11
Can I run non-MPI CUDA applications concurrently on NVIDIA Kepler GPUs with MPS? I'd like to do this because my applications cannot fully utilize the GPU, so I want them to co-run together. Is there any code example to do this?
The necessary instructions are contained in the documentation for the MPS service. You'll note that those instructions don't really depend on or call out MPI, so there really isn't anything MPI-specific about them. Here's a walkthrough/example. Read section 2.3 of the above-linked documentation for various requirements and restrictions. I recommend using CUDA 7, 7.5, or later for this. There were some configuration differences with prior versions of CUDA MPS that I won't cover here. Also, I'll demonstrate just using a single server/single GPU. The machine I am using for test is a CentOS 6.2 node using a K40c (cc3.5/Kepler) GPU, with CUDA 7.0. There are other GPUs in the node. In my case, the CUDA enumeration order places my K40c at device 0, but the nvidia-smi enumeration order happens to place it as id 2 in the order. All of these details matter in a system with multiple GPUs, impacting the scripts given below. I'll create several helper bash scripts and also a test application. For the test application, we'd like something with kernel(s) that can obviously run concurrently with kernels from other instances of the application, and we'd also like something that makes it obvious when those kernels (from separate apps/processes) are running concurrently or not. To meet these needs for demonstration purposes, let's have an app that has a kernel that just runs in a single thread on a single SM, and simply waits for a period of time (we'll use ~5 seconds) before exiting and printing a message. Here's a test app that does that: $ cat t1034.cu #include <stdio.h> #include <stdlib.h> #define MAX_DELAY 30 #define cudaCheckErrors(msg) \ do { \ cudaError_t __err = cudaGetLastError(); \ if (__err != cudaSuccess) { \ fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \ msg, cudaGetErrorString(__err), \ __FILE__, __LINE__); \ fprintf(stderr, "*** FAILED - ABORTING\n"); \ exit(1); \ } \ } while (0) #include <time.h> #include <sys/time.h> #define USECPSEC 1000000ULL unsigned long long dtime_usec(unsigned long long start){ timeval tv; gettimeofday(&tv, 0); return ((tv.tv_sec*USECPSEC)+tv.tv_usec)-start; } #define APPRX_CLKS_PER_SEC 1000000000ULL __global__ void delay_kernel(unsigned seconds){ unsigned long long dt = clock64(); while (clock64() < (dt + (seconds*APPRX_CLKS_PER_SEC))); } int main(int argc, char *argv[]){ unsigned delay_t = 5; // seconds, approximately unsigned delay_t_r; if (argc > 1) delay_t_r = atoi(argv[1]); if ((delay_t_r > 0) && (delay_t_r < MAX_DELAY)) delay_t = delay_t_r; unsigned long long difft = dtime_usec(0); delay_kernel<<<1,1>>>(delay_t); cudaDeviceSynchronize(); cudaCheckErrors("kernel fail"); difft = dtime_usec(difft); printf("kernel duration: %fs\n", difft/(float)USECPSEC); return 0; } $ nvcc -arch=sm_35 -o t1034 t1034.cu $ ./t1034 kernel duration: 6.528574s $ We'll use a bash script to start the MPS server: $ cat start_as_root.bash #!/bin/bash # the following must be performed with root privilege export CUDA_VISIBLE_DEVICES="0" nvidia-smi -i 2 -c EXCLUSIVE_PROCESS nvidia-cuda-mps-control -d $ And a bash script to launch 2 copies of our test app "simultaneously": $ cat mps_run #!/bin/bash ./t1034 & ./t1034 $ We could also have a bash script to shut down the server, although it's not needed for this walkthrough: $ cat stop_as_root.bash #!/bin/bash echo quit | nvidia-cuda-mps-control nvidia-smi -i 2 -c DEFAULT $ Now when we just launch our test app using the mps_run script above, but without actually enabling the MPS server, we get the expected behavior that one instance of the 
app takes the expected ~5 seconds, whereas the other instance takes approximately double that (~10 seconds) because, since it does not run concurrently with an app from another process, it waits for 5 seconds while the other app/kernel is running, and then spends 5 seconds running its own kernel, for a total of ~10 seconds: $ ./mps_run kernel duration: 6.409399s kernel duration: 12.078304s $ On the other hand, if we start the MPS server first, and repeat the test: $ su Password: # ./start_as_root.bash Set compute mode to EXCLUSIVE_PROCESS for GPU 0000:82:00.0. All done. # exit exit $ ./mps_run kernel duration: 6.167079s kernel duration: 6.263062s $ we see that both apps take the same amount of time to run, because the kernels are running concurrently, due to MPS. You're welcome to experiment as you see fit. If this sequence appears to work correctly for you, but running your own application doesn't seem to give the expected results, one possible reason may be that your app/kernels are not able to run concurrently with other instances of the app/kernels due to the construction of your kernels, not anything to do with MPS. You might want to verify the requirements for concurrent kernels, and/or study the concurrentKernels sample app. Much of the information here was recycled from the test/work done here albeit the presentation here with separate apps is different than the MPI case presented there. UPDATE: The scheduler behavior in the non-MPS case when running kernels from multiple processes appears to have changed with Pascal and newer GPUs. The above test results still are correct for the GPUs tested on (e.g. Kepler), but when running the above test case on a Pascal or newer GPU, different results will be observed in the non-MPS case. The scheduler is described as a "time-sliced" scheduler in the latest MPS doc and what appears to be happening is that rather than wait for a kernel from one process to complete, the scheduler may, according to some unpublished rules, choose to pre-empt a running kernel so that it can switch to another kernel from another process. This still doesn't mean that kernels from separate processes are running "concurrently" in the traditional usage of that word in CUDA documentation, but the above code is "tricked" by the time-sliced scheduler (on Pascal and newer) because it depends on using the SM clock to set kernel duration. The combination of the time-sliced scheduler plus this usage of the SM clock makes this test case appear to run "concurrently". However, as described in the MPS doc, the code from kernel A is not executing in the same clock cycle(s) as the code from kernel B, when A and B originate from separate processes in the non-MPS case. An alternative method to demonstrate this using the above general approach might be to use a kernel duration that is set by a number of loops, rather than a kernel duration that is set by reading the SM clock, as described here. Care must be taken in that case to avoid having the loops "optimized out" by the compiler.
Kepler
34,709,749
13
I need to create a dev dashboard very similar to an existing prod one, and was wondering if there was an easy way of copying the existing dashboard. Any help would be greatly appreciated!
There is a "Save As..." button in the dashboard settings:
Grafana
26,599,176
125
what is default username and password for Grafana for http://localhost:3000/login page ? attaching a home page screenshot also. I want to watch mySql database for through it.
By looking up the docs we can find that the magic combo is admin as username and admin as password. However, if you changed some configuration file you should be able to find it there. The default config file can be found at $WORKING_DIR/conf/defaults.ini and can be overridden using the --config parameter. The item you're looking for should be in this section:

[security]
admin_user = admin
admin_password = admin
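If the default credentials have already been changed and forgotten, newer Grafana versions also ship a CLI command that can reset the admin password; a rough sketch (the container name in the second example is just a placeholder):

# reset the admin password from the server's shell
grafana-cli admin reset-admin-password NEW_PASSWORD

# for a Docker-based install, run it inside the container
docker exec -it grafana grafana-cli admin reset-admin-password NEW_PASSWORD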
Grafana
54,039,604
106
I have on a dashboard, a number of panels (numbering around 6)to display data points chart making queries to dockerised instance of PostgreSQL database. Panels were working fine until very recently, some stop working and report an error like this: pq: could not resize shared memory segment "/PostgreSQL.2058389254" to 12615680 bytes: No space left on device Any idea why this? how to work around solving this. Docker container runs on remote host accessed via ssh. EDIT Disk space: $df -h Filesystem Size Used Avail Use% Mounted on /dev/vda1 197G 140G 48G 75% / devtmpfs 1.4G 0 1.4G 0% /dev tmpfs 1.4G 4.0K 1.4G 1% /dev/shm tmpfs 1.4G 138M 1.3G 10% /run tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup /dev/dm-16 10G 49M 10G 1% /var/lib/docker/devicemapper/mnt/a0f3c5ab84aa06d5b2db00c4324dd6bf7141500ff4c83e23e9aba7c7268bcad4 /dev/dm-1 10G 526M 9.5G 6% /var/lib/docker/devicemapper/mnt/8623a774d736ed3dc0d2db89b7d07cae85c3d1bcafc245180eec4ffd738f93a5 shm 64M 0 64M 0% /var/lib/docker/containers/260552ebcdf2bf0961329108d3d975110f8ada0a41325f5e7dd81b8ddad9d18b/mounts/shm /dev/dm-4 10G 266M 9.8G 3% /var/lib/docker/devicemapper/mnt/6f873e62607e7cac4c4b658c72874c787b90290f74d1159eca81af61cb467cfb shm 64M 50M 15M 78% /var/lib/docker/containers/84c66d9fb5b6ae023d051766f4d35ced87a519a1fee68ca5c89d61ff87cf1e5a/mounts/shm /dev/dm-2 10G 383M 9.7G 4% /var/lib/docker/devicemapper/mnt/cb3df1ae654ed78802c2e5bd7a51a1b0bdd562855a7c7803750b80b33f5c206e shm 64M 0 64M 0% /var/lib/docker/containers/22ba2ae2b6859c24623703dcb596527d64257d2d61de53f4d88e00a8e2335211/mounts/shm /dev/dm-3 10G 99M 9.9G 1% /var/lib/docker/devicemapper/mnt/492a19fc8f3e254c4e5cc691c3300b5fee9d1a849422673bf0c19b4b2d1db571 shm 64M 0 64M 0% /var/lib/docker/containers/39abe855a9b107d4921807332309517697f024b2d169ebc5f409436208f766d0/mounts/shm /dev/dm-7 10G 276M 9.8G 3% /var/lib/docker/devicemapper/mnt/55c6a6c17c892d149c1cc91fbf42b98f1340ffa30a1da508e3526af7060f3ce2 shm 64M 0 64M 0% /var/lib/docker/containers/bf2e7254cd7e2c6000da61875343580ec6ff5cbf40c017a398ba7479af5720ec/mounts/shm /dev/dm-8 10G 803M 9.3G 8% /var/lib/docker/devicemapper/mnt/4e51f48d630041316edd925f1e20d3d575fce4bf19ef39a62756b768460d1a3a shm 64M 0 64M 0% /var/lib/docker/containers/72d4ae743de490ed580ec9265ddf8e6b90e3a9d2c69bd74050e744c8e262b342/mounts/shm /dev/dm-6 10G 10G 20K 100% /var/lib/docker/devicemapper/mnt/3dcddaee736017082fedb0996e42b4c7b00fe7b850d9a12c81ef1399fa00dfa5 shm 64M 0 64M 0% /var/lib/docker/containers/9f2bf4e2736d5128d6c240bb10da977183676c081ee07789bee60d978222b938/mounts/shm /dev/dm-5 10G 325M 9.7G 4% /var/lib/docker/devicemapper/mnt/65a2bf48cbbfe42f0c235493981e62b90363b4be0a2f3aa0530bbc0b5b29dbe3 shm 64M 0 64M 0% /var/lib/docker/containers/e53d5ababfdefc5c8faf65a4b2d635e2543b5a807b65a4f3cd8553b4d7ef2d06/mounts/shm /dev/dm-9 10G 1.2G 8.9G 12% /var/lib/docker/devicemapper/mnt/3216c48346c3702a5cd2f62a4737cc39666983b8079b481ab714cdb488400b08 shm 64M 0 64M 0% /var/lib/docker/containers/5cd0774a742f54c7c4fe3d4c1307fc93c3c097a861cde5f611a0fa9b454af3dd/mounts/shm /dev/dm-10 10G 146M 9.9G 2% /var/lib/docker/devicemapper/mnt/6a98acd1428ae670e8f1da62cb8973653c8b11d1c98a8bf8be78f59d2ddba062 shm 64M 0 64M 0% /var/lib/docker/containers/a878042353f6a605167e7f9496683701fd2889f62ba1d6c0dc39c58bc03a8209/mounts/shm tmpfs 285M 0 285M 0% /run/user/0 EDIT-2 $df -ih Filesystem Inodes IUsed IFree IUse% Mounted on /dev/vda1 13M 101K 13M 1% / devtmpfs 354K 394 353K 1% /dev tmpfs 356K 2 356K 1% /dev/shm tmpfs 356K 693 356K 1% /run tmpfs 356K 16 356K 1% /sys/fs/cgroup /dev/dm-16 10M 2.3K 10M 1% 
/var/lib/docker/devicemapper/mnt/a0f3c5ab84aa06d5b2db00c4324dd6bf7141500ff4c83e23e9aba7c7268bcad4 /dev/dm-1 10M 19K 10M 1% /var/lib/docker/devicemapper/mnt/8623a774d736ed3dc0d2db89b7d07cae85c3d1bcafc245180eec4ffd738f93a5 shm 356K 1 356K 1% /var/lib/docker/containers/260552ebcdf2bf0961329108d3d975110f8ada0a41325f5e7dd81b8ddad9d18b/mounts/shm /dev/dm-4 10M 11K 10M 1% /var/lib/docker/devicemapper/mnt/6f873e62607e7cac4c4b658c72874c787b90290f74d1159eca81af61cb467cfb shm 356K 2 356K 1% /var/lib/docker/containers/84c66d9fb5b6ae023d051766f4d35ced87a519a1fee68ca5c89d61ff87cf1e5a/mounts/shm /dev/dm-2 10M 5.6K 10M 1% /var/lib/docker/devicemapper/mnt/cb3df1ae654ed78802c2e5bd7a51a1b0bdd562855a7c7803750b80b33f5c206e shm 356K 1 356K 1% /var/lib/docker/containers/22ba2ae2b6859c24623703dcb596527d64257d2d61de53f4d88e00a8e2335211/mounts/shm /dev/dm-3 10M 4.6K 10M 1% /var/lib/docker/devicemapper/mnt/492a19fc8f3e254c4e5cc691c3300b5fee9d1a849422673bf0c19b4b2d1db571 shm 356K 1 356K 1% /var/lib/docker/containers/39abe855a9b107d4921807332309517697f024b2d169ebc5f409436208f766d0/mounts/shm /dev/dm-7 10M 7.5K 10M 1% /var/lib/docker/devicemapper/mnt/55c6a6c17c892d149c1cc91fbf42b98f1340ffa30a1da508e3526af7060f3ce2 shm 356K 1 356K 1% /var/lib/docker/containers/bf2e7254cd7e2c6000da61875343580ec6ff5cbf40c017a398ba7479af5720ec/mounts/shm /dev/dm-8 10M 12K 10M 1% /var/lib/docker/devicemapper/mnt/4e51f48d630041316edd925f1e20d3d575fce4bf19ef39a62756b768460d1a3a shm 356K 1 356K 1% /var/lib/docker/containers/72d4ae743de490ed580ec9265ddf8e6b90e3a9d2c69bd74050e744c8e262b342/mounts/shm /dev/dm-6 7.9K 7.3K 623 93% /var/lib/docker/devicemapper/mnt/3dcddaee736017082fedb0996e42b4c7b00fe7b850d9a12c81ef1399fa00dfa5 shm 356K 1 356K 1% /var/lib/docker/containers/9f2bf4e2736d5128d6c240bb10da977183676c081ee07789bee60d978222b938/mounts/shm /dev/dm-5 10M 27K 10M 1% /var/lib/docker/devicemapper/mnt/65a2bf48cbbfe42f0c235493981e62b90363b4be0a2f3aa0530bbc0b5b29dbe3 shm 356K 1 356K 1% /var/lib/docker/containers/e53d5ababfdefc5c8faf65a4b2d635e2543b5a807b65a4f3cd8553b4d7ef2d06/mounts/shm /dev/dm-9 10M 53K 10M 1% /var/lib/docker/devicemapper/mnt/3216c48346c3702a5cd2f62a4737cc39666983b8079b481ab714cdb488400b08 shm 356K 1 356K 1% /var/lib/docker/containers/5cd0774a742f54c7c4fe3d4c1307fc93c3c097a861cde5f611a0fa9b454af3dd/mounts/shm /dev/dm-10 10M 5.2K 10M 1% /var/lib/docker/devicemapper/mnt/6a98acd1428ae670e8f1da62cb8973653c8b11d1c98a8bf8be78f59d2ddba062 shm 356K 1 356K 1% /var/lib/docker/containers/a878042353f6a605167e7f9496683701fd2889f62ba1d6c0dc39c58bc03a8209/mounts/shm tmpfs 356K 1 356K 1% /run/user/0 EDIT-3 postgres container service: version: "3.5" services: #other containers go here.. postgres: restart: always image: postgres:10 hostname: postgres container_name: fiware-postgres expose: - "5432" ports: - "5432:5432" networks: - default environment: - "POSTGRES_PASSWORD=password" - "POSTGRES_USER=postgres" - "POSTGRES_DB=postgres" volumes: - ./postgres-data:/var/lib/postgresql/data build: context: . shm_size: '4gb' Database size: postgres=# SELECT pg_size_pretty( pg_database_size('postgres')); pg_size_pretty ---------------- 42 GB (1 row) EDIT-4 Sorry, but none of the workaround related to this question actually work, including this one On the dashboard, I have 5 panels intended to display data points. The queries are similar, except that each displays different parameters for temperature, relativeHumidity, illuminance, particles and O3. 
This is the query: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time, avg(attrvalue::float) as illuminance FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; The difference is in the WHERE attrname=#parameterValue statement. I modified the postgresql.conf file to write logs but the logs doesn't seem to provide helpfull tips: here goes the logs: $ vim postgres-data/log/postgresql-2019-06-26_150012.log . . 2019-06-26 15:03:39.298 UTC [45] LOG: statement: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time, avg(attrvalue::float) as o3 FROM urbansense.airquality WHERE attrname='O3' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; 2019-06-26 15:03:40.903 UTC [41] ERROR: could not resize shared memory segment "/PostgreSQL.1197429420" to 12615680 bytes: No space left on device 2019-06-26 15:03:40.903 UTC [41] STATEMENT: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time, avg(attrvalue::float) as illuminance FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; 2019-06-26 15:03:40.905 UTC [42] FATAL: terminating connection due to administrator command 2019-06-26 15:03:40.905 UTC [42] STATEMENT: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time, avg(attrvalue::float) as illuminance FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; 2019-06-26 15:03:40.909 UTC [43] FATAL: terminating connection due to administrator command 2019-06-26 15:03:40.909 UTC [43] STATEMENT: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time, avg(attrvalue::float) as illuminance FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; 2019-06-26 15:03:40.921 UTC [1] LOG: worker process: parallel worker for PID 41 (PID 42) exited with exit code 1 2019-06-26 15:03:40.922 UTC [1] LOG: worker process: parallel worker for PID 41 (PID 43) exited with exit code 1 2019-06-26 15:07:04.058 UTC [39] LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp39.0", size 83402752 2019-06-26 15:07:04.058 UTC [39] STATEMENT: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800)as time, avg(attrvalue::float) as relativeHumidity FROM urbansense.weather WHERE attrname='relativeHumidity' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; 2019-06-26 15:07:04.076 UTC [40] LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp40.0", size 83681280 2019-06-26 15:07:04.076 UTC [40] STATEMENT: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800)as time, avg(attrvalue::float) as relativeHumidity FROM urbansense.weather WHERE attrname='relativeHumidity' AND attrvalue<>'null' GROUP BY time ORDER BY time asc; 2019-06-26 15:07:04.196 UTC [38] LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp38.0", size 84140032 Anyone with a idea how to solve this?
This is because Docker by default restricts the size of shared memory (/dev/shm) to 64MB. You can override this default by using the --shm-size option in docker run:

docker run -itd --shm-size=1g postgres

or in docker-compose:

db:
  image: "postgres:11.3-alpine"
  shm_size: 1g

Check this out. More info here.
Grafana
56,751,565
89
Despite these settings, Grafana still requires the use of a password to view Dashboards. Can someone please help me with the correct settings? [auth.anonymous] # enable anonymous access enabled = true [auth.basic] enabled = false
Thanks @Donald Mok for his answer; I just want to make it as clear as possible. In the Grafana interface you can create an organization, and after that you can create some dashboards for this organization. The catch is that you need to specify the organization for anonymous users, and it must be a real organization in your Grafana; anonymous users will only be able to see dashboards from that organization.

#################################### Anonymous Auth ##########################
[auth.anonymous]
# enable anonymous access
enabled = true
# specify organization name that should be used for unauthenticated users
org_name = ORGANIZATION
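The same [auth.anonymous] section also accepts the role that anonymous visitors get inside that organization; to my knowledge the key is org_role and it defaults to Viewer, but verify this against your Grafana version:

[auth.anonymous]
enabled = true
org_name = ORGANIZATION
# role for unauthenticated users within the organization
org_role = Viewer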
Grafana
33,111,835
89
I'm making a Grafana dashboard and want a panel that reports the latest version of our app. The version is reported as a label in the app_version_updated (say) metric like so: app_version_updated{instance="eu99",version="1.5.0-abcdefg"} I've tried a number of Prometheus queries to extract the version label as a string from the latest member of this time series, to no effect. For example, the query count(app_version_updated) by (version) returns a {version="1.5.0-abcdefg"} element with a value of 1. When put in a Grafana dashboard in a single value panel, this doesn't display the version string but instead the count value (1). How can I construct a Prometheus query that returns the version string?
My answer tries to elaborate on Carl's answer. I assume that the GUI layout may have changed a little since 2016, so it took me a while to find the "name" option. Assuming you have a metric as follows:

# HELP db2_prometheus_adapter_info Information on the state of the DB2-Prometheus-Adapter
# TYPE db2_prometheus_adapter_info gauge
db2_prometheus_adapter_info{app_state="UP"} 1.0

and you would like to show the value of the label app_state, follow these steps:

1. Create a "SingleStat" visualization.
2. Go to the "Queries" tab: enter the name of the metric (here db2_prometheus_adapter_info), enter the label name as the legend using the {{[LABEL]}} notation (here {{app_state}}), and activate the "Instant" option.
3. Go to the "Visualization" tab: choose the value "Name" under "Value - Stat".

Note on the "Instant" setting: this setting switches from a range query to a simplified query that only returns the most recent value of the metric (also see "What does the 'instant' checkbox in Grafana graphs based on prometheus do?"). If it is not activated, the panel will show an error as soon as there is more than one distinct value for the label in the history of the metric. For a "normal" metric you would remedy this by choosing "current" in the "Value - Stat" option, but doing so here prevents your label value from being shown.
Grafana
38,525,891
65
I want to count number of unique label values. Kind of like select count (distinct a) from hello_info For example if my metric 'hello_info' has labels a and b. I want to count number of unique a's. Here the count would be 3 for a = "1", "2", "3". hello_info(a="1", b="ddd") hello_info(a="2", b="eee") hello_info(a="1", b="fff") hello_info(a="3", b="ggg")
count(count by (a) (hello_info)) First you want an aggregator with a result per value of a, and then you can count them.
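To see why the nesting works, it may help to walk through the sample data from the question (treating the four series shown as the current samples):

count by (a) (hello_info)
  {a="1"} 2   (two series share a="1": b="ddd" and b="fff")
  {a="2"} 1
  {a="3"} 1

count(count by (a) (hello_info))
  3

The inner aggregation produces one series per distinct value of a; the outer count then counts how many series that result contains, i.e. the number of distinct a values.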
Grafana
51,882,134
63
I need to write a query that use any of the different jobs I define. {job="traefik" OR job="cadvisor" OR job="prometheus"} Is it possible to write logical binary operators?
Prometheus has an or logical binary operator, but what you're asking about here is vector selectors. You can use a regex for this: {job=~"traefik|cadvisor|prometheus"}. However, the fact that you want to do this at all is a smell.
Grafana
43,134,060
48
I have no clue what the option "instant" means in Grafana when creating graph with Prometheus. Any ideas?
It uses the query API endpoint rather than the query_range API endpoint on Prometheus, which is more efficient if you only care about the end of your time range and don't want to pull in data that Grafana is going to throw away again.
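Concretely, these are the two Prometheus HTTP API endpoints involved; the URLs below are only an illustration against a local Prometheus:

# "Instant" checked: a single evaluation at one timestamp
GET http://localhost:9090/api/v1/query?query=up&time=2018-08-07T00:00:00Z

# "Instant" unchecked: one value per step across the whole dashboard range
GET http://localhost:9090/api/v1/query_range?query=up&start=2018-08-06T00:00:00Z&end=2018-08-07T00:00:00Z&step=60s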
Grafana
51,728,031
47
I'm using Grafana with Prometheus and I'd like to build a query that depends on the selected period of time selected in the upper right corner of the screen. Is there any variable (or something like that) to use in the query field? In other words, If I select 24hs I'd like to use that data in the query.
There are two ways that I know of:

1. You can use the $__interval variable like this: increase(http_requests_total[$__interval]). The drawback is that the $__interval variable's value is adjusted by the resolution of the graph, but this may also be helpful in some situations.

2. This approach should fit your case better: go to the dashboard's Templating settings and create a new variable with the type Interval. Enable "Auto Option" and set "Step count" to 1, then make sure "auto" is selected in the corresponding drop-down list at the top of the dashboard. Assuming you name it timeRange, the query will look like this: increase(http_requests_total[$timeRange]). This variable is not adjusted by graph resolution, so if you select "Last 10 hours" its value will be 10h.
Grafana
47,141,967
38
I need to monitor very different log files for errors, success status etc. And I need to grab corresponding metrics using Prometheus and show in Grafana + set some alerting on it. Prometheus + Grafana are OK I already use them a lot with different exporters like node_exporter or mysql_exporter etc. Also alerting in new Grafana 4.x works very well. But I have quite a problem to find suitable exporter/ program which could analyze log files "on fly" and extract metrics from them. So far I tried: mtail (https://github.com/google/mtail) - works but existing version cannot easily monitor more files - in general it cannot bind specific mtail program (receipt for analysis) to some specific log file + I cannot easily add log file name into tag grok_exporter (https://github.com/fstab/grok_exporter) - works but I can extract only limited information + one instance can monitor only one log file which mean I would have to start more instances exporting on more ports and configure all off them in prometheus - which makes too many new points of failure fluentd prometheus exporter (https://github.com/kazegusuri/fluent-plugin-prometheus) - works but looks like I can extract only very simple metrics and I cannot make any advanced regexp analysis of a line(s) from log file Does any one here has a really running solution for monitoring advanced metrics from log files using "some exporter" + Prometheus + Grafana? Or instead of exporter some program from which I could grab results using Prometheus push gateway. Thanks.
Take a look at Telegraf. It supports tailing logs using the logparser and tail input plugins. To expose the metrics as a Prometheus endpoint, use the prometheus_client output plugin. You can also apply aggregations on the fly. I've found it simpler to configure for multiple log files than grok_exporter or mtail.
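A minimal sketch of such a Telegraf configuration is below. The log path, grok pattern and measurement name are made-up examples, and the option names should be double-checked against your Telegraf version:

# telegraf.conf (sketch): tail an access log and expose the parsed metrics to Prometheus
[[inputs.logparser]]
  files = ["/var/log/myapp/access.log"]   # example path
  from_beginning = false
  [inputs.logparser.grok]
    patterns = ["%{COMBINED_LOG_FORMAT}"] # built-in pattern; custom patterns can be added
    measurement = "myapp_access_log"      # example measurement name

[[outputs.prometheus_client]]
  listen = ":9273"                        # Prometheus scrapes this port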
Grafana
41,160,883
38
I'm attracted to prometheus by the histogram (and summaries) time-series, but I've been unsuccessful to display a histogram in either promdash or grafana. What I expect is to be able to show: a histogram at a point in time, e.g. the buckets on the X axis and the count for the bucket on the Y axis and a column for each bucket a stacked graph of the buckets such that each bucket is shaded and the total of the stack equals the inf bucket A sample metric would be the response time of an HTTP server.
Grafana v5+ provides direct support for representing Prometheus histograms as a heatmap: http://docs.grafana.org/features/panels/heatmap/#histograms-and-buckets. Heatmaps are preferred over a plain histogram panel because a histogram does not show you how the trend changes over time; if you have a time-series histogram, use the heatmap panel to picture it. To get you started, here is an example (for Prometheus data). Suppose you have a histogram as follows:

http_request_duration_seconds_bucket{le="0.2"} 1
http_request_duration_seconds_bucket{le="0.5"} 2
http_request_duration_seconds_bucket{le="1.0"} 2
http_request_duration_seconds_bucket{le="+Inf"} 5
http_request_duration_seconds_count 5
http_request_duration_seconds_sum 3.07

You can picture this histogram data as a heatmap by using the query sum(increase(http_request_duration_seconds_bucket[10m])) by (le), making sure to set the format to "Heatmap", the legend format to {{ le }}, and the visualization in the panel settings to "Heatmap".
Grafana
39,135,026
34
I try to get Total and Free disk space on my Kubernetes VM so I can display % of taken space on it. I tried various metrics that included "filesystem" in name but none of these displayed correct total disk size. Which one should be used to do so? Here is a list of metrics I tried node_filesystem_size_bytes node_filesystem_avail_bytes node:node_filesystem_usage: node:node_filesystem_avail: node_filesystem_files node_filesystem_files_free node_filesystem_free_bytes node_filesystem_readonly
According to my Grafana dashboard, the following query works nicely for alerting on disk space:

100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})

The formula gives the percentage of space taken on the selected disk (100 minus the available percentage). Make sure you include the mountpoint and fstype label filters in the metrics.
Grafana
57,357,532
33
We are using Grafana 4 and have implemented alert notifications to a Slack channel through an Incoming Webhook. The notifications are sent as and when expected, except that the link in the notification points to the wrong place. For instance, if you take the following test notification: then I would expect the link in [Alerting] Test notification to point to the Grafana server. However, the host in that link is localhost. I thought it might be just a problem with test notifications, but this also happens with real notifications: the path will be correct, but the host and port will be wrong (localhost:62033, for full details). I have tried to find the place where this host/port is configured, with no luck. Any tips as to how to fix this? Thanks in advance.
There are a number of options you can add to your ini file to tell Grafana how to build self-referential URLs:

#################################### Server ##############################
[server]
# Protocol (http or https)
protocol = http
# The http port to use
http_port = 3000
# The public facing domain name used to access grafana from a browser
domain = localhost
# The full public facing url
root_url = %(protocol)s://%(domain)s:%(http_port)s/

You should start by setting protocol, http_port and domain to the proper values. If you're accessing Grafana on port 80 or 443 and don't want the port to appear explicitly in the URL, you can remove :%(http_port) from the root_url setting.
Grafana
41,282,961
33
I'm trying to write a prometheus query in grafana that will select visits_total{route!~"/api/docs/*"} What I'm trying to say is that it should select all the instances where the route doesn't match /api/docs/* (regex) but this isn't working. It's actually just selecting all the instances. I tried to force it to select others by doing this: visits_total{route=~"/api/order/*"} but it doesn't return anything. I found these operators in the querying basics page of prometheus. What am I doing wrong here?
Maybe it's because you have / in the regex. Try something like visits_total{route=~".*order.*"} and see whether a result is generated or not. Also try visits_total{route!~"\/api\/docs\/\*"}. If you want to exclude everything that has the word docs in it, you can use visits_total{route!~".*docs.*"}
Grafana
54,813,545
31
I'm displaying Prometheus query on a Grafana table. That's the query (Counter metric): sum(increase(check_fail{app="monitor"}[20m])) by (reason) The result is a table of failure reason and its count. The problem is that the table is also showing reasons that happened 0 times in the time frame and I don't want to display them. AFAIK it's not possible to hide them through Grafana. I know prometheus has comparison operators but I wasn't able to apply them.
I don't know how you tried to apply the comparison operators, but if I use this very similar query: sum(increase(up[1d])) by (job) I get a result of zero for all jobs that have not restarted over the past day and a non-zero result for jobs that have had instances restart. If I now tack on a != 0 to the end of it, all zero values are filtered out: sum(increase(up[1d])) by (job) != 0
Grafana
54,762,265
31
I have several dashboards in Grafana, and when I log in I encounter a Dashboard Not Found error. I want to set one of the Grafana dashboards as the home page (default page) when I log in to Grafana.
Grafana v4.6.3: In Grafana, click on the Grafana menu, go to Profile, and under Preferences you can set the Home Dashboard for yourself. For your organization, you'll need to log on as an admin and under Grafana Menu > Main Org > Preferences you can set the home dashboard for your organization. This is for v4.6.3, but it should be the same in previous versions.

Update for July 2020: Seeing as this answer is still getting upvotes, I've decided to update it to cover later Grafana versions (5.4.5, 6.7.3, 7.1.0). The same method works for all of these versions. As ROOT says below, to set a dashboard as the default, or include it in the list of dashboards that can be used as the default, you need to favourite that dashboard: open the dashboard you want as the default and mark it as a favourite by clicking the star icon in the dashboard header (in version 7+ the star is on the right-hand side).

Setting the default dashboard for yourself: select Preferences from the Profile menu (toward the bottom of the Grafana menu bar on the left side of the screen); in the Preferences view, under the Preferences group, select the Home Dashboard you want as your default; click Save.

Setting the default dashboard for a team (requires admin rights): select Teams from the Configuration menu on the Grafana menu bar; in the Teams view, click the team you want; in the Team view, select the Settings tab; under the Preferences group, select the Home Dashboard you want as the default; click Save.

Setting the default dashboard for an organization (requires admin rights, and you need to be in the organization's profile, which you can switch on the Profile menu): select Preferences from the Configuration menu on the Grafana menu bar; in the Preferences view, under the Preferences group, select the Home Dashboard you want as the default; click Save.
Grafana
48,164,754
29
I didn't find a 'moving average' feature and I'm wondering if there's a workaround. I'm using influxdb as the backend.
Grafana supports adding a movingAverage(). I also had a hard time finding it in the docs, but you can (somewhat hilariously) see its usage on the feature intro page. As usual, click on the graph title, choose edit, and add the movingAverage() function to the metric as described in the Graphite documentation:

movingAverage(seriesList, windowSize)

Graphs the moving average of a metric (or metrics) over a fixed number of past points, or a time interval. Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like '1hour' or '5min' (see from/until in the render API for examples of time formats). Graphs the average of the preceding datapoints for each point on the graph. All previous datapoints are set to None at the beginning of the graph.

Example:
&target=movingAverage(Server.instance01.threads.busy,10)
&target=movingAverage(Server.instance*.threads.idle,'5min')
Grafana
27,489,056
29
Because Prometheus topk returns more results than expected, and because https://github.com/prometheus/prometheus/issues/586 requires client-side processing that has not yet been made available via https://github.com/grafana/grafana/issues/7664, I'm trying to pursue a different near-term work-around to my similar problem. In my particular case most of the metric values that I want to graph will be zero most of the time. Only when they are above zero are they interesting. I can find ways to write prometheus queries to filter data points based on the value of a label, but I haven't yet been able to find a way to tell prometheus to return time series data points only if the value of the metric meets a certain condition. In my case, I want to filter for a value greater than zero. Can I add a condition to a prometheus query that filters data points based on the metric value? If so, where can I find an example of the syntax to do that?
If you're confused by brian's answer: The result of filtering with a comparison operator is not a boolean, but the filtered series. E.g. min(flink_rocksdb_actual_delayed_write_rate > 0) Will show the minimum value above 0. In case you actually want a boolean (or rather 0 or 1), use something like sum (flink_rocksdb_actual_delayed_write_rate >bool 0) which will give you the greater-than-zero count.
Grafana
46,697,754
26
I'm playing with grafana and I want to create a panel where I compare data from one app server against the average of all the others except that one. Something like: apps.machine1.someMetric averageSeries(apps.*.not(machine1).someMetric) Can that be done? How?
Sounds like you want to filter a seriesList. You can do that inclusively using the 'grep' function or exclusively using the 'exclude' function: exclude(apps.machine*.someMetric,"machine1") and then pass that into averageSeries: averageSeries(exclude(apps.machine*.someMetric,"machine1")) You can read more about those functions here: http://graphite.readthedocs.io/en/latest/functions.html#graphite.render.functions.exclude
Grafana
34,214,149
24
I have an InfluxDB data series. It stores one piece of information about multiple machines, and I distinguish between these machines with a tag. I can display the information for all three machines at once using a "Group by tag(machine)" clause. The legend names are "table.derivative {machine: 1}", "table.derivative {machine: 2}" and so on. How can I change them to "machine 1", "machine 2" and so on? So far I came across this, suggesting to use $groupby (or $g?), but both are just added literally.
In Grafana, you can use alias patterns. There is a description of alias patterns at the bottom of the metrics tab: In your case, in the Alias By field you would write $tag_machine. Here is an InfluxDb example on the Grafana demo site that uses the Alias By field: https://play.grafana.org/d/000000002/influxdb-templated?editPanel=1&orgId=1
Grafana
42,397,891
23
I am charting data with a Grafana table, and I want to aggregate all data points from a single day into one row in the table. As you can see below my current setup is displaying the values on a per / minute basis. Question: How can I make a Grafana table that displays values aggregated by day? | Day | ReCaptcha | T & C | |-------------------|------------|-------| | February 21, 2017 | 9,001 | 8,999 | | February 20, 2017 | 42 | 17 | | February 19, 2017 | ... | ... |
You can use the summarize function on the metrics panel. Change the query by pressing the + and then selecting transform -> summarize(24h, sum, false); this will aggregate the past 24 hours of data points into a single point by summing them. See http://graphite.readthedocs.io/en/latest/functions.html#graphite.render.functions.summarize for details.
Grafana
42,374,268
23
I am using Grafana Loki and I need to calculate the total number of a certain log message over a specific time interval. For example, I need the total number of the log message "some-text" in the period from 12:00:00 to 14:00:00. I only found the following way to count the occurrences over the last minute, something like this: count_over_time({container="some-containter"} |= "some-text")[1m], but I did not find any way to query a specific interval. I would be very happy if this is possible and someone could help.
If you're using Grafana Explore to query your logs you can do an instant query and use the time range and global variables. So you can select the time range as seen in the screenshot below and your query would become count_over_time({container="some-container"} |= "some-text"[$__range]) You can check my example in the Grafana Playground.
Grafana
72,607,465
21
On my ActiveMQ I have some Queues, which end with .error. On a Grafana Dashboard I want to list all queues without these .error-queues. Example: some.domain.one some.domain.one.error some.domain.two some.domain.two.error To list all queues I use this query: org_apache_activemq_localhost_QueueSize{Type="Queue",Destination=~"some.domain.*",} How do I exclude all .error-queues?
You can use a negative regex matcher: org_apache_activemq_localhost_QueueSize{Type="Queue",Destination=~"some.domain.*",Destination!~".*\\.error"} (note the doubled backslash — inside a double-quoted PromQL string the backslash itself has to be escaped for the regex to contain a literal \.)
Grafana
40,277,612
20
We graph fast counters with sum(rate(my_counter_total[1m])) or with sum(irate(my_counter_total[20s])), where the second one is preferable if you can always expect changes within the last couple of seconds. But how do you graph slow counters where you only have some increments every couple of minutes or even hours? Having values like 0.0013232/s is not very human-friendly. Let's say I want to graph how many users sign up to our service (we expect a couple of signups per hour). What's a reasonable query? We currently use the following to graph that in grafana: Query: 3600 * sum(rate(signup_total[1h])) Step: 3600s Resolution: 1/1 Is this reasonable? I'm still trying to understand how all those parameters play together to draw a graph. Can someone explain how the range selector ([10m]), the rate() and irate() functions, and the Step and Resolution settings in grafana influence each other?
That's a correct way to do it. You can also use increase() which is syntactic sugar for using rate() that way. Can someone explain how the range selector This is only used by Prometheus, and indicates what data to work over. the Step and Resolution settings in grafana influence each other? This is used on the Grafana side, it affects how many time slices it'll request from Prometheus. These settings do not directly influence each other. However the resolution should work out to be smaller than the range, or you'll be undersampling and miss information.
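As a sketch of what that looks like for the signup example (same signup_total counter as in the question): increase(signup_total[1h]) returns the approximate number of signups over the last hour directly, so the 3600 * rate(...) multiplication is no longer needed; if the counter is exposed by several instances, wrap it in sum(), e.g. sum(increase(signup_total[1h])).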
Grafana
38,659,784
20
Using grafana with influxdb, I am trying to show the per-second rate of some value that is a counter. If I use the non_negative_derivative(1s) function, the value of the rate seems to change dramatically depending on the time width of the grafana view. I'm using the last selector (but could also use max which is the same value since it is a counter). Specifically, I'm using: SELECT non_negative_derivative(last("my_counter"), 1s) FROM ... According to the influxdb docs non-negative-derivative: InfluxDB calculates the difference between chronological field values and converts those results into the rate of change per unit. So to me, that means that the value at a given point should not change that much when expanding the time view, since the value should be rate of change per unit (1s in my example query above). In graphite, they have the specific perSecond function, which works much better: perSecond(consolidateBy(my_counter, 'max')) Any ideas on what I'm doing wrong with the influx query above?
If you want per second results that don't vary, you'll want to GROUP BY time(1s). This will give you accurate perSecond results. Consider the following example: Suppose that the value of the counter at each second changes like so 0s → 1s → 2s → 3s → 4s 1 → 2 → 5 → 8 → 11 Depending on how we group the sequence above, we'll see different results. Consider the case where we group things into 2s buckets. 0s-2s → 2s-4s (5-1)/2 → (11-5)/2 2 → 3 versus the 1s buckets 0s-1s → 1s-2s → 2s-3s → 3s-4s (2-1)/1 → (5-2)/1 → (8-5)/1 → (11-8)/1 1 → 3 → 3 → 3 Addressing So to me, that means that the value at a given point should not change that much when expanding the time view, since the value should be rate of change per unit (1s in my example query above). The rate of change per unit is a normalizing factor, independent of the GROUP BY time unit. Interpreting our previous example when we change the derivative interval to 2s may offer some insight. The exact equation is ∆y/(∆x/tu) Consider the case where we group things into 1s buckets with a derivative interval of 2s. The result we should see is 0s-1s → 1s-2s → 2s-3s → 3s-4s 2*(2-1)/1 → 2*(5-2)/1 → 2*(8-5)/1 → (11-8)/1 2 → 6 → 6 → 6 This may seem a bit odd, but if you consider what this says it should make sense. When we specify a derivative interval of 2s what we're asking for is what the 2s rate of change is for the 1s GROUP BY bucket. If we apply similar reasoning to the case of 2s buckets with a derivative interval of 2s is then 0s-2s → 2s-4s 2*(5-1)/2 → 2*(11-5)/2 4 → 6 What we're asking for here is what the 2s rate of change is for the 2s GROUP BY bucket and in the first interval the 2s rate of change would be 4 and the second interval the 2s rate of change would be 6.
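Applied to the query from the question, a minimal sketch (my_measurement is a hypothetical name standing in for the elided FROM clause) would be: SELECT non_negative_derivative(last("my_counter"), 1s) FROM "my_measurement" WHERE $timeFilter GROUP BY time(1s) — the added GROUP BY time(1s) is what keeps the per-second rate stable when the Grafana time range changes, at the cost of returning many points for wide time ranges.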
Grafana
38,016,051
20
I have a grafana docker image which have hawkular-datasource pre-configured using configuration files. After after running grafana instance, I have a json given by teammate, which can be imported inside grafana and that json file creates dashboard when imported. How do I make that dashboards appear by default in Grafana instance? I tried copying the json file to /etc/grafana/provisioning/dashboards/ folder and created a new docker image. But when I run the image, the instance doesn't contain the dashboard at the homepage or anywhere in it. How do I add this json file in docker image. Am I following the correct way? I tried this http://docs.grafana.org/administration/provisioning/ But it didn't help out much. Any suggestion? Here is the json file. { "id": null, "title": "Openshift Metrics", "tags": [], "style": "dark", "timezone": "browser", "editable": true, "hideControls": false, "sharedCrosshair": false, "rows": [ { "collapse": false, "editable": true, "height": "322px", "panels": [ { "content": "<center><p style='font-size: 40pt'>$app</p></center>", "editable": true, "error": false, "id": 23, "isNew": true, "links": [], "mode": "html", "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "style": { "font-size": "36pt" }, "title": "", "type": "text" }, { "aliasColors": {}, "bars": false, "datasource": "Hawk-DS", "editable": true, "error": false, "fill": 1, "grid": { "threshold1": null, "threshold1Color": "rgba(216, 200, 27, 0.27)", "threshold2": null, "threshold2Color": "rgba(234, 112, 112, 0.22)" }, "id": 9, "isNew": true, "legend": { "avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false }, "lines": true, "linewidth": 2, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false, "renderer": "flot", "repeatIteration": 1476706310439, "scopedVars": {}, "seriesOverrides": [], "span": 6, "stack": false, "steppedLine": false, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "none", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "avg", "type": "gauge" } ], "timeFrom": null, "timeShift": null, "title": "Memory usage", "tooltip": { "msResolution": true, "shared": true, "sort": 0, "value_type": "cumulative" }, "type": "graph", "xaxis": { "show": true }, "yaxes": [ { "format": "bytes", "label": null, "logBase": 1, "max": null, "min": 0, "show": true }, { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true } ] }, { "cacheTimeout": null, "colorBackground": true, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "Hawk-DS", "editable": true, "error": false, "format": "bytes", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "height": "100px", "id": 12, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "sparkline": { "fillColor": 
"rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "sum", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "live", "type": "gauge" } ], "thresholds": "140000000,180000000", "title": "Live, all pods", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": true, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "Hawk-DS", "editable": true, "error": false, "format": "bytes", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "height": "100px", "id": 15, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "avg", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "live", "type": "gauge" } ], "thresholds": "140000000,180000000", "title": "Live per pod", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": true, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "Hawk-DS", "editable": true, "error": false, "format": "bytes", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "height": "100px", "id": 10, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "sum", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "avg", "type": "gauge" } ], "thresholds": "140000000,180000000", "title": "Average, all pods", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", 
"text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": true, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "Hawk-DS", "editable": true, "error": false, "format": "bytes", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "height": "100px", "id": 13, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "avg", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "avg", "type": "gauge" } ], "thresholds": "140000000,180000000", "title": "Average per pod", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": true, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "Hawk-DS", "editable": true, "error": false, "format": "bytes", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "height": "100px", "id": 11, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": "range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "sum", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "max", "type": "gauge" } ], "thresholds": "140000000,180000000", "title": "Max, all pods", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" }, { "cacheTimeout": null, "colorBackground": true, "colorValue": false, "colors": [ "rgba(50, 172, 45, 0.97)", "rgba(237, 129, 40, 0.89)", "rgba(245, 54, 54, 0.9)" ], "datasource": "Hawk-DS", "editable": true, "error": false, "format": "bytes", "gauge": { "maxValue": 100, "minValue": 0, "show": false, "thresholdLabels": false, "thresholdMarkers": true }, "height": "100px", "id": 14, "interval": null, "isNew": true, "links": [], "mappingType": 1, "mappingTypes": [ { "name": "value to text", "value": 1 }, { "name": 
"range to text", "value": 2 } ], "maxDataPoints": 100, "nullPointMode": "connected", "nullText": null, "postfix": "", "postfixFontSize": "50%", "prefix": "", "prefixFontSize": "50%", "rangeMaps": [ { "from": "null", "text": "N/A", "to": "null" } ], "repeatIteration": 1476706310439, "scopedVars": {}, "span": 2, "sparkline": { "fillColor": "rgba(31, 118, 189, 0.18)", "full": false, "lineColor": "rgb(31, 120, 193)", "show": false }, "targets": [ { "queryBy": "tags", "rate": false, "refId": "A", "seriesAggFn": "avg", "tags": [ { "name": "container_name", "value": "$app" }, { "name": "descriptor_name", "value": "memory/usage" } ], "target": "select metric", "tagsQL": "container_name IN [$app] AND descriptor_name='memory/usage'", "timeAggFn": "max", "type": "gauge" } ], "thresholds": "140000000,180000000", "title": "Max per pod", "type": "singlestat", "valueFontSize": "80%", "valueMaps": [ { "op": "=", "text": "N/A", "value": "null" } ], "valueName": "avg" } ], "repeat": "app", "scopedVars": { "app": { "text": "aloha", "value": "aloha", "selected": true } }, "title": "New row" } ], "time": { "from": "now-30m", "to": "now" }, "timepicker": { "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ], "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ] }, "templating": { "list": [ { "current": {}, "datasource": "Hawk-DS", "hide": 0, "includeAll": true, "label": "Application", "multi": true, "name": "app", "options": [], "query": "tags/container_name:*", "refresh": 1, "regex": "", "type": "query" } ] }, "annotations": { "list": [] }, "schemaVersion": 12, "version": 32, "links": [], "gnetId": null }
You should put a YAML file pointing to the JSON files in that folder. For example write /etc/grafana/provisioning/dashboards/local.yml: apiVersion: 1 providers: - name: 'default' orgId: 1 folder: '' type: file disableDeletion: false updateIntervalSeconds: 10 #how often Grafana will scan for changed dashboards options: path: /var/lib/grafana/dashboards And then write your JSON file to /var/lib/grafana/dashboards/openshift.json. Before Grafana 5, my previous solution was to wrap the whole Docker process in a script that uses the API to create the dashboard once the Docker container is up. You can use the GF_SECURITY_ADMIN_PASSWORD environment variable to set the password. You can also use GF_AUTH_ANONYMOUS_ENABLED, but you'll need to make sure it's not accessible to the outside world. docker run -p 3000:3000 -e GF_AUTH_ANONYMOUS_ENABLED=true grafana/grafana ... sleep 10 # wait for grafana to load (use a better method of waiting in production) curl -skfS -XPOST --header "Content-Type: application/json" "http://localhost:3000/grafana/api/dashboards/db" --data-binary @dashboard.json
Grafana
54,813,704
19
I have a counter that measures the number items sold every 10 minutes. I currently use this to track the cumulative number of items: alias(integral(app.items_sold), 'Today') And it looks like this: Now, what I want to do to show how well we were are doing TODAY vs best, avg (or may median) worst day we've had for the past say 90 days. I tried something like this: alias(integral(maxSeries(timeStack(app.items_sold, '1d', 0, 90))),'Max') alias(integral(averageSeries(timeStack(app.items_sold, '1d', 0,90))), 'Avg') alias(integral(minSeries(timeStack(app.items_sold, '1d',0, 90))), 'Min') which looks great but actually shows me the cumulative amount of all the max, avg and min for all series interval. Can anyone suggest a way to achieve what I'm looking for? i.e. determine what the best (and worst and median) day was for the past 90 days and plot that. Can it be done using purely Graphite functions? Thanks.
The answer was to just flip the order of the function calls (maxSeries before integral). Thanks to turner on the [email protected] board for the answer. alias(maxSeries(integral(timeStack(app.items_sold, '1d', 0, 90))),'Max') alias(averageSeries(integral(timeStack(app.items_sold, '1d', 0,90))), 'Avg') alias(minSeries(integral(timeStack(app.items_sold, '1d',0, 90))), 'Min')
Grafana
29,264,515
19
I am sending json logs to loki and visualizing in grafana. Initially, my logs looked like as following. { "log": "{\"additionalDetails\":{\"body\":{},\"ip\":\"::ffff:1.1.1.1\",\"params\":{},\"query\":{},\"responseTime\":0,\"userAgent\":\"ELB-HealthChecker/2.0\"},\"context\":\"http\",\"endpoint\":\"/healthz\",\"level\":\"info\",\"message\":\"[::ffff:1.1.1.1] HTTP/1.1 GET 200 /healthz 0ms\",\"requestId\":\"9fde4910-86cd-11ec-a1c5-cd8277a61e4a\",\"statusCode\":200}\n", "stream": "stdout", "time": "2022-02-05T21:49:58.178290044Z" } To make it more usable, I am using following query. {app="awesome-loki-logs-with-grafana"} | json | line_format "{{.log}}" And the results are really good. It automaticaly detects fileds as following. How can I filter by statusCode, which is already being detected by grafana?
You can create a "status" custom variable with values like 200, 401, 403, 404, etc, and use the variable in the LogQL, like in the following example: {app="awesome-loki-logs-with-grafana"} | json | statusCode=$status
Grafana
71,002,529
18
Small question regarding Spring Boot, some of the useful default metrics, and how to properly use them in Grafana. Currently, with Spring Boot 2.5.1+ (question applicable to 2.x.x) with Actuator + Micrometer + Prometheus dependencies, there are lots of very handy default metrics that come out of the box. I am seeing many of them with the pattern _max _count _sum. Example, just to take a few: spring_data_repository_invocations_seconds_max spring_data_repository_invocations_seconds_count spring_data_repository_invocations_seconds_sum reactor_netty_http_client_data_received_bytes_max reactor_netty_http_client_data_received_bytes_count reactor_netty_http_client_data_received_bytes_sum http_server_requests_seconds_max http_server_requests_seconds_count http_server_requests_seconds_sum Unfortunately, I am not sure what to do with them or how to use them correctly, and I feel like my ignorance makes me miss out on some great application insights. Searching the web, I have seen queries like this, which seem to compute an average with Grafana: irate(http_server_requests_seconds_sum{exception="None", uri!~".*actuator.*"}[5m]) / irate(http_server_requests_seconds_count{exception="None", uri!~".*actuator.*"}[5m]) But I am not sure if that is the correct way to use those. May I ask what sort of queries are possible and usually used when dealing with metrics of type _max _count _sum? Thank you
UPD 2022/11: Recently I've had a chance to work with these metrics myself and I made a dashboard with everything I say in this answer and more. It's available on Github or Grafana.com. I hope this will be a good example of how you can use these metrics. Original answer: count and sum are generally used to calculate an average. count accumulates the number of times sum was increased, while sum holds the total value of something. Let's take http_server_requests_seconds for example: http_server_requests_seconds_sum 10 http_server_requests_seconds_count 5 With the example above one can say that there were 5 HTTP requests and their combined duration was 10 seconds. If you divide sum by count you'll get the average request duration of 2 seconds. Having these you can create at least two useful panels: average request duration (=average latency) and request rate. Request rate Using rate() or irate() function you can get how many there were requests per second: rate(http_server_requests_seconds_count[5m]) rate() works in the following way: Prometheus takes samples from the given interval ([5m] in this example) and calculates difference between current timepoint (not necessarily now) and [5m] ago. The obtained value is then divided by the amount of seconds in the interval. Short interval will make the graph look like a saw (every fluctuation will be noticeable); long interval will make the line more smooth and slow in displaying changes. Average Request Duration You can proceed with http_server_requests_seconds_sum / http_server_requests_seconds_count but it is highly likely that you will only see a straight line on the graph. This is because values of those metrics grow too big with time and a really drastic change must occur for this query to show any difference. Because of this nature, it will be better to calculate average on interval samples of the data. Using increase() function you can get an approximate value of how the metric changed during the interval. Thus: increase(http_server_requests_seconds_sum[5m]) / increase(http_server_requests_seconds_count[5m]) The value is approximate because under the hood increase() is rate() multiplied by [inverval]. The error is insignificant for fast-moving counters (such as the request rate), just be ready that there can be an increase of 2.5 requests. Aggregation and filtering If you already ran one of the queries above, you have noticed that there is not one line, but many. This is due to labels; each unique set of labels that the metric has is considered a separate time series. This can be fixed by using an aggregation function (like sum()). For example, you can aggregate request rate by instance: sum by(instance) (rate(http_server_requests_seconds_count[5m])) This will show you a line for each unique instance label. Now if you want to see only some and not all instances, you can do that with a filter. For example, to calculate a value just for nodeA instance: sum by(instance) (rate(http_server_requests_seconds_count{instance="nodeA"}[5m])) Read more about selectors here. With labels you can create any number of useful panels. Perhaps you'd like to calculate the percentage of exceptions, or their rate of occurrence, or perhaps a request rate by status code, you name it. Note on max From what I found on the web, max shows the maximum recorded value during some interval set in settings (default is 2 minutes if to trust the source). This is somewhat uncommon metric and whether it is useful is up to you. 
Since it is a Gauge (unlike sum and count it can go both up and down) you don't need extra functions (such as rate()) to see dynamics. Thus http_server_requests_seconds_max ... will show you the maximum request duration. You can augment this with aggregation functions (avg(), sum(), etc) and label filters to make it more useful.
Grafana
67,964,176
18
I am new to Prometheus and Grafana. My primary goal is to get the response time per request. For me it seemed to be a simple thing - but whatever I do I do not get the results I require. I need to be able to analyse the service latency in the last minutes/hours/days. The current implementation I found was a simple SUMMARY (without definition of quantiles) which is scraped every 15s. Is it possible to get the average request latency of the last minute from my Prometheus SUMMARY? If YES: How? If NO: What should I do? Currently I am using the following query: rate(http_response_time_sum{application="myapp",handler="myHandler", status="200"}[1m]) / rate(http_response_time_count{application="myapp",handler="myHandler", status="200"}[1m]) I am getting two "datasets". The value of the first is "NaN". I suppose this is the result from a division by zero. (I am using spring-client).
Your query is correct. The result will be NaN if there have been no queries in the past minute.
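If you would rather the panel show no data instead of NaN when the service is idle, one common pattern (a sketch using the labels from the question) is to filter the denominator so the division only happens when there was traffic: rate(http_response_time_sum{application="myapp",handler="myHandler",status="200"}[1m]) / (rate(http_response_time_count{application="myapp",handler="myHandler",status="200"}[1m]) > 0) — when the count rate is zero, the denominator series is dropped and the expression simply returns nothing for that interval.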
Grafana
47,305,424
18
I have a Grafana dashboard with template variables for services and instances. When I select a service how can I make it filter the second template variable list based on the first?
You can reference the first variable in the second variables query. I'm not certain if there is a way using the label_values helper though. First variable query: up regex: /.*app="([^"]*).*/ Second variable: query: up{app="$app"} regex: /.*instance="([^"]*).*/
Grafana
41,773,162
18
I'm trying to figure out a way to create a data-source plugin which can communicate with an external REST API and provide relevant data to draw a panel. Anyone with previous experience?
The Simple JSON Datasource does roughly what you're proposing, and would definitely be a good base for you to start from. There is also documentation on datasource plugins available.
Grafana
39,608,362
18
I am in the processing of migrating my panels from using the SQL syntax (from InfluxDB version 1.X) to the new influx syntax (InfluxDB version 2). There is an issue with the labels of the data. It includes the attributes that I used to filter it. For example, if I select data from a range that contains 2 days, it splits the data up. See the screenshot below: This completely messes the chart up. The base code looks like this: from(bucket: "main") |> range(start: v.timeRangeStart, stop:v.timeRangeStop) |> filter(fn: (r) => r._measurement == "POWER" and r._field == "value" and r.device == "living_room" ) |> aggregateWindow(every: v.windowPeriod, fn: sum) It should obviously just be "POWER" and "CURRENT". I tried a dozen of different approaches, but cannot come up with a working solution. For example, if I do: from(bucket: "main") |> range(start: v.timeRangeStart, stop:v.timeRangeStop) |> filter(fn: (r) => r._measurement == "POWER" and r._field == "value" and r.device == "living_room" ) |> aggregateWindow(every: v.windowPeriod, fn: sum) |> map(fn: (r) => ({ POWER: r._value })) it says "Data does not have a time field". I also tried using from(bucket: "main") |> range(start: v.timeRangeStart, stop:v.timeRangeStop) |> filter(fn: (r) => r._measurement == "POWER" and r._field == "value" and r.device == "living_room" ) |> aggregateWindow(every: v.windowPeriod, fn: sum) |> yield(name: "POWER") that does not work either. I tried many other things without success. How can I fix this?
After hours of trial and error, I was able to produce a working solution. I imagine that other users may stumble upon the same issue, I will therefore not delete the question and instead provide my solution. I basically had to map the required fields and tags and assign the desired label, instead of just mapping the value that should be displayed (because then the date/time data is missing). The solution looks like this: from(bucket: "main") |> range(start: v.timeRangeStart, stop:v.timeRangeStop) |> filter(fn: (r) => r._measurement == "POWER" and r._field == "value" and r.device == "living_room" ) |> aggregateWindow(every: v.windowPeriod, fn: max) |> map(fn: (r) => ({ _value:r._value, _time:r._time, _field:"Power (W)" })) Power (W) is the label/alias that is going to be used. I wish that Influx would provide an easier way to alias the desired field. The current approach is not very intuitive.
Grafana
68,122,202
17
I'm trying to understand helm and I wonder if someone could ELI5 to me something or help me with something. So i did run below: helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/ Then I installed kube-prometheus by using below: helm install coreos/kube-prometheus --name kube-prometheus -f values.yaml --namespace monitoringtest Everything works fine but I'm trying to add some custom dashboards from json files and I'm struggling to understand how to do it. I was following this: https://blogcodevalue.wordpress.com/2018/09/16/automate-grafana-dashboard-import-process/ In my values.yaml I added below serverDashboardConfigmaps: - example-dashboards I understand that if I do: helm upgrade --install kube-prometheus -f values.yaml --namespace monitoringtest coreos/kube-prometheus That should cause grafana to pickup a below configmap called example-dashboards and load *.json files from custom-dashboards folder. apiVersion: v1 kind: ConfigMap metadata: name: example-dashboards data: {{ (.Files.Glob "custom-dashboards/*.json").AsConfig | indent 2 }} # Or # # data: # custom-dashboard.json: |- # {{ (.Files.Get "custom.json") | indent 4 }} # # The filename (and consequently the key under data) must be in the format `xxx-dashboard.json` or `xxx-datasource.json` # for them to be picked up. Now two questions: How do I add above configmap to this helm release? Where is this custom-dashboards folder located? Is it on my laptop and then is send to grafana? Do I need to copy all of https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/ onto my laptop? Sorry for explaining everything but I'm just trying to understand this.
You can find a good example of how to do this in the charts for prometheus-operator here: https://github.com/helm/charts/tree/master/stable/prometheus-operator/templates/grafana It is a ConfigMapList that gets all JSONs from a given directory and stores them into ConfigMaps which are read by Grafana. {{- $files := .Files.Glob "dashboards/*.json" }} {{- if $files }} apiVersion: v1 kind: ConfigMapList items: {{- range $path, $fileContents := $files }} {{- $dashboardName := regexReplaceAll "(^.*/)(.*)\\.json$" $path "${2}" }} - apiVersion: v1 kind: ConfigMap metadata: name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) $dashboardName | trunc 63 | trimSuffix "-" }} namespace: {{ template "prometheus-operator.namespace" . }} labels: {{- if $.Values.grafana.sidecar.dashboards.label }} {{ $.Values.grafana.sidecar.dashboards.label }}: "1" {{- end }} app: {{ template "prometheus-operator.name" $ }}-grafana {{ include "prometheus-operator.labels" $ | indent 6 }} data: {{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }} {{- end }} {{- end }} Mind that the size of a ConfigMap might be limited: https://stackoverflow.com/a/53015758/4252480
Grafana
55,830,850
17
The Grafana dashboard comes with a default black background color. Is it possible to change the color to some other color of the user's choice?
What version are you using? If you're using > v3.x, you'll be able to set a default theme for the main organisation. Set it to the light theme and you're good.
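If you run your own instance, the same thing can be set server-wide in grafana.ini (a sketch — the option lives under the [users] section): [users] default_theme = light — restart grafana-server afterwards for the change to take effect. Note this only switches between the built-in dark and light themes; arbitrary background colors are not supported out of the box.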
Grafana
41,006,070
17
I have successfully running a grafana instance on my server. It runs on http without a problem. Now I want to switch from http to https. My grafana.ini is shown bellow: #################################### Server #################################### [server] # Protocol (http or https) protocol = https # The ip address to bind to, empty will bind to all interfaces http_addr = 0.0.0.0 # The http port to use http_port = 3000 # The public facing domain name used to access grafana from a browser ;domain = localhost # Redirect to correct domain if host header does not match domain # Prevents DNS rebinding attacks ;enforce_domain = false # The full public facing url ;root_url = %(protocol)s://%(domain)s:%(http_port)s/ # Log web requests ;router_logging = false # the path relative working path ;static_root_path = public # enable gzip ;enable_gzip = false # https certs & key file cert_file = /usr/local/ssl/crt/certificate.cer cert_key = /usr/local/ssl/private/private_key.key
The above configuration may have a problem: after changing the grafana.ini file the "grafana-server" service will not start again. Here's how I solved my problem: Change grafana.ini as mentioned above. Copy the certificate files (pem, crt and key) to /etc/grafana. Change the file permissions of the certificate files to 644 (go+r) and the owner to root:root. After that the grafana service will work properly in HTTPS mode.
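A rough shell sketch of those steps (file names taken from the question, a systemd-based install assumed): sudo cp /usr/local/ssl/crt/certificate.cer /usr/local/ssl/private/private_key.key /etc/grafana/ && sudo chown root:root /etc/grafana/certificate.cer /etc/grafana/private_key.key && sudo chmod 644 /etc/grafana/certificate.cer /etc/grafana/private_key.key && sudo systemctl restart grafana-server — and remember to point cert_file and cert_key in grafana.ini at the new /etc/grafana paths. If you prefer not to make the private key world-readable, chown it root:grafana with mode 640 instead.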
Grafana
39,956,790
17
I have a Grafana dashboard panel configured to render the results of a Prometheus query. There are a large number of series returned by the query, with the legend displayed to the right. If the user is looking for a specific series, they have to potentially scroll through all of them, and it's easy to miss the one they're looking for. So I'd like to sort the legend by series name, but I can't find any way to do that. My series name is a concatenation of two labels, so if I could sort the instant vector returned from the PromQL query by label value, I think Grafana would use that order in the legend. But I don't see any way to do that in Prometheus. There is a sort() function, but it sorts by sample value. And I don't see any way to sort the legend in Grafana.
As far as I know, You can only use the function sort() to sort metrics by value. According to this PR, Prometheus does not intend to provide the function sort_by_label(). According to this Issue, Grafana displays the query results from Prometheus without sorting. According to this Issue, Grafana supports sorting by value when displaying legend. In Grafana 7, Prometheus metrics can be transformed from time series format to table format using the Transform module, so that you can sort the metrics by any label or value. In December 2023, prometheus v2.49 finally added sort_by_label() and sort_by_label_desc()
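With Prometheus v2.49+ a sorted query could look like this (a sketch): sort_by_label(my_metric, "instance") — but note that, as far as I know, it was added as an experimental PromQL function, so the server may need to be started with --enable-feature=promql-experimental-functions before it is accepted; whether Grafana's legend preserves that order still depends on the panel's own legend settings.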
Grafana
64,395,442
16
I have configured a grafana dashboard to monitor prometheus metrics for some of the spring boot services. I have a single panel with a prom query for every service on it. Now I want to add alerts for each one of those queries, but I couldn't find a way to add multiple alerts on a single panel — I could only add one, for one of the queries. Is there a way to do it? Or would I need to split the panel into multiple panels?
You can specify the query that the alert threshold is evaluating within the 'conditions' but it will still be just one alert. As such your Alert message won't include anything to distinguish which specific query condition triggered the alert, it's just whatever text is in the box (AFAIK there's not currently any way to add variables to the alert message). I've ended up with a separate dashboard which isn't generally viewed, just for alerts with multiple panels for each alert. You can quickly duplicate them by using the panel json and a search/replace for the node name, service name etc.
Grafana
63,245,770
16
I am exploring grafana for my log management and system monitoring. I found that kibana is also used for the same purpose. I just don't know when to use kibana, when to use grafana, and when to use zabbix.
Zabbix - a comprehensive monitoring solution including data gathering, data archiving (trends, compaction, ...), a visualizer with dashboards, alerting and some management support for alert escalations. (Have a look at collectd, prometheus, cacti: they are all able to gather data.) Grafana - a visualizer of data. It can read data at least from prometheus, graphite and Elasticsearch. Its primary goal is to visualize things in user-defined dashboards and correlate things from possibly various sources. You can, for example, see CPU load (float time-series data from prometheus, say) with nice annotations referring to some special event in a log file (loaded from Elasticsearch, of course). Kibana - visualization + analytics on log data stored in Elasticsearch. Have a quick look at Kibana Discover to get the idea. It is a "must have" tool when you need to search your logs (various services, various servers) in one place.
Grafana
40,882,040
16
We have different type of logs for one and the same application. Some are coming from our application which logs in a JSON format and others are different types of log messages. For example these 3 log lines: "{\"written_at\": \"2022-03-30T07:51:04.934Z\", \"written_ts\": 1648626664934052000, \"msg\": \"Step 'X' started at 2022-03-30 07:51:04\", \"type\": \"log\", \"logger\": \"my-logger\", \"thread\": \"MainThread\", \"level\": \"DEBUG\", \"module\": \"my.module\", \"line_no\": 48}\n" " ERROR Data processing error: Did not work \n" "FileNotFoundError: [Errno 2] No such file or directory: '/local.json'\n" To parse our application JSON logs we perform the following LogQL query: | json log="log" | line_format "{{.log}}" | json | line_format "{{.msg}}" | __error__ != "JSONParserErr" As our query already states, we can not parse the other line logs since they are not in JSON format. Can we define different parsing and formatting depending on conditions? Or as fallback when the JSONParserErr happens?
Not sure if you managed to get an answer to this, as I'm looking to see if this is possible in a single query, however you can do this with multiple queries… For the JSON rows | json log="log" | line_format "{{.log}}" | json | line_format "{{.msg}}" | __error__ != "JSONParserErr" # … more processing For the non-JSON rows… {swarm_stack="apiv2-acme", swarm_service="apiv2-acme_tenant-import"} | json | __error__ = "JSONParserErr" | drop __error__ # … more processing
Grafana
71,675,210
15
In https://github.com/grafana/loki/issues/4249 I found an interesting screenshot. On this screenshot I see that the log level and message are displayed in bold white text, while the other metadata (collected from the log message) is displayed on a separate line in grey. I have searched the docs and haven't found how to achieve that. To be honest, I'm searching for something like "short message" in ELK, so that developers see the metadata only when they actually need it. Could you please point me to the doc on how to achieve this?
Short answer: I found that there isn't such UI function in Grafana UI. But there's two features that can help you achieve such result: Line formating - allows you to show only selected parts of message ANSI escape sequence rendering - that allows you to change font settings (bold/italic/color) Long answer: Here's my initial test query (that shows only messages that have "HornetQ") {appname=~".+"} |= "HornetQ" it produces following output. I have added line formatting to the query to show only message field by default {appname=~".+"} |= "HornetQ" | json | line_format "{{ .message }}" But if you would open message details you would see all json fields anyway Let's add modify line format to show preview of extra fields on separate line. We would use pattern '<_entry>' to save initial json for further processing. Also we would use gotpl loop in line_format and if that would skip message field {appname=~".+"} |= "HornetQ" | pattern `<_entry>` | json | line_format "{{ .message }}\n{{ range $k, $v := (fromJson ._entry)}}{{if ne $k \"message\"}}{{$k}}: {{$v}} {{ end }}{{ end }}" Let's make our messages better readable by changing font options. To achieve that we would use ANSI escape sequences (additional info) {appname=~".+"} | pattern `<_entry>` | json | line_format "\033[1;37m{{ .message }}\033[0m\n{{ range $k, $v := (fromJson ._entry)}}{{if ne $k \"message\"}}\033[1;30m{{$k}}: \033[0m\033[2;37m{{$v}}\033[0m {{ end }}{{ end }}" You can see that |= "HornetQ" part is missing in last query, that because it breaks last query (with colouring), so I skip it. P.S. So for now my solution doesn't work with fulltext search :(
Grafana
69,761,162
15
Is there a way to send logs to Loki directly without having to use one of it's agents? For example, if I have an API, is it possible to send request/response logs directly to Loki from an API, without the interference of, for example, Promtail?
Loki HTTP API Loki HTTP API allows pushing messages directly to Grafana Loki server: POST /loki/api/v1/push /loki/api/v1/push is the endpoint used to send log entries to Loki. The default behavior is for the POST body to be a snappy-compressed protobuf message: Protobuf definition Go client library Alternatively, if the Content-Type header is set to application/json, a JSON post body can be sent in the following format: { "streams": [ { "stream": { "label": "value" }, "values": [ [ "<unix epoch in nanoseconds>", "<log line>" ], [ "<unix epoch in nanoseconds>", "<log line>" ] ] } ] } You can set Content-Encoding: gzip request header and post gzipped JSON. Example: curl -v -H "Content-Type: application/json" -XPOST -s "http://localhost:3100/loki/api/v1/push" --data-raw \ '{"streams": [{ "stream": { "foo": "bar2" }, "values": [ [ "1570818238000000000", "fizzbuzz" ] ] }]}' So it is easy to create JSON-formatted string with logs and send it to the Grafana Loki. Libraries There are some libraries implementing several Grafana Loki protocols. There is also (my) zero-dependency library in pure Java 1.8, which implements pushing logs in JSON format to Grafana Loki. Works on Java SE and Android platform: https://github.com/mjfryc/mjaron-tinyloki-java Security Above API doesn't support any access restrictions as written here - when using over public network, consider e.g. configuring Nginx proxy with HTTPS from Certbot and Basic Authentication.
Grafana
67,316,535
15
I am a little unclear on when to exactly use increase and when to use sum_over_time in order to calculate a periodic collection of data in Grafana. I want to calculate the total percentage of availability of my system. Thanks.
The "increase" function calculates how much a counter increased in the specified interval. The "sum_over_time" function calculates the sum of all values in the specified interval. Suppose you have the following data series in the specified interval: 5, 5, 5, 5, 6, 6, 7, 8, 8, 8 Then you would get: increase = 8-5 = 3 sum_over_time = 5+5+5+5+6+6+7+8+8+8 = 63 If your goal is to calculate the total percentage of availability I think it's better to use the "avg_over_time" function.
Grafana
63,289,864
15
My password once worked, but I don't remember if I changed it or not. However, I can't reset it. I tried with no success: kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo > DpveUuOyxNrandompasswordYuB5Fs2cEKKOmG <-- does not work (anymore?) PS: I did not set any admin email for web-based reset
Ok found. Best way is to run grafana-cli inside grafana's pod. namespace=monitoring kubectl exec --namespace $namespace -it $(kubectl get pods --namespace $namespace -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}") -- grafana cli admin reset-admin-password yourNewPasswordHere INFO[01-21|10:24:17] Connecting to DB logger=sqlstore dbtype=sqlite3 INFO[01-21|10:24:17] Starting DB migration logger=migrator Admin password changed successfully ✔
Grafana
59,838,690
15
I am setting up Grafana in Fargate using Docker. Once the Grafana container is active, is there an endpoint I can call that Fargate could use to determine if the container is "healthy" or not? For example: http://grafana/healthy or http://grafana/status Thanks!
Returns health information about Grafana GET /api/health It is documented: https://grafana.com/docs/grafana/latest/http_api/other/#returns-health-information-about-grafana Or undocumented (so use it only if you understand how it works and what are consequences): GET /healthz https://github.com/grafana/grafana/pull/27536
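A quick sketch of how that can be wired into a container health check (the port and the availability of curl in the image are assumptions): curl -f http://localhost:3000/api/health || exit 1 — a healthy instance answers HTTP 200 with a small JSON body that includes the database status, so this works as the command of an ECS/Fargate health check.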
Grafana
66,535,307
14
I have recently set up Grafana with InfluxDB. I'd like to show a panel that indicates how long it has been since an event took place. Examples: Server last reported in: 33 minutes ago Last user sign up: 17 minutes ago I can get a single metric pretty easily with the following code: SELECT time, last("duration") as last_duration FROM custom_events ORDER BY time DESC But I can't seem to get Grafana to do what I want with the time field. Any suggestions?
Since Grafana(4.6.0) this is now possible with singlestat panels. GoTo the Options-Tab Select Value -> Stat -> Time of last point Select Value -> Stat -> Unit -> Date & time -> From Now
Grafana
35,640,978
14
I have a Spring Boot app exposing OpenMetrics stats using Micrometer. For each of my HTTP endpoints, I can see the following metric, which I believe tracks the number of requests for the given endpoint: http_server_requests_seconds_count My question is: how do I use this in a Grafana query to present the number of requests calling my endpoint, say, every minute? I tried http_client_requests_seconds_count{} and sum(rate(http_client_requests_seconds_count{}[1m])) but neither works. Thanks in advance.
rate(http_client_requests_seconds_count{}[1m]) will provide you the number of request your service received at a per-second rate. However by using [1m] it will only look at the last minute to calculate that number, and requires that you collect samples at a rate quicker than a minute. Meaning, you need to have collected 2 scrapes in that timeframe. increase(http_client_requests_seconds_count{}[1m]) would return how much that count increased in that timeframe, which is probably what you would want, though you still need to have 2 data points in that window to get a result. Other way you could accomplish your result: increase(http_client_requests_seconds_count{}[2m]) / 2 By looking over 2 minutes then dividing it, you will have more data and it will flatten spikes, so you'll get a smoother chart. rate(http_client_requests_seconds_count{}[1m]) * 60 By multiplying the rate by 60 you can change the per-second rate to a per-minute value. Here is a writeup you can dig into to learn more about how they are calculated and why increases might not exactly align with integer values: https://promlabs.com/blog/2021/01/29/how-exactly-does-promql-calculate-rates
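Putting that together for the question's goal of requests per minute per endpoint, a sketch could be: sum by (uri) (rate(http_server_requests_seconds_count[2m])) * 60 — the sum by (uri) collapses the other labels (status, exception, instance, …) into one series per endpoint; adjust the grouping labels as needed.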
Grafana
66,282,512
13
I have below labels in prometheus, how to create wildcard query while templating something like “query”: “label_values(application_*Count_Total,xyx)” . These values are generated from a Eclipse Microprofile REST-API application_getEnvVariablesCount_total application_getFEPmemberCount_total application_getLOBDetailsCount_total application_getPropertiesCount_total { "allValue": null, "current": { "isNone": true, "selected": false, "text": "None", "value": "" }, "datasource": "bcnc-prometheus", "definition": "microprofile1", "hide": 0, "includeAll": false, "label": null, "multi": false, "name": "newtest", "options": [ { "isNone": true, "selected": true, "text": "None", "value": "" } ], "query": "microprofile1", "refresh": 0, "regex": "{__name__=~\"application_.*Count_total\"}", "skipUrlSync": false, "sort": 0, "tagValuesQuery": "", "tags": [], "tagsQuery": "", "type": "query", "useTags": false },
Prometheus treats metric names the same way as label values with a special label - __name__. So the following query should select all the values for label xyx across metrics with names matching application_.*Count_total regexp: label_values({__name__=~"application_.*Count_total"}, xyx)
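The same trick should also let you list the matching metric names themselves by asking for the special __name__ label — a sketch: label_values({__name__=~"application_.*Count_total"}, __name__) — which you can then use as a Grafana template variable that expands to each matching metric name.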
Grafana
59,684,225
13
I'm looking at Prometheus metrics in a Grafana dashboard, and I'm confused by a few panels that display metrics based on an ID that is unfamiliar to me. I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c points to a single pod, and I assume that /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c/<another-long-string> resolves to a container in the pod, but how do I resolve this ID to the pod name and a container i.e. how to do I map this ID to the pod name I see when I run kubectl get pods? I already tried running kubectl describe pods --all-namespaces | grep "99b2fe2a-104d-11e8-baa7-06145aa73a4c" but that didn't turn up anything. Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths? Lastly, where can I learn more about what manages /kubepods? Prometheus Query: sum (container_memory_working_set_bytes{id!="/",kubernetes_io_hostname=~"^$Node$"}) by (id) / Thanks for reading. Eric
OK, now that I've done some digging around, I'll attempt to answer all 3 of my own questions. I hope this helps someone else. How to do I map this ID to the pod name I see when I run kubectl get pods? Given the following, /kubepods/burstable/pod99b2fe2a-104d-11e8-baa7-06145aa73a4c, the last bit is the pod UID, and can be resolved to a pod by looking at the metadata.uid property on the pod manifest: kubectl get pod --all-namespaces -o json | jq '.items[] | select(.metadata.uid == "99b2fe2a-104d-11e8-baa7-06145aa73a4c")' Once you've resolved the UID to a pod, we can resolve the second UID (container ID) to a container by matching it with the .status.containerStatuses[].containerID in the pod manifest: ~$ kubectl get pod my-pod-6f47444666-4nmbr -o json | jq '.status.containerStatuses[] | select(.containerID == "docker://5339636e84de619d65e1f1bd278c5007904e4993bc3972df8628668be6a1f2d6")' Furthermore, there are several subpaths in /kubepods, such as /kubepods/burstable and /kubepods/besteffort. What do these mean and how does a given pod fall into one or another of these subpaths? Burstable, BestEffort, and Guaranteed are Quality of Service (QoS) classes that Kubernetes assigns to pods based on the memory and cpu allocations in the pod spec. More information on QoS classes can be found here https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/. To quote: For a Pod to be given a QoS class of Guaranteed: Every Container in the Pod must have a memory limit and a memory request, and they must be the same. Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same. A Pod is given a QoS class of Burstable if: The Pod does not meet the criteria for QoS class Guaranteed. At least one Container in the Pod has a memory or cpu request. For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or cpu limits or requests. Lastly, where can I learn more about what manages /kubepods? /kubepods/burstable, /kubepods/besteffort, and /kubepods/guaranteed are all a part of the cgroups hierarchy, which is located in /sys/fs/cgroup directory. Cgroups is what manages resource usage for container processes such as CPU, memory, disk I/O, and network. Each resource has its own place in the cgroup hierarchy filesystem, and in each resource sub-directory are /kubepods subdirectories. More info on cgroups and Docker containers here: https://docs.docker.com/config/containers/runmetrics/#control-groups
Grafana
49,035,724
13
I used Helm to install Prometheus and Grafana on a local minikube: $ helm install stable/prometheus $ helm install stable/grafana The Prometheus server, Alertmanager and Grafana are reachable after setting up port-forwards: $ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 9090 $ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 9093 $ export POD_NAME=$(kubectl get pods --namespace default -l "app=excited-crocodile-grafana,component=grafana" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 3000 When adding the data source in Grafana, I got an HTTP Error Bad Gateway error: I then imported dashboard 315 from: https://grafana.com/dashboards/315 and when checking Kubernetes cluster monitoring (via Prometheus), I got a Templating init failed error: Why?
In the HTTP settings of Grafana you set Access to Proxy, which means that Grafana wants to access Prometheus. Since Kubernetes uses an overlay network, it is a different IP. There are two ways of solving this: Set Access to Direct, so the browser directly connects to Prometheus. Use the Kubernetes-internal IP or domain name. I don't know about the Prometheus Helm-chart, but assuming there is a Service named prometheus, something like http://prometheus:9090 should work.
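For the second option, the data source URL uses the in-cluster service DNS name — a sketch (the exact service name depends on the Helm release, so check it with kubectl get svc -n monitoringtest): http://<prometheus-service-name>.monitoringtest.svc.cluster.local:9090 with Access left on Proxy, so Grafana reaches Prometheus over the cluster network and no port-forward is needed.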
Grafana
48,338,122
13
What is the best practice for migrating grafana (configuration, dashboards etc.) to a newer version? I want to migrate a v3 grafana installation to a new server which will be running the v4 codebase with alerting! According to the docs, grafana v4 will automatically update the database schema once you start it, so I assume this process is essentially: Install grafana v4 on the new server. Copy the /var/lib/grafana/grafana.db from the old server to the new one. Merge the /etc/grafana/grafana.ini file. Install any plugins. Restart grafana-server. Is there anything I'm missing? UPDATE: What if grafana is deployed as a docker container? Below there's a docker-compose file which spins up a grafana 7.3.5 container; what files should I migrate to the container via a mounted volume? version: "3.1" services: grafana_seven: image: "grafana/grafana:${NEW_TAG}" user: "${UID}:${GID}" container_name: newgrafana ports: - "3001:3000" volumes: - ./tmp_volume/graf_volume/new_grafana/:/var/lib/grafana
That should do it. If you're using SQLite you can just copy the data/grafana.db file to the new server.
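For the Docker setup in the update, a sketch (oldgrafana is a hypothetical name for the old container; SQLite assumed): docker cp oldgrafana:/var/lib/grafana/grafana.db ./tmp_volume/graf_volume/new_grafana/grafana.db, then make sure the file is readable by the UID/GID the new container runs as, and start the compose service — on first start the newer Grafana migrates the database schema automatically, so keep a backup copy of grafana.db in case you need to roll back.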
Grafana
41,009,392
13
I tried to obtain these measurements from prometheus: increase(http_server_requests_seconds_count{uri="myURI"}[10s]) increase(http_server_requests_seconds_count{uri="myURI"}[30s]) rate(http_server_requests_seconds_count{uri="myURI"}[10s]) rate(http_server_requests_seconds_count{uri="myURI"}[30s]) Then I ran a python script where 5 threads are created, each of them hitting this myURI endpoint. What I see on Grafana is that I received these values: 0 6 0 0.2 I expected to receive these (but didn't): 5 (as in the last 10 seconds this endpoint received 5 calls) 5 (as in the last 30 seconds this endpoint received 5 calls) 0.5 (the endpoint received 5 calls in 10 seconds, 5/10) 0.167 (the endpoint received 5 calls in 30 seconds, 5/30) Can someone explain, using my example, the formula behind these functions and a way to obtain the values I expect?
Prometheus calculates increase(m[d]) at timestamp t in the following way: It fetches raw samples stored in the database for time series matching m over the time range (t-d .. t]. Note that samples at timestamp t-d aren't included in the time range, while samples at t are included. It is expected that every selected time series is a counter, since increase() works only with counters. It calculates the difference between the last and the first raw sample value on the selected time range individually for each time series matching m. Note that Prometheus doesn't take into account the difference between the last raw sample just before the (t-d .. t] time range and the first raw sample inside this time range. This may lead to lower than expected results in some cases. It extrapolates the results obtained at step 2 if the first and/or the last raw samples are located too far from the time range boundaries (t-d .. t]. This may lead to unexpected results, for example fractional results for integer counters. See this issue for details. Prometheus calculates rate(m[d]) as increase(m[d]) / d, so rate() results may also be unexpected sometimes. Prometheus developers are aware of these issues and are going to fix them eventually - see these design docs. In the meantime you can use VictoriaMetrics - a Prometheus-like monitoring solution I work on. It provides increase() and rate() functions which are free from the issues mentioned above.
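As a rough illustration of the "last minus first over a half-open window" behaviour (ignoring extrapolation and counter resets), a small Python sketch:
# raw samples scraped every 30s: (timestamp_seconds, counter_value)
samples = [(0, 4381), (30, 4381), (60, 4381), (90, 4402), (120, 4423)]

def increase(samples, t, window):
    # keep only samples in the half-open window (t - window, t]
    in_window = [v for ts, v in samples if t - window < ts <= t]
    if len(in_window) < 2:
        return 0.0            # fewer than two samples -> no difference to take
    return in_window[-1] - in_window[0]   # last minus first, no extrapolation

def rate(samples, t, window):
    return increase(samples, t, window) / window

print(increase(samples, t=120, window=60))   # 21: only the samples at 90s and 120s fall in (60, 120]
print(increase(samples, t=120, window=30))   # 0: a single sample in (90, 120], the "lower than expected" case
print(rate(samples, t=120, window=120))      # 0.35: (4423 - 4381) / 120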
Grafana
70,835,778
12
I want to exclude mulitple app groups from my query... Not sure how to go about it.. My thoughts are like this count(master_build_state{app_group~! "oss-data-repair", "pts-plan-tech-solution", kubernets_namespace = "etc"} ==0) I do not want to include those two app_groups, but am not sure how to implement in PromQL. You would thing to add () or [], but it throws errors. Let me know if anyone can help! Thanks
count(master_build_state{app_group !~ "(oss-data-repair|pts-plan-tech-solution)", kubernets_namespace="etc"} ==0)
Grafana
68,681,720
12
I went through the PromQL docs and found rate little bit confusing. Then I tried one query from Prometheus query dashboard and found below given results Time Count increase rate(count[1m]) 15s 4381 0 0 30s 4381 0 0 45s 4381 0 0 1m 4381 0 0 15s 4381 0 0 30s 4402 21 0.700023 45s 4402 0 0.700023 2m 4423 21 0.7 15s 4423 0 0.7 30s 4440 17 0.56666666 45s 4440 0 0.56666666 3m 4456 16 0.53333333 Last column value I am getting from dashboard but I am not able to understand how is this calculated. Resolution - 15s scrape_interval: 30s
The "increase" function calculates how much some counter has grown and the "rate" function calculates the amount per second the measure grows. Analyzing your data I think you used [30s] for the "increase" and [1m] for the "rate" (the correct used values are important to the result). Basically, for example, in time 2m we have: increase[30s] = count at 2m - count at 1.5m = 4423 - 4402 = 21 rate[1m] = (count at 2m - count at 1m) / 60 = (4423 - 4381) / 60 = 0.7 Prometheus documentation: increase and rate.
Grafana
66,674,880
12
In my project we use influx dB and Grafana for our log and other analysis which is running on an Ubuntu machine. Now recently due to a migration process, the ports were blocked like 3000(for Grafana) and 8086 (for influx dB) which will be remain blocked for some security reason. So, I am unable to connect them through the browser and postman. So as a worked around we are planning to move these (at least the dashboards) to a local setup. I checked the process are up and running. But unable to locate the physical location of the dashboard files. I have a default setting, don't have any separate database configuration for grafana. [database] # You can configure the database connection by specifying type, host, name, user and password # as separate properties or as on string using the url properties. # Either "mysql", "postgres" or "sqlite3", it's your choice ;type = sqlite3 ;host = 127.0.0.1:3306 ;name = grafana ;user = root # If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;""" ;password = # Use either URL or the previous fields to configure the database # Example: mysql://user:secret@host:port/database ;url = # For "postgres" only, either "disable", "require" or "verify-full" ;ssl_mode = disable ;ca_cert_path = ;client_key_path = ;client_cert_path = ;server_cert_name = Is there any location where I can find these JSON file?
I figured it out through some research; posting it here in case someone else is searching for the answer. The default dashboard folder is /var/lib/grafana. If you navigate to that folder, you will find a file named grafana.db. Download this file to your local machine or any machine you want. Then download sqlitebrowser from here. In sqlitebrowser, click Open Database and select the grafana.db file. Right-click on the dashboard table, select Browse Table, and in the data column you will find the dashboards.
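If you prefer a script over a GUI, the same file can be read with Python's built-in sqlite3 module. A sketch that assumes the dashboard JSON lives in a dashboard table with title and data columns (which is what sqlitebrowser shows); verify the schema on your own copy first:
import json
import sqlite3

conn = sqlite3.connect("grafana.db")          # a copy of /var/lib/grafana/grafana.db
cur = conn.cursor()

# Assumed schema: a "dashboard" table with "title" and "data" (JSON) columns.
for title, data in cur.execute("SELECT title, data FROM dashboard"):
    dashboard = json.loads(data)
    with open(title.replace(" ", "_") + ".json", "w") as fh:
        json.dump(dashboard, fh, indent=2)    # one importable JSON file per dashboard
    print("exported", title)

conn.close()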
Grafana
65,860,003
12
I would like to add an annotation on all panels (graphs) in the Grafana dashboard. I could add annotations manually one-by-one on all panels -- but I hope there is a better way how to do it although I didn't find any information in the official documentation. I suppose I can write a script using Grafana API to create annotation on all panels in dashboard; however this seems like a complicated workaround more than an actual solution. Do you know how to easily add annotations on all graphs at once?
I struggled with the same thing, but found the answer here: https://stackoverflow.com/a/50416641/3803228 In short, you have to go to your dashboard settings and, in the "Annotations" tab, create a new query; that one will automatically display the annotation on all of the dashboard's panels. This is the part from @aussiedan's answer that worked for me: once you have your deployments being added as annotations, you can display them on your dashboard by going to the Annotations tab in the dashboard settings and adding a new annotation source. The annotations will then be shown on the panels in your dashboard.
Grafana
62,422,980
12
I'm trying to show system uptime as DD-HH-MM-SS format, doing it using common code wouldn't be an issue but I'm doing it using Prometheus (PromQL) and Grafana only, here's the PromQL query: time()-process_start_time_seconds{instance="INSTANCE",job="JOB"} I achieved the basic output I wanted, it shows me the process life time. The output for the query above gives me time in seconds (for instance 68003) and converts it to bigger time units (minutes, hours etc.), but in its decimal form: The 89 after the decimal point refers to 89% of an hour,about 53 minutes. That's not a really "intuitive" way to show time, I would have liked it to display a normal DD:HH:MM:SS presentation of that time like the following screenshot from a simple online tool that converts seconds to time: Is there away to achieve it using only PromQL and Grafana configuration?
You can achieve this using the "Unit" drop-down in the Visualization section and selecting your unit as Duration with the hh:mm:ss format, as you can see in the screenshot.
Grafana
60,757,793
12
There's an article "Tracking Every Release" which tells about displaying a vertical line on graphs for every code deployment. They are using Graphite. I would like to do something similar with Prometheus 2.2 and Grafana 5.1. More specifically I want to get an "application start" event displayed on a graph. Grafana annotations seem to be the appropriate mechanism for this but I can't figure out what type of prometheus metric to use and how to query it.
The simplest way to do this is via the same basic approach as in the article: have your deployment tool tell Grafana when it performs a deployment. Grafana has a built-in system for storing annotations, which are displayed on graphs as vertical lines and can have text associated with them. It would be as simple as creating an API key in your Grafana instance and adding a curl call to your deploy script: curl -H "Authorization: Bearer <apikey>" http://grafana:3000/api/annotations -H "Content-Type: application/json" -d '{"text":"version 1.2.3 deployed","tags":["deploy","production"]}' For more info on the available options check the documentation: http://docs.grafana.org/http_api/annotations/ Once you have your deployments being added as annotations, you can display them on your dashboard by going to the annotations tab in the dashboard settings and adding a new annotation source. Then the annotations will be shown on the panels in your dashboard.
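If your deploy tooling is Python rather than shell, the equivalent call looks roughly like this (a sketch using the requests library; the host, API key and tags are placeholders):
import requests

GRAFANA_URL = "http://grafana:3000"       # placeholder host
API_KEY = "<apikey>"                      # an API key with Editor rights

def annotate_deploy(version):
    # Same endpoint as the curl example: POST /api/annotations
    resp = requests.post(
        f"{GRAFANA_URL}/api/annotations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": f"version {version} deployed",
              "tags": ["deploy", "production"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()                    # contains the id of the new annotation

if __name__ == "__main__":
    print(annotate_deploy("1.2.3"))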
Grafana
50,415,659
12
I have 20 plus dashboards in Grafana hosting at Server1. We acquired another server and we did installed same version of Grafana on Server2 machine. I want to know is this possible that i can completely clone Server-1 Grafana instance along with all dashboards to Server2? As of now Grafana only supports one by one dashboard import and export. One other possibility i am thinking is to copy all Grafana files/directories from Server-1 to server-2 using standard SCP command. But i am not sure which files do i need to copy.
If you are using the built-in sqlite3 database, then you can indeed just copy your data directory and conf/custom.ini to the new server and that will include all your dashboards, plugins, etc. In that setup the database is contained in data/grafana.db under your grafana installation.
Grafana
42,036,663
12
I am following this tutorial link to create a grafana plugin. But when I copy this code link from the tutorial to my test server(without the dist/ folder) and run npm install npm doesn’t create a new dist/ folder instead it creates a node_modules folder. Am I missing a step here or am I understanding something incorrect? Since I expected that command to create a dist/ folder out of the files in the src/ folder? The grunt file: module.exports = (grunt) => { require('load-grunt-tasks')(grunt); grunt.loadNpmTasks('grunt-execute'); grunt.loadNpmTasks('grunt-contrib-clean'); grunt.initConfig({ clean: ['dist'], copy: { src_to_dist: { cwd: 'src', expand: true, src: ['**/*', '!**/*.js', '!**/*.scss'], dest: 'dist' }, pluginDef: { expand: true, src: [ 'plugin.json', 'README.md' ], dest: 'dist', } }, watch: { rebuild_all: { files: ['src/**/*', 'plugin.json'], tasks: ['default'], options: {spawn: false} }, }, babel: { options: { sourceMap: true, presets: ['es2015'], plugins: ['transform-es2015-modules-systemjs', 'transform-es2015-for-of'], }, dist: { files: [{ cwd: 'src', expand: true, src: ['*.js'], dest: 'dist', ext: '.js' }] }, }, }); grunt.registerTask('default', ['clean', 'copy:src_to_dist', 'copy:pluginDef', 'babel']); }; The package.json: { "name": "clock-panel", "version": "1.0.0", "description": "Clock Panel Plugin for Grafana", "main": "src/module.js", "scripts": { "lint": "eslint --color .", "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [ "clock", "grafana", "plugin", "panel" ], "author": "Raintank", "license": "MIT", "devDependencies": { "babel": "~6.5.1", "babel-eslint": "^6.0.0", "babel-plugin-transform-es2015-modules-systemjs": "^6.5.0", "babel-preset-es2015": "^6.5.0", "eslint": "^2.5.1", "eslint-config-airbnb": "^6.2.0", "eslint-plugin-import": "^1.4.0", "grunt": "~0.4.5", "grunt-babel": "~6.0.0", "grunt-contrib-clean": "~0.6.0", "grunt-contrib-copy": "~0.8.2", "grunt-contrib-uglify": "~0.11.0", "grunt-contrib-watch": "^0.6.1", "grunt-execute": "~0.2.2", "grunt-systemjs-builder": "^0.2.5", "load-grunt-tasks": "~3.2.0" }, "dependencies": { "lodash": "~4.0.0", "moment": "^2.12.0" } }
You are missing running the grunt default task. You should run npm install (which installs your dependencies), followed by grunt (which copies the src files to dist, as you can see in the Gruntfile.js copy:src_to_dist task). In short, just run: $ npm install && grunt
Grafana
37,975,500
12
env: kubernetes provider: gke kubernetes version: v1.13.12-gke.25 grafana version: 6.6.2 (official image) grafana deployment manifest: apiVersion: apps/v1 kind: Deployment metadata: name: grafana namespace: monitoring spec: replicas: 1 selector: matchLabels: app: grafana template: metadata: name: grafana labels: app: grafana spec: containers: - name: grafana image: grafana/grafana:6.6.2 ports: - name: grafana containerPort: 3000 # securityContext: # runAsUser: 104 # allowPrivilegeEscalation: true resources: limits: memory: "1Gi" cpu: "500m" requests: memory: "500Mi" cpu: "100m" volumeMounts: - mountPath: /var/lib/grafana name: grafana-storage volumes: - name: grafana-storage persistentVolumeClaim: claimName: grafana-pvc Problem when I deployed this grafana dashboard first time, its working fine. after sometime I restarted the pod to check whether volume mount is working or not. after restarting, I getting below error. mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied GF_PATHS_DATA='/var/lib/grafana' is not writable. You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later what I understand from this error, user could create these files. How can I give this user appropriate permission to start grafana successfully?
I recreated your deployment with an appropriate PVC and noticed that the grafana pod was failing. Output of command: $ kubectl get pods -n monitoring NAME READY STATUS RESTARTS AGE grafana-6466cd95b5-4g95f 0/1 Error 2 65s Further investigation pointed to the same errors as yours: mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied GF_PATHS_DATA='/var/lib/grafana' is not writable. You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later This error showed up on the first creation of the deployment and its pod; there was no need to recreate any pods. What I did to make it work was to edit your deployment: apiVersion: apps/v1 kind: Deployment metadata: name: grafana namespace: monitoring spec: replicas: 1 selector: matchLabels: app: grafana template: metadata: name: grafana labels: app: grafana spec: securityContext: runAsUser: 472 fsGroup: 472 containers: - name: grafana image: grafana/grafana:6.6.2 ports: - name: grafana containerPort: 3000 resources: limits: memory: "1Gi" cpu: "500m" requests: memory: "500Mi" cpu: "100m" volumeMounts: - mountPath: /var/lib/grafana name: grafana-storage volumes: - name: grafana-storage persistentVolumeClaim: claimName: grafana-pvc Please take a specific look at the part: securityContext: runAsUser: 472 fsGroup: 472 It is a setting described in the official documentation: Kubernetes.io: set the security context for a pod Please take a look at this GitHub issue, which is similar to yours and pointed me to the solution that allowed the pod to start correctly: https://github.com/grafana/grafana-docker/issues/167 Grafana had some major updates starting from version 5.1. Please take a look: Grafana.com: Docs: Migrate to v5.1 or later Please let me know if this helps.
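If you would rather apply that change from a script than edit the manifest by hand, here is a sketch with the official Kubernetes Python client; the deployment name, namespace and the 472 uid/gid come from the manifest above, the rest is assumption:
from kubernetes import client, config

config.load_kube_config()                 # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Strategic-merge patch that only adds the pod-level securityContext.
patch = {
    "spec": {
        "template": {
            "spec": {
                "securityContext": {"runAsUser": 472, "fsGroup": 472}
            }
        }
    }
}

apps.patch_namespaced_deployment(name="grafana", namespace="monitoring", body=patch)
print("patched grafana deployment; pods will be recreated with the new securityContext")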
Grafana
60,727,107
11
I have several metrics with the label "service". I want to get a list of all the "service" levels that begin with "abc" and end with "xyz". These will be the values of a grafana template variable. This is that I have tried: label_values(service) =~ "abc.*xyz" However this produces a error Template variables could not be initialized: parse error at char 13: could not parse remaining input "(service_name) "... Any ideas on how to filter the label values?
This should work (replacing up with the metric you mention): label_values(up{service=~"abc.*xyz"}, service) Or, in case you actually need to look across multiple metrics (assuming that for some reason some metrics have some service label values and other metrics have other values): label_values({__name__=~"metric1|metric2|metric3", service=~"abc.*xyz"}, service)
Grafana
55,958,636
11
I have used a variable in grafana which looks like this: label_values(some_metric, service) If the metric is not emitted by the data source at the current time the variable values are not available for the charts. The variable in my case is the release name and all the charts of grafana are dependent on this variable. After the server I was monitoring crashed, this metric is not emitted. Even if I set a time range to match the time when metric was emitted, it has no impact as the query for the variable is not taking the time range into account. In Prometheus I can see the values for the metric using the query: some_metric[24h] In grafana this is invalid: label_values(some_metric[24h], service) Also as per the documentation its invalid to provide $__range etc for label_values. If I have to use the query_result instead how do I write the above invalid grafana query in correct way so that I get the same result as label_values? Is there any other way to do this? The data source is Prometheus.
I'd suggest query_result(count by (somelabel)(count_over_time(some_metric[$__range]))) and then using a regular expression to extract the label value you want. The fact that I'm using count here isn't too important; what matters is that I'm using an _over_time function and then aggregating.
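Another workaround, if you can run a small script outside Grafana, is to ask the Prometheus HTTP API directly which series existed in a time window and pull the label values out yourself. A sketch, with the metric and label names as placeholders:
import json
import time
import urllib.parse
import urllib.request

PROM = "http://localhost:9090"            # placeholder Prometheus address
end = time.time()
start = end - 24 * 3600                   # look back 24h, like some_metric[24h]

params = urllib.parse.urlencode(
    {"match[]": "some_metric", "start": start, "end": end})
with urllib.request.urlopen(f"{PROM}/api/v1/series?{params}") as resp:
    series = json.loads(resp.read())["data"]

# Collect the distinct values of the "service" label seen in that window.
services = sorted({s["service"] for s in series if "service" in s})
print(services)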
Grafana
52,778,031
11
When i make a table panel and go to "Options" tab, columns parameter set to Auto: Columns and their order are determined by the data query. Is there a doc on how write prometheus queries for grafana tables? My prometheus data is a metric with 2 labels my_col and my_row: my_metric{instance="lh",job="job",my_col="1",my_row="A"} 6 my_metric{instance="lh",job="job",my_col="2",my_row="A"} 8 my_metric{instance="lh",job="job",my_col="1",my_row="B"} 10 my_metric{instance="lh",job="job",my_col="2",my_row="B"} 17 I want to make a table that looks like: | | 1 | 2 | | A | 6 | 8 | | B | 10| 17|
After some experimentation in Grafana 9.1.1, I found a way to construct a table like the one you describe from a Prometheus metric like that. Here are the Grafana transform functions you will need: Labels to fields: this function separates the metric's labels into columns. Set Mode to Columns. Set Labels to only the relevant columns, so in this case my_col and my_row. Set Value field name to my_col. Reduce: this function reduces all values at different times into one row. Set Mode to Reduce fields. Set Calculations to Last* (you may change this part according to your needs). Set Include time to false. Merge: this function merges all datasets into one with corresponding columns. Organize fields: finally, this function helps you reorganize the table into something more presentable. For presenting the data as a bar chart, ensure that the my_row column is the leftmost one.
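If you ever need the same row/column pivot outside Grafana (for example to sanity-check the transforms), it looks roughly like this against the Prometheus HTTP API with pandas; the metric and label names come from the question, the URL is a placeholder:
import json
import urllib.parse
import urllib.request

import pandas as pd

PROM = "http://localhost:9090"     # placeholder Prometheus address
query = urllib.parse.quote('my_metric{instance="lh",job="job"}')
with urllib.request.urlopen(f"{PROM}/api/v1/query?query={query}") as resp:
    result = json.loads(resp.read())["data"]["result"]

# One dict per time series: pull out the two labels and the sample value.
rows = [{"my_row": r["metric"]["my_row"],
         "my_col": r["metric"]["my_col"],
         "value": float(r["value"][1])} for r in result]

# Pivot into the shape from the question: one row per my_row, one column per my_col.
table = pd.DataFrame(rows).pivot(index="my_row", columns="my_col", values="value")
print(table)   # A -> 6, 8 and B -> 10, 17 for the sample data in the question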
Grafana
52,163,689
11
I did a Grafana-docker deployment with persistent storage as said in their GitHub for doing tests for my company. I did exactly as they say (I paste) and it works: # create /var/lib/grafana as persistent volume storage docker run -d -v /var/lib/grafana --name grafana-storage busybox:latest # start grafana docker run \ -d \ -p 3000:3000 \ --name=grafana \ --volumes-from grafana-storage \ grafana/grafana Problem: if I restart the server where it runs, "I" lose all the configurations, I mean, I cannot find how to start it taking the same volume (I'm sure it's there, but I could not find the way to start again the image with them). I also do a docker volume ls and the output is quite difficult to understand I was checking on the documentation and trying commands, but no result, I was looking for the answer, but I could not find exactly how to recover the config in this case. How I can start it recovering the old volume, so, all the configs, dashboards, etc? Also: if possible, could also someone link to me the right guide to read and understand this? In advance, thanks a lot.
I would recommend the following solution: $ docker volume create grafana-storage grafana-storage $ docker volume ls DRIVER VOLUME NAME local grafana-storage This is created in /var/lib/docker/volumes/grafana-storage on UNIX. Then you can start your grafana container and mount the content of /var/lib/grafana (from inside your container) to the grafana-storage which is a named docker volume. Start your container docker run -d -p 3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana When you go to /var/lib/docker/volumes/grafana-storage/_data as root you can see your content. You can reuse this content (delete your grafana container: docker rm -f xxx) and start a new container. Reuse -v grafana-storage:/var/lib/grafana. The --volumes-from is an "old" method to achieve the same in a 'more ugly' way. This command will create an empty volume in /var/lib/docker/volumes: $ docker run -d -v /var/lib/grafana --name grafana-storage busybox:latest Empty storage is here: cd /var/lib/docker/volumes/6178f4831281df02b7cb851cb32d8025c20029f3015e9135468a374d13386c21/_data/ You start your grafana container: docker run -d -p 3000:3000 --name=grafana --volumes-from grafana-storage grafana/grafana The storage of /var/lib/grafana from inside your container will be stored inside /var/lib/docker/volumes/6178f4831281df02b7cb851cb32d8025c20029f3015e9135468a374d13386c21/_data/ which you've created by the busybox container. If you delete your grafana container, the data will remain there. # cd /var/lib/docker/volumes/6178f4831281df02b7cb851cb32d8025c20029f3015e9135468a374d13386c21/_data/ # ls grafana.db plugins
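The same named-volume setup can also be scripted with the Docker SDK for Python, if you prefer that over shell commands (a sketch; the port mapping is just an example):
import docker

client = docker.from_env()

# Create the named volume once; running it again with the same name is harmless.
client.volumes.create(name="grafana-storage")

# Equivalent of: docker run -d -p 3000:3000 --name=grafana \
#   -v grafana-storage:/var/lib/grafana grafana/grafana
client.containers.run(
    "grafana/grafana",
    name="grafana",
    detach=True,
    ports={"3000/tcp": 3000},
    volumes={"grafana-storage": {"bind": "/var/lib/grafana", "mode": "rw"}},
)
print("grafana started with persistent storage in the grafana-storage volume")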
Grafana
45,730,213
11
I want to label series by hostname + metric name. I know I can use aliasByNode(1) to do first part and aliasByMetric() to do the second. Any ideas how can I merge those two functions in a single metric?
aliasByNode can take multiple arguments. aliasByNode(apps.fakesite.web_server_01.counters.requests.count, 2,5) returns web_server_01.count. The Grafana query editor for Graphite does not support this but if you toggle edit mode then you can edit the raw query. After editing it, you can toggle back.
Grafana
38,281,290
11
I'm trying to set up graphite to work with grafana in docker based on this project : https://github.com/kamon-io/docker-grafana-graphite and when I run my dockerfile I get 403 Forbidden error for nginx. my configurations for nginx are almost the same as the project's configurations. I run my dockerfiles on a server and test them on my windows machine. So the configurations are not exactly the same ... for example I have : server { listen 80 default_server; server_name _; location / { root /src/grafana/dist; index index.html; } location /graphite/ { proxy_pass http:/myserver:8000/; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-Host $host; proxy_set_header Host $host; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; add_header Access-Control-Allow-Origin "*"; add_header Access-Control-Allow-Methods "GET, OPTIONS"; add_header Access-Control-Allow-Headers "origin, authorization, accept"; } But I still keep getting 403 forbidden. Checking the error log for nginx says : directory index of "/src/grafana/dist/" is forbidden Stopping and running it again it says : directory index of "/src/grafana/dist/" is forbidden I'm very new to nginx ... was wondering if there's something in the configurations that I'm misunderstanding. Thanks in advance.
That's because you are hitting the first location block and the index file is not found.
Grafana
27,303,967
11
Let's take this processor as an example: a CPU with 2 cores and 4 threads (2 threads per core). From what I've read, such a CPU has 2 physical cores but can process 4 threads simultaneously through hyper threading. But, in reality, one physical core can only truly run one thread at a time, but using hyper threading, the CPU exploits the idle stages in the pipeline to process another thread. Now, here is Kubernetes with Prometheus and Grafana and their CPU resource units measurement - millicore/millicpu. So, they virtually slice a core to 1000 millicores. Taking into account the hyper threading, I can't understand how they calculate those millicores under the hood. How can a process, for example, use 100millicore (10th part of the core)? How is this technically possible? PS: accidentally, found a really descriptive explanation here: Multi threading with Millicores in Kubernetes
This gets very complicated. k8s doesn't actually manage this itself; it just provides a layer on top of the underlying container runtime (docker, containerd, etc.). When you configure a container to use 100 millicores, k8s hands that down to the underlying container runtime and the runtime deals with it. Once you go down to this level you have to start looking at the Linux kernel and how it does CPU scheduling and rate limiting with cgroups, which becomes incredibly interesting and complicated. In a nutshell though: the Linux CFS Bandwidth Control is the thing that manages how much CPU a process (container) can use. By setting the quota and period parameters for the scheduler you control how much CPU is used, by controlling how long a process can run before being paused and how often it runs. As you correctly identify, you can't use only a 10th of a core. But you can use a 10th of the time, and by doing that you only use a 10th of the core over time. For example: if I set quota to 250ms and period to 250ms, that tells the kernel that this cgroup can use 250ms of CPU cycle time every 250ms, which means it can use 100% of the CPU. If I set quota to 500ms and keep the period at 250ms, that tells the kernel that this cgroup can use 500ms of CPU cycle time every 250ms, which means it can use 200% of the CPU (2 cores). If I set quota to 125ms and keep the period at 250ms, that tells the kernel that this cgroup can use 125ms of CPU cycle time every 250ms, which means it can use 50% of the CPU. This is a very brief explanation. Here is some further reading: https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7 https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html
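You can inspect those two knobs directly on a Linux host or inside a container. A small sketch assuming cgroup v1 paths (on cgroup v2 the values live in a single cpu.max file instead):
def cpu_limit_millicores(cgroup_cpu_dir="/sys/fs/cgroup/cpu"):
    # cgroup v1: quota is how many microseconds of CPU time the group may use
    # per period; -1 means "no limit".
    with open(f"{cgroup_cpu_dir}/cpu.cfs_quota_us") as fh:
        quota_us = int(fh.read())
    with open(f"{cgroup_cpu_dir}/cpu.cfs_period_us") as fh:
        period_us = int(fh.read())

    if quota_us < 0:
        return None                       # unlimited
    # quota/period is the fraction of one core; multiply by 1000 for millicores.
    return quota_us / period_us * 1000

print(cpu_limit_millicores())             # e.g. 100.0 for a 100m limit (quota 10000us, period 100000us)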
Grafana
71,944,390
10
I need a way to add a dropdown menu (or similar solution) to show the metrics for only one CPU (cpu 1 or cpu 0). Is this possible? The metric is node_cpu_seconds_total
Create a custom variable named "cpu" with values 0 and 1 in Dashboard settings > Variables > New. Create the graph panel using the cpu variable (use the ":pipe" suffix to enable the use of the 0 and 1 options at the same time). The CPU can then be selected in the dropdown menu.
Grafana
67,263,623
10
I would like a Grafana variable that contains all the Prometheus metric names with a given prefix. I would like to do this so I can control what graphs are displayed with a drop down menu. I'd like to be able to display all the metrics matching the prefix without having to create a query for each one. In the Grafana documentation under the Prometheus data source I see: metrics(metric) Returns a list of metrics matching the specified metric regex. -- Using Prometheus in Grafana I tried creating a variable in Grafana using this metrics function but it didn't work. See the screenshot for the variable settings I have: settings As you can see the "Preview of values" only shows "None"
In PromQL, you can select metrics by name using the internal __name__ label: {__name__=~"mysql_.*"} You can then reuse it to extract the metric names using the label_values() query: label_values({__name__=~"mysql_.*"},__name__) This will populate your variable with the metric names starting with mysql_. You can get the same result using metrics(); I don't know why it doesn't work for you (it should also work with a prefix): metrics(mysql_)
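If the built-in metrics()/label_values() helpers misbehave, you can also check what Prometheus itself returns for metric names and filter by prefix yourself. A quick sketch against the HTTP API, with the URL and prefix as placeholders:
import json
import urllib.request

PROM = "http://localhost:9090"            # placeholder Prometheus address
prefix = "mysql_"

# /api/v1/label/__name__/values lists every metric name Prometheus knows about.
with urllib.request.urlopen(f"{PROM}/api/v1/label/__name__/values") as resp:
    names = json.loads(resp.read())["data"]

print([n for n in names if n.startswith(prefix)])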
Grafana
60,874,653
10
I am trying to create a table/chart in Grafana showing the total number of unique users who have logged in to a given application over a given time range (e.g. last 24 hours). I have a metric, app_request_path which records the number of requests hitting a specific path per minute: app_request_count{app="my-app", path="/login"} This gives me the following: app_request_count{app="my-app",path="/login",status="200",username="username1"} app_request_count{app="my-app",path="/login",status="200",username="username2"} Now I want to count the number of unique usernames, so I run: count_values("username", app_request_count{app="my_app", path="/login"}) and I get: {username="0"} {username="1"} {username="2"} {username="3"} {username="4"} {username="5"} What am I missing / what am I doing wrong? Ideally I'd like to get a single scalar value that display the total number of unique usernames who have logged in in the past 24 hours. Many thanks.
count without (username)(app_request_count) count_values is for metric values, count is for time series. It's also not advised to have something like usernames as label values as they tend to be high cardinality. They may also be PII, which could have legal implications.
Grafana
59,935,902
10
In Grafana I have a drop down for variable $topic with values "topic_A" "topic_B" "topic_A" is selected so $topic = "topic_A" I want to query prometheus using function{topic=$topic} and that works fine. How would I implement function{topic="$topic" + "_ERROR"} (this fails) where what I want to query would be "topic_A_ERROR" if "topic_A" is selected. How do I combine variable $topic and string "_ERROR" in the query?
UPDATE 2020-08-17: There is a new syntax for Grafana variables; the new format is to use curly braces after the dollar sign: function{topic=~"${topic}_ERROR"} The double-brackets syntax is deprecated and will be removed soon. You can now also define the format of the variable, which may help solve some special-character issues. Example: ${topic:raw} Docs: https://grafana.com/docs/grafana/latest/variables/syntax/ If you want to include text in the middle you need to use a different syntax: function{topic=~"[[topic]]_ERROR"} Note not only the double brackets but also the change from = to =~. It is documented at the link at the end of my comment; basically it says: When the Multi-value or Include all value options are enabled, Grafana converts the labels from plain text to a regex-compatible string. Which means you have to use =~ instead of =. You can check the official explanation here: https://grafana.com/docs/grafana/latest/features/datasources/prometheus/#using-variables-in-queries
Grafana
59,792,809
10
I'm trying to use a customisation file (custom.ini) for my Grafana installation. Unfortunately this isn't working. What I have done: Installed a VM with CentOS 7 Added the Grafana Yum Repo as described in the official documentation Installed Grafana with yum install grafana Then I created a simple customisation file vi /etc/grafana/custom.ini With this content default_theme = light Restarted Grafana systemctl restart grafana-server Unfortunately the theme has not changed from dark to light. If I uncomment the same line in the /etc/grafana/grafana.ini then it is working correctly. Any suggestions? Many Thanks
...the parameter is /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini, also the custom.ini is not called at all. ... You were correct at this point in your comments: a custom.ini file is simply not used. The /etc/grafana/grafana.ini file is the custom config file for your platform (Yum repo = rpm package). See this comment by Grafana co-founder and the note in this docs section.
Grafana
50,925,016
10
Is there a way to round a decimal value in Grafana? The round() and ceil() functions take an "instant-vector", not a numeric value, and, for example, adding a query like ceil(1/15) will return 0.
It depends on what you're using to display the data: for a single stat or gauge you'll find the 'Decimals' option in Grafana; for graphs it's in the 'Axes' options. You don't need to do this in the query for the metric.
Grafana
50,634,445
10
I have install the Grafan in my Kubenernetes 1.9 cluster. When I access with my ingress URL (http://sample.com/grafana/ ) getting the first page. After that javascript, css download not adding /grafana to the URL. here is my ingress rule: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: grafana-ingress-v1 namespace: monitoring annotations: ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: nginx spec: tls: - hosts: - sample.com secretName: ngerss-tls rules: - host: sample.com http: paths: - path: /grafana/ backend: serviceName: grafana-grafana servicePort: 80 Here I see the discussion about the same topic. but its not helping my issue. https://github.com/kubernetes/contrib/issues/860 Below images shows first request goes to /grafana/ but second request didn't get added /grafana/ in the url.
Your ingress rule is correct and nginx creates correct virtual host to forward traffic to grafana's service (I left only needed strings to show): server { server_name sample.com; listen 80; listen [::]:80; set $proxy_upstream_name "-"; location ~* ^/grafana/(?<baseuri>.*) { set $proxy_upstream_name "default-grafana-grafana-80"; set $namespace "default"; set $ingress_name "grafana-ingress-v1"; rewrite /grafana/(.*) /$1 break; rewrite /grafana/ / break; proxy_pass http://default-grafana-grafana-80; } And yes, when you go to sample.com/grafana/ you get the response from grafana pod, but it redirects to sample.com/login page (you see this from screenshot you provided): $ curl -v -L http://sample.com/grafana/ * Trying 192.168.99.100... * Connected to sample.com (192.168.99.100) port 80 (#0) > GET /grafana/ HTTP/1.1 > Host: sample.com > User-Agent: curl/7.47.0 > Accept: */* > < HTTP/1.1 302 Found < Server: nginx/1.13.5 < Date: Tue, 30 Jan 2018 21:55:21 GMT < Content-Type: text/html; charset=utf-8 < Content-Length: 29 < Connection: keep-alive < Location: /login < Set-Cookie: grafana_sess=c07ab2399d82fef4; Path=/; HttpOnly < Set-Cookie: redirect_to=%252F; Path=/ < * Ignoring the response-body * Connection #0 to host sample.com left intact * Issue another request to this URL: 'http://sample.com/login' * Found bundle for host sample.com: 0x563ff9bf7f20 [can pipeline] * Re-using existing connection! (#0) with host sample.com * Connected to sample.com (192.168.99.100) port 80 (#0) > GET /login HTTP/1.1 > Host: sample.com > User-Agent: curl/7.47.0 > Accept: */* > < HTTP/1.1 404 Not Found < Server: nginx/1.13.5 < Date: Tue, 30 Jan 2018 21:55:21 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 21 < Connection: keep-alive < * Connection #0 to host sample.com left intact default backend 404 because by default grafana's root_url is just /: root_url = %(protocol)s://%(domain)s:%(http_port)s/ and when request redirects to just sample.com nginx forwards it to default backend 404. Solution: You need to change root_url grafana's server setting to /grafana/: root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/ You can do this changing this setting in grafana's configmap object.
Grafana
48,410,293
10
Use Helm installed Prometheus and Grafana in a kubernetes cluster: helm install stable/prometheus helm install stable/grafana It has an alertmanage service. But I saw a blog introduced how to setup alertmanager config with yaml files: http://blog.wercker.com/how-to-setup-alerts-on-prometheus Is it possible to use the current way(installed by helm) to set some alert rules and config for CPU, memory and send Email without create other yaml files? I saw a introduction for k8s configmap to alertmanager: https://github.com/kubernetes/charts/tree/master/stable/prometheus#configmap-files But not clear how to use and how to do. Edit I downloaded source code of stable/prometheus to see what it do. From the values.yaml file I found: serverFiles: alerts: "" rules: "" prometheus.yml: |- rule_files: - /etc/config/rules - /etc/config/alerts scrape_configs: - job_name: prometheus static_configs: - targets: - localhost:9090 https://github.com/kubernetes/charts/blob/master/stable/prometheus/values.yaml#L600 So I think should write to this config file by myself to define alert rules and alertmanager here. But don't clear about this block: rule_files: - /etc/config/rules - /etc/config/alerts Maybe it's meaning the path in the container. But there isn't any file now. Should add here: serverFiles: alert: "" rules: "" Edit 2 After set alert rules and alertmanager configuration in values.yaml: ## Prometheus server ConfigMap entries ## serverFiles: alerts: "" rules: |- # # CPU Alerts # ALERT HighCPU IF ((sum(node_cpu{mode=~"user|nice|system|irq|softirq|steal|idle|iowait"}) by (instance, job)) - ( sum(node_cpu{mode=~"idle|iowait"}) by (instance,job) ) ) / (sum(node_cpu{mode=~"user|nice|system|irq|softirq|steal|idle|iowait"}) by (instance, job)) * 100 > 95 FOR 10m LABELS { service = "backend" } ANNOTATIONS { summary = "High CPU Usage", description = "This machine has really high CPU usage for over 10m", } # TEST ALERT APIHighRequestLatency IF api_http_request_latencies_second{quantile="0.5"} >1 FOR 1m ANNOTATIONS { summary = "High request latency on {{$labels.instance }}", description = "{{ $labels.instance }} has amedian request latency above 1s (current value: {{ $value }}s)", } Ran helm install prometheus/ to install it. 
Start port-forward for alertmanager component: export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 9093 Then access browser to http://127.0.0.1:9003, got these messages: Forwarding from 127.0.0.1:9093 -> 9093 Handling connection for 9093 Handling connection for 9093 E0122 17:41:53.229084 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31237.140275133073152] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Handling connection for 9093 E0122 17:41:53.243511 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31238.140565602109184] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused E0122 17:41:53.246011 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31239.140184300869376] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Handling connection for 9093 Handling connection for 9093 E0122 17:41:53.846399 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31250.140004515874560] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused E0122 17:41:53.847821 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31251.140355466835712] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Handling connection for 9093 E0122 17:41:53.858521 7159 portforward.go:331] an error occurred forwarding 9093 -> 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31252.140268300003072] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Why? When I check kubectl describe po illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6, got: Name: illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6 Namespace: default Node: minikube/192.168.99.100 Start Time: Mon, 22 Jan 2018 17:33:54 +0900 Labels: app=prometheus component=alertmanager pod-template-hash=1438303657 release=illocutionary-heron Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"illocutionary-heron-prometheus-alertmanager-587d747b9c","uid":"f... 
Status: Running IP: 172.17.0.10 Created By: ReplicaSet/illocutionary-heron-prometheus-alertmanager-587d747b9c Controlled By: ReplicaSet/illocutionary-heron-prometheus-alertmanager-587d747b9c Containers: prometheus-alertmanager: Container ID: docker://0808a3ecdf1fa94b36a1bf4b8f0d9d2933bc38afa8b25e09d0d86f036ac3165b Image: prom/alertmanager:v0.9.1 Image ID: docker-pullable://prom/alertmanager@sha256:ed926b227327eecfa61a9703702c9b16fc7fe95b69e22baa656d93cfbe098320 Port: 9093/TCP Args: --config.file=/etc/config/alertmanager.yml --storage.path=/data State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Mon, 22 Jan 2018 17:55:24 +0900 Finished: Mon, 22 Jan 2018 17:55:24 +0900 Ready: False Restart Count: 9 Readiness: http-get http://:9093/%23/status delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /data from storage-volume (rw) /etc/config from config-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5b8l (ro) prometheus-alertmanager-configmap-reload: Container ID: docker://b4a349bf7be4ea78abe6899ad0173147f0d3f6ff1005bc513b2c0ac726385f0b Image: jimmidyson/configmap-reload:v0.1 Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:2d40c2eaa6f435b2511d0cfc5f6c0a681eeb2eaa455a5d5ac25f88ce5139986e Port: <none> Args: --volume-dir=/etc/config --webhook-url=http://localhost:9093/-/reload State: Running Started: Mon, 22 Jan 2018 17:33:56 +0900 Ready: True Restart Count: 0 Environment: <none> Mounts: /etc/config from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5b8l (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: illocutionary-heron-prometheus-alertmanager Optional: false storage-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: illocutionary-heron-prometheus-alertmanager ReadOnly: false default-token-h5b8l: Type: Secret (a volume populated by a Secret) SecretName: default-token-h5b8l Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 29m (x2 over 29m) default-scheduler PersistentVolumeClaim is not bound: "illocutionary-heron-prometheus-alertmanager" Normal Scheduled 29m default-scheduler Successfully assigned illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6 to minikube Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "config-volume" Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "pvc-fa84b197-ff4e-11e7-a584-0800270fb7fc" Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-h5b8l" Normal Started 29m kubelet, minikube Started container Normal Created 29m kubelet, minikube Created container Normal Pulled 29m kubelet, minikube Container image "jimmidyson/configmap-reload:v0.1" already present on machine Normal Started 29m (x3 over 29m) kubelet, minikube Started container Normal Created 29m (x4 over 29m) kubelet, minikube Created container Normal Pulled 29m (x4 over 29m) kubelet, minikube Container image "prom/alertmanager:v0.9.1" already present on machine Warning BackOff 9m (x91 over 29m) kubelet, minikube Back-off restarting failed container Warning FailedSync 4m (x113 over 29m) kubelet, minikube Error syncing pod Edit 3 
alertmanager config in values.yaml file: ## alertmanager ConfigMap entries ## alertmanagerFiles: alertmanager.yml: |- global: resolve_timeout: 5m smtp_smarthost: smtp.gmail.com:587 smtp_from: [email protected] smtp_auth_username: [email protected] smtp_auth_password: sender_password receivers: - name: default-receiver email_configs: - to: [email protected] route: group_wait: 10s group_interval: 5m receiver: default-receiver repeat_interval: 3h Not work. Got errors above. alertmanagerFiles: alertmanager.yml: |- global: # slack_api_url: '' receivers: - name: default-receiver # slack_configs: # - channel: '@you' # send_resolved: true route: group_wait: 10s group_interval: 5m receiver: default-receiver repeat_interval Works without any error. So, the problem was the email_configs config method.
The alerts and rules keys in the serverFiles group of the values.yaml file are mounted in the Prometheus container under the /etc/config folder. You can put the configuration you want in there (for example, take inspiration from the blog post you linked) and it will be used by Prometheus to handle the alerts. For example, a simple rule could be set like this: serverFiles: alerts: | ALERT cpu_threshold_exceeded IF (100 * (1 - avg by(job)(irate(node_cpu{mode='idle'}[5m])))) > 80 FOR 300s LABELS { severity = "warning", } ANNOTATIONS { summary = "CPU usage > 80% for {{ $labels.job }}", description = "CPU usage avg for last 5m: {{ $value }}", }
Grafana
48,374,858
10
I have different metrics in Prometheus, counter_metrics_a and counter_metrics_b, and I want a singlestat for the count of all the different request metrics. How am I able to fetch this? (sum(counter_metrics{instance="a",job="b"}))+
For the singlestat panel, you can just sum each metric and then add the results together. Here is an example with two different metrics: sum(prometheus_local_storage_memory_series) + sum(counters_logins) Recommended reading, just in case you are doing anything with rates as well: https://www.robustperception.io/rate-then-sum-never-sum-then-rate/
Grafana
45,343,371
10
I have experienced using Kibana before. However this time, I'd like to try using Grafana. Does my experience guarantee that I can learn Grafana easily? Or is it a whole lot different from Kibana? Please correct me if I'm wrong but so far, according to my research, both are for logs. Grafana is more of visualization only, while Kibana is for searching the logs; is this right?
Grafana is a fork of Kibana but they have developed in totally different directions since 2013. 1. Logs vs Metrics Kibana focuses more on logs and adhoc search while Grafana focuses more on creating dashboards for visualizing time series data. This means Grafana is usually used together with Time Series databases like Graphite, InfluxDB or Elasticsearch with aggregations. Kibana is usually used for searching logs. Metric queries tend to be really fast while searching logs is slower so they are usually used for different purposes. For example, I could look at a metric for memory usage on a server over the last 3 months and get an answer nearly instantly. Brian Brazil (Prometheus) has written about logs vs metrics. 2. Data Sources Kibana is for ElasticSearch and the ELK stack. Grafana supports lots of data sources. Even if you are using Grafana with ElasticSearch, you would not usually look at the same data (logs) as in Kibana. It would be data from Logstash or MetricBeat that can be aggregated (grouped by) rather than raw logs. With Grafana you can mix and match data sources in the same dashboard e.g. ElasticSearch and Splunk. Conclusion Kibana and Grafana have some overlap. Kibana has TimeLion for metrics and Grafana has the Table Panel for logs. Lots of companies use both - Kibana for logs and Grafana for visualizing metrics. They are different from each other so there will be a learning curve.
Grafana
42,781,010
10
I want to send log events to Loggly as JSON objects with parameterized string messages. Our project currently has a lot of code that looks like this: String someParameter = "1234"; logger.log("This is a log message with a parameter {}", someParameter); We're currently using Logback as our SLF4J backend, and Logback's JsonLayout to serialize our ILogEvent objects into JSON. Consequentially, by they time our log events are shipped to Loggly, they look like this: { "message": "This is a log message with a parameter 1234", "level": INFO, .... } While this does work, it sends a different message string for every value of someParameter, which renders Loggly's automatic filters next to useless. Instead, I'd like to have a Layout that creates JSON that looks like this: { "message": "This is a log message with a parameter {}", "level": INFO, "parameters": [ "1234" ] } This format would allow Loggly to group all log events with the message This is a log message with a parameter together, regardless of the value of someParameter. It looks like Logstash's KV filter does something like this - is there any way to accomplish this task with Logback, short of writing my own layout that performs custom serialization of the ILogEvent object?
There is a JSON logstash encoder for Logback, logstash-logback-encoder
Loggly
22,615,311
29
I'd like to set up Loggly to run on AWS Elastic Beanstalk, but can't find any information on how to do this. Is there any guide anywhere, or some general guidance on how to start?
This is how I do it, for papertrailapp.com (which I prefer instead of loggly). In your /ebextensions folder (see more info) you create logs.config, where specify: container_commands: 01-set-correct-hostname: command: hostname www.example.com 02-forward-rsyslog-to-papertrail: # https://papertrailapp.com/systems/setup command: echo "*.* @logs.papertrailapp.com:55555" >> /etc/rsyslog.conf 03-enable-remote-logging: command: echo -e "\$ModLoad imudp\n\$UDPServerRun 514\n\$ModLoad imtcp\n\$InputTCPServerRun 514\n\$EscapeControlCharactersOnReceive off" >> /etc/rsyslog.conf 04-restart-syslog: command: service rsyslog restart 55555 should be replaced with the UDP port number provided by papertrailapp.com. Every time after new instance bootstrap this config will be applied. Then, in your log4j.properties: log4j.rootLogger=WARN, SYSLOG log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender log4j.appender.SYSLOG.facility=local1 log4j.appender.SYSLOG.header=true log4j.appender.SYSLOG.syslogHost=localhost log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout log4j.appender.SYSLOG.layout.ConversionPattern=[%p] %t %c: %m%n I'm not sure whether it's an optimal solution. Read more about this mechanism in jcabi-beanstalk-maven-plugin
Loggly
11,254,908
16
I would like find free alternative for loggly for write my logs with application. Maybe there are open source alternative or free but he need deployment to my VM (Azure).
If you are ready to set up your own server, you can use Splunk, logstash, or graylog. To me, Splunk seems the easiest to set up. If you don't want to set up your own server, you need to use the free plans of loggly-like services. That usually means a maximum monthly volume you can send and a short retention period (how long your data will be stored): Loggly (max 200MB/day, 7 days retention), LogEntries (5GB/month, 7 days retention), PaperTrail (100MB/month, 2 days retention). Our team has chosen LogEntries.
Loggly
28,809,095
16
I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step by step process in English. I understand it takes comparisons by times occurred as a probability, but I have no idea how the training data is related to the actual dataset. Please give me an explanation of what role the training set plays. I am giving a very simple example for fruits here, like banana for example training set--- round-red round-orange oblong-yellow round-red dataset---- round-red round-orange round-red round-orange oblong-yellow round-red round-orange oblong-yellow oblong-yellow round-red
The accepted answer has many elements of k-NN (k-nearest neighbors), a different algorithm. Both k-NN and NaiveBayes are classification algorithms. Conceptually, k-NN uses the idea of "nearness" to classify new entities. In k-NN 'nearness' is modeled with ideas such as Euclidean Distance or Cosine Distance. By contrast, in NaiveBayes, the concept of 'probability' is used to classify new entities. Since the question is about Naive Bayes, here's how I'd describe the ideas and steps to someone. I'll try to do it with as few equations and in plain English as much as possible. First, Conditional Probability & Bayes' Rule Before someone can understand and appreciate the nuances of Naive Bayes', they need to know a couple of related concepts first, namely, the idea of Conditional Probability, and Bayes' Rule. (If you are familiar with these concepts, skip to the section titled Getting to Naive Bayes') Conditional Probability in plain English: What is the probability that something will happen, given that something else has already happened. Let's say that there is some Outcome O. And some Evidence E. From the way these probabilities are defined: The Probability of having both the Outcome O and Evidence E is: (Probability of O occurring) multiplied by the (Prob of E given that O happened) One Example to understand Conditional Probability: Let say we have a collection of US Senators. Senators could be Democrats or Republicans. They are also either male or female. If we select one senator completely randomly, what is the probability that this person is a female Democrat? Conditional Probability can help us answer that. Probability of (Democrat and Female Senator)= Prob(Senator is Democrat) multiplied by Conditional Probability of Being Female given that they are a Democrat. P(Democrat & Female) = P(Democrat) * P(Female | Democrat) We could compute the exact same thing, the reverse way: P(Democrat & Female) = P(Female) * P(Democrat | Female) Understanding Bayes Rule Conceptually, this is a way to go from P(Evidence| Known Outcome) to P(Outcome|Known Evidence). Often, we know how frequently some particular evidence is observed, given a known outcome. We have to use this known fact to compute the reverse, to compute the chance of that outcome happening, given the evidence. P(Outcome given that we know some Evidence) = P(Evidence given that we know the Outcome) times Prob(Outcome), scaled by the P(Evidence) The classic example to understand Bayes' Rule: Probability of Disease D given Test-positive = P(Test is positive|Disease) * P(Disease) _______________________________________________________________ (scaled by) P(Testing Positive, with or without the disease) Now, all this was just preamble, to get to Naive Bayes. Getting to Naive Bayes' So far, we have talked only about one piece of evidence. In reality, we have to predict an outcome given multiple evidence. In that case, the math gets very complicated. To get around that complication, one approach is to 'uncouple' multiple pieces of evidence, and to treat each of piece of evidence as independent. This approach is why this is called naive Bayes. P(Outcome|Multiple Evidence) = P(Evidence1|Outcome) * P(Evidence2|outcome) * ... 
* P(EvidenceN|outcome) * P(Outcome) scaled by P(Multiple Evidence) Many people choose to remember this as: P(Likelihood of Evidence) * Prior prob of outcome P(outcome|evidence) = _________________________________________________ P(Evidence) Notice a few things about this equation: If the Prob(evidence|outcome) is 1, then we are just multiplying by 1. If the Prob(some particular evidence|outcome) is 0, then the whole prob. becomes 0. If you see contradicting evidence, we can rule out that outcome. Since we divide everything by P(Evidence), we can even get away without calculating it. The intuition behind multiplying by the prior is so that we give high probability to more common outcomes, and low probabilities to unlikely outcomes. These are also called base rates and they are a way to scale our predicted probabilities. How to Apply NaiveBayes to Predict an Outcome? Just run the formula above for each possible outcome. Since we are trying to classify, each outcome is called a class and it has a class label. Our job is to look at the evidence, to consider how likely it is to be this class or that class, and assign a label to each entity. Again, we take a very simple approach: The class that has the highest probability is declared the "winner" and that class label gets assigned to that combination of evidences. Fruit Example Let's try it out on an example to increase our understanding: The OP asked for a 'fruit' identification example. Let's say that we have data on 1000 pieces of fruit. They happen to be Banana, Orange or some Other Fruit. We know 3 characteristics about each fruit: Whether it is Long Whether it is Sweet and If its color is Yellow. This is our 'training set.' We will use this to predict the type of any new fruit we encounter. Type Long | Not Long || Sweet | Not Sweet || Yellow |Not Yellow|Total ___________________________________________________________________ Banana | 400 | 100 || 350 | 150 || 450 | 50 | 500 Orange | 0 | 300 || 150 | 150 || 300 | 0 | 300 Other Fruit | 100 | 100 || 150 | 50 || 50 | 150 | 200 ____________________________________________________________________ Total | 500 | 500 || 650 | 350 || 800 | 200 | 1000 ___________________________________________________________________ We can pre-compute a lot of things about our fruit collection. The so-called "Prior" probabilities. (If we didn't know any of the fruit attributes, this would be our guess.) These are our base rates. P(Banana) = 0.5 (500/1000) P(Orange) = 0.3 P(Other Fruit) = 0.2 Probability of "Evidence" p(Long) = 0.5 P(Sweet) = 0.65 P(Yellow) = 0.8 Probability of "Likelihood" P(Long|Banana) = 0.8 P(Long|Orange) = 0 [Oranges are never long in all the fruit we have seen.] .... P(Yellow|Other Fruit) = 50/200 = 0.25 P(Not Yellow|Other Fruit) = 0.75 Given a Fruit, how to classify it? Let's say that we are given the properties of an unknown fruit, and asked to classify it. We are told that the fruit is Long, Sweet and Yellow. Is it a Banana? Is it an Orange? Or Is it some Other Fruit? We can simply run the numbers for each of the 3 outcomes, one by one. 
Then we choose the highest probability and 'classify' our unknown fruit as belonging to the class that had the highest probability based on our prior evidence (our 1000 fruit training set): P(Banana|Long, Sweet and Yellow) P(Long|Banana) * P(Sweet|Banana) * P(Yellow|Banana) * P(banana) = _______________________________________________________________ P(Long) * P(Sweet) * P(Yellow) = 0.8 * 0.7 * 0.9 * 0.5 / P(evidence) = 0.252 / P(evidence) P(Orange|Long, Sweet and Yellow) = 0 P(Other Fruit|Long, Sweet and Yellow) P(Long|Other fruit) * P(Sweet|Other fruit) * P(Yellow|Other fruit) * P(Other Fruit) = ____________________________________________________________________________________ P(evidence) = (100/200 * 150/200 * 50/200 * 200/1000) / P(evidence) = 0.01875 / P(evidence) By an overwhelming margin (0.252 >> 0.01875), we classify this Sweet/Long/Yellow fruit as likely to be a Banana. Why is Bayes Classifier so popular? Look at what it eventually comes down to. Just some counting and multiplication. We can pre-compute all these terms, and so classifying becomes easy, quick and efficient. Let z = 1 / P(evidence). Now we quickly compute the following three quantities. P(Banana|evidence) = z * Prob(Banana) * Prob(Evidence1|Banana) * Prob(Evidence2|Banana) ... P(Orange|Evidence) = z * Prob(Orange) * Prob(Evidence1|Orange) * Prob(Evidence2|Orange) ... P(Other|Evidence) = z * Prob(Other) * Prob(Evidence1|Other) * Prob(Evidence2|Other) ... Assign the class label of whichever is the highest number, and you are done. Despite the name, Naive Bayes turns out to be excellent in certain applications. Text classification is one area where it really shines.
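To make the "just some counting and multiplication" point concrete, here is a minimal Python sketch (an addition of mine, not part of the original walkthrough; the dictionary layout and function name are just illustrative choices). It plugs in the counts from the 1000-fruit table above and reproduces the hand computation:

# Counts taken from the 1000-fruit training table above.
counts = {
    "Banana":      {"total": 500, "long": 400, "sweet": 350, "yellow": 450},
    "Orange":      {"total": 300, "long": 0,   "sweet": 150, "yellow": 300},
    "Other Fruit": {"total": 200, "long": 100, "sweet": 150, "yellow": 50},
}
TOTAL = 1000  # size of the training set

def naive_bayes_scores(features):
    # Unnormalized score P(class) * product of P(feature|class) for each class.
    # Dividing by P(evidence) is skipped: it is the same for every class,
    # so it cannot change which class wins.
    scores = {}
    for cls, c in counts.items():
        prior = c["total"] / TOTAL
        likelihood = 1.0
        for f in features:
            likelihood *= c[f] / c["total"]
        scores[cls] = prior * likelihood
    return scores

scores = naive_bayes_scores(["long", "sweet", "yellow"])
print({cls: round(s, 5) for cls, s in scores.items()})
# {'Banana': 0.252, 'Orange': 0.0, 'Other Fruit': 0.01875}
print(max(scores, key=scores.get))
# Banana

These are the same 0.252 and 0.01875 values computed by hand above, just without the common P(evidence) denominator.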
DataSet
10,059,594
576
DataSet and DataTable both implement IDisposable, so, by conventional best practices, I should call their Dispose() methods.

However, from what I've read so far, DataSet and DataTable don't actually have any unmanaged resources, so Dispose() doesn't actually do much.

Plus, I can't just use using(DataSet myDataSet...) because DataSet has a collection of DataTables.

So, to be safe, I'd need to iterate through myDataSet.Tables, dispose of each of the DataTables, then dispose of the DataSet.

So, is it worth the hassle to call Dispose() on all of my DataSets and DataTables?

Addendum:

For those of you who think that DataSet should be disposed:

In general, the pattern for disposing is to use using or try..finally, because you want to guarantee that Dispose() will be called.

However, this gets ugly real fast for a collection. For example, what do you do if one of the calls to Dispose() throws an exception? Do you swallow it (which is "bad") so that you can continue on to dispose the next element?

Or, do you suggest that I just call myDataSet.Dispose(), and forget about disposing the DataTables in myDataSet.Tables?
Here are a couple of discussions explaining why Dispose is not necessary for a DataSet.

To Dispose or Not to Dispose ?:

The Dispose method in DataSet exists ONLY because of a side effect of inheritance -- in other words, it doesn't actually do anything useful in the finalization.

Should Dispose be called on DataTable and DataSet objects? includes some explanation from an MVP:

The system.data namespace (ADONET) does not contain unmanaged resources. Therefore there is no need to dispose any of those as long as you have not added yourself something special to it.

Understanding the Dispose method and datasets? has a comment from authority Scott Allen:

"In practice we rarely Dispose a DataSet because it offers little benefit"

So, the consensus there is that there is currently no good reason to call Dispose on a DataSet.
DataSet
913,228
218
How do I convert data from a Scikit-learn Bunch object to a Pandas DataFrame? from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this?
Manually, you can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the []): import numpy as np import pandas as pd from sklearn.datasets import load_iris # save load_iris() sklearn dataset to iris # if you'd like to check dataset type use: type(load_iris()) # if you'd like to view list of attributes use: dir(load_iris()) iris = load_iris() # np.c_ is the numpy concatenate function # which is used to concat iris['data'] and iris['target'] arrays # for pandas column argument: concat iris['feature_names'] list # and string list (in this case one string); you can make this anything you'd like.. # the original dataset would probably call this ['Species'] data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target'])
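One addition that is not part of the answer above: if you are on a reasonably recent scikit-learn (the as_frame flag was added around version 0.23), load_iris can hand you a DataFrame directly, so the manual np.c_ step is not needed:

from sklearn.datasets import load_iris

# as_frame=True requires a newer scikit-learn; older versions raise a TypeError here
data = load_iris(as_frame=True)
df = data.frame      # features and target together in one DataFrame
X = data.data        # features only, as a DataFrame
y = data.target      # target only, as a Series
print(df.head())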
DataSet
38,105,539
172
When using R it's handy to load "practice" datasets using data(iris) or data(mtcars) Is there something similar for Pandas? I know I can load using any other method, just curious if there's anything builtin.
Since I originally wrote this answer, I have updated it with the many ways that are now available for accessing sample data sets in Python. Personally, I tend to stick with whatever package I am already using (usually seaborn or pandas). If you need offline access, installing the data set with Quilt seems to be the only option.

Seaborn

The brilliant plotting package seaborn has several built-in sample data sets.

import seaborn as sns
iris = sns.load_dataset('iris')
iris.head()

   sepal_length  sepal_width  petal_length  petal_width species
0           5.1          3.5           1.4          0.2  setosa
1           4.9          3.0           1.4          0.2  setosa
2           4.7          3.2           1.3          0.2  setosa
3           4.6          3.1           1.5          0.2  setosa
4           5.0          3.6           1.4          0.2  setosa

Pandas

If you do not want to import seaborn, but still want to access its sample data sets, you can use @andrewwowens's approach for the seaborn sample data:

iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')

Note that the sample data sets containing categorical columns have their column type modified by sns.load_dataset(), so the result might not be the same when getting the file from the url directly. The iris and tips sample data sets are also available in the pandas github repo here.

R sample datasets

Since any dataset can be read via pd.read_csv(), it is possible to access all R's sample data sets by copying the URLs from this R data set repository.

Additional ways of loading the R sample data sets include statsmodels

import statsmodels.api as sm
iris = sm.datasets.get_rdataset('iris').data

and PyDataset

from pydataset import data
iris = data('iris')

scikit-learn

scikit-learn returns sample data as numpy arrays rather than a pandas data frame.

from sklearn.datasets import load_iris
iris = load_iris()
# `iris.data` holds the numerical values
# `iris.feature_names` holds the numerical column names
# `iris.target` holds the categorical (species) values (as ints)
# `iris.target_names` holds the unique categorical names

Quilt

Quilt is a dataset manager created to make it easier to manage and share data sets. It includes many common sample datasets, such as several from the uciml sample repository. The quick start page shows how to install and import the iris data set:

# In your terminal
$ pip install quilt
$ quilt install uciml/iris

After installing a dataset, it is accessible locally, so this is the best option if you want to work with the data offline.

import quilt.data.uciml.iris as ir
iris = ir.tables.iris()

   sepal_length  sepal_width  petal_length  petal_width        class
0           5.1          3.5           1.4          0.2  Iris-setosa
1           4.9          3.0           1.4          0.2  Iris-setosa
2           4.7          3.2           1.3          0.2  Iris-setosa
3           4.6          3.1           1.5          0.2  Iris-setosa
4           5.0          3.6           1.4          0.2  Iris-setosa

Quilt also supports dataset versioning and includes a short description of each dataset.
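One small addition to the list above (assuming you have an internet connection, since seaborn fetches these files from its seaborn-data GitHub repo): seaborn can also tell you which sample data sets it knows about before you pick one.

import seaborn as sns

print(sns.get_dataset_names())   # e.g. ['anscombe', ..., 'iris', ..., 'tips', 'titanic']
tips = sns.load_dataset('tips')
print(tips.head())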
DataSet
28,417,293
145
I currently use a DataTable to get results from a database which I can use in my code.

However, many examples on the web show using a DataSet instead and accessing the table(s) through the collections method.

Is there any advantage, performance-wise or otherwise, of using DataSets or DataTables as a storage method for SQL results?
It really depends on the sort of data you're bringing back. Since a DataSet is (in effect) just a collection of DataTable objects, you can return multiple distinct sets of data into a single, and therefore more manageable, object. Performance-wise, you're more likely to get inefficiency from unoptimized queries than from the "wrong" choice of .NET construct. At least, that's been my experience.
DataSet
2,250
143
This is possibly a simple question, but I do not know how to order columns alphabetically.

test = data.frame(C = c(0, 2, 4, 7, 8), A = c(4, 2, 4, 7, 8), B = c(1, 3, 8, 3, 2))

#   C A B
# 1 0 4 1
# 2 2 2 3
# 3 4 4 8
# 4 7 7 3
# 5 8 8 2

I'd like to order the columns by column names alphabetically, to achieve

#   A B C
# 1 4 1 0
# 2 2 3 2
# 3 4 8 4
# 4 7 3 7
# 5 8 2 8

For others I want my own defined order:

#   B A C
# 1 1 4 0
# 2 3 2 2
# 3 8 4 4
# 4 3 7 7
# 5 2 8 8

Please note that my datasets are huge, with 10000 variables. So the process needs to be more automated.
You can use order on the names, and use that to order the columns when subsetting:

test[ , order(names(test))]
  A B C
1 4 1 0
2 2 3 2
3 4 8 4
4 7 3 7
5 8 2 8

For your own defined order, you will need to define your own mapping of the names to the ordering. This would depend on how you would like to do this, but swapping whatever function does this in place of order above should give your desired output.

You may for example have a look at Order a data frame's rows according to a target vector that specifies the desired order, i.e. you can match your data frame names against a target vector containing the desired column order.
DataSet
7,334,644
121
I've been working for quite a while now with LINQ. However, it remains a bit of a mystery what the real differences are between the mentioned flavours of LINQ. The successful answer will contain a short differentiation between them. What is the main goal of each flavor, what is the benefit, and is there a performance impact... P.S. I know that there are a lot of information sources out there, but I'm looking for a kind of a "cheat sheet" which instructs a newbie where to head for a specific goal.
All of them are LINQ - Language Integrated Query - so they all share a lot of commonality. All these "dialects" basically allow you to do a query-style select of data, from various sources.

Linq-to-SQL is Microsoft's first attempt at an ORM - Object-Relational Mapper. It supports SQL Server only. It's a mapping technology to map SQL Server database tables to .NET objects.

Linq-to-Entities is the same idea, but using Entity Framework in the background as the ORM - again from Microsoft, but supporting multiple database backends.

Linq-to-DataSets is LINQ, but used against the "old-style" ADO.NET 2.0 DataSets - in the times before ORMs from Microsoft, all you could do with ADO.NET was return DataSets, DataTables etc., and Linq-to-DataSets queries those data stores for data. So in this case, you'd return a DataTable or DataSet (System.Data namespace) from a database backend, and then query those using the LINQ syntax.
DataSet
2,443,836
110
I'm currently learning TensorFlow but I'm confused by the code snippet below:

dataset = dataset.shuffle(buffer_size = 10 * batch_size)
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()

I know that first the dataset will hold all the data, but what do shuffle(), repeat(), and batch() do to the dataset?
Please help me with an example and explanation.
Update: Here is a small Colab notebook for demonstration of this answer.

Imagine, you have a dataset: [1, 2, 3, 4, 5, 6], then:

How ds.shuffle() works

dataset.shuffle(buffer_size=3) will allocate a buffer of size 3 for picking random entries. This buffer will be connected to the source dataset.
We could imagine it like this:

Random buffer
   |
   |   Source dataset where all other elements live
   |         |
   ↓         ↓
[1,2,3] <= [4,5,6]

Let's assume that entry 2 was taken from the random buffer. Free space is filled by the next element from the source buffer, that is 4:

2 <= [1,3,4] <= [5,6]

We continue reading till nothing is left:

1 <= [3,4,5] <= [6]
5 <= [3,4,6] <= []
3 <= [4,6]   <= []
6 <= [4]     <= []
4 <= []      <= []

How ds.repeat() works

As soon as all the entries are read from the dataset and you try to read the next element, the dataset will throw an error. That's where ds.repeat() comes into play. It will re-initialize the dataset, making it again like this:

[1,2,3] <= [4,5,6]

What will ds.batch() produce

The ds.batch() will take the first batch_size entries and make a batch out of them. So, a batch size of 3 for our example dataset will produce two batch records:

[2,1,5]
[3,6,4]

As we have a ds.repeat() before the batch, the generation of the data will continue. But the order of the elements will be different, due to the ds.shuffle(). What should be taken into account is that 6 will never be present in the first batch, due to the size of the random buffer.
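If you want to watch this happen, here is a short sketch of the same pipeline (written for TensorFlow 2, where a dataset is directly iterable, rather than the 1.x iterator style used in the question; the exact numbers will differ between runs because of the shuffling):

import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])

# buffer of 3 for shuffling, two passes over the data, batches of 3
dataset = dataset.shuffle(buffer_size=3).repeat(2).batch(3)

for batch in dataset:
    print(batch.numpy())

# One possible output (12 elements -> 4 batches of 3):
# [2 1 4]
# [3 6 5]
# [1 3 2]
# [5 4 6]

Note that, as described above, 6 cannot show up in the first batch, because with a buffer of size 3 it has not yet been loaded into the shuffle buffer when the first three elements are drawn.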
DataSet
53,514,495
102
I have a dataset with 11 columns, each with over 1000 rows. The columns were labeled V1, V2, V11, etc. I replaced the names with something more useful to me using the "c" command. I didn't realize that row 1 also contained labels for each column and my actual data starts on row 2.

Is there a way to delete row 1 and have the row numbering shift up accordingly?
Keep the labels from your original file like this:

df = read.table('data.txt', header = T)

If you have columns named x and y, you can address them like this:

df$x
df$y

If you'd like to actually delete the first row from a data.frame, you can use negative indices like this:

df = df[-1,]

If you'd like to delete a column from a data.frame, you can assign NULL to it:

df$x = NULL

Here are some simple examples of how to create and manipulate a data.frame in R:

# create a data.frame with 10 rows
> x = rnorm(10)
> y = runif(10)
> df = data.frame( x, y )

# write it to a file
> write.table( df, 'test.txt', row.names = F, quote = F )

# read a data.frame from a file
> df = read.table( 'test.txt', header = T )

> df$x
 [1] -0.95343778 -0.63098637 -1.30646529  1.38906143  0.51703237 -0.02246754
 [7]  0.20583548  0.21530721  0.69087460  2.30610998
> df$y
 [1] 0.66658148 0.15355851 0.60098886 0.14284576 0.20408723 0.58271061
 [7] 0.05170994 0.83627336 0.76713317 0.95052671

> df$x = x    # assign (or overwrite) a column
> df
             x          y
1  -0.95343778 0.66658148
2  -0.63098637 0.15355851
3  -1.30646529 0.60098886
4   1.38906143 0.14284576
5   0.51703237 0.20408723
6  -0.02246754 0.58271061
7   0.20583548 0.05170994
8   0.21530721 0.83627336
9   0.69087460 0.76713317
10  2.30610998 0.95052671

> df[-1,]
             x          y
2  -0.63098637 0.15355851
3  -1.30646529 0.60098886
4   1.38906143 0.14284576
5   0.51703237 0.20408723
6  -0.02246754 0.58271061
7   0.20583548 0.05170994
8   0.21530721 0.83627336
9   0.69087460 0.76713317
10  2.30610998 0.95052671

> df$x = NULL
> df
            y
1  0.66658148
2  0.15355851
3  0.60098886
4  0.14284576
5  0.20408723
6  0.58271061
7  0.05170994
8  0.83627336
9  0.76713317
10 0.95052671
DataSet
7,541,610
102