As far as I can see, four processes get spawned in quick succession, and each of them tries to do *sum += some_value; it is perfectly possible that they all see *sum as being zero before the addition. Let's use an abstract assembler syntax. The C statement *sum = *sum + local_sum is compiled into

LOAD *sum into R0
LOAD local_sum into R1
ADD R1 to R0
STORE R0 to *sum

The four processes race to execute this sequence. It is perfectly possible that they all do LOAD *sum into R0 before any of them has had a chance to do STORE R0 to *sum; in fact, given that, as you say, there is a system call (and thus a rescheduling point) triggered by STORE R0 to *sum, there is a very good chance of it. You need to synchronize accesses to shared variables, using semaphores for instance.
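To make the fix concrete, here is a minimal sketch of the synchronization this answer suggests. It does not use the poster's dsm_* library (whose API is not shown); instead it places the sum and a process-shared POSIX semaphore in an anonymous MAP_SHARED mapping, so the load/add/store sequence runs under mutual exclusion:

#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch only: error checking omitted; compile with -pthread on older glibc. */
struct shared { sem_t lock; int sum; };

int main(void) {
    /* One shared region visible to the parent and all children. */
    struct shared *sh = mmap(NULL, sizeof *sh, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(&sh->lock, 1 /* shared between processes */, 1);
    sh->sum = 0;

    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {
            int local_sum = (i + 1) * 10;   /* stand-in for the real per-process work */
            sem_wait(&sh->lock);            /* enter critical section */
            sh->sum += local_sum;           /* LOAD/ADD/STORE can no longer interleave */
            sem_post(&sh->lock);            /* leave critical section */
            _exit(0);
        }
    }
    while (wait(NULL) > 0) ;                /* crude barrier: wait for all children */
    printf("Sum = %d\n", sh->sum);          /* always 100 */
    return 0;
}

Without the sem_wait/sem_post pair, this sketch reproduces the lost-update problem described above; with it, every read-modify-write of sh->sum is serialized.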
Context: I am using POSIX shared memory to provide a set of processes with a shared memory space. I have used this scheme for some time now in order to share data, and it's working okay. However, I recently ran into an odd problem with a certain class of programs.

Problem: I wrote a program in which each process has to contribute a value to a shared sum in the shared memory space. The sum was initialized to zero when the shared object was mapped into memory earlier. However, when each process tries to add its part to the shared sum, it can see the latest value, but the result of the addition is always as if it added its own value to zero. See below:

[21017] Adding 6 to 0!
[21020] Adding 33 to 0!
[21016] Adding 15 to 0!
[21018] Adding 24 to 0!
[21017] Got access! (0x7fe953fcb000 = 0)
[21017] Done (0x7fe953fcb000 = 6)
[21016] Got access! (0x7fe953fcb000 = 6)
[21016] Done (0x7fe953fcb000 = 15)
[21018] Got access! (0x7fe953fcb000 = 15)
[21018] Done (0x7fe953fcb000 = 24)
[21020] Got access! (0x7fe953fcb000 = 24)
[21020] Done (0x7fe953fcb000 = 33)
Sum = 33

Each process "sees" the latest value written, but somehow, after adding its own component, seems to have ignored the existing value. You can see that each access is ordered sequentially, as there is an access control system managing who gets to write to the shared memory space. The test program used looks as follows (though I do not expect the reader to run it):

int main (void) {
    int local_sum = 0, gid = -1;
    volatile int *sum;

    // Fork for four processes.
    for (int i = 1; i < 4; i++) {
        if (fork() == 0) break;
    }

    // Initialize the DSM. Set GID.
    sum = (int *)dsm_init(&cfg);
    gid = dsm_get_gid();

    // Compute range.
    for (int i = 0; i < 3; i++) {
        local_sum += array[(gid * 3) + i];
    }

    // Add to local sum.
    printf("[%d] Adding %d to %d!\n", getpid(), local_sum, *sum);
    *sum = *sum + local_sum;

    // Barrier.
    dsm_barrier();

    // Print sum if process zero.
    if (gid == 0) printf("Sum = %d\n", *sum);

    // Exit.
    dsm_exit();
}

Why is it that each process can "see" the correct value at the address 0x7fe953fcb000 in the shared space, yet after adding, behaves as if the value at that address during the addition was still zero? Here is what troubles me about this problem: If it was a caching issue, how is it that I can print the correct value before the arithmetic operation, and the result is still incorrect? I am adding to a shared value on the process heap; the compiler could not have assumed that the value there would be zero and optimized anything out. Is there some explanation for why this might happen under the hood? I've attempted to use GDB with my program to see what is going on, but as far as I can tell, it is simply moving the value at memory addresses into the registers. I cannot see any optimization problems yet.
Processor not seeing changes to POSIX shared memory?
Looks like a reboot solved the issue. However I have no idea what happened - and why it only seems to happen to browsers.
Using Manjaro / Arch Linux, I wanted to install another browser. However, no matter whether I installed Opera or Chromium (via pacman), I always get an error when executing it (from both the Application Launcher and the shell). Running Chromium from the shell I get:

$ chromium
[11452:11452:0914/225931.419271:ERROR:platform_shared_memory_region_posix.cc(217)] Creating shared memory in /dev/shm/.org.chromium.Chromium.5FNz4h failed: Permission denied (13)
[11452:11452:0914/225931.419316:ERROR:platform_shared_memory_region_posix.cc(220)] Unable to access(W_OK|X_OK) /dev/shm: Permission denied (13)
[11452:11452:0914/225931.419320:FATAL:platform_shared_memory_region_posix.cc(222)] This is frequently caused by incorrect permissions on /dev/shm. Try 'sudo chmod 1777 /dev/shm' to fix.
[0914/225931.424145:ERROR:elf_dynamic_array_reader.h(64)] tag not found
[0914/225931.424372:ERROR:elf_dynamic_array_reader.h(64)] tag not found
[0914/225931.424389:ERROR:elf_dynamic_array_reader.h(64)] tag not found
zsh: trace trap (core dumped)  chromium

Similar error for Opera:

$ opera
[11882:11882:0914/230407.927506:ERROR:platform_shared_memory_region_posix.cc(217)] Creating shared memory in /dev/shm/.org.chromium.Chromium.VS4HRD failed: Permission denied (13)
[11882:11882:0914/230407.927725:ERROR:platform_shared_memory_region_posix.cc(220)] Unable to access(W_OK|X_OK) /dev/shm: Permission denied (13)
[11882:11882:0914/230407.927735:FATAL:platform_shared_memory_region_posix.cc(222)] This is frequently caused by incorrect permissions on /dev/shm. Try 'sudo chmod 1777 /dev/shm' to fix.
Discarded=1
zsh: illegal hardware instruction (core dumped)  opera
Discarded=1

The Firefox installed by default is working fine, however (so it is not a network issue or anything of that kind). I could imagine it is a configuration / security issue (AppArmor etc.). Any ideas?
Browser (Opera, Chromium...) start causing Permission denied (13) error for shared memory
After having failed to improve the situation by adding "memory barriers" at every reasonable point in my code I finally found a workaround that works. The problem does not seem to be linked to the shared memory at all. Instead it seems to be triggered by the scheduler and adding calls to "schedule()" in my long running kthread does seem to avoid the system freezes. Unfortunately this workaround is not a viable solution for me and I've created a separate thread to further explore the direction that this is taking: Is there a way to use a long running kthread without calling schedule()?
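As a rough illustration of that workaround (not the poster's actual module, which is linked in the question below), the long-running loop simply has to give the scheduler a chance to run now and then:

#include <linux/kthread.h>
#include <linux/sched.h>

/* Hedged sketch: a long-running kthread body that periodically yields so the
 * rest of the kernel (RCU callbacks, workqueues, the scheduler) is not starved
 * on that core. The timing-critical GPIO work itself is omitted. */
static int playback_thread(void *data)
{
    while (!kthread_should_stop()) {
        /* ... do one small, timing-critical chunk of playback work ... */

        /* cond_resched() only switches if something else needs the CPU;
         * schedule() unconditionally enters the scheduler. */
        cond_resched();
    }
    return 0;
}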
I have written a "device driver" (see source code here: https://bitbucket.org/wothke/websid/src/master/raspi/websid_module/ ) that runs fine most of the time (see https://www.youtube.com/watch?v=bE6nSTT_038 ) but which still seems to have the potential to randomly crash the device occasionally. The "device driver" starts a kthread that performs a simple but timing-critical playback loop in which it controls some connected audio chip via multiple GPIO pins. This kthread is run (using kthread_bind) on an "isolated" CPU core which should be largely exempted from regular kernel use (see details on kernel configuration below). The kthread is given high priority via sched_set_fifo. The kthread makes no subroutine calls and does not require any memory that has not already been previously allocated in the kernel. (The thread also temporarily disables anything that might interfere with its timing, using get_cpu, local_irq_save & local_bh_disable. However, these do not seem to be the root cause of the sporadic crashes, since crashes could be reproduced even when that disabling was not used.) I have compiled a regular "Raspberry OS" "Desktop" kernel, but I specifically activated NO_HZ_FULL (i.e. "Full dynticks system (tickless)"). Also I am specifically isolating core #3 via cmdline.txt with: isolcpus=3 rcu_nocbs=3 rcu_nocb_poll=3 nohz_full=3 (which seems to keep most IRQs off CPU core #3, as intended, so that my above kthread should be pretty much alone on that core #3). The usual suspect might be the "shared kernel memory" buffer that is used for all communication between the above "playback" kthread and the producer of the data, which lives in "userland". I already took all the precautions that I could think of to avoid potential race conditions, but maybe there is some kind of CPU cache effect, or something else that I am overlooking. The "shared buffer" contains 4 page-aligned areas that are set up/used in a way that should ensure safe communication/synchronization:

1. The 1st page only contains one 32-bit flag that is accessed as a u32 or uint32_t (this should be naturally atomic). The kthread only updates this flag when it is 0, and it only sets it to something non-0. The userland code only resets this flag to 0, and only if it had some non-0 value, thereby acknowledging that it received the non-0 value set by the kthread.
2. The 2nd page contains a similar flag to 1) but for the opposite direction, i.e. here it is the kthread that will receive something non-0 from "userland".
3. The 3rd (and following) page(s) then contain the 1st buffer that is used for a simple double-buffering scheme. This buffer is exclusively written to by the "userland" producer and exclusively read by the kthread. The "ping/pong" protocol implemented via the 2 flags is meant to ensure that the buffer is *never* used concurrently: the kthread initiates a sequence by signalling that one of the buffers can be filled, and later the "userland" signals back after it has completed filling the respective buffer, i.e. the kthread only starts reading from a buffer after it has seen the signal from the producer that it is now safe to do so (before the "userland" producer gives that signal it uses msync(start_page, len, MS_INVALIDATE) to report which parts of the shared memory area it has updated).
4. The n-th (and following) page(s) then contain the 2nd buffer (everything said in 3) applies here as well).

But even if something went wrong in the above, that might then block the kthread or the respective userland process, but I don't see why that should crash the whole system. The most plausible explanation for me would be if the "shared buffer" got randomly relocated (thus leading to random memory corruption), but I would think that this should not happen to a buffer allocated via:

_raw_buffer = kmalloc(AREA_SIZE + 2*PAGE_SIZE, GFP_KERNEL & ~__GFP_RECLAIM & ~__GFP_MOVABLE);

Or if there was some kernel function that specifically went into some blocking wait for something from core #3 (which might not be happening due to my kthread starving everything else on that CPU). However, I'd be surprised if such a problem would only strike sporadically instead of crashing the machine all the time. Any ideas?
Am I making invalid assumptions with regard to my kernel module's shared memory?
On fork() the memory space of the parent process is cloned into the child process. As an optimization, modern operating systems use COW (copy on write), so all private memory is shared with the child process until one of the processes performs a change. Then the affected memory pages get duplicated.

The child process and the parent process run in separate memory spaces. At the time of fork() both memory spaces have the same content. Memory writes, file mappings (mmap(2)), and unmappings (munmap(2)) performed by one of the processes do not affect the other.

"Both memory spaces have the same content" includes memory allocated with mmap(). The memory mappings get cloned, and mmap() or munmap() after the fork don't affect the other process anymore. Only memory mapped with MAP_SHARED (or the Linux-specific MAP_SHARED_VALIDATE) before the fork will have changes to the contents propagated between the processes.

MAP_SHARED: Share this mapping. Updates to the mapping are visible to other processes mapping the same region, and (in the case of file-backed mappings) are carried through to the underlying file. (To precisely control when updates are carried through to the underlying file requires the use of msync(2).)

There are some Linux-specific mapping flags to modify the behaviour in other ways: Memory mappings that have been marked with the madvise(2) MADV_DONTFORK flag are not inherited across a fork(). Memory in address ranges that have been marked with the madvise(2) MADV_WIPEONFORK flag is zeroed in the child after a fork(). (The MADV_WIPEONFORK setting remains in place for those address ranges in the child.)

On exec() the memory image is replaced with the new process, so all memory mappings that got inherited on fork() are removed. All process attributes are preserved during an execve(), except the following: […] Memory mappings are not preserved (mmap(2)). Attached System V shared memory segments are detached (shmat(2)). POSIX shared memory regions are unmapped (shm_open(3)).
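A minimal sketch of the MAP_SHARED case described above (an anonymous mapping created before fork(); assumes Linux/glibc where MAP_ANONYMOUS is available):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Shared, anonymous mapping set up before fork(): both processes see the
     * same pages. With MAP_PRIVATE the child's write would be copy-on-write
     * and invisible to the parent. */
    int *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *p = 1;

    if (fork() == 0) {        /* child */
        *p = 42;              /* propagated to the parent because of MAP_SHARED */
        _exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", *p);   /* prints 42 */
    return 0;
}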
Let's say we are creating shared memory using mmap(). Let's say the total memory size is 4096. If we use a fork() system call to create children, would the children use the same memory, or will they need to have their own memory to work with?
How does a process and its children use memory in case of mmap()?
In general Unix Domain Sockets cannot communicate between host OS and guest OS. Unix Domain Sockets are, like e.g. Named Pipes, bound to the OS kernel. If you open the same Unix Domain Socket file node in the host and the guest, you get two different virtual network connections. One in the host kernel and one in the guest kernel. These are completely separate and cannot intercommunicate. This doesn't apply iff host OS and guest OS share the same kernel, e.g. when using Linux namespaces/containers instead of real virtualization. Then it's possible to use Unix Domain Sockets to communicate between the systems. For communication between two different OS kernels you need to use a real network protocol like IPv4/IPv6 or measures specific to the used virtualization software.
Is it a true statement that shared memory does not work between a host OS and a guest OS, but a Unix Domain Socket (specifically udp) can communicate between the two? An in-depth explanation would be appreciated, thanks!
Unix Domain Socket with VM
Apologies in advance if that is a silly question, but is there such a thing as a pure shared memory between processes, not backed by a file?

Not a silly question! There is, and it's the default way of getting it; (SysV) shmget is the function you use to get these shared memory buffers. You assign a key to it, and another process can use that key with shmget to get access to the same memory. The POSIX way is shm_open, with very similar semantics: you give your segment a name, which might look a lot like a file name but isn't backed by some hard drive, and subsequent shm_open calls on the same name (with compatible/no flags) will grant access to the same memory.
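For illustration, a minimal sketch of the POSIX route mentioned above; the object name "/demo_shm" is just an example, error handling is omitted, and older glibc needs -lrt at link time:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Looks like a file name, but lives in memory (tmpfs), not on a disk. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                      /* give the segment a size */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(p, "hello from one process");
    printf("%s\n", p);
    /* A second process calling shm_open("/demo_shm", O_RDWR, 0600) and mmap()
     * sees the same bytes; shm_unlink("/demo_shm") removes the object. */
    return 0;
}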
I'm curious because today the only way I know how to give two different processes the same shared-memory is through a memory-mapped file, in other words, both processes open the same memory-mapped file and write/read to/from it. That has penalties / drawbacks as the operating system needs to swap between disk and memory. Apologies in advance if that is a silly question, but is there such a thing as a pure shared memory between processes, not backed by a file. If yes, how would the processes get a hold of it if not using a memory-mapped file or /dev/shm file?
Is it possible for two processes to use the same shared-memory without resorting to a file to obtain it, be it a memory-mapped file or /dev/shm file?
From a quick google search you seem to need ulimit. See more on that through man limits.conf. The best way to limit resources is through VMs (XEN/KVM/OpenVZ), but I don't think it's what you asked for.
I have a computing cluster of 44 cores and 256GB memory running Ubuntu and I'd like to limit the number of CPUs and memory used by certain users. Limiting memory usage would be more important. So for example, I'd like to say that user X should only be able to use 10 CPUs and 50GB memory. How can I achieve this?
Limit memory usage of a user
You can see that the SHR column is displaying the same amount of memory as RES - this means that practically 100% of that particular task's resident memory consists of shared memory segments. Even that is not giving you full insight though, as RES is just the amount of memory that is not paged out. To figure out what the actual memory consumption of a process is, try using a more intelligent method such as ps(1) with select fields (look into the -o option and the STANDARD FORMAT SPECIFIERS section of the manual). Do note that apart from the size field (which is a very rough estimate), the total size of a process in terms of memory is difficult to assess at face value, precisely because some of its memory might be shared with other processes and we can't really tell straight away how many of those pages are dirty (see the NOTES section's paragraph on SIZE and RSS in the manual page). To be able to tell that those processes are actually using the same shared memory segments, you need to look at the output of the ipcs(1) command and look for shared memory segments that have a suspicious number of nattch processes. Then you need to look at those segments' detailed information using ipcs -m -i <id>, which will display the actual processes attached. If this sounds too complicated, look at it as a challenge exercise - write a script that, given a process ID, prints the list of SHM IDs this process is attached to, and a list of PIDs that it is sharing those SHM IDs with. ;)
I just took a look at the output of top and it showed me (amongst other processes) the following: As one can see, I have ten processes consuming approx. 10GB each, so 100GB in total. The computer however has only 64GB of memory, as can be seen in the second line from the top, of which currently about 22GB are used. Now the solution to this puzzle: the test_mpi.out processes use a large amount of shared memory amongst each other. Since I have the source code, I know that the actual memory consumption is about 10GB. The computer consumes about 12GB of memory when idle, so this sums up to the reported 22GB. What I don't understand is how top knows that only 22GB of memory are actually used. Based on all the columns displayed (VIRT, RES and SHR), top should not be able to figure this out. It would be awesome if someone could shed some light on this. EDIT: running on Red Hat Linux. EDIT: Thanks to Michael Homer I now know that top takes this information from /proc/meminfo. But I was rather hoping for an answer that would explain how I can determine that all the test_mpi.out processes only consume 10GB in total (instead of the 100GB suggested by naively adding up the output of top). I tried looking at /proc/PID/status but I didn't find any clues on how I could determine the actual memory usage of several processes that share a large memory segment (if I didn't have the source code).
determine the actual memory usage of several processes that share a large memory segment
Root can delete shared memory (or other IPC items) owned by any user. If you need a pragmatic way to do this, do it as root. Otherwise, you will need to possibly alter the permissions on the created items, either as they are created or afterwards. All filesystem entries (including things that aren't files) use the POSIX permissions, so you will need to ensure that the item is writable by the user who is removing it (and that the directory above it is writable by that same removing user). The less desirable way to ensure this is to make the item (and its parent directory) world-writable. This, of course, opens huge security holes and is not generally recommended. If the process that runs is launched as root, or transitions from a regular user to root and back, then the created IPC items are likely to be owned by root. If the process that does this can be modified, then you may be able to get the process to create the items with the appropriate group permissions to permit removal by a user other than the launching user.
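If a privileged helper is acceptable, a hedged sketch of the programmatic route looks like this. It relies on the Linux detail that POSIX shared memory objects appear under /dev/shm, and it must run as root (or with CAP_CHOWN); the object name "/myshm" and the uid/gid 1000 are just example values:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    /* Option 1: hand the object to uid/gid 1000 so that user can remove it later. */
    if (chown("/dev/shm/myshm", 1000, 1000) != 0)
        perror("chown");

    /* Option 2: remove it outright; equivalent to deleting /dev/shm/myshm. */
    if (shm_unlink("/myshm") != 0)
        perror("shm_unlink");
    return 0;
}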
I have a program that creates four shared memory objects. The memory creation routine calls shm_unlink() before attempting to create them, and the program calls another routine to delete them with shm_unlink() at the end of the run. Today I got "permission denied" on objects 2-4 (but not object 1) when attempting to re-create them. Linux fuser showed that they were all owned by the same process, so I couldn't kill that. I went to /dev/shm and did ls -l, and found that 2-4 were owned by root, not by my sudo user. I rarely log on as root so I don't know how they became owned by root. To solve the problem, I logged on as root and deleted them manually. My question is: is there a programmatic way to delete shared memory objects owned by a different user (e.g. root) without logging on as that user? Is there a way to call chown on the shm objects and change the owner?
Delete POSIX shared memory owned by different user
There is no tool to do this. Only ipcrm (for deleting existing shared memory objects), ipcmk (for creating shared memory objects) and ipcs (for showing existing shared memory objects) are present (I mean the util-linux project). The kernel doesn't provide a /proc interface for SysV shared memory objects, unlike POSIX shared memory (/dev/shm/<object>). You can write your own tool using the shmctl(2) syscall. Many tutorials and books about Unix IPC have plenty of examples of shmctl.
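A short sketch of the shmctl(2) route mentioned above: read the segment's attributes with IPC_STAT, patch the owner, and write them back with IPC_SET. This needs root or the segment's current owner/creator; the shmid and uid/gid below are just example values taken from an ipcs -m style listing:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    int shmid = 32769;                 /* example id, e.g. from `ipcs -m` */
    struct shmid_ds ds;

    if (shmctl(shmid, IPC_STAT, &ds) == -1) { perror("IPC_STAT"); return 1; }
    ds.shm_perm.uid = 1000;            /* new owner uid */
    ds.shm_perm.gid = 1000;            /* new owner gid */
    if (shmctl(shmid, IPC_SET, &ds) == -1) { perror("IPC_SET"); return 1; }

    printf("owner of shmid %d changed\n", shmid);
    return 0;
}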
When I run ipcs -m, I can see a list of the shared memory segments on the system, like

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 0          user1      664        342110     0
0x00000000 32769      user1      664        28391740   5
0x00000000 65538      user1      664        1929302    4

How can I change the owner of a shared memory segment?
Change ownership of shared memory
SHMMAX and SHMALL won't constrain the size of your miscellaneous tmpfs. Since tmpfs lives completely in the page cache and on swap, all tmpfs pages will be shown as "Shmem" in /proc/meminfo and "Shared" in free(1). Check with the df utility how many of these filesystems are actually mounted on your system, and eventually limit their maximum possible size thanks to the size= parameter of the corresponding mount operation. Of course, if your application uses this kind of filesystem and no swap space is made available, the application might well block or stop processing because it won't find any space left on the device.
We are debugging a situation where the cached/shared memory increases and increases until the system reaches the OOM-killer. We have set shmmax and shmall in sysctl.conf but without any visible effect. Do we need to enable something more for shmmax/shmall to work? Or can some part of the system go beyond this limit, and how hard is it enforced? Can a buggy user space application cause it, or only bugs in the kernel/drivers? The application that we debug uses graphics and video decoding. Can drivers go beyond the max limits?

kernel.shmmax = 2147483648
kernel.shmall = 524288

The Linux kernel is 5.15.71 (from Yocto meta-intel). Our system has 4GB RAM and no swap (we tried to enable swap but it did not help with the stability of the system). We use Wayland/Weston but not systemd. We set the values in sysctl.conf and reboot for them to take effect. We also confirmed the values with ipcs. We tried to set the shared memory to max 2 GB.

ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 2097152
max total shared memory (kbytes) = 2097152
min seg size (bytes) = 1

Here is some example output from free, meminfo, smem, etc. a few minutes before it reaches OOM.

free -w
        total    used     free     shared   buffers  cache    available
Mem:    3844036  479428   263444   2711864  11324    3089840  585716
Swap:   0        0        0

#### cat /proc/meminfo
MemTotal:        3844036 kB
MemFree:          262680 kB
MemAvailable:     584940 kB
Buffers:           11324 kB
Cached:          3055620 kB
SwapCached:            0 kB
Active:            98764 kB
Inactive:         645792 kB
Active(anon):        732 kB
Inactive(anon):   394288 kB
Active(file):      98032 kB
Inactive(file):   251504 kB
Unevictable:     2708620 kB
Mlocked:             100 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                12 kB
Writeback:             0 kB
AnonPages:        386388 kB
Mapped:           162732 kB
Shmem:           2711864 kB
KReclaimable:      34208 kB
Slab:              68656 kB
SReclaimable:      34208 kB
SUnreclaim:        34448 kB
KernelStack:        4640 kB
PageTables:         5904 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1922016 kB
Committed_AS:    4068728 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       15104 kB
VmallocChunk:          0 kB
Percpu:             1040 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:       72236 kB
DirectMap2M:     3938304 kB
DirectMap1G:     2097152 kB

#### smem
 PID  User    Command                      Swap  USS     PSS     RSS
 …
 1306 weston  /usr/libexec/wpe-webkit-1.1  0     27192   51419   98928
 1379 weston  /usr/libexec/wpe-webkit-1.1  0     190268  214958  266040

Area                    Used     Cache    Noncache
firmware/hardware       0        0        0
kernel image            0        0        0
kernel dynamic memory   3030848  2938432  92416
userspace memory        555656   162732   392924
free memory             257532   257532   0

Map                                  PIDs  AVGPSS  PSS
…
/usr/lib/libcrypto.so.3              20    527     10544
/usr/lib/dri/iris_dri.so             5     2196    10982
/usr/lib/dri/iHD_drv_video.so        1     20356   20356
/usr/lib/libWPEWebKit-1.1.so.0.2.6   5     14539   72697
[heap]                               45    2060    92700
<anonymous>                          45    5970    268688

Edit: Added df info for tmpfs. The tmpfs mounts shown with df do not show any extraordinary size increase.

/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.8G  1.9G  7.4G  21% /
devtmpfs        1.9G  2.1M  1.9G   1% /dev
tmpfs           1.9G  636K  1.9G   1% /run
tmpfs           751M  5.8M  745M   1% /var/volatile
tmpfs            40K     0   40K   0% /mnt/.psplash
What shared memory is not controlled by SHMMAX/SHMALL?
I figured it out. You need to use -w node0xx or --nodelist=node0xx.
Our nodes are named node001 ... node0xx in our cluster. I wonder, is it possible to submit a job to a specific node using Slurm's sbatch command? If so, can someone post an example code for that?
How to submit a job to a specific node using Slurm's sbatch command?
squeue -u $USER | grep ^197 | awk '{print $1}' | xargs -n 1 scancel

Check the documentation for xargs for details. If scancel accepts multiple job ids (it should), you may omit the -n 1 part.
I submitted lots of SLURM job scripts with a debug time limit (I forgot to change the time for the actual run). They were all submitted at the same time, so they all start with job ID 197xxxxx. Now, I can do

squeue -u $USER | grep 197 | awk '{print $1}'

to print the job IDs I want to delete. But how do I use the scancel command on all these IDs? The output from the above shell command looks like

19726664
19726663
19726662
19726661
19726660
19726659
19726658
19726657
19726656
19726655
19726654
19726653
19726652
19726651
19726650
Best way to cancel all the SLURM jobs from shell command output
Here is my takeaway from previous answers:

%j gives the job id
%x gives the job name

I don't know how to get the date in the desired format. The job ID kind of serves as a unique identifier across runs, and the file's modified date captures the date for later analysis. My SBATCH magic looks like this:

#SBATCH --output=R-%x.%j.out
#SBATCH --error=R-%x.%j.err

I prefer adding R- as a prefix; that way I can easily move or remove all R-*
When running a SLURM job using sbatch, slurm produces a standard output file which looks like slurm-102432.out (slurm-jobid.out). I would like to customise this to (yyyymmddhhmmss-jobid-jobname.txt). How do I go about doing this? Or more generally, how do I include computed variables in the sbatch argument -o? I have tried the following in my script.sh

#SBATCH -p core
#SBATCH -n 6
#SBATCH -t 1:00:00
#SBATCH -J indexing
#SBATCH -o "/home/user/slurm/$(date +%Y%m%d%H%M%S)-$(SLURM_JOB_ID)-indexing.txt"

but that did not work. The location of the file was correct in the new directory, but the filename was just the literal line $(date +%Y%m%d%H%M%S)-$(SLURM_JOB_ID)-indexing.txt. So, I am looking for a way to save the standard output file in a directory /home/user/slurm/ with a filename like so: 20160526093322-10453-indexing.txt
SLURM: Custom standard output name
$ scontrol show config | grep -E 'MaxArraySize|MaxJobCount'
MaxArraySize            = 1001
MaxJobCount             = 1000000

Will that be enough for what you're wanting to do? To get only the value for e.g. MaxArraySize:

$ scontrol show config | sed -n '/^MaxArraySize/s/.*= *//p'

As a shell function:

slurm_conf_value () {
  scontrol show config | sed -n "/^$1/s/.*= *//p"
}

MaxArraySize="$(slurm_conf_value 'MaxArraySize')"

These are not environment variables in the Unix sense, but configuration settings in Slurm. They are variables for configuring the "Slurm environment" though.
How can I programmatically access SLURM environmental variables, like MaxArraySize or MaxJobCount? I would like to partition my job arrays into chunks of the allowed maximum size. Can this information be queried with any of SLURM's commands? So far, I have failed to find relevant information on this over the net. Finding slurm.conf on each machine and extracting the relevant line from it is not a very robust solution.
How to check SLURM environmental variables programmatically?
You have to quote the command:

watch 'squeue -u mnyber004 | grep 4138'
I want to keep monitoring a specific job on a Slurm workload manager cluster. I tried to use the watch command and grep for the specific id. If the job id is 4138, I tried

$> watch squeue -u mnyber004 | grep 4138
$> squeue -u mnyber004 | watch grep 4138

but they don't work. The second command works for the first few seconds, but stops working when watch refreshes. A better idea please?
`watch` command with piping `|` [duplicate]
echo yes | your-program

yes yes | your-program
Slurm is a workload manager. There are two modes to run a job: interactive (srun) and batch mode (sbatch). When using interactive mode, one needs to leave the terminal open, which may put an extra burden on the remote terminal (laptop). However, batch mode just submits the bash script (*.sh) and the remote terminal can be closed; all the following steps are handled by the computing nodes. When I run the executable program, it prompts "Are all filenames correct?" and expects "yes". What I want is for the script to automatically input "yes" for this prompt when running the *.sh file, so that the program can continue without interaction. If you need any further clarification, please let me know.
Automatically input "yes" on the bash file [closed]
I don't think you can. All lines starting with # are ignored by the shell, and the $1 and $2 are shell things. Many job managers, including slurm, have some commands that are written as shell comments, so ignored by the shell, but read by the job manager. This is what your SBATCH line is:

#SBATCH --job-name=blabla

So there is no way of doing this dynamically within the same script. However, you can make a wrapper script that does this. For example:

#!/bin/sh
cat <<EoF
#!/bin/sh
# ########## Begin Slurm header ##########
#
#SBATCH --job-name=blabla$1$2
#
# ########### End Slurm header ###########

# Load module
module load math/matlab/R2020a

# Start a Matlab program
matlab -nodesktop -r "program $1 $2"

exit
EoF

If you now run this script with two parameters, it will print out the script you actually want:

$ foo.sh param1 param2
#!/bin/sh
# ########## Begin Slurm header ##########
#
#SBATCH --job-name=blablaparam1param2
#
# ########### End Slurm header ###########

# Load module
module load math/matlab/R2020a

# Start a Matlab program
matlab -nodesktop -r "program param1 param2"

exit

So you can just do:

foo.sh param1 param2 > slurm_script.sh
I am a beginner in the use of .sh scripts, so please excuse my ignorance. This is my problem: to submit my jobs to our cluster, the corresponding submit file has to contain a "slurm header" and looks something like this:

#!/bin/sh
# ########## Begin Slurm header ##########
#
#SBATCH --job-name=blabla
#
# ########### End Slurm header ###########

# Load module
module load math/matlab/R2020a

# Start a Matlab program
matlab -nodesktop -r "program $1 $2"

exit

Notice that I am passing two arguments to this .sh file which are then passed on to the matlab program. How can I make the job-name in the Slurm header dynamic according to the input arguments? Simply writing #SBATCH --job-name=blabla$1$2 predictably doesn't work.
Passing Argument to Comment in .sh script
It's been a few years since I ran a slurm cluster, but squeue should give you what you want. Try:

squeue --nodelist 92512 -o "%A %j %C %J"

(that should give the jobid, jobname, cpus, and threads for your jobs on node 92512)

BTW, unless you specifically only want details from one particular node, you might be better off searching by job id rather than node id. There are a lot of good sites with documentation on using slurm available on the web, easily found via google - most universities etc. running an HPC cluster write their own docs, help, and "cheat-sheets", customised to the details of their specific cluster(s) (so take that into account and adapt any examples to YOUR cluster). There's also good generic documentation on using slurm at https://slurm.schedmd.com/documentation.html
I am working on a cluster machine that uses the Slurm job manager. I just started a multithreaded code and I would like to check the core and thread usage for a given node ID. For example, scoreusage -N 92512, where "scoreusage" is the command that I am unsure of.
Check CPU/thread usage for a node in the Slurm job manager
watch concatenates its command line arguments, joining them with spaces, and passes the result as a string to sh -c. So watch ls -l "foo bar" becomes the same as watch ls -l foo bar, and you get a similar problem with squeue. You have two choices:

1. Add explicit quotes for the shell that watch starts, as you actually did in the last example. It's just that when your alias expands, the double-quotes are not quoted. The outer quotes just quote the alias when it's defined; they don't act on the command line when the alias is expanded. So, any of these:

watch squeue -o '"%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z"'

watch squeue -o \"%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z\"

alias watch='watch '
alias sq='squeue -o \"%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z\"'
watch sq

alias sq=\''squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z"'\'
watch sq

NOTE: You can use set -x and run watch >/dev/null to see the command that the shell actually runs; you'll see there if the quotes actually get passed along to watch.

2. Tell watch to skip the shell and to run the command directly. That way, the separate command line arguments stay separate. Man page:

-x, --exec
    Pass command to exec(2) instead of sh -c which reduces the need to use extra quoting to get the desired effect.

watch -x squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z"

alias watchx='watch -x '
alias sq='squeue -o "%.18i %.9P %.8j %.8u %.216t %.10M %.6D %R %V %S %Z"'
watchx sq
My question is similar to the watch question here, but with a twist. I need to use quotes, which seem to be stripped by an aliased watch. I want to run watch on a custom slurm squeue command:

$ alias squeue_personal='squeue -o "%.18i %.9P %.8j %.8u %.216t %.10M %.6D %R %V %S %Z"'
$ alias watch='watch '

NOTE: as per the other watch question. But this still doesn't work, because the aliased watch strips out quotations:

Every 2.0s: squeue -o %.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z    Fri Jul  6 12:06:57 2018

squeue: error: Unrecognized option: %.9P
Usage: squeue [-A account] [--clusters names] [-i seconds] [--job jobid] [-n name] [-o format] [-p partitions]
              [--qos qos] [--reservation reservation] [--sort fields] [--start] [--step step_id] [-t states]
              [-u user_name] [--usage] [-L licenses] [-w nodes] [-ahjlrsv]

If I don't use aliases everything is fine, e.g. the following works:

$ watch 'squeue -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %R %V %S %Z"'

I'm sure the solution is some small twist on the other watch question, but I don't know what.
Watch-command-alias-expansion AND need to use quotes
All the processing done by SLURM (by sbatch, specifically) is done before bash is invoked, so bash won't help you here. The script could be in any language, it wouldn't matter: the #SBATCH lines are only coincidentally bash comments; what matters is that they're sbatch directives. Options can be specified in the file so as to provide a convenient way to always use the same parameters for a particular script. If you want to use different options, pass them on the command line of sbatch. You can write a wrapper script that runs sbatch if you want to build up options from certain specific parameters. You can pass the job script as standard input (a here document is convenient) instead of keeping it in a separate file if you prefer.

#!/bin/sh
sbatch --time=5:00:00 --ntasks="$1" --mem-per-cpu=1024M <<'EOF'
#!/bin/sh
MyProgram.exe
EOF
In my opinion, comments are comments are comments. They should NEVER change program state. Sadly, the people over at SLURM disagree. SLURM requires the following syntax (or something similar) to exist at the front of a .sh file:

#!/bin/bash

#SBATCH --time=5:00:00
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=1024M

MyProgram.exe

The above will submit a request to run the program MyProgram.exe using 8 processors on my school's supercomputer. I'm currently benchmarking my code and would like to use a command line argument to change the number of processors. Effectively, I want to do the following:

#!/bin/bash

#SBATCH --time=5:00:00
#SBATCH --ntasks=%1
#SBATCH --mem-per-cpu=1024M

MyProgram.exe

where %1 would be a command line argument. I would then call it as sbatch myShScript.sh 123, which would use 123 processors. This, of course, does not work because bash interprets the #SBATCH --ntasks=%1 line as a comment (which it is) and never subs in my command line argument. Is there a way to trick SLURM or BASH into subbing in my command line argument?
Command Line Macros in Comments using SLURM
After a couple of days I managed to answer my own question. In hindsight it was quite simple.

Responsiveness: The slurmd daemon can be started with command line arguments; list them with slurmd -h. In particular, slurmd -n 19 sets the highest nice-value (and thus lowest priority) for the daemon and all its subprocesses. On desktop computers, I simply:

- edited /etc/systemd/system/slurmd.service, appending -n 19 to ExecStart, i.e. ExecStart=/usr/local/sbin/slurmd $SLURMD_OPTIONS -n 19
- reloaded systemd daemons with systemctl daemon-reload
- restarted the slurmd daemon with systemctl restart slurmd.service

Memory reservation: Some memory can be reserved to the system. I leave 8GB to the owner by adding MemSpecLimit=8000 to the node specifications in slurm.conf. To actually enforce memory limits there were some additional steps:

- Select Core and Memory as consumable resources, by setting SelectTypeParameters=CR_Core_Memory in slurm.conf.
- Add the cgroups task plugin by setting TaskPlugin=task/affinity,task/cgroup in slurm.conf and then setting ConstrainRAMSpace=yes in cgroup.conf.
- Because we are on Ubuntu, enable memory and swap cgroups by adding the line GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" to /etc/default/grub

Work hours on weekdays: Some of my colleagues would like zero distraction during work hours. This is easy to do with scontrol from the head node, to set their node state to "down" during work hours, and to "resume" after work hours. I do this automatically with systemd timers.

First, make an executable script that updates the node states of the desktops in question using scontrol:

#!/bin/bash
# slurm-update.sh - Updates the state on nodes belonging to the work-hour desktops partition.
systemctl start slurmd
for node in $(sinfo -h --partition=WHdesktops --format="%n"); do
    state=$(sinfo -h --node=$node --format="%T")
    echo "Setting node $node to state=$1 with reason=$2"
    scontrol update NodeName=$node state=$1 reason="$2" || echo "State on $node is already $(sinfo -h --node=$node --format=\"%T\")"
done

This takes two arguments, the new state and a reason for it.

Next, create a service/timer pair of files in the directory /etc/systemd/system to run the script above at certain times. Make one pair per state you want to set (for instance I made 3 pairs, to set down, drain and resume). The pair for setting "down" looks like this:

# /etc/systemd/system/slurm-down.service:
[Unit]
Description=Shut down all SLURM desktop nodes

[Service]
Type=simple
ExecStart=/bin/bash /mnt/nfs/slurm_fs/systemd/slurm-update.sh down afterhours
StandardError=journal

and

# /etc/systemd/system/slurm-down.timer:
[Unit]
Description=Timer for shutting down slurm on desktop nodes on weekdays

[Timer]
Unit=slurm-down.service
OnBootSec=10min
# Run hourly on weekdays between 8:05 to 18:05
OnCalendar=Mon..Fri *-*-* 8..18:05:00

[Install]
WantedBy=multi-user.target

Reload the daemons with systemctl daemon-reload and then enable and start the timer only, not the service: systemctl enable --now slurm-down.timer. Repeat the steps for the resume state after work hours, and optionally a drain state an hour or so before the down state.
In short: On a Slurm cluster, I need some computers to be available and responsive to their respective owners during work hours.

Problem: I manage a small (but growing) heterogeneous cluster with around 10 nodes, where some of the nodes are not dedicated. These are desktop computers used by colleagues on the same network during work hours, and they would prefer to work on responsive machines. During nights and weekends, however, we pool all of our computers and some dedicated nodes together for batch jobs. I recently switched from HTCondor to Slurm because it fits our needs better in all except one aspect: prioritizing the owner of the machine for regular work not related to the cluster. On Condor, a node could be configured to suspend, preempt or kill jobs depending on criteria such as:

- Time of day or weekday (machines are used during the day on weekdays)
- Keyboard activity (some users may be working late)
- CPU activity from processes other than those spawned by the cluster (users may leave some of their own processes running overnight, which should run without interference)

I would like to mimic any of these behaviors when using Slurm, or find a way to not bother the owner using the computer.

Additional info: All the nodes use Ubuntu 18.04-19.04 with slurm found in apt, i.e. version 18+. The cluster uses cgroups for limit enforcement and is configured to use cores as a consumable resource, as in

SelectType=select/cons_res
SelectTypeParameters=CR_Core

I do not have sudo rights on most desktop computers, so I either need a "set and forget" solution for that one time when I configure my colleagues' PCs, or something I can do from the head node where I do have sudo.

Attempts: I have considered these options but remain unsatisfied:

- For time of day/weekday, use either crontab or systemd with OnCalendar events in slurmd.service, to either:
  - start/stop the daemon. This may be the easiest way, but it kills jobs in a non-clean way.
  - launch a script that sets the node state using scontrol to down/resume/drain/etc., possibly from the head node. I haven't tried this one, as I can't figure out how to do this outside of the interactive mode of scontrol.
- For responsiveness, use "systemctl edit slurmd.service" to add resource control by setting CpuWeight=5 under [Service]. This should prioritize every other process, but it doesn't seem to work as I intended because the jobs make the computers sluggish anyway. I thought jobs would be subprocesses of slurmd and be subject to the same CpuWeight. If this actually worked well, it could solve the whole problem.

I feel there should be a better way to achieve what I want. Any help is appreciated.
Slurm on desktop computers, how to prioritize the owner
The problem was resolved in the comments. To summarize: It turns out that I did not properly diagnose the initial problem. SLURM did not modify $BASH_SOURCE or $0. I assumed it simply executed my script, but it actually copied my script to a new location (/cm/local/apps/slurm/var/spool/jobXXXXXX/slurm_script). To get the behavior I expected, I submitted the job with the --wrap parameter which wraps the specified command string in a shell script.
I am using a shared SLURM cluster. I am trying to get the path of the bash script from inside the script itself. There is already an excellent thread here: https://stackoverflow.com/questions/59895/get-the-source-directory-of-a-bash-script-from-within-the-script-itself. Unfortunately, none of those solutions work for me. They work when I execute the bash script normally, but not when I submit the script as a job. I tried $BASH_SOURCE, $0, $_, and readlink /proc/$$/fd/255. They all return something like /cm/local/apps/slurm/var/spool/jobXXXXXX/slurm_script, which is a path auto-generated by SLURM and has nothing to do with the script I submitted. I doubt there are any other solutions that directly answer my question that are not already mentioned, but is there maybe some way to override whatever the cluster is doing? Can I somehow restore standard bash behavior?
restore $0 or $BASH_SOURCE after it is modified by the cluster
There is currently (as of version 15.8) no way to negate a feature in such a way. The only way is to define a complementary feature in the following way:

NODELIST FEATURES
NodeA    nospecialfeature
NodeB    specialfeature

and then submit a job with --constraint=specialfeature, and another one with --constraint=nospecialfeature. If you cannot change the configuration, another option is to build the list of admissible hosts manually with a construct like the following:

sinfo -h --format "%N %f" | grep specialfeature |\
    awk '{print $1}' | paste -d, -s | xargs scontrol show hostlist

and use the result with the --nodelist option. Feel free to combine greps to build your list.
Suppose my supercomputer has the following NODELISTs with the included features:

NODELIST FEATURES
NodeA    (none)
NodeB    specialfeature

and I am trying to benchmark performance using or not using the specialfeature feature. Measuring the performance of a run using specialfeature is easy. I simply call

sbatch --constraint="specialfeature" mybenchmarktest.exe

The above assures that my run happens on a node that contains the specialfeature feature (i.e., node B). However, if I want to run the benchmark WITHOUT specialfeature I run into a problem. Calling

sbatch mybenchmarktest.exe

means SLURM might run my task on NodeA or NodeB, meaning my benchmarking might not be measuring what I want it to. Is there a way to ensure that a process runs without a feature? I am imagining a call something like:

sbatch --constraint="!specialfeature" mybenchmarktest.exe

NOTE: In this trivial case, I know that I could just request it to run on NodeA (the node without the feature), but I'm looking for a more general way to tell SLURM "don't use such and such".
Using SLURM without a feature
# is for comments in bash, so you should not run my_job.slurm with ./my_job.slurm or bash ./my_job.slurm, because everything after # will be ignored. But when you run it with sbatch, it will recognize comments beginning with SBATCH as parameters.

https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html#creating-a-job

A submission script is a shell script, e.g. a Bash script, whose comments, if they are prefixed with SBATCH, are understood by Slurm as parameters describing resource requests and other submission options.
The question is related to the article Introducing SLURM by Princeton Research Computing. For instance, in

#SBATCH --job-name=slurm-test      # create a short name for your job

SBATCH after the first # will be executed, and "create ..." after the second # will be regarded as a comment. Is this because there is a space in the second #? I typically regard # as a comment, but in some cases it does get executed.
"#" (comments) is SLURM job submissions
[] are special to zsh, though globbing can be turned off by way of the noglob precommand modifier, so maybe the alias

alias scancel='noglob scancel'

will do the trick, and likewise for any other commands that take [] as inputs.
I am able to cancel slurm jobs by typing something like the following:

$ scancel 66421802_[11-20]

On bash, this works fine. However, on zsh I get the following error:

$ scancel 66421802_[11-20]
zsh: no matches found: 66421802_[10-20]

How can I cancel a job range when using zsh?
Cancel slurm job range in zsh
This is an error with the Python code using a library called "tk". That's a library usually used for showing a GUI, so it expects to be able to access your display (X server or similar). If you are running your code on a "headless" server then this just won't work, because there's no monitor and your session can't talk to an X server. It looks like this is a known problem with the matplotlib library. See here: https://github.com/matplotlib/matplotlib/issues/7115/#issuecomment-378288788 Apparently it's as simple as setting an environment variable to change the matplotlib backend before you run your Python script:

export MPLBACKEND=agg

Obviously you could also set this in Python via os.environ["MPLBACKEND"] = "agg"
My question is related to a Python error, but I suspect that it is more a Linux question than a Python one, thus I post it here first. I am running a Python script which does a calculation and then produces a plot and saves it in a PDF file. The script runs through on my local machine (macOS), but when I run it on the cluster of my workplace (Linux) it crashes when trying to produce the plot in the PDF, with the following error:

Traceback (most recent call last):
  File "<my_python_script>.py", line 496, in <module>
    if __name__ == "__main__": main()
  File "<my_python_script>.py", line 487, in main
    plot(model, obsdata, popt, pdf_file)
  File "<my_python_script>.py", line 455, in plot
    plt.figure(figsize=(11.69, 8.27))
  File "/usr/lib/python3/dist-packages/matplotlib/pyplot.py", line 535, in figure
    **kwargs)
  File "/usr/lib/python3/dist-packages/matplotlib/backends/backend_tkagg.py", line 81, in new_figure_manager
    return new_figure_manager_given_figure(num, figure)
  File "/usr/lib/python3/dist-packages/matplotlib/backends/backend_tkagg.py", line 89, in new_figure_manager_given_figure
    window = Tk.Tk()
  File "/usr/lib/python3.5/tkinter/__init__.py", line 1880, in __init__
    self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable

Here I substituted <my_python_script> to abbreviate the unimportant path and name of my script. It may or may not be related to the problem, but I should also mention that the script is not run manually from the command line, but submitted to a slurm queue. Unfortunately I really don't know enough about Unix/Linux to make this work on the cluster. Since it runs through on my local machine, I suspect that it must have something to do with the settings on the cluster, and in particular with the settings related to my user. The latter I know because I have colleagues for whom the script also runs on the very same cluster. Does anyone have an idea?
Python error only when I run script on Linux cluster: _tkinter.TclError: no display name and no $DISPLAY environment variable
What you can do is create a SPEC file and make rpmbuild use it. In this file you can incorporate different parameters into the build process. You can check here for example usage of a SPEC file:

In a shell prompt, go into the buildroot and create a new spec file for your package. Open the spec file in a text editor. The spec file should be similar to the following example:

%build
%configure
make %{?_smp_mflags}
....

Edit the BuildRequires tag with requirements that are needed to build the package. BuildRequires can contain either a list of required packages or files. For example, the eject package requires the gettext and libtool packages. ....
To build a piece of software I normally do

rpmbuild -ta slurm*.tar.bz2

However, I now need to configure the software with the option

--with-pmix=/home/user/git/pmix/install/2.1

Is this possible using rpmbuild, or do I need to go through the standard configure/make/make install procedure?
Passing configure option to rpmbuild?
grep -l 'pattern' script_folder/* | xargs -n 1 sbatch

xargs will by default read as much as can fit on one command line before executing the given utility with all the things that it has read. With -n 1 you limit the number of items that it passes to the utility to a single item per invocation.
I have a large number of scripts that can be run as separate jobs on a computing cluster, which uses slurm. I want to select some of them, based on their contents, to submit. It's easy to identify the filenames of the jobs I want using grep, but I'm struggling to pipe those and submit them. I thought that I could do something like this:

grep -l 'regex' script_folder/* | xargs sbatch

However, this appears to submit all the scripts as one job, when I want each script to be its own separate job.
Submitting list of jobs to slurm
You should use "Sharding" GRES (gres:shard) instead of gres:GPU, available in 22.05 or newer. https://slurm.schedmd.com/gres.html#Sharding It allows different jobs to share a GPU -- just like oversubscribed Cores and RAM resources. The conventional gres:gpu exclusively allocates a GPU to jobs no matter how much memory is used.
So, I am by no means a sysadmin, but I need to use an existing SLURM installation to launch a sizable number of jobs (around 5000). The cluster is composed of 1 node with 10 GPUs (with 8GB of memory each) and 56 CPUs. Every job is a batch script that I run with sbatch <file>, and then I use sview to see what's going on. These jobs need to run on a GPU, but they have rather low GPU memory requirements (around 2GB), so I figured I could fit 3 of them on a single GPU. I have been pulling my hair out trying to find a way to allow 3 jobs to run on the same GPU at the same time, but the documentation is so massive that I feel completely lost. I am not sure if I am even using this properly. Jobs look like this:

#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH -p n1
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=[nb_cpu]
#SBATCH --mem-per-gpu=1
#SBATCH --job-name=[job_name]
#SBATCH --mail-user=[list_mail]
#SBATCH --mail-type=NONE
#SBATCH --gid=dl
#SBATCH --output=[folder]/%x.log

echo "Current GPU: $CUDA_VISIBLE_DEVICES"

# a bunch of initialization
module load opencv/4.5.4 deeplearning
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/raid1/CLUSTER/LIB/custom_libs/
export OPENCV_LOG_LEVEL=DEBUG
export OPENCV_VIDEOIO_DEBUG=1

run_dl --nn-dir=/home/dl/networks --root-dir=[root_dir] [file]

if [ $? -eq 0 ]
then
    mkdir -p [folder]/done
    echo [id] > [folder]/done/[job_name]
else
    mkdir -p [folder]/fail
    echo [id] > [folder]/fail/[job_name]
fi

These jobs are generated by a script and can be modified easily if needed. Am I even going in the right direction by using sbatch? I am quite overwhelmed.
Running multiple SLURM jobs on the same GPU
You can use sinfo to find the maximum CPU/memory per node. To quote from here:

$ sinfo -o "%15N %10c %10m %25f %10G"
NODELIST        CPUS       MEMORY     FEATURES                  GRES
mback[01-02]    8          31860+     Opteron,875,InfiniBand    (null)
mback[03-04]    4          31482+     Opteron,852,InfiniBand    (null)
mback05         8          64559      Opteron,2356              (null)
mback06         16         64052      Opteron,885               (null)
mback07         8          24150      Xeon,X5550                TeslaC1060
mback[08-19]    8          24151      Xeon,L5520,InfiniBand     (null)
mback[20-32,34] 8          16077      Xeon,L5420                (null)

I'm not sure what you mean by optimum CPU "per task". This depends on the specific program you are attempting to run.
How can I determine the optimum/maximum number of CPUs per task when running a job? Is there a way to display the total available memory on a given CPU as well?
SLURM: How to determine maximum --cpus-per-task and --mem-per-cpu?
Q1: This awk script extracts a sorted list of unique computer names from the file, assuming the source file is much longer, having a block of lines for each switch. A script to get a whole switch block (assuming the switch line is always the first line of a continuous set of lines for each switch), sorted and with repeated nodes removed, is:

awk -v FS='[#"]' '
    BEGIN{c=0}
    $1~/Switch/     {c++; j=0; split($5,arr,"[;:]" ); sw[c,0]=arr[2] }
    $1~/\[[0-9]+\]/ { j++; split($5,arr," " ); sw[c,j]=arr[1] }
    END {
        print("final count of switches=" c)
        for (i=1; i<=c; i++) {
            print( "switch=" i, sw[i,0] )      # show switch number.
            split("", out , ":" )              # delete array "out".
            split("", indices , ":" )          # delete array "indices".
            j=0
            while (sw[i,++j]) {                # for all array elements.
                if (out[sw[i,j]]++ < 1) {      # Is it a new value?
                    indices[sw[i,j]]=j         # add to array "indices".
                }
            }
            n=asorti(indices)                  # sort the keys of indices
            printf( "%s ", sw[i,0] )
            for (k=1; k<=n; k++) {             # all values for a switch.
                printf( "%s ", indices[k] )
            }
            printf( "%s\n", "" )
        }
    }
' infile

Results:

final count of switches=3
switch=1 ibsw20
ibsw20 Infiniscale-IV compute060 compute061 compute062 compute063 compute064 compute065 compute066 compute067 compute068 compute069 compute070 compute071 compute072 compute073 compute074 compute075 compute076 compute077
switch=2 ibsw21
ibsw21 Infiniscale-IV compute060 compute061 compute062 compute063 compute064 compute065 compute066 compute067 compute068 compute069 compute070 compute071 compute072 compute073 compute074 compute075 compute076 compute077
switch=3 ibsw22
ibsw22 Infiniscale-IV compute060 compute062 compute063 compute074 compute076

I am not sure if Infiniscale-IV should be removed, and if you are also asking for the additional processing needed to get:

SwitchName=ibsw20 Nodes=compute[060-077]

Q2: From "man awk":

If RS is set to the null string, then records are separated by blank lines.

That is the "record separator" (RS) set to null:

awk -v RS='' 'script to process lines' file
First things first: no knowledge of either slurm or Infiniband is required - this is a purely text processing problem. Second - I'm aware of ib2slurm - the code is somehow broken and quite possibly outdated - it core dumps each time it runs regardless of the existence or format of a map file. I can reduce the output of ibnetdiscover to 37 line blocks each of the form: Switch 36 "S-0002c90200423e70" # "MF0;ibsw20:SX6036/U1" enhanced port 0 lid 3 lmc 0 [1] "H-0002c903000c26f2"[1](2c903000c26f3) # "compute061 HCA-1" lid 49 4xQDR [2] "H-0002c903000bf36e"[1](2c903000bf36f) # "compute060 HCA-1" lid 1 4xQDR [3] "H-0002c903000bf35a"[1](2c903000bf35b) # "compute063 HCA-1" lid 28 4xQDR [4] "H-0002c903000c2646"[1](2c903000c2647) # "compute062 HCA-1" lid 25 4xQDR [5] "H-0002c903000bf35e"[1](2c903000bf35f) # "compute064 HCA-1" lid 31 4xQDR [6] "H-0002c903000c26de"[1](2c903000c26df) # "compute065 HCA-1" lid 47 4xQDR [7] "S-0002c90200423e80"[31] # "Infiniscale-IV Mellanox Technologies" lid 6 4xQDR [8] "S-0002c90200423e80"[32] # "Infiniscale-IV Mellanox Technologies" lid 6 4xQDR [9] "S-0002c90200423e80"[33] # "Infiniscale-IV Mellanox Technologies" lid 6 4xQDR [10] "S-0002c90200423e80"[34] # "Infiniscale-IV Mellanox Technologies" lid 6 4xQDR [11] "S-0002c90200423e80"[35] # "Infiniscale-IV Mellanox Technologies" lid 6 4xQDR [12] "S-0002c90200423e80"[36] # "Infiniscale-IV Mellanox Technologies" lid 6 4xQDR [13] "S-0002c90200423eb8"[35] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [14] "S-0002c90200423eb8"[36] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [15] "S-0002c90200423eb8"[33] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [16] "S-0002c90200423eb8"[34] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [17] "S-0002c90200423eb8"[31] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [18] "S-0002c90200423eb8"[32] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [19] "S-0002c90200423ee0"[31] # "Infiniscale-IV Mellanox Technologies" lid 15 4xQDR [20] "S-0002c90200423ee0"[32] # "Infiniscale-IV Mellanox Technologies" lid 15 4xQDR [21] "S-0002c90200423ee0"[33] # "Infiniscale-IV Mellanox Technologies" lid 15 4xQDR [22] "S-0002c90200423ee0"[34] # "Infiniscale-IV Mellanox Technologies" lid 15 4xQDR [23] "S-0002c90200423ee0"[35] # "Infiniscale-IV Mellanox Technologies" lid 15 4xQDR [24] "S-0002c90200423ee0"[36] # "Infiniscale-IV Mellanox Technologies" lid 15 4xQDR [25] "H-0002c903000c26fa"[1](2c903000c26fb) # "compute046 HCA-1" lid 112 4xQDR [26] "H-0002c903000c26e2"[1](2c903000c26e3) # "compute047 HCA-1" lid 63 4xQDR [27] "H-0002c903000c263a"[1](2c903000c263b) # "compute048 HCA-1" lid 59 4xQDR [28] "H-0002c903000c27c2"[1](2c903000c27c3) # "compute049 HCA-1" lid 117 4xQDR [29] "H-0002c903000c27a6"[1](2c903000c27a7) # "compute051 HCA-1" lid 34 4xQDR [30] "H-0002c903000c2732"[1](2c903000c2733) # "compute050 HCA-1" lid 22 4xQDR [31] "H-0002c903000c265e"[1](2c903000c265f) # "compute052 HCA-1" lid 29 4xQDR [32] "H-0002c903000c266a"[1](2c903000c266b) # "compute055 HCA-1" lid 32 4xQDR [33] "H-0002c903000c264e"[1](2c903000c264f) # "compute054 HCA-1" lid 26 4xQDR [34] "H-0002c903000c26ee"[1](2c903000c26ef) # "compute056 HCA-1" lid 48 4xQDR [35] "H-0002c903000bf246"[1](2c903000bf247) # "compute057 HCA-1" lid 33 4xQDR [36] "H-0002c903000c27ca"[1](2c903000c27cb) # "compute053 HCA-1" lid 44 4xQDRand can extract the node name, e.g. compute061 using awk or sed. 
I would like to get a single row for each block starting with switch name followed by node names, i.e: ibsw20 compute061 compute060 compute063 compute062 compute064 compute065 compute046 compute047 compute048 compute049 compute051 compute050 compute052 compute055 compute054 compute056 compute057 compute053 I plan to use slurm's scontrol show hostlist "<nodename> <nodename> ..." to compress several nodes into a single entity to push into slurm's topology.conf file which must have the form: SwitchName=ibsw20 Nodes=compute[046-057,060-061] Any ideas? I should mention after all the switch mappings, the ibnetdiscover file continues with the reverse - a node-by-node mapping to switches, in the form: vendid=0x2c9 devid=0x673c sysimgguid=0x2c903000bf371 caguid=0x2c903000bf36e Ca 1 "H-0002c903000bf36e" # "compute060 HCA-1" [1](2c903000bf36f) "S-0002c90200423e70"[2] # lid 1 lmc 0 "MF0;ibsw20:SX6036/U1" lid 3 4xQDREach block separated by empty lines. A reduced question that can get me started - how do I parse several lines of text into a single row, extracting different parts of each row (treating header and body rows differently) and discarding rows which do not contain relevant data? EDIT: The blocks might not be full - if nothing is connected to some of the ports in some of the switches, then the output will skip those line, and can result in something like: Switch 36 "S-0002c90200423e70" # "MF0;ibsw20:SX6036/U1" enhanced port 0 lid 3 lmc 0 [2] "H-0002c903000bf36e"[1](2c903000bf36f) # "compute060 HCA-1" lid 1 4xQDR [3] "H-0002c903000bf35a"[1](2c903000bf35b) # "compute063 HCA-1" lid 28 4xQDR [4] "H-0002c903000c2646"[1](2c903000c2647) # "compute062 HCA-1" lid 25 4xQDR [15] "S-0002c90200423eb8"[33] # "Infiniscale-IV Mellanox Technologies" lid 11 4xQDR [33] "H-0002c903000c264e"[1](2c903000c264f) # "compute074 HCA-1" lid 26 4xQDR [34] "H-0002c903000c26ee"[1](2c903000c26ef) # "compute076 HCA-1" lid 48 4xQDR So I can't simply rely on there being 36 lines following each switch line or that [36] will always be the last line in a switch block.
Text processing - Building a slurm topology.conf file from ibnetdiscover output
It seems like the command sends its output to STDERR instead of STDOUT. Because the terminal displays both of them, everything looks normal until you try to pipe it. To redirect STDERR to STDOUT and be able to filter the command output, you can do it this way: module list 2>&1 | grep conda
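The same redirection fixes the other pipeline from your question; for example, counting the lines of output should now work as expected:
module list 2>&1 | wc -l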
I am working on a SLURM cluster and there is a command to list all loaded software modules. I want to process the output, e.g. grep it for a certain word. However, if I try to use a pipe I get unexpected output which I don't understand.
$ module list
Currently Loaded Modules: 1) miniconda3-4.8.2-gcc-8.3.1-altn3cq
$ which module
module () { eval $($LMOD_CMD bash "$@") && eval $(${LMOD_SETTARG_CMD:-:} -s sh) }
$ module list | wc -l
Currently Loaded Modules: 1) miniconda3-4.8.2-gcc-8.3.1-altn3cq
0
$ module list | grep conda
Currently Loaded Modules: 1) miniconda3-4.8.2-gcc-8.3.1-altn3cq
Thus I am wondering why I can't pipe the output of module list.
Can't pipe SLURM `module list` command
Expand the command into multiple wget commands so you can send them to SLURM as a list:
while IFS= read -r url; do
    printf 'wget "%s"\n' "$url"
done < sgdp-download-list.txt > wget.sh
Or, if your sgdp-download-list.txt is just a list of wget arguments missing the wget at the beginning (which is what your example suggests), just use:
sed 's/^/wget /' sgdp-download-list.txt > wget.sh
Then, submit the wget.sh as the job.
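If you want each download to run as its own SLURM job (as the commented-out --array line in your script suggests), a job array is the usual approach. Here is a rough sketch, assuming sgdp-download-list.txt has one download per line (the URL followed by its -O name); the concurrency limit and the partition/qos/account lines are placeholders you would adapt to your site:
#!/bin/bash
#SBATCH --job-name=download
#SBATCH --ntasks=1 --cpus-per-task=1 --mem=10gb
#SBATCH --time=24:00:00
#SBATCH --array=1-279%20
#SBATCH --output=sgdp_%A_%a.out

# each array task picks its own line from the list and runs a single wget
line=$(sed -n "${SLURM_ARRAY_TASK_ID}p" sgdp-download-list.txt)
wget $line    # unquoted on purpose so "<url> -O <name>" splits into separate arguments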
Hi there, I'm trying to download a large number of files at once; 279 to be precise. These are large BAM files (~90GB each). The cluster where I'm working has several nodes, and fortunately I can allocate multiple instances at once. Given this situation, I would like to know whether I can use wget from a batch file (see example below) to assign each download to a separate node to carry out independently.
batch_file.txt
<https_link_1> -O DNK07.bam
<https_link_2> -O mixe0007.bam
<https_link_3> -O IHW9118.bam
.
.
In principle, this will not only speed things up but also prevent the run from failing, since the wall-time for this execution is 24h and that won't be enough to download all those files on a single machine consecutively. This is what my BASH script looks like:
#!/bin/bash
#
#SBATCH --nodes=279 --ntasks=1 --cpus-per-task=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#
#SBATCH --job-name=download
#SBATCH --output=sgdp.out
##SBATCH --array=[1-279]%279
#
#SBATCH --partition=<partition_name>
#SBATCH --qos=<qos_type>
#
#SBATCH --account=<user_account>

#NAMES=$1
#d=$(sed -n "$SLURM_ARRAY_TASK_ID"p $NAMES)

wget -i sgdp-download-list.txt
As you can see, I was thinking of using an array job (not sure whether it will work); alternatively, I thought about allocating 279 nodes, hoping SLURM would have been clever enough to send each download to a separate node (not sure about it...). If you are aware of a way to do this efficiently, any suggestion is welcome. Thanks in advance!
wget — download multiple files over multiple nodes on a cluster
The correct way to do this is to add a service definition for SNMP to firewalld. Note that SNMP uses UDP port 161, not TCP. Create /etc/firewalld/services/snmp.xml (e.g. vim /etc/firewalld/services/snmp.xml):
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>SNMP</short>
  <description>SNMP protocol</description>
  <port protocol="udp" port="161"/>
</service>
Then you should reload your firewall: firewall-cmd --reload
Then you need to add the service to your public zone: firewall-cmd --zone=public --add-service snmp --permanent
Then finally reload your firewall again: firewall-cmd --reload
How can I let SNMP connections pass through the RHEL7 firewall? When I run this command on the computer: systemctl stop firewalld all the SNMP packets pass fine. When I restart firewalld, all the packets are blocked again. I tried several configurations with the firewall running, of course, like: iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 161 -j ACCEPT or firewall-cmd --zone=public --add-port=161/tcp --permanent I don't get any error message, but SNMP still times out.
How to let SNMP connections pass through the firewall on RHEL7?
Nmap doesn't contain much in the way of output filtering options: --open will limit output to hosts containing open ports (any open ports). -v0 will prevent any output to the screen. Instead, the best way to accomplish this is to save the XML output of the scan (using the -oX or -oA output options), which will contain all the information gathered by the scan in an easy-to-parse XML format. Then you can filter that with XML parsing tools to include the information you want. One command-line XML parser is xmlstarlet. You can use this command to filter out only IP addresses for targets that have sysdescr containing the string "example": xmlstarlet sel -t -m "//port/script[@id='snmpsysdescr' and contains(@output,'example')]/../../../address[@addrtype='ipv4']" -v @addr -n output.xmlYou can also do this with Ndiff, which is a tool and Python 2 library distributed with Nmap: #!/usr/bin/env pythonimport ndiffdef sysdescr_contains (value, host): for port in host.ports: for script in filter(lambda x: x.id == u"snmp-sysdescr", port.script_results): if value in script.output: return True return Falsedef usage (): print """Look for <substring> in snmp-sysdescr output and print matching hosts. Usage: {} <filename.xml> <substring>"""if __name__ == "__main__": import sys if len(sys.argv) < 3: usage() exit(1) scan = ndiff.Scan() scan.load_from_file(sys.argv[1]) for host in filter(lambda x: sysdescr_contains(sys.argv[2], x), scan.hosts): print host.format_name()Other Nmap-output parsing libraries are available in most common programming languages.
Nmap scanning network for SNMP enabled devices: sudo nmap -sU -p 161 --script default,snmp-sysdescr 26.14.32.120/24I'm trying figure out how make that nmap return only devices that have specific entries in snmp-sysdescr object: snmp-sysdescr: "Target device name" Is that possible?
Nmap scan for SNMP enabled devices
The command snmpbulkget does allow you to specify arbitrary, non-contiguous OID requests. Getting the non-repeaters and max-repetitions right may require some experimenting. There is a good example here: http://docstore.mik.ua/orelly/networking_2ndEd/snmp/ch02_06.htm
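For example, something along these lines requests one non-repeating value (sysUpTime) plus two repeated columns (interface in/out octets) in a single GETBULK request; the host and community are placeholders, and the symbolic names assume the standard MIBs are loaded:
snmpbulkget -v2c -c public -Cn1 -Cr20 192.0.2.1 \
    sysUpTime ifInOctets ifOutOctets
Here -Cn1 marks the first OID as a non-repeater and -Cr20 asks for up to 20 repetitions of the remaining ones.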
We are trying to monitor our servers mainly with SNMP. Due to performance reasons we are changing this from single requests to snmp-bulk-requests (as allowed in SNMP v2c). In theory (at least to my knowledge) it should be possible to request several branches/values in a single bulk-request, so the number of tcp-sessions needed for this will be minimal. We are currently using CentOS 5 for this - the high-level-command in question is snmpbulkwalk - but this only seems to be able to request a single branch (i.e. subtree) at a time. Am I wrong in theory, or is there a high-level-command out there that can do what I want?
High-level command to request several branches of snmp in one tcp-session?
The com2sec security model is not mandatory anymore. In snmpd.conf it should be enough to do:
rocommunity "#randomsometinh$" 2.2.2.2
where 2.2.2.2 is the monitoring IP address allowed to connect. I often prefer to allow a single IP rather than giving access to a whole /24. So this configuration means the SNMP service will answer requests from the 2.2.2.2 address. You might also have to comment out the line that restricts the snmpd daemon to localhost (there by default for security reasons):
# agentAddress udp:127.0.0.1:161
After changing the configuration file, do:
service snmpd restart
To confirm it is listening locally:
$ netstat -lu | grep snmp
udp 0 0 *:snmp *:*
And from the allowed network/IP, to walk the entire MIB tree (assuming 2.2.2.1 is the machine being monitored):
snmpwalk -c "#randomsometinh$" -v2c 2.2.2.1
or to ask for the sysUpTime OID:
snmpwalk -c "#randomsometinh$" -v2c 2.2.2.1 1.3.6.1.2.1.1.3
snmpget -c "#randomsometinh$" -v2c 2.2.2.1 1.3.6.1.2.1.1.3.0
snmpget has to have the trailing 0 for the specific object instance/index. P.S: 2.2.2.2 is the monitoring server, and 2.2.2.1 is the snmpd server/host to be monitored.
I installed snmp on CentOS 7.2, like so: yum -y install net-snmp net-snmp-utilsI made a backup of my snmpd.conf file: cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.origthen I cleared the text, with this: echo "" > /etc/snmp/snmpd.confand added to the snmpd.conf, the following:rocommunity "#random$" monitoring_server_ipThe monitoring_server_ip is the server that that is allowed to connect to this snmpd server. Restarted snmpd /bin/systemctl restart snmpdWhen I run snmpwalk on my CentOS 7.2 server snmpwalk -v2c -c public localhost systemI then get:Timeout: No Response from localhostWhich is accurate, because there is only one IP-adres that can do that, as we have defined before. ps shows that snmp is running/usr/sbin/snmpd -LS0-6d -fThis is my first time playing with snmp and any help is greatly appreciated! output of iptables -L -n | grep udp shows this: ... Chain IN_public_allow (1 references) 94 target prot opt source destination 95 ACCEPT udp -- XX.XXX.XXX.XXX 0.0.0.0/0 udp dpt:161 ctstate NEW ...All of the destination was too 0.0.0.0/0? Netstat shows the following port: netstat -ulnp | grep 161 udp 0 0 0.0.0.0:161 0.0.0.0:* 19062/snmpdalso this: netstat -lu | grep snmp udp 0 0 0.0.0.0:snmp 0.0.0.0:*Also, in my firewall, added that only one IP-adres can access my snmp server: firewall-cmd --permanent --zone=public --add-rich-rule="rule family="ipv4" source address="XX.XXX.XXX.XX" port protocol="udp" port="161" accept"
How to properly configure snmpd?
You've set snmptrapd to accept traps with community name public only: [root@centos-Main snmp]# cat /etc/snmp/snmptrapd.conf authCommunity log,execute,net publicBut the trap from the Juniper device uses the community name VINOD instead: [root@centos-Main snmp]# tcpdump -i enp0s3 port 162 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes 16:54:14.521820 IP 192.168.0.91.58376 > centos-Main.snmptrap: C=VINOD V2Trap(217) ... ^^^^^^^You should either configure the Juniper device's trap community name to public, or add another authCommunity line to your snmptrapd.conf file: authCommunity log,execute,net VINOD
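To verify the second option without touching the Juniper config, you can resend your own test trap using the community name the device actually uses (VINOD); after adding the extra authCommunity line and restarting snmptrapd, something like this should also end up in /var/log/snmptraplog.txt:
snmptrap -v 2c -c VINOD centos-Main '' .1.3.6.1.6.3.1.1.5.3 \
    ifIndex i 2 ifAdminStatus i 1 ifOperStatus i 1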
I have a CentOS server (7.2). I am trying to configure this as a SNMP trap receiver. In my snmptrapd configuration, I am calling a very basic shell script just to identify if the trap was received: [root@centos-Main snmp]# cat /etc/snmp/snmptrapd.conf authCommunity log,execute,net public traphandle default /etc/snmp/mydummyhandler.shThis is what mydummyhandler.sh looks like: [root@centos-Main snmp]# cat mydummyhandler.sh echo "Trap Received" >> /var/log/snmptraplog.txtBut when traps are sent from the Juniper router, my snmptrapd doesn't process it and does not write anything to /var/log/snmptraplog.txt. I sent few test traps from localhost itself and it is completely working as expected. Further, to check this, I tried sending a test trap from another CentOS server like below: snmptrap -v 2c -c public centos-Main .1.3.6.1.6.3.1.1.5.3 .1.3.6.1.6.3.1.1.5.3 \ > ifIndex i 2 ifAdminStatus i 1 ifOperStatus i 1and it was processed as expected too: [root@centos-Main snmp]# cat /var/log/snmptraplog.txt Trap ReceivedI ran a tcpdump on port 162 of trap receiver server and found that the traps sent from Juniper switch is reaching the server. But for some reason, it is silently ignoring the incoming traps from the Juniper device without further processing. Here is what I see when traps are sent from Juniper device: [root@centos-Main snmp]# tcpdump -i enp0s3 port 162 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes 16:54:14.521820 IP 192.168.0.91.58376 > centos-Main.snmptrap: C=VINOD V2Trap(217) system.sysUpTime.0=1768202 S:1.1.4.1.0=E:2636.4.13.0.1 E:2636.3.37.1.1.0="'Event-Trap'" E:2636.3.37.1.2.1.2.1="'event'" E:2636.3.37.1.2.1.3.1="''" E:2636.3.37.1.2.1.2.2="'message'" E:2636.3.37.1.2.1.3.1="''" S:1.1.4.3.0=E:2636.1.1.1.2.1Also, for comparison I am pasting the tcpdump output of successful trap reception from another server: [root@centos-Main snmp]# tcpdump -i enp0s3 port 162 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes 16:53:23.272673 IP 192.168.0.105.49182 > centos-Main.snmptrap: V2Trap(87) S:1.1.4.1.0=S:1.1.5.3 interfaces.ifTable.ifEntry.ifIndex=2 interfaces.ifTable.ifEntry.ifAdminStatus=1 interfaces.ifTable.ifEntry.ifOperStatus=1
How to configure snmptrapd to process incoming traps from a Juniper device?
You can perhaps use SNMP, provided SNMP is enabled/allowed for DHCP service on Windows server. Using SNMP queries, one can build a statistics of the lease information from time to time remotely from the DHCP service. $snmp_address = "1.3.6.1.4.1.311.1.3.2.1.1.1"; $getsubnet = "snmpgetnext -v2c -c public -Oqv win_dhcp_server_ip $snmp_address |";better description is at https://lists.isc.org/pipermail/bind-users/2004-November/054007.html
I have a Windows 2003 server and need to poll the DHCP lease information from it with a perl script that is running on a Ubuntu server. Then I need to analyze & store the information in a mysql database. Is there a way to query the leases from a perl script? I can figure out how to process the info after I get it. Thanks.
Query DHCP server leases from Perl script
The SNMP daemon upon installation in CentOS is configured by default to answer to queries of a restricted MIB tree view using the "public" community for security reasons. As configured by default, the default "public" MIB (sub)tree allowed views are only .1.3.6.1.2.1.1 and .1.3.6.1.2.1.25.1.1 ; if you look closely the IF-MIB address space is .1.3.6.1.2.1.2. So querying objects on that MIB address space is not allowed by default. It also explains why that snmpwalk command of yours shows only a very restricted view. Consequently, to get the SNMP daemon/service answering to your queries, you have firstly to configure a new view (and for security reasons better also a new community) on the configuration file /etc/snmp/snmpd.conf. We shall then configure a "private" community for security reasons, and widen the MIB tree space which can be queried. As such, add to /etc/snmp/snmpd.conf rocommunity private 127.0.0.1 .1Where 127.0.0.1 is the IP address which can make queries, and .1 the whole MIB tree. rocommunity also certifies you can only make read queries, which are more secure. After configuring the snmpd.conf file, you have to restart the SNMP service, as in: sudo service snmpd restartor sudo systemctl restart snmpdNow for the query. If you are not asking for a MIB leaf node, you cannot use snmpget. You have to use snmpwalk for it to walk the MIB tree as in: $ snmpwalk -v 2c -c private 127.0.0.1 .1.3.6.1.2.1.2.2.1.2 IF-MIB::ifDescr.1 = STRING: lo IF-MIB::ifDescr.2 = STRING: eth0 IF-MIB::ifDescr.3 = STRING: eth1 IF-MIB::ifDescr.4 = STRING: eth2On the other hand, if you need to query a leaf node of the MIB tree, for instance, your second interface in the system, you do: $ snmpget -v 2c -c private 127.0.0.1 .1.3.6.1.2.1.2.2.1.2.2 IF-MIB::ifDescr.2 = STRING: eth0PS Obviously in production systems, you call your community name something other than private. PS2. The fact that you install a MIB file, is that you are installing dictionaries that translate numbers to readable text for humans and scripts/network monitoring software alike. Not having a MIB installed does not prevent from querying a specific MIB subtree in numeric form if the security context for accessing that SNMP community allows it
I installed a snmpd into a CentOS 7 minimal installation for system parameters search, for instance: snmpget -v 2c -c public 127.0.0.1 .1.3.6.1.2.1.2.2.1.2for the above command I get the following result: IF-MIB::ifDescr = No Such Object available on this agent at this OIDwhen i execute: snmpwalk -v 2c -c public 127.0.0.1to check if the IF-MIB is loaded by snmpd, i get the following result: SNMPv2-MIB::sysDescr.0 = STRING: Linux vm_test.whatever.com 3.10.0-862.6.3.el7.x86_64 #1 SMP Tue Jun 26 16:32:21 UTC 2018 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (90641) 0:15:06.41 SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf) SNMPv2-MIB::sysName.0 = STRING: vm_test.whatever.com SNMPv2-MIB::sysLocation.0 = STRING: Unknown (edit /etc/snmp/snmpd.conf) SNMPv2-MIB::sysORLastChange.0 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORID.1 = OID: SNMP-MPD-MIB::snmpMPDCompliance SNMPv2-MIB::sysORID.2 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance SNMPv2-MIB::sysORID.3 = OID: SNMP-FRAMEWORK- MIB::snmpFrameworkMIBCompliance SNMPv2-MIB::sysORID.4 = OID: SNMPv2-MIB::snmpMIB SNMPv2-MIB::sysORID.5 = OID: TCP-MIB::tcpMIB SNMPv2-MIB::sysORID.6 = OID: IP-MIB::ip SNMPv2-MIB::sysORID.7 = OID: UDP-MIB::udpMIB SNMPv2-MIB::sysORID.8 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup SNMPv2-MIB::sysORID.9 = OID: SNMP-NOTIFICATION-MIB::snmpNotifyFullCompliance SNMPv2-MIB::sysORID.10 = OID: NOTIFICATION-LOG-MIB::notificationLogMIB SNMPv2-MIB::sysORDescr.1 = STRING: The MIB for Message Processing and Dispatching. SNMPv2-MIB::sysORDescr.2 = STRING: The management information definitions for the SNMP User-based Security Model. SNMPv2-MIB::sysORDescr.3 = STRING: The SNMP Management Architecture MIB. SNMPv2-MIB::sysORDescr.4 = STRING: The MIB module for SNMPv2 entities SNMPv2-MIB::sysORDescr.5 = STRING: The MIB module for managing TCP implementations SNMPv2-MIB::sysORDescr.6 = STRING: The MIB module for managing IP and ICMP implementations SNMPv2-MIB::sysORDescr.7 = STRING: The MIB module for managing UDP implementations SNMPv2-MIB::sysORDescr.8 = STRING: View-based Access Control Model for SNMP. SNMPv2-MIB::sysORDescr.9 = STRING: The MIB modules for managing SNMP Notification, plus filtering. SNMPv2-MIB::sysORDescr.10 = STRING: The MIB module for logging SNMP Notifications. SNMPv2-MIB::sysORUpTime.1 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.2 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.3 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.4 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.5 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.6 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.7 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.8 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.9 = Timeticks: (4) 0:00:00.04 SNMPv2-MIB::sysORUpTime.10 = Timeticks: (4) 0:00:00.04 HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (872972) 2:25:29.72 HOST-RESOURCES-MIB::hrSystemUptime.0 = No more variables left in this MIB View (It is past the end of the MIB tree)the output tells me that IF-MIB is not being checked, but if execute the command: snmptranslate -Dinit_mib .1.3.2>&1 | grep MIBDIRto check the mibdirs (directories) and MIB's found (Seen MIBS) i get the following result: registered debug token init_mib, 1 init_mib: Seen MIBDIRS: Looking in '/root/.snmp/mibs:/usr/share/snmp/mibs' for mib dirs ... 
init_mib: Seen MIBS: Looking in ':HOST-RESOURCES-MIB:HOST-RESOURCES- TYPES:UCD-DISKIO-MIB:TCP-MIB:UDP-MIB:MTA-MIB:NETWORK-SERVICES-MIB:SCTP- MIB:RMON-MIB:EtherLike-MIB:LM-SENSORS-MIB:SNMPv2-MIB:IF-MIB:IP- MIB:NOTIFICATION-LOG-MIB:DISMAN-EVENT-MIB:DISMAN-SCHEDULE-MIB:UCD-SNMP- MIB:UCD-DEMO-MIB:SNMP-TARGET-MIB:NET-SNMP-AGENT-MIB:SNMP-MPD-MIB:SNMP- USER-BASED-SM-MIB:SNMP-FRAMEWORK-MIB:SNMP-VIEW-BASED-ACM-MIB:SNMP- COMMUNITY-MIB:IPV6-ICMP-MIB:IPV6-MIB:IPV6-TCP-MIB:IPV6-UDP-MIB:IP-FORWARD- MIB:NET-SNMP-PASS-MIB:NET-SNMP-EXTEND-MIB:UCD-DLMOD-MIB:SNMP-NOTIFICATION- MIB:SNMPv2-TM:NET-SNMP-VACM-MIB' for mib files ... init_mib: Seen PREFIX: Looking in '.1.3.6.1.2.1' for prefix ..and if you look carefully, the IF-MIB is there tagged as Seen Mibs. Why is it not showing up in the snmpwalk command? and why does the OID related to the IF-MIB doesn't exist in this agent? is this something permission related? OS related?
SNMP - No Such Object available on this agent at this OID
The answer is to set the engineID in snmpd.conf:
engineID a
To test:
snmpget -v 2c -c public localhost .1.3.6.1.6.3.10.2.1.1.0
SNMP-FRAMEWORK-MIB::snmpEngineID.0 = Hex-STRING: 80 00 1F 88 04 61
Every engineID generated by net-snmp begins with the 80 00 1F 88 prefix. It can't be changed with snmpset; it has to be set in the config file.
I'm using NET-SNMP 5.7.3 on FreeBSD 12.1. I want to change the engineID with the snmpset command. snmpd.conf:
rwcommunity private
I enter this command:
snmpset -v 2c -c private localhost e x 800000020109840301
The error is:
Error in packet. Reason: notWritable (That object does not support modification) Failed object: SNMPv2-SMI::enterprises
I used this command too:
snmpset -v 2c -c private localhost 1.3.6.1.6.3.10.2.1.1 x 800000020109840301
but this gives the same error. How can I solve the problem?
How can I change the net-snmp engineID
Yes, from doing a network capture, it would seem so: SNMP 84 get-next-request 1.3.6.1.2.1Which is: $ MIBS=+all snmptranslate 1.3.6.1.2.1 SNMPv2-SMI::mib-2 $ MIBS=+all snmptranslate -Of 1.3.6.1.2.1 .iso.org.dod.internet.mgmt.mib-2Confirmed by reading the source: oid objid_mib[] = { 1, 3, 6, 1, 2, 1 }; [...] if (arg < argc) { [...] } else { /* * use default value */ memmove(root, objid_mib, sizeof(objid_mib)); rootlen = sizeof(objid_mib) / sizeof(oid); }And the man page:If no OID argument is present, snmpwalk will search the subtree rooted at SNMPv2-SMI::mib-2.
Where in the SNMP OID tree does snmpwalk start if no OID is specified, i.e. snmpwalk is started like snmpwalk -v 2c -c public host? From .1.3.6.1.2.1?
Where does snmpwalk start if no OID is specified?
Could you delete the following rules: -A OUTPUT -p udp -s 0/0 --sport 1024:65535 -d 0/0 --dport 161:162 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p udp -s 0/0 --sport 161:162 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPTAnd substitute this one, just below -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT -A RH-Firewall-1-INPUT -p udp --sport 161:162 -j ACCEPTAs a start, let's use this rule and validate whether it works. Finally, I am curious... is this related to PCI, HIPAA, or US Govt requirements? :-)
I have a bunch of windows servers configured with the windows SNMP agent. Each server has four IP addresses and SNMP listens on all of them. There is something very odd with my monitoring server (which is Centos 5.5 32 bit with net-snmp 5.3.2.2). If I have iptables turned off then I have no problems performing snmp queries on any IP address on any of these servers. If I turn on IPtables then I am only able to query on what appears to be just one specific IP address on each of these servers. The snmpget just times out with Timeout: No Response from x.x.x.x. There's no pattern to this behaviour with regards to which IP addresses I'm allowed to connect to. But it's one and only one IP address per machine. This is my iptables config: *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :RH-Firewall-1-INPUT - [0:0] -A INPUT -j RH-Firewall-1-INPUT -A FORWARD -j RH-Firewall-1-INPUT -A RH-Firewall-1-INPUT -i lo -j ACCEPT -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT-A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT-A RH-Firewall-1-INPUT -s 172.16.3.0/24 -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A RH-Firewall-1-INPUT -i lo -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT-A RH-Firewall-1-INPUT -s 172.16.3.0/24 -m state --state NEW -m tcp -p tcp --dport 5668 -j ACCEPT -A RH-Firewall-1-INPUT -i lo -m state --state NEW -m tcp -p tcp --dport 5668 -j ACCEPT-A RH-Firewall-1-INPUT -s 172.16.3.0/24 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT -A RH-Firewall-1-INPUT -i lo -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT-A RH-Firewall-1-INPUT -s 172.16.3.0/24 -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT -A RH-Firewall-1-INPUT -i lo -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT-A RH-Firewall-1-INPUT -s 172.16.3.0/24 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT -A RH-Firewall-1-INPUT -i lo -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited COMMITI have no trouble connecting outbound to any other server, for example using wget to grab RPM's. 
I did try adding this as a last resort, but no joy either: -A OUTPUT -p udp -s 0/0 --sport 1024:65535 -d 0/0 --dport 161:162 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p udp -s 0/0 --sport 161:162 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPTThis is the output from iptables -L:Chain INPUT (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhere ACCEPT udp -- anywhere anywhere udp spts:snmp:snmptrap dpts:1024:65535 state ESTABLISHEDChain FORWARD (policy ACCEPT) target prot opt source destination RH-Firewall-1-INPUT all -- anywhere anywhereChain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp spts:1024:65535 dpts:snmp:snmptrap state NEW,ESTABLISHEDChain RH-Firewall-1-INPUT (2 references) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere icmp any ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- 172.16.3.0/24 anywhere state NEW tcp dpt:ssh ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh ACCEPT tcp -- 172.16.3.0/24 anywhere state NEW tcp dpt:5668 ACCEPT udp -- 172.16.3.0/24 anywhere state NEW udp dpt:5668 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:5668 ACCEPT udp -- anywhere anywhere state NEW udp dpt:5668 ACCEPT tcp -- 172.16.3.0/24 anywhere state NEW tcp dpt:http ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http ACCEPT tcp -- 172.16.3.0/24 anywhere state NEW tcp dpt:https ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https ACCEPT tcp -- 172.16.3.0/24 anywhere state NEW tcp dpt:mysql ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:mysql REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Unable to make outbound SNMP connections when IPTables is enabled
The list you're looking for is most probably at http://www.oid-info.com/ Yes, this is a standard: OIDs identify objects in the MIB, and the structure of the global OID tree was defined in RFC 1155. It has since been extended; the SNMP protocol itself is defined in RFC 1157, and the standard MIB-II object definitions are in RFC 1213.
I was looking at this link here: http://www.debianadmin.com/linux-snmp-oids-for-cpumemory-and-disk-statistics.html and noticed that the OIDs are the same ones I see for the same stats for our appliance. Is this some kind of standard with SNMP maybe an RFC or something? Does anyone know where I can find the list that tells me what each OID describes?
Where do I find the OID descriptions for SNMPv2 in Linux?
I think snmpget also supports that. There are some common features described on the snmpcmd(1M) man page that they don't bother repeating on all the individual command pages. Agent Specification The agent specification (see SYNOPSIS) takes the form: [transport-specifier:]transport-address At its simplest, the agent specification consists of a host- name or an IPv4 address in the standard, "dotted quad" nota- tion. In this case, communication will be attempted using UDP/IPv4 to port 161 of the given host. Otherwise, the transport-address part of the specification is parsed according to the following table: <transport-specifier> <transport-address> format udp hostname[:port] Note that transport-specifier strings are case-insensitive so that, for example, "tcp" and "TCP" are equivalent. Here are some examples, along with interpretations: myhost:161 Perform query using UDP/IPv4 datagrams sent to myhost on port 161. The :161 is redundant here because that is the default SNMP port. udp:myhost Identical to the previous specification. The udp: is redun- dant here because UDP/IPv4 is the default transport.
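So, assuming the agent has been started with snmpd tcp:1161 as in the question, a TCP query from the client side would look something like this (the host name and community are placeholders; the OID is sysUpTime.0 in numeric form):
snmpget -v 2c -c public tcp:myhost:1161 .1.3.6.1.2.1.1.3.0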
Is it possible to get Net-SNMP running over TCP instead of UDP? The daemon can be told at the terminal to listen for TCP connections with: snmpd tcp:1161 However, there seem to be no flags for snmpget to use TCP?
Net-SNMP over TCP?
You have to specify the udp6 transport. Sending: put trap2sink udp6:[::1]:162 in snmpd.conf; this will send traps to the IPv6 localhost. Receiving: snmptrapd udp6:162
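As a quick end-to-end check of the IPv6-only path, you could send a test notification yourself over the same transport (a rough sketch; the coldStart OID is just an arbitrary example):
# receiver, kept in the foreground and logging to stderr
snmptrapd -f -Le udp6:162
# sender, from another shell: a v2c test trap to the IPv6 loopback
snmptrap -v 2c -c public udp6:[::1]:162 '' .1.3.6.1.6.3.1.1.5.1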
I have an IPv6-only (not dual-stack) system. I am wondering how to: send snmptrap from this system, and configure snmpd so that it can be accessed. I mean, is SNMP ready to use in an IPv6-only environment?
snmp/snmptrap support of ipv6
The default Fedora config is designed to only let you see the system group, for security purposes. You need to replace the config with one that lets you access everything on the device. Running snmpconf -g basic_setup can help you get started. Or, you can replace the file with the following snippet (based on the above, I believe you aren't using SNMPv3 and only using the insecure SNMPv1 or SNMPv2c, so this is designed for that):
rocommunity ChooseACommunity
Make sure you set ChooseACommunity to something uniquely yours.
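After editing /etc/snmp/snmpd.conf, restart the daemon and walk something outside the system group to confirm the wider view works (the community name is whatever you chose above):
systemctl restart snmpd     # or: service snmpd restart on older releases
snmpwalk -v 2c -c ChooseACommunity localhost .1.3.6.1.2.1.2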
I installed the snmpd package on Fedora... My config contains:
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.25.1.1
view all included .1 80
####
# Finally, grant the group read-only access to the systemview view.
# group context sec.model sec.level prefix read write notif
access notConfigGroup "" any noauth exact systemview none none
access notConfigGroup "" any noauth exact all none none
I could only get system information, using snmpwalk or whatever. I tried adding the RFC 1213 1.3.6.1.2.11 snmp node with no luck. What am I doing wrong?
SNMPD only system group available!
For restoring the configuration files you can use: sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall packagenameSo in this case the command (for snmpd) would be: sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall snmpdCredits to this site
I wanted to know what should I do to restore the configuration files if I've modified or accidentally deleted a file. In my case, I'm talking about /etc/snmp/snmpd.conf, what command should I use to reinstall it?
How to restore configurations files ? (SNMP)
As far as I know, uname will display the generic name of the operating system. My roommate has the latest (I think) version of OSX, and it displays Darwin there. If you'd like more output, uname -a will give you the kernel version, OS version, and a bunch of other information, in addition to the generic name. That said, this doesn't always properly fingerprint an OS. My CentOS VPS displays Linux, or GNU/Linux, depending on what I run. I'm also just a college student who does this kind of stuff for fun on my development server, so there may be a better answer to this.
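As a rough sketch of how you might use this in your script: branch on uname -s and install or locate the SNMP tools per platform. The package names, the brew formula, and the target switch/community below are placeholders, not something I know about your environment:
case "$(uname -s)" in
    Linux)
        # e.g. Debian/Ubuntu-style install of the net-snmp command line tools
        command -v snmpbulkwalk >/dev/null || sudo apt-get install -y snmp
        ;;
    Darwin)
        # e.g. Homebrew on OSX
        command -v snmpbulkwalk >/dev/null || brew install net-snmp
        ;;
    *)
        echo "Unsupported OS: $(uname -s)" >&2
        exit 1
        ;;
esac
# then run the same queries on either platform (numeric OID = interfaces subtree)
snmpbulkwalk -v 2c -c public switch.example.com .1.3.6.1.2.1.2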
I want to write a bash script to get some information of Switches through snmpbulkwalk. I would like to use the same script in Linux and OSX environments, so I want to know if there is a way to do a compatible version that identifies the current OS, get the needed SNMP packages for each one and run a bunch of commands. Thanks in advance.
How to find out if the script is running in Linux or OSX
Looking at the post-inst scripts of snmpd, it seems the default Debian user and group of the snmpd package, changed from snmp to Debian-snmp. To correct it it was necessary to edit /etc/default/snmpd and change the following line from: SNMPDOPTS='-Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid'to: SNMPDOPTS='-Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux -p /var/run/snmpd.pid'After the change, the apt-get upgrade finished successfully and the snmpd service went up. Relevant files: /var/lib/dpkg/info/snmpd.postinst in Stretch: SNMP_USER="Debian-snmp"/var/lib/dpkg/info/snmpd.postinst in Jessie: adduser --quiet --system --group --home $SNMPDIR \ --shell /usr/sbin/nologin snmp
During the latest security upgrades, snmpd was upgraded to 5.7.3. Before finishing the procedure, apt-get upgrade starting giving the error: Starting SNMP services::Bad user id: snmpsnmpd is also not running. What is happening?
Debian stretch: upgrade of `snmpd` giving an error
The package monitoring-plugins-basic provides two plugins: check_disk to check disk usage, and check_load to check the CPU load:
apt install monitoring-plugins-basic
See:
/usr/lib/nagios/plugins/check_disk --help
/usr/lib/nagios/plugins/check_load --help
See also the Nagios documentation: Disk Space Checks, Load Checks.
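For example (the thresholds here are arbitrary, pick your own): warn when the root filesystem drops below 20% free and go critical below 10%, and warn/critical on the 1/5/15-minute load averages:
/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6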
I have already installed nagios-plugins-contrib:
sudo apt update
sudo apt install nagios-plugins-contrib
However, there is no CPU/HDD monitoring plugin among them. I am using a Debian VM, in case that matters. How can I get such a plugin (if it exists, of course)?
Where can I get Nagios plugin for cpu/hdd monitoring?
It can be very tedious to chain it back manually. There are tools such as snmptranslate to do this sort of thing for you. Try snmptranslate -M /path/to/mibs -m ALL -Pu -Tso|grep -B1 sysUpTimeInstance. To see the full details for sysUpTime, use snmptranslate -Td -OS .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime This will show you that it is measured in TimeTicks. But to answer your question, MIB files are written in ASN.1 where each object can build on all previous objects. Among other things, like descriptions and object types, etc, it builds OIDs, aka Object IDenifiers which map OIDs to names such as .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.sysUpTimeInstance For this entry sysUpTimeInstance OBJECT IDENTIFIER ::= { sysUpTime 0 } roughly translates to "The OID for sysUpTimeInstance is equal to the OID of sysUpTime plus '.0' and the MIB is equal to the MIB of sysUpTime plus '.sysUpTimeInstance' To see what the OID and the rest of the MIB for sysUpTime is, you have to look earlier in the file. Keeping in mind that it may have been imported from another file. In this case, it was, so if you look in IMPORTS section you will see that sysUpTime was imported from SNMPv2-MIB. If you then check that file, you will see sysUpTime defined and in that definition you will find ::= { system 3 }. So far, we have .3.0 and system.sysUpTime.sysUpTimeInstance, which is just the last part of the OID and full MIB name. We could then trace system back to it's definition, etc, etc. By following this chain back through the files you will eventually get the whole OID and full MIB. Of course, you could just use snmptranslate, as shown in the first paragraph, to do all of this for you. It will show you the MIB and OID. For most of us, that is usually as far as we need to go. For more info see: (product pamplet, but also contains a good MIB summary) https://scadahacker.com/library/Documents/ICS_Protocols/Demystifying%20the%20SNMP%20MIB.pdf https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
As I understand, SNMP Management Information Base databases are used by Network Management Stations to translate data from SNMP agents into understandable form. For example in case of sysUpTimeInstance: $ snmpwalk -v 2c -c public 10.10.10.1 sysUpTimeInstance DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (28267576) 3 days, 6:31:15.76 $ ..I guess it is thanks to MIB that 28267576 is converted into 3 days, 6 hours, 31 minutes, 15.76 seconds? Now I searched for the correct MIB file to see this translation: $ grep -ir sysUpTimeInstance /usr/local/share/smi/mibs/ /usr/local/share/smi/mibs/ietf/DISMAN-EVENT-MIB:sysUpTimeInstance OBJECT IDENTIFIER ::= { sysUpTime 0 } /usr/local/share/smi/mibs/ietf/DISMAN-EVENT-MIB: DEFVAL { sysUpTimeInstance } /usr/local/share/smi/mibs/ietf/DISMAN-EXPRESSION-MIB:sysUpTimeInstance OBJECT IDENTIFIER ::= { sysUpTime 0 } /usr/local/share/smi/mibs/ietf/DISMAN-EXPRESSION-MIB: DEFVAL { sysUpTimeInstance } $ ..and looks like it is defined in /usr/local/share/smi/mibs/ietf/DISMAN-EVENT-MIB file. How should I understand this sysUpTimeInstance OBJECT IDENTIFIER ::= { sysUpTime 0 }?
How to understand SNMP MIB?
OIDs are object identifiers. In numerical form, they are represented as strings of numbers separated by dots. They also have a symbolic form, where the numbers are mapped to keywords according to certain definitions. The OID for iso is just 1, since it identifies the first main branch off the root of the OID tree structure. You can start with any OID in symbolic form and then add numeric suffixes to it, if you don't have the corresponding symbolic information available. Those definitions are stored in a standardized format in MIB files. Your system apparently does not have the standard MIB files loaded, or your snmpwalk just isn't configured to use them. Otherwise, your snmpwalk output would look more like this: SNMPv2-MIB::sysDescr.0 = STRING: "Linux ik1-325-22819 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64" SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-TC::linux DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (45994) 0:07:39.94 SNMPv2-MIB::sysContact.0 = STRING: "Me <[emailprotected]>" SNMPv2-MIB::sysName.0 = STRING: "ik1-325-22819" SNMPv2-MIB::sysLocation.0 = STRING: "Sitting on the Dock of the Bay" SNMPv2-MIB::sysServices.0 = INTEGER: 72 SNMPv2-MIB::sysORLastChange.0 = Timeticks: (1) 0:00:00.01... and so on. A bit more informative that way, wouldn't you think? A MIB file also defines other things besides names for OIDs: it also contains descriptions of the meanings of various values, and defines groups of OIDs that form information tables. At OID iso.3.6.1.2.1.31.1.1 begins a table of network interface statistics, known as IF-MIB::ifXTable. The OIDs in the form iso.3.6.1.2.1.31.1.1.1.1.N form the first column of the table, identifying the name of each interface. The number of bytes transferred inwards through this interface is located in another column of the table, with OIDs of the form iso.3.6.1.2.1.31.1.1.1.6.N respectively, where N corresponds to the name of the interface. The number of bytes transferred outwards throught the interface are in yet another column, with OIDs of the form iso.3.6.1.2.1.31.1.1.1.10.N. From your commands, I can see that the SNMP table row number (N) of your ens3 interface is 2. Remember this; it will be used later on. With the MIBs loaded and in use, you will be able to use the snmptable command to view such tables all at once in a meaningful way. For example, to view this table of interface statistics, you could run: snmptable -c public -v 2c localhost IF-MIB::ifXTableand have it all displayed in a table form. (This is a rather wide table, so if you want to try this, stretch your terminal window very wide first.) The snmptable command will only work if the MIB file defining the respective table is available.On Debian, and presumably also on Ubuntu, you can sudo apt install snmp-mibs-downloader which will be able to download an extensive set of standard MIBs from the authoritative standard sources on the internet, apply some erratas to them, and store them to /var/lib/snmp/mibs/ for you. It might download them automatically on installing the package, or you might need to use the sudo download-mibs command. To start using the downloaded MIBs, you may need to comment out one line in /etc/snmp/snmp.conf: # As the snmp packages come without MIB files due to license reasons, loading # of MIBs is disabled by default. If you added the MIBs you can reenable # loading them by commenting out the following line. 
mibs :Just comment out the last line or replace it with mibs ALL, or create ~/.snmp/snmp.conf with a single line mibs ALL and you'll get to use the downloaded MIBs automatically. There are also vendor-specific MIBs, that can describe vendor-specific OIDs with further information specific to a device of particular make and model. Typically, you can get these from the vendor's support download site, often as part of some vendor-specific management package.With check_snmp, you would need to specify the OID using the -o (lower case) option. For example, to monitor the amount of data transferred inbound on your ens3 network interface, you should run: ./check_snmp -H localhost -C public -P 2c -O -o iso.3.6.1.2.1.31.1.1.1.6.2or if you have the MIBs installed: ./check_snmp -H localhost -C public -P 2c -O -o IF-MIB::ifHCInOctets.2And for data transferred outbound: ./check_snmp -H localhost -C public -P 2c -O -o iso.3.6.1.2.1.31.1.1.1.10.2or with the MIBs: ./check_snmp -H localhost -C public -P 2c -O -o IF-MIB::ifHCOutOctets.2
I am setting snmpd and try to check by check_snmp. snmpwalk -c public -v 2c localhost iso.3.6.1.2.1.1.1.0 = STRING: "Linux ik1-325-22819 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64" iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.8072.3.2.10 iso.3.6.1.2.1.1.3.0 = Timeticks: (45994) 0:07:39.94 iso.3.6.1.2.1.1.4.0 = STRING: "Me <[emailprotected]>" iso.3.6.1.2.1.1.5.0 = STRING: "ik1-325-22819" iso.3.6.1.2.1.1.6.0 = STRING: "Sitting on the Dock of the Bay" iso.3.6.1.2.1.1.7.0 = INTEGER: 72 iso.3.6.1.2.1.1.8.0 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.2.1 = OID: iso.3.6.1.6.3.11.3.1.1 iso.3.6.1.2.1.1.9.1.2.2 = OID: iso.3.6.1.6.3.15.2.1.1 iso.3.6.1.2.1.1.9.1.2.3 = OID: iso.3.6.1.6.3.10.3.1.1 iso.3.6.1.2.1.1.9.1.2.4 = OID: iso.3.6.1.6.3.1 iso.3.6.1.2.1.1.9.1.2.5 = OID: iso.3.6.1.6.3.16.2.2.1 iso.3.6.1.2.1.1.9.1.2.6 = OID: iso.3.6.1.2.1.49 iso.3.6.1.2.1.1.9.1.2.7 = OID: iso.3.6.1.2.1.4 iso.3.6.1.2.1.1.9.1.2.8 = OID: iso.3.6.1.2.1.50 iso.3.6.1.2.1.1.9.1.2.9 = OID: iso.3.6.1.6.3.13.3.1.3 iso.3.6.1.2.1.1.9.1.2.10 = OID: iso.3.6.1.2.1.92 iso.3.6.1.2.1.1.9.1.3.1 = STRING: "The MIB for Message Processing and Dispatching." iso.3.6.1.2.1.1.9.1.3.2 = STRING: "The management information definitions for the SNMP User-based Security Model." iso.3.6.1.2.1.1.9.1.3.3 = STRING: "The SNMP Management Architecture MIB." iso.3.6.1.2.1.1.9.1.3.4 = STRING: "The MIB module for SNMPv2 entities" iso.3.6.1.2.1.1.9.1.3.5 = STRING: "View-based Access Control Model for SNMP." iso.3.6.1.2.1.1.9.1.3.6 = STRING: "The MIB module for managing TCP implementations" iso.3.6.1.2.1.1.9.1.3.7 = STRING: "The MIB module for managing IP and ICMP implementations" iso.3.6.1.2.1.1.9.1.3.8 = STRING: "The MIB module for managing UDP implementations" iso.3.6.1.2.1.1.9.1.3.9 = STRING: "The MIB modules for managing SNMP Notification, plus filtering." iso.3.6.1.2.1.1.9.1.3.10 = STRING: "The MIB module for logging SNMP Notifications." iso.3.6.1.2.1.1.9.1.4.1 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.2 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.3 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.4 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.5 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.6 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.7 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.8 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.9 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.1.9.1.4.10 = Timeticks: (1) 0:00:00.01 iso.3.6.1.2.1.2.1.0 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.1.1 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.1.2 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.1.3 = INTEGER: 3 iso.3.6.1.2.1.2.2.1.1.4 = INTEGER: 4 iso.3.6.1.2.1.2.2.1.1.5 = INTEGER: 5 iso.3.6.1.2.1.2.2.1.1.6 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.2.1 = STRING: "lo" iso.3.6.1.2.1.2.2.1.2.2 = STRING: "Red Hat, Inc. Device 0001" iso.3.6.1.2.1.2.2.1.2.3 = STRING: "Red Hat, Inc. Device 0001" iso.3.6.1.2.1.2.2.1.2.4 = STRING: "Red Hat, Inc. 
Device 0001" iso.3.6.1.2.1.2.2.1.2.5 = STRING: "docker0" iso.3.6.1.2.1.2.2.1.2.6 = STRING: "br-6ec47ade44ad" iso.3.6.1.2.1.2.2.1.3.1 = INTEGER: 24 iso.3.6.1.2.1.2.2.1.3.2 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.3.3 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.3.4 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.3.5 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.3.6 = INTEGER: 6 iso.3.6.1.2.1.2.2.1.4.1 = INTEGER: 65536 iso.3.6.1.2.1.2.2.1.4.2 = INTEGER: 1500 iso.3.6.1.2.1.2.2.1.4.3 = INTEGER: 1500 iso.3.6.1.2.1.2.2.1.4.4 = INTEGER: 1500 iso.3.6.1.2.1.2.2.1.4.5 = INTEGER: 1500 iso.3.6.1.2.1.2.2.1.4.6 = INTEGER: 1500 iso.3.6.1.2.1.2.2.1.5.1 = Gauge32: 10000000 iso.3.6.1.2.1.2.2.1.5.2 = Gauge32: 0 iso.3.6.1.2.1.2.2.1.5.3 = Gauge32: 0 iso.3.6.1.2.1.2.2.1.5.4 = Gauge32: 0 iso.3.6.1.2.1.2.2.1.5.5 = Gauge32: 0 iso.3.6.1.2.1.2.2.1.5.6 = Gauge32: 0 iso.3.6.1.2.1.2.2.1.6.1 = "" iso.3.6.1.2.1.2.2.1.6.2 = Hex-STRING: 9C A3 BA 01 DF C3 iso.3.6.1.2.1.2.2.1.6.3 = Hex-STRING: 9C A3 BA 03 66 63 iso.3.6.1.2.1.2.2.1.6.4 = Hex-STRING: 9C A3 BA 04 ED 03 iso.3.6.1.2.1.2.2.1.6.5 = Hex-STRING: 02 42 D7 D1 60 44 iso.3.6.1.2.1.2.2.1.6.6 = Hex-STRING: 02 42 D8 24 17 95 iso.3.6.1.2.1.2.2.1.7.1 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.7.2 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.7.3 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.7.4 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.7.5 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.7.6 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.8.1 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.8.2 = INTEGER: 1 iso.3.6.1.2.1.2.2.1.8.3 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.8.4 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.8.5 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.8.6 = INTEGER: 2 iso.3.6.1.2.1.2.2.1.9.1 = Timeticks: (0) 0:00:00.00 iso.3.6.1.2.1.2.2.1.9.2 = Timeticks: (0) 0:00:00.00 iso.3.6.1.2.1.2.2.1.9.3 = Timeticks: (0) 0:00:00.00 iso.3.6.1.2.1.2.2.1.9.4 = Timeticks: (0) 0:00:00.00 iso.3.6.1.2.1.2.2.1.9.5 = Timeticks: (0) 0:00:00.00 iso.3.6.1.2.1.2.2.1.9.6 = Timeticks: (0) 0:00:00.00 iso.3.6.1.2.1.2.2.1.10.1 = Counter32: 3214378817 iso.3.6.1.2.1.2.2.1.10.2 = Counter32: 1235162258 iso.3.6.1.2.1.2.2.1.10.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.10.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.10.5 = Counter32: 0 iso.3.6.1.2.1.2.2.1.10.6 = Counter32: 286316 iso.3.6.1.2.1.2.2.1.11.1 = Counter32: 135588065 iso.3.6.1.2.1.2.2.1.11.2 = Counter32: 60939410 iso.3.6.1.2.1.2.2.1.11.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.11.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.11.5 = Counter32: 0 iso.3.6.1.2.1.2.2.1.11.6 = Counter32: 4466 iso.3.6.1.2.1.2.2.1.12.1 = Counter32: 0 iso.3.6.1.2.1.2.2.1.12.2 = Counter32: 0 iso.3.6.1.2.1.2.2.1.12.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.12.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.12.5 = Counter32: 0 iso.3.6.1.2.1.2.2.1.12.6 = Counter32: 0 iso.3.6.1.2.1.2.2.1.13.1 = Counter32: 0 iso.3.6.1.2.1.2.2.1.13.2 = Counter32: 0 iso.3.6.1.2.1.2.2.1.13.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.13.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.13.5 = Counter32: 0 iso.3.6.1.2.1.2.2.1.13.6 = Counter32: 0 iso.3.6.1.2.1.2.2.1.14.1 = Counter32: 0 iso.3.6.1.2.1.2.2.1.14.2 = Counter32: 0 iso.3.6.1.2.1.2.2.1.14.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.14.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.14.5 = Counter32: 0 iso.3.6.1.2.1.2.2.1.14.6 = Counter32: 0 iso.3.6.1.2.1.2.2.1.15.1 = Counter32: 0 iso.3.6.1.2.1.2.2.1.15.2 = Counter32: 0 iso.3.6.1.2.1.2.2.1.15.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.15.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.15.5 = Counter32: 0 iso.3.6.1.2.1.2.2.1.15.6 = Counter32: 0 iso.3.6.1.2.1.2.2.1.16.1 = Counter32: 3214378817 iso.3.6.1.2.1.2.2.1.16.2 = Counter32: 2438196185 iso.3.6.1.2.1.2.2.1.16.3 = Counter32: 0 iso.3.6.1.2.1.2.2.1.16.4 = Counter32: 0 iso.3.6.1.2.1.2.2.1.16.5 = 
Counter32: 0 . . .Then I try to check_snmp. ./check_snmp -H localhost -O No OIDs specifiedBoth are on the same host. How can I find OID?? for check_snmp?I want to know the transfer amount of interface ens3 However, honestly I don't know where to start. I tried this. $ snmpwalk -c public -v 2c localhost | grep ens3iso.3.6.1.2.1.25.3.2.1.3.262146 = STRING: "network interface ens3" iso.3.6.1.2.1.31.1.1.1.1.2 = STRING: "ens3"$ ./check_snmp -H localost -O iso.3.6.1.2.1.31.1.1.1.1.2 No OIDs specified
What is OID,MIB? Check transfer amount by check_snmp
The answer is to add two monitor directives to snmpd.conf:
monitor -r 1s -e linkUpTrap "Generate linkUp" ifOperStatus != 2
monitor -r 1s -e linkDownTrap "Generate linkDown" ifOperStatus == 2
I'm on FreeBSD 12. I installed NET-SNMP version 5.7.3 on my system. The problem is with sending traps. For example, link up/down does not send a trap. The config files are:
snmpd.conf
view V included .1
view V included .1.3.6.1.2.1.1
view V included .1.3.6.1.2.1.25.1
view V included .1.3.6.1.6.3.1.1.5.3
view V included .1.3.6.1.6.3.1.1.5.4
rocommunity public
trapcommunity public
trap2sink localhost public
defaultMonitors yes
linkUpDownNotifications yes
snmptrapd.conf
format print1 %.4y-%.2m-%.2l %.2h:%.2j:%.2k %B [%b] (via %A [%a]): %N\n\t%W Trap (%q) Uptime: %#T\n%v\n
format print2 %.4y-%.2m-%.2l %.2h:%.2j:%.2k %B [%b]:\n%v\n
disableAuthorization yes
sockstat -4 -l
root snmptrapd 635 9 udp4 *:162 *:*
root snmpd 540 10 udp4 *:161 *:*
I checked the packets in Wireshark, but I don't see any SNMP packets. Thanks for your attention. I'm looking forward to your reply.
How can I send snmp trap from freebsd
I found the snmptrapd init file in the dist directory. It is called snmptrapd-init.d
I am packaging an RPM for RHEL6, built from net-snmp-5.7.2.tar.gz. I see that the file /etc/rc.d/init.d/snmpd gets created and packaged, but I do not see the init file /etc/rc.d/init.d/snmptrapd. Is snmptrapd deprecated? Or did I forget to pass the right switch to ./configure? Thanks
Why doesn't build of net-snmp 5.7.2. provide /etc/rc.d/init.d/snmptrapd?
Solution found. SNMPv3 on AIX requires the snmp.crypto fileset, which was not installed on my system:
lslpp -cl snmp.crypto
lslpp: Fileset snmp.crypto not installed.
I want to use snmpv3 on AIX, the client is Linux which use snmpwalk command On AIX side I first create the hash of the password pwtokey -p HMAC-SHA -u auth mypass 192.178.0.37the command return this line Display of 20 byte HMAC-SHA localized authKey: 18de41acdd2c8f0a1cb24f875g611198ea23e990Then I edit /etc/snmpdv3.conf cp /etc/snmpdv3.conf /etc/snmpdv3.conf.orig vim /etc/snmpdv3.conf and I add those lines USM_USER u1 - HMAC-SHA 18de41acdd2c8f0a1cb24f875g611198ea23e990 - - L - VACM_GROUP group1 USM u1 - VACM_VIEW group1View interfaces - included - VACM_VIEW group1View tcp - included - VACM_VIEW group1View icmp - included - VACM_VIEW group1View system - included - VACM_VIEW group1View sysObjectID - excluded - VACM_ACCESS group1 - - AuthNoPriv USM group1View - group1View - NOTIFY notify1 traptag trap - TARGET_ADDRESS Target1 UDP 192.178.0.37 traptag trapparms1 - - - TARGET_PARAMETERS trapparms1 SNMPv3 USM u1 AuthNoPriv -I restart all daemons stopsrc -s aixmibd stopsrc -s hostmibd stopsrc -s snmpmibd stopsrc -s snmpd startsrc -s aixmibd startsrc -s hostmibd startsrc -s snmpmibd startsrc -s snmpdNow I run smpwalk from Linux client.. snmpwalk -v3 -u u1 -a SHA -A "mypass" 192.178.0.37 SNMPv2-SMI::mib-2 = No more variables left in this MIB View (It is past the end of the MIB tree)I have tried also snmpwalk -v3 -l authNoPriv -u u1 -a SHA -A "mypass" -x DES -X "mypass" 192.178.0.37 snmpwalk -v3 -l authNoPriv -u u1 -a SHA -A "mypass" -x AES -X "mypass" 192.178.0.37but fail with "snmpwalk: Authentication failure (incorrect password, community or key)"Of course the pass is correct What I miss?
SNMPv3 on AIX from Linux shows authentication failure
Urghh. firewalld was installed and enabled by default on Fedora 33. Running nmap against device C was the pointer I needed to see something was up. So either disable the firewall if you're on an internal network, or set up some firewalld rules to allow SNMP through.
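For the second option, something like this should work on Fedora 33 (assuming your firewalld ships the predefined snmp service definition; otherwise open udp/161 directly):
# allow SNMP (udp/161) into the public zone permanently, then reload
sudo firewall-cmd --permanent --zone=public --add-service=snmp
# or, without the service definition:
# sudo firewall-cmd --permanent --zone=public --add-port=161/udp
sudo firewall-cmd --reload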
I have three devices (A,B,C) in my LAN all running snmpd. Apart from the community string they all have the same snmpd configuration. They can all run snmpwalk to the other devices except when trying to connect to Device C. Device C when calling itself, either through 127.0.0.1 or by it's <LAN-IP> address also works. snmpwalk -v 2c -c <community-string> 127.0.0.1 # OK snmpwalk -v 2c -c <community-string> <LAN-IP> # OKWhen Device A or B tries to connect to Device C, a timeout error occurs snmpwalk -v 2c -c <community-string> <DEVICE-C-LAN-IP> Timeout: No Response from <DEVICE-C-LAN-IP>When run with debugging on, I can see 6 blocks of identical lines similar to this: Sending 54 bytes to UDP: [<DEVICE-C-LAN-IP>]:161->[0.0.0.0]:0 ... 0048: 06 01 02 01 05 00 ......Which eventually ends with Timeout: No Response from <DEVICE-C-LAN-IP>Device C does have selinux which I've temporarily disabled with setenforce 0. There are also no firewall rules on Device C (iptables -L) that I can see. The only thing strange that I can see, is when calling service snmpd status on Device C ● snmpd.service - Simple Network Management Protocol (SNMP) Daemon. ...Feb 22 09:47:16 fedora-box systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Feb 22 09:47:16 fedora-box snmpd[25707]: Can't find directory of RPM packages Feb 22 09:47:16 fedora-box snmpd[25707]: NET-SNMP version 5.9 Feb 22 09:47:16 fedora-box systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..So service is up, but has a simple warning: Can't find directory of RPM packages. Not sure if that is the culprit or not. Any help to track this down would be greatly appreciated. Thanks Update Debugging snmpwalk -D ALL I can make out 6 requests (nothing strange) and the following at the end trace: snmp_synch_input(): snmp_client.c, 178: snmp_synch: Response (ReqID: 366399853 - Cmd 161) trace: snmp_synch_input(): snmp_client.c, 231: snmp_synch: status = 2 errno = -24 Timeout: No Response from 192.168.70.10 trace: netsnmp_transport_cache_remove(): snmp_transport.c, 1237: transport:cache:close: 0x55d19b3ef8e0 trace: _tc_find_transport(): snmp_transport.c, 1210: transport:cache:find_transport: 0x55d19b3ef8e0 trace: netsnmp_transport_cache_remove(): snmp_transport.c, 1245: transport:cache:close: 0x55d19b3ef8e0 not found in cachenetstat netstat -ltup | grep snmp tcp 0 0 localhost:smux 0.0.0.0:* LISTEN 2446/snmpd udp 0 0 0.0.0.0:snmp 0.0.0.0:* 2446/snmpdnmap # nmap -v <DEVICE-C-LAN-IP> Nmap scan report for device-c-hostname (<DEVICE-C-LAN-IP>) Host is up (0.00069s latency). Not shown: 998 filtered ports PORT STATE SERVICE 22/tcp open ssh 9090/tcp open zeus-admin========== EnvironmentDevice A: Ubuntu 20.04 Device B: CentOS 7 (based) Device C: Fedora 33snmpd.conf agentAddress udp:161 rocommunity <community-string> 127.0.0.1 rocommunity <community-string> <LAN-SUB-NET>/24
snmpwalk from remote results in timeout
The way the snmptrapd daemon works with traps doesn't allow receiving any SNMPv3 traps without specifying the EngineID of the sender device, i.e. if you just do the following in /etc/snmp/snmptrapd.conf:
createUser snmpv3USER SHA auth_pass AES priv_pass
authUser log,execute snmpv3USER
perl do "/usr/bin/zabbix_trap_receiver.pl";
you will never get any SNMPv3 traps in your /tmp/zabbix_traps.tmp, because the usmUser created inside /var/lib/net-snmp/snmptrapd.conf does not fulfil the actual requirements, that's it. When it comes to SNMPv3 informs, most switches cannot send them, so we are stuck with SNMPv3 traps only, and the way snmptrapd is programmed works only with EngineIDs for SNMPv3 traps. Even if I use disableAuthorization yes in snmptrapd.conf, SNMPv3 traps will not work without the EngineID of each sending device, i.e. the switch. Now, in order to receive SNMPv3 traps, you need to get the EngineIDs of all the switches; for that I have written a python script that builds snmptrapd.conf for SNMPv3, which is a big relief (a rough shell equivalent of the idea is sketched below). The actual file has a comment before each createUser line which shows the switch name and IP address. This piece of information about SNMPv3 traps is actually mentioned in the net-snmp documentation, but when I read it the first time I could not understand it, because it says "The difference is that SNMPv3 TRAPs use the engineID of the local application sending the trap rather than the engineID of the remote application", so on a first read you will be confused, but with a little bit of concentration it becomes very clear:
SNMPv3 TRAPs are a bit more complicated in some ways, but it makes sense the protocol works this way if you spend a long time thinking about it. The difference is that SNMPv3 TRAPs use the engineID of the local application sending the trap rather than the engineID of the remote application. This means that you have to create users in your remote user database with a bit more care and need to create one for every engineID you wish to send traps from. This means that if you want to have 100 snmp agents send snmpv3 traps to your trap receiver, you need 100 createUser directives in your /var/net-snmp/snmptrapd.conf file.
My /etc/snmp/snmptrapd.conf file looks like this:
createUser -e 0x800007c703609c9f1eff01 snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c703cc4e24e4c6c3 snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c703d4c19e743e86 snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c70378a6e10ca715 snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c70378a6e10c46f0 snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c70378a6e121bacd snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c70378a6e121b7d8 snmpv3USER SHA auth_pass AES priv_pass
createUser -e 0x800007c703609c9f5136f4 snmpv3USER SHA auth_pass AES priv_pass
authUser log,execute snmpv3USER
perl do "/usr/bin/zabbix_trap_receiver.pl";
The file is much longer than that, but now I can receive all traps from all switches. I saw on stackoverflow that someone succeeded with DES without using the EngineID, but this is real crap: it never worked for me, neither for DES nor for AES, without the EngineID. It is only something that has been done in the background by other monitoring software, which keeps a record of the EngineIDs of all devices for the trap reception task.
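For illustration only (this is not the author's python script): a minimal shell sketch of the same idea, assuming every switch already answers SNMPv3 GET requests for snmpv3USER, that switches.txt holds one IP per line, and that .1.3.6.1.6.3.10.2.1.1.0 (snmpEngineID.0) is readable; the exact munging of snmpget's hex output may need adjusting for your tooling:
# collect each switch's engineID and emit createUser lines for snmptrapd.conf
while read ip; do
    eid=$(snmpget -v3 -l authPriv -u snmpv3USER -a SHA -A auth_pass -x AES -X priv_pass \
          -Oqv "$ip" .1.3.6.1.6.3.10.2.1.1.0 | tr -d ' "')
    echo "# switch $ip"
    echo "createUser -e 0x$eid snmpv3USER SHA auth_pass AES priv_pass"
done < switches.txt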
I also noticed that the traps from the core switch are received from the virtual interface of the vlan on which zabbix resides, so I had to modify the zabbix_trap_receiver.pl script as follows:
use NetSNMP::TrapReceiver;  # should be added in order to get the script to work, otherwise it fails -- elekgeek 2nd DEC 2019

# get the host name
$pdu_info{'receivedfrom'} =~ s/192.168.168.254/192.168.168.1/ig;  # I added this line to get the core switch IP instead of zabbix's VLAN SVI IP, otherwise the trap is not added to the CORE trap.fallback -- elekgeek 2nd DEC 2019
my $hostname = $pdu_info{'receivedfrom'} || 'unknown';
if ($hostname ne 'unknown')
When working on traps, I had to see them as history, so I changed the item SNMP traps (fallback) from Type of information=Log to Type of information=Text in the template Template Module Generic SNMPv3. Good luck zabbixing!
My question is about zabbix traps with SNMPv3 and the snmptrapd service using zabbix_trap_receiver.pl. I have a switch on which I enabled SNMPv3 only, so the switch has no SNMPv1/2c rw or ro communities configured, and I was able to add it as a host on zabbix after a long journey modifying existing SNMPv2 templates. In the zabbix documentation a couple of lines are added to snmptrapd.conf, as shown here, to enable SNMP traps:
authCommunity execute public
perl do "/usr/bin/zabbix_trap_receiver.pl";
My questions: What is the meaning of authCommunity execute public? I don't have a public community configured. Will the snmptrapd service be able to decrypt SNMPv3 traps? (On wireshark I had to define the SNMPv3 credentials for it to be able to show me the content of the trap packets.) If not, how can I configure it in snmptrapd.conf so that the perl script can read these SNMPv3 traps? On wireshark, why does it say SNMPv2c trap although it is encrypted as SNMPv3? I am lost here. Please explain to me how traps work when SNMPv2 is enabled and when SNMPv3 is enabled.
Edit: I also tried the following
createUser -e <engineid> <user> SHA <key> AES <key>
authUser log,execute <user>
perl do "/usr/bin/zabbix_trap_receiver.pl";
But no luck.
SNMPv3 traps in Zabbix
While your question actually confused me a little, I believe you should move away from using a PHP script to handle the traps. That file (/etc/snmp/snmptrapd.conf) is used by the snmptrapd service, which can be enabled at whatever runlevel you wish (3, 4, 5) and can be configured to log traps to MySQL, so there is no need to create a separate daemon. On another note, a small shell script would probably be more efficient than handling each trap by initializing the PHP interpreter and loading the SNMP, MySQL and additional extensions. Here is a simple example (sketched below), though you would definitely need to add the database handling (which you can use my examples to accomplish). With that being said, if you still wish to implement a solution in PHP, you should probably research the available SNMP extension for PHP.
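A minimal sketch of such a traphandle script, assuming the default snmptrapd handler interface (the trap arrives on stdin as the host name, then the transport address, then one "OID value" pair per line); the logger calls are placeholders for your real database inserts, and the path is hypothetical:
#!/bin/sh
# registered in snmptrapd.conf with:  traphandle default /usr/local/bin/trap-handler.sh
read host
read ipaddress
logger -t snmptrap "trap received from $host ($ipaddress)"
while read oid value; do
    logger -t snmptrap "  $oid = $value"
    # a real handler would do its MySQL INSERT here instead of logging
done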
I have an Ubuntu server which is collecting incoming SNMP traps. Currently these traps are handled and logged using a PHP script.
File /etc/snmp/snmptrapd.conf:
traphandle default /home/svr/00-VHOSTS/nagios/scripts/snmpTrap.php
This script is quite long and contains many database operations. The server usually receives thousands of traps per day, and therefore this script is taking too much CPU time. My understanding is that this is due to the high start-up cost of the PHP script every time a trap is received. I got a request to rewrite this, and I was thinking of running the script as a daemon. I can create an Ubuntu daemon. My question is: how can I pass the trap handler to this daemon using the snmptrapd.conf file? Thank you in advance.
Pass SNMP trap packet to a php daemon on Ubuntu
With help from another colleague, we worked out why it didn't work. Three things:
First, we had agentaddress tcp:x.x.x.x:161 in snmpd.conf; we just deleted that line.
Second, iptables was blocking UDP port 161; we added rules to allow UDP port 161 (see the sketch below).
Third, something was wrong with the script, as you can see from the error message about line 319; we changed < to lt.
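A rule along these lines opens the port (a hedged sketch; how you persist it depends on the distribution, and the save command assumes a RHEL/CentOS-style iptables service):
# accept incoming SNMP queries on UDP 161
iptables -A INPUT -p udp --dport 161 -j ACCEPT
# persist across reboots on RHEL/CentOS-style systems
service iptables save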
Trying to setup memory usage monitoring for Nagios using the check_snmp_mem.pl from Nagios SNMP plugin. I could not even get it working from the command line, I mean I go to /usr/lib/nagios/plugins and run the script, it gets a "No response from remote host" error. [root@nagios plugins]# ./check_snmp_mem.pl -H rhel01 -C public -N -w 90,20 -c 99,30 Argument "v6.0.1" isn't numeric in numeric lt (<) at ./check_snmp_mem.pl line 319. ERROR: Description table : No response from remote host "rhel01".Any SNMP configurations required on the monitored server?
No response from remote host for Nagios check_snmp_mem.pl plugin
What kind of SNMP-capable device are you going to monitor (with IP address my_ip_address)? The SNMP "community string" is kind of like a password. An SNMP application such as MRTG presents the community string to that device when it requests statistics. If the community string is not correct, the device will not respond. The community string is set both in the device configuration and in the MRTG configuration file, and the two must be the same. "public" is usually the default community string for read-only access. It seems you do not have the snmpd daemon running and configured on that machine, so you can't get any SNMP response. You will need to install the related packages and MIBs and set the community string first; the rough steps are sketched below. This is summarized in this guide.
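A hedged sketch of those steps on an Ubuntu host (the community string and the monitoring host's IP are placeholders you pick yourself, and package names can vary by release):
# on the machine to be monitored
sudo apt-get install snmpd snmp
# allow read-only queries from the MRTG host with a community string of your choosing
echo 'rocommunity MyC0mmun1ty 192.0.2.10' | sudo tee -a /etc/snmp/snmpd.conf
sudo service snmpd restart
# then, on the MRTG host
cfgmaker MyC0mmun1ty@<my_ip_address> > /etc/mrtg.cfg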
I have been trying to set up MRTG on my server, following this guide: https://help.ubuntu.com/community/MRTG I followed it as far as:
cfgmaker <snmp_community_string>@<ip_address_of_device_to_be_monitored> > /etc/mrtg.cfg
At this point I have no idea what my SNMP community string is. I've looked around to try to find out; some places said it is issued by my ISP, so I asked the datacenter that hosts my server, and they said I need to generate my own string, so I google'd that, but still nothing. I also tried public@my_ip, but then I receive this error:
cfgmaker public@<my_ip_address>
--base: Get Device Info on public@<my_ip_address>: SNMP Error: no response received
SNMPv1_Session (remote host: "<my_ip_address>" [<my_ip_address>].161) community: "public" request ID: 1795983726 PDU bufsize: 8000 bytes timeout: 2s retries: 5 backoff: 1) at /usr/share/perl5/SNMP_util.pm line 629
SNMPWALK Problem for 1.3.6.1.2.1.1 on public@<my_ip_address>::::::v4only at /usr/bin/cfgmaker line 960
WARNING: Skipping public@<my_ip_address>: as no info could be retrieved
Use of uninitialized value $comment_sysdescr in substitution (s///) at /usr/bin/cfgmaker line 384.
Use of uninitialized value $sysname in concatenation (.) or string at /usr/bin/cfgmaker line 417.
Use of uninitialized value $comment_sysdescr in concatenation (.) or string at /usr/bin/cfgmaker line 417.
Use of uninitialized value $syscontact in concatenation (.) or string at /usr/bin/cfgmaker line 417.
Use of uninitialized value $syslocation in concatenation (.) or string at /usr/bin/cfgmaker line 417.
Could anyone please tell me how I generate my SNMP community string for use with MRTG? I am using Ubuntu Server 13.04 with kernel 3.2.0-51-generic.
How do I generate an SNMP community string for MRTG?
As I thought the apt error message was caused by an error in the OpenNMS repository: two deb packages with libraries needed, jicmp and jicmp6, weren't available in the repository for i386 architectures. After reporting the issue the packages are now available and the opennms package can be installed fulfilling the dependencies.
Trying to install the last version of OpenNMS in the last stable version of Debian, following the official installation instructions leads to an apt error with the repositories: root@triplecero:~# apt update Ign:1 http://nightly.odoo.com/12.0/nightly/deb ./ InRelease Ign:2 https://debian.opennms.org stable InRelease Hit:3 http://nightly.odoo.com/12.0/nightly/deb ./ Release Hit:4 https://debian.opennms.org stable Release Hit:7 http://ftp.debian.org/debian buster-backports InRelease Hit:8 http://deb.debian.org/debian buster InRelease Hit:9 http://repo.mysql.com/apt/debian stretch InRelease Hit:10 http://deb.debian.org/debian-security buster/updates InRelease Hit:11 http://deb.debian.org/debian buster-updates InRelease Hit:12 http://linorg.usp.br/debian-marillat stable InRelease Reading package lists... Done Building dependency tree Reading state information... Done All packages are up to date. W: Conflicting distribution: https://debian.opennms.org stable Release (expected stable but got opennms-24) root@triplecero:~#That error can be fixed just changing the stable keyword in /etc/apt/sources.list.d/opennms.list repository file with opennms-24. root@triplecero:~# apt update Hit:1 http://repo.mysql.com/apt/debian stretch InRelease Hit:2 http://deb.debian.org/debian buster InRelease Hit:3 http://deb.debian.org/debian-security buster/updates InRelease Hit:4 http://ftp.debian.org/debian buster-backports InRelease Ign:5 http://nightly.odoo.com/12.0/nightly/deb ./ InRelease Hit:6 http://deb.debian.org/debian buster-updates InRelease Hit:7 http://nightly.odoo.com/12.0/nightly/deb ./ Release Hit:9 http://linorg.usp.br/debian-marillat stable InRelease Ign:10 https://debian.opennms.org opennms-24 InRelease Get:11 https://debian.opennms.org opennms-24 Release [11.2 kB] Get:12 https://debian.opennms.org opennms-24 Release.gpg [224 B] Get:13 https://debian.opennms.org opennms-24/main Sources [3,696 B] Get:14 https://debian.opennms.org opennms-24/main i386 Packages [25.8 kB] Get:15 https://debian.opennms.org opennms-24/main i386 Contents (deb) [97.6 kB] Fetched 138 kB in 18s (7,643 B/s) Reading package lists... Done Building dependency tree Reading state information... Done All packages are up to date. root@triplecero:~#But after that, I get dependency errors from apt when I'm trying to install OpenNMS (trying to install those packages leads to another dependency package errors). root@triplecero:~# apt install opennms Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:The following packages have unmet dependencies: opennms : Depends: opennms-server (= 24.1.2-1) but it is not going to be installed Depends: opennms-webapp-jetty (= 24.1.2-1) but it is not going to be installed Recommends: opennms-source (= 24.1.2-1) but it is not going to be installed E: Unable to correct problems, you have held broken packages. root@triplecero:~#Changing the repository from opennms-24 to opennms-23 or opennms-22 leads to another unmet dependency errors with PostgreSQL. Seems obvious the Debian OpenNMS repository has broken dependency packages. There's any other alternative to this or just have to wait to be fixed...?
OpenNMS 24 installation in Debian Buster: unmet dependencies
I figured it out, so here's an answer so I'm not denvercoder9. Adding a new setting to snmpd.conf seems to have both trimmed the ever-growing march of interface indexes and prevented the complaints:
interface_replace_old yes
I think that's the one. It could be that simple. Try it yourself if you run into the same problem, and let me know if I've got it wrong.
Sorry if this is a repeat. I searched, but with no luck. I'm using SNMPd on an openwrt/wr host with some ppp and tun connections. These connections get IDs in the if table, and will actually get a new ID whenever the tunnels reconnect. Nagios (check_mk), when that happens, complains that an interface went down; oh, and a different one with the same name came up right afterward. In the meantime, it's iterating over so many interfaces that the reports are of 'interface 4933 down'; and an snmpwalk shows close to 4932 datapoints before it. How are the helpful folks here handling a monitoring situation like that?
Nagios/SNMP - devices alerting when ppp/tun connections cycle
Sorry, it was an snmp-extensions newbie question. To use variables and stdin/stdout redirection, you have to prepend the command with the interpreter, like this:
extend .1.3.6.1.4.1.2021.7890.2 purpose "/bin/cat /sys/devices/virtual/dmi/id/product_name"
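If the extended command itself needs shell features (variables, pipes, or redirection), another hedged variant is to hand the whole pipeline to an explicit shell; the OID and label here are arbitrary placeholders, not a convention:
# snmpd.conf: let /bin/sh do the expansion, then expose the output under the extend OID
extend .1.3.6.1.4.1.2021.7890.4 shelltest /bin/sh -c "uname -n | tr a-z A-Z"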
On our servers (debian, centos and ubuntu) we set this in snmpd.conf:
extend .1.3.6.1.4.1.2021.7890.1 distro "/bin/cat /etc/debian_version"
This way a centralized monitor (Observium) reads the OS distro. Fine. I read http://net-snmp.sourceforge.net/docs/man/snmpd.conf.html but ... Is the OID .1.3.6.1.4.1.2021.7890.1 a de-facto "standard"? Why doesn't python work?
extend .1.3.6.1.4.1.2021.7890.2 purpose "/usr/bin/python /root/snmp_helper.py --product"
I cannot spot an error in /var/log/messages.
26 Oct: I tried with another command, Host $HOSTNAME, and it is not working. With
snmpwalk -v 2c -c public localhost .1.3.6.1.4.1.2021.7890
I found this error:
iso.3.6.1.4.1.2021.7890.3.3.1.1.6.115.101.114.105.97.108 = STRING: "Host \$HOSTNAME not found: 3(NXDOMAIN)"
Can I use bash variables and stdout redirection? Which is the right syntax for the extensions?
setting snmp extensions in snmpd.conf
Replace
ARRAY=($OUTPUT)
by
eval ARRAY=($OUTPUT)
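A quick illustration of the difference, assuming OUTPUT holds the two quoted strings from the question:
OUTPUT='"Private Network" "Server 4 "'
ARRAY=($OUTPUT)        # plain word splitting: ${ARRAY[0]} is "Private
eval ARRAY=($OUTPUT)   # the embedded quotes are re-parsed: ${ARRAY[0]} is Private Network
echo "${ARRAY[0]}"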
I have a bash script that is running an SNMPGET of two values. I want to take the results and put them in an array. Here is the code: OUTPUT=`snmpget -v2c -c public -Oqv 192.168.0.33' \ ' sysName'\ ' SysLocation' echo $OUTPUT ARRAY=($OUTPUT) echo ${ARRAY[0]}echo $OUTPUT returns "Private Network" "Server 4 ". When I put it in an array and do: echo ${ARRAY[0]}it Returns "Private How do I alter my script so that the qualifier for the array, is not the space in between words so for echo ${ARRAY[0]} it Returns "Private Network"?
Setting Qualifiers for Bash Array
The information you indicate you have is not enough to calculate disk utilization %. Disk utilization % is calculated as disk_time_spent_in_io / elapsed_time. For example, if your disk spends 0.25 seconds performing IO in a 1 second period, then your disk is 25% utilized. The number of operations is meaningless when it comes to utilization %. Depending on your disk, and the type of IO you're performing (bulk vs random), it could be 100% utilized at 10 IOPS, or 10000 IOPS. The only way to know is by how long the disk is taking to perform those IOPs.
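To make that concrete: on Linux the "time spent doing I/O" figure is the io_ticks counter in /proc/diskstats (column 13, in milliseconds), which is also where tools such as iostat get it. A rough one-second utilization sample might look like this (sda is a placeholder device name):
# utilization % of sda over a 1 second window, from the io_ticks counter
t1=$(awk '$3 == "sda" {print $13}' /proc/diskstats)
sleep 1
t2=$(awk '$3 == "sda" {print $13}' /proc/diskstats)
echo "sda busy $((t2 - t1)) ms out of 1000 ms: $(( (t2 - t1) / 10 ))%"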
The following is net-snmp output and, as you can see, diskIOLA is not available:
SNMP table: UCD-DISKIO-MIB::diskIOTable
diskIOIndex diskIODevice diskIONRead diskIONWritten diskIOReads diskIOWrites diskIOLA1 diskIOLA5 diskIOLA15 diskIONReadX diskIONWrittenX
25 sda 845276160 2882477056 576632 42597061 ? ? ? 5140243456 883350772736
According to the definitions here http://www.net-snmp.org/docs/mibs/ucdDiskIOMIB.html:
diskIOLAx means the x minute average load of the disk (%).
The other values in the table are:
diskIONRead - The number of bytes read from this device since boot.
diskIONWritten - The number of bytes written to this device since boot.
diskIOReads - The number of read accesses from this device since boot.
diskIOWrites - The number of write accesses to this device since boot.
So, how can this load be calculated manually, given that it is not collected on the server? In the end, we want to show graphs to users where they can see whether disk IO is heavy or not. We can either display this using read/write bytes/sec or read/write requests/sec. If we display read/write requests/sec alone, we can tell that there is heavy I/O going on, but we won't know whether the disk R/W speed is affected by it. And displaying R/W speed alone can't tell us why the speed is affected, whether it is because of too many I/O operations or not enough buffer memory for asynchronous writes. Hence, we need to display both. But what does the other value, disk IO load, mean, how can we calculate it, and why is it not being collected by snmp? Does it cause a huge load if we enable it? If collecting this value causes a heavy load, then we can calculate it manually. But what's the formula?
How to calculate disk IO load percentage?
No. I haven't checked, but perhaps CUPS can deliver values over SNMP; SNMP itself certainly doesn't need CUPS, though.
I am trying to secure my latest hosting server and realized that cupsd is running. After checking, it's running on all of my servers. In the name of security, I decided to permanently disable this service, as I have no need for printing services. Before I do this, however, I want to make sure that my SNMP service won't be affected negatively. Does SNMP rely on or need CUPS to function?
Does snmp rely on or need CUPS to function?
The Net-SNMP package supplied with RedHat is actually a very flexible monitoring agent, which will get values for all of the metrics you listed by default, out of the box. However, it's old: the SNMP protocol itself has been around for over two decades, with significant improvements made over that span. The learning curve for it is fairly steep as well, which is directly related to the 26+ years of development made on the protocol base. The Net-SNMP project was pretty much there for all of that (first as a Carnegie-Mellon implementation and then as "ucd-snmp" from the University of California at Davis, which led to the current "Net-SNMP" code fork), so there is a fair bit of information to get a handle on, but they have great documentation. http://net-snmp.sourceforge.net/docs/man/ has the basic manual pages for the distribution. The Wiki has good "quick setup" guides and can be found at http://net-snmp.sourceforge.net/wiki/ so I'd start there to get up and running quickly. But read on... Net-SNMP collects its data from the Linux kernel, using various sources (the /proc filesystem and the lm-sensors package, to name a few). It can also be extended to report on just about anything you want, but that's going to take a significant investment of time and knowledge. In order to understand what is happening behind the scenes, the first concept you need to look at is the structure and availability of SNMP MIBs (Management Information Bases), which control what information you can query. I wouldn't spend a lot of time on it, but knowing which MIBs are available on your system and the structure of some of the most common MIB objects like TABLEs, STRINGs, INTs and INFORMs will allow you to select appropriate objects for your monitoring needs. The second piece you need is an understanding of the Net-SNMP configuration file, snmpd.conf; a minimal example is sketched below. This is a complex piece of configuration, so read the man pages thoroughly to understand why things are set up the way they are in the defaults. Also, starting from a default installation, you will need to select the protocol version you'll be supporting/querying with. Please DON'T use version 1. Your choice, really, is between versions 2c and 3, for rudimentary security and 64-bit counter support. Good luck! Your adventure awaits!
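As a concrete starting point, a minimal read-only v2c agent configuration might look roughly like this; the community string and allowed subnet are placeholders, and you would tighten the view further for production use:
# /etc/snmp/snmpd.conf -- minimal read-only v2c agent
agentAddress udp:161
rocommunity MyR0community 192.0.2.0/24
sysLocation "rack 12, example DC"
sysContact  admin@example.com
After a service snmpd restart, a local snmpwalk -v2c -c MyR0community localhost system should return the system group if the agent is answering.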
I am new to SNMP and I cannot find any clear article on whether we can add a data node in SNMP. Also, how does SNMP collect data? I want to monitor the following resources, which can be obtained from the SAR report. So please tell me how to add these under SNMP, or at least how SNMP collects data, so that I can try to figure out whether I can add them:
RAM and SWAP Used (without buffer/cache) & Total
Load (1min, 5min, 15min)
Iowait
cpu idle
pagein/outs
Swap in/outs
I/O read blocks/bytes/sectors per second
I/O write blocks/bytes/sectors per second
I/O requests per second
Network Interface Speed
I can find a few in the SNMP tree, but not all. Can I add any data node under SNMP? Is it possible? We need this as part of monitoring a few hundred servers in cacti.
How to use SNMP to get any information that we need in Redhat?
In the comments you said you want to develop a time tracking app, for tracking application usage. I guess you might do it by tracking which window is the active one at any given time. To do that, you would need to get access to the user's X11 session, and then repeatedly query its X11 property named _NET_ACTIVE_WINDOW. This code example might be helpful to you: https://github.com/UltimateHackingKeyboard/current-window-linux/blob/master/get-current-window.c If the system uses Wayland instead of classic X11, unfortunately Wayland might require its own solution; I simply don't know enough about that one.
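If a full C program is more than you need, the same X11 property can be polled from a shell with xprop (assuming an X11 session and the xprop utility are available; whether _NET_WM_PID is set depends on the application and window manager):
# id of the currently focused window
win=$(xprop -root _NET_ACTIVE_WINDOW | awk '{print $NF}')
# its title and owning process id
xprop -id "$win" _NET_WM_NAME _NET_WM_PID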
I'm trying to know if some GUI process is idle or minimized in Linux, using Net-SNMP. I've been doing research and, as far as I know, SNMP seems to be designed for monitoring services, not processes run by regular users. I've found just one MIB object, hrSWRunStatus (RFC 2790), which has only four running statuses: running(1), runnable(2), notRunnable(3) and invalid(4), but testing by maximizing and minimizing some GUI applications doesn't show any changes in their respective statuses; in fact, every process listed with snmpwalk has the runnable(2) status, except one: snmpd, which is listed as running(1).
# snmpwalk -v 2c -c public localhost .1.3.6.1.2.1.25.4.2.1.7 | grep "running(1)"
HOST-RESOURCES-MIB::hrSWRunStatus.920 = INTEGER: running(1)
# snmpwalk -v 2c -c public localhost .1.3.6.1.2.1.25.4.2.1.2 | grep 920
HOST-RESOURCES-MIB::hrSWRunName.920 = STRING: "snmpd"
Even using ps I don't see a change in the status of a process I'm using at the moment (except for htop). If htop is running in a terminal console like konsole and I'm writing a text with kate, none of those processes has the status "R" (running or runnable), just "S" (interruptible sleep), which I found weird, but apparently it is supposed to be that way... https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk112953 So, how can I know, in Linux, if some process is idle or minimized...?
Identify idle or minimized process
Centreon can collect information on various systems using SNMP, that's true. But Centreon plugins can add other methods for collecting information. In particular, App-Protocol-HTTP adds the ability to make HTTP queries and check for:response time presence of a specific string in a HTTP response presence of specific SOAP or JSON contentThis has nothing to do with SNMP. With plugins, Centreon can and will do more than just SNMP.
I have just started to learn SNMP protocols, and notably the centreon program, and I need to find out how I can verify a web-page response or a web-service (in simple HTTP mode). The only source I have found for the moment is this one, https://documentation-fr.centreon.com/docs/plugins-packs/fr/1.x/catalog.html, where it says "App-Protocol-HTTP, a model to supervise an HTTP server". I wonder what that is supposed to mean? Is it somehow linked to the SNMP protocol or not at all?
How can I verify the web page response using snmp? [closed]
snmptrap doesn't require any specific configuration. To validate whether your script is sending traps, you can use tcpdump to watch the traffic. SNMP traps are UDP and usually destined for port 162, so this will work:
tcpdump -i <interface> udp dst port 162
Then, in another screen or terminal, test your snmptrap command natively at first, then embedded in your script:
snmptrap -v 2c -c public ov.example.com '' 1.3.6.1.4.1.6032.1 1.3.6.1.4.1.6032.1 s "this is just a generic message"
You should see output similar to:
14:52:32.966387 IP server.example.com.43066 > ov.example.com.snmp-trap: V2Trap(114) system.sysUpTime.0=31777102 S:1.1.4.1.0=E:6032.1 E:6032.1="this is just a generic message"
If you see the output in tcpdump, then you know snmptrap is working on the local host, as the packet is leaving the system. If that's the case, run the same tcpdump command on the receiver to see if the traps are being received. Filtering by host can be handy if you're receiving traps from multiple sources:
tcpdump -i <interface> udp and src host <sender name/ip> and dst port 162
If you don't see the packets on the receiver, but you see them leaving the sender, then there's something blocking them in the network.
I'm trying to get my Ubuntu LTS 16.04 server to send SNMP traps to my HP OVO server. The reason for this is that there are legacy devices on the network that cannot send an SNMP warning upon failure, but can still be accessed through a network-connected card. Because of this, my Ubuntu server connects to that card to assess the failed nodes. I've achieved this already using Bash and Expect, so now what I want to do is send an snmptrap as part of my script. My question: does the snmptrap command require underlying configuration? Or does it work from the command line the way we (or maybe just I) take the telnet command for granted? I ask because I'm trying to achieve this on a corporate network, and currently my OVO server doesn't receive the trap. So I want to narrow down whether it's one of the several firewalls or whether the issue is, as mentioned, an underlying configuration problem.
Context: My expect script basically returns the status of each card managed by the network-connected card back to bash as an array: 0 for OK, 1 for didn't respond. So Bash does:
errors=($(expect ~/assets/connect))
for i in "${!errors[@]}"; do
    if [[ ${errors[$i]} -eq 1 ]]; then
        snmptrap # stuff goes here; the relevant source IP is saved elsewhere & available. I want the SNMP trap to report "node $i" down.
    fi
done
snmptrap - underlying config?
If you are running this on your client machine, you can use the example below:
tcpdump dst 10.xx.xx.xx and port 162 and not arp
Replace the IP to suit your setup; the "and not arp" part excludes ARP traffic. If you are running this on the server side, replace dst with src and use the client IP instead of the server IP.
We have a process (it should be a client program) on RHEL 7.4 that sends SNMP traps to a Solaris server that runs a trap receiver process (it should be a server program listening on 162/1691) on the other machine (with IP 10.xx.xx.xx). I have ssh access to the RHEL box. Can I install some tracing tool on RHEL 7.4 to trace these SNMP traps? If yes, what is the process to trace the trap packets?
How to trace snmp trap sent by client process?
There is a configuration file that is maintained by net-snmp itself. On Debian Linux, it's at /var/lib/snmp/snmpd.conf; I don't know exactly where net-snmp puts it on Solaris. But that file contains user definitions as long lines, starting with the usmUser keyword. The user's password will be stored in encrypted/hashed form, so you cannot read it. But the username will be plainly readable on the line. You should not edit the file while snmpd is running, as snmpd will then simply overwrite your changes. But when snmpd is stopped, you can edit the file to add a createUser line into it to create a new user - when snmpd is restarted, it will read the file, detect any createUser lines, encrypt the plain-text passwords on those lines, and then replace those createUser lines with corresponding usmUser lines. That way, you could add a new user with a known password and full admin access without knowing the old password, as long as you have write access to that file and can stop snmpd temporarily. Also, if you remove a user's usmUser line (while snmpd is stopped), then as far as snmpd is concerned, that user no longer exists. So that's one way to delete net-snmp users without knowing the net-snmp admin password.
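A hedged outline of that reset procedure; the service name and the persistent file path vary between net-snmp builds and Solaris releases, so treat these values as placeholders:
# stop the agent so it does not rewrite the persistent file behind your back
svcadm disable net-snmp        # the SMF service is "sma" on Solaris 10 and "net-snmp" on Solaris 11; adjust to your FMRI
# delete unwanted usmUser lines from the persistent file, then add a fresh user
vi /var/net-snmp/snmpd.conf    # path may differ; it is the file containing the usmUser lines
echo 'createUser newadmin SHA "new_auth_pass" AES "new_priv_pass"' >> /var/net-snmp/snmpd.conf
svcadm enable net-snmp         # on startup snmpd hashes the createUser line into a usmUser entry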
I created an SNMP initial user several years ago for a project and have forgotten the password. Is there a way to reset net-snmp back to default, no users, and recreate the initial user and subsequent users? This is on Solaris 10 and Solaris 11.3.
Reset net-snmp? I've forgotten the initial user password
It depends. For ifOutOctets and ifInOctets, snmpd probably gets them by querying the kernel (either directly via a syscall or perhaps by examining /proc/net/dev). For other OIDs, it may get them by running an external command to extract and process the data before returning it. e.g. see Extending snmpd using shell scripts for examples.
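For the interface counters specifically, you can inspect the raw kernel figures yourself: the per-interface byte counts that typically back ifInOctets/ifOutOctets are visible in /proc/net/dev (eth0 below is just an example interface; on current kernels snmpd may fetch the same data over netlink instead):
# receive and transmit byte counters for eth0, straight from the kernel
awk '/eth0:/ {sub(/.*:/, ""); print "rx_bytes=" $1, "tx_bytes=" $9}' /proc/net/dev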
If I want to find out the network traffic on my Linux servers using SNMP, I use the ifOutOctets and ifInOctets OIDs in an snmpget request. Where do these OIDs get the data from? I tried looking at the RFC for these OIDs but I'm still none the wiser. https://www.rfc-editor.org/rfc/rfc3635#section-3.2.5
Where does SNMP OIDs get the data from?
You've got a non-standard PHP install on your system. Either back it out, replace it with the CentOS distribution's PHP and then install the php-snmp rpm, or keep your existing PHP and yum install php56w-snmp (note the 'w').
I'm currently on php 5.6 on CentOS 6.7. I'm trying to install yum install php-snmp I keep getting Loaded plugins: fastestmirror, refresh-packagekit, security Setting up Install Process Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.math.princeton.edu * extras: mirror.5ninesolutions.com * updates: mirrors.seas.harvard.edu * webtatic: us-east.repo.webtatic.com https://us-east.repo.webtatic.com/yum/el6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. Resolving Dependencies --> Running transaction check ---> Package php-snmp.x86_64 0:5.3.3-47.el6 will be installed --> Processing Dependency: php-common(x86-64) = 5.3.3-47.el6 for package: php-snmp-5.3.3-47.el6.x86_64 --> Running transaction check ---> Package php-common.x86_64 0:5.3.3-47.el6 will be installed --> Processing Conflict: php56w-common-5.6.22-1.w6.x86_64 conflicts php-common < 5.5.0 --> Restarting Dependency Resolution with new changes. --> Running transaction check ---> Package php56w-common.x86_64 0:5.6.22-1.w6 will be updated --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-mbstring-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-opcache-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-fpm-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-pdo-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-cli-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-xml-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-5.6.22-1.w6.x86_64 ---> Package php56w-common.x86_64 0:5.6.23-1.w6 will be an update --> Running transaction check ---> Package php56w.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-cli.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-cli.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-fpm.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-fpm.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-mbstring.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-mbstring.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-opcache.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-opcache.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-pdo.x86_64 0:5.6.22-1.w6 will be updated --> Processing Dependency: php56w-pdo(x86-64) = 5.6.22-1.w6 for package: php56w-pgsql-5.6.22-1.w6.x86_64 ---> Package php56w-pdo.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-xml.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-xml.x86_64 0:5.6.23-1.w6 will be an update --> Running transaction check ---> Package php56w-pgsql.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-pgsql.x86_64 0:5.6.23-1.w6 will be an update --> Processing Conflict: php56w-common-5.6.23-1.w6.x86_64 conflicts php-common < 5.6 --> Finished Dependency Resolution Error: php56w-common conflicts with 
php-common-5.3.3-47.el6.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigestHow do I avoid that ?Updated @symcbean : Thanks for your suggestions. I've tried yum install php56w-snmp -y Loaded plugins: fastestmirror, refresh-packagekit, security Setting up Install Process Loading mirror speeds from cached hostfile * base: mirror.atlanticmetro.net * epel: mirror.math.princeton.edu * extras: mirror.stjschools.org * updates: mirrors.seas.harvard.edu * webtatic: us-east.repo.webtatic.com https://us-east.repo.webtatic.com/yum/el6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/repodata/repomd.xml: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. Resolving Dependencies --> Running transaction check ---> Package php56w-snmp.x86_64 0:5.6.23-1.w6 will be installed --> Processing Dependency: php56w-common(x86-64) = 5.6.23-1.w6 for package: php56w-snmp-5.6.23-1.w6.x86_64 --> Running transaction check ---> Package php56w-common.x86_64 0:5.6.22-1.w6 will be updated --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-mbstring-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-opcache-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-fpm-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-pdo-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-cli-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-xml-5.6.22-1.w6.x86_64 --> Processing Dependency: php56w-common(x86-64) = 5.6.22-1.w6 for package: php56w-5.6.22-1.w6.x86_64 ---> Package php56w-common.x86_64 0:5.6.23-1.w6 will be an update --> Running transaction check ---> Package php56w.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-cli.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-cli.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-fpm.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-fpm.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-mbstring.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-mbstring.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-opcache.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-opcache.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-pdo.x86_64 0:5.6.22-1.w6 will be updated --> Processing Dependency: php56w-pdo(x86-64) = 5.6.22-1.w6 for package: php56w-pgsql-5.6.22-1.w6.x86_64 ---> Package php56w-pdo.x86_64 0:5.6.23-1.w6 will be an update ---> Package php56w-xml.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-xml.x86_64 0:5.6.23-1.w6 will be an update --> Running transaction check ---> Package php56w-pgsql.x86_64 0:5.6.22-1.w6 will be updated ---> Package php56w-pgsql.x86_64 0:5.6.23-1.w6 will be an update --> Finished Dependency ResolutionDependencies Resolved=================================================================================================================================================================================== Package Arch 
Version Repository Size =================================================================================================================================================================================== Installing: php56w-snmp x86_64 5.6.23-1.w6 webtatic 32 k Updating for dependencies: php56w x86_64 5.6.23-1.w6 webtatic 2.7 M php56w-cli x86_64 5.6.23-1.w6 webtatic 2.6 M php56w-common x86_64 5.6.23-1.w6 webtatic 1.2 M php56w-fpm x86_64 5.6.23-1.w6 webtatic 1.3 M php56w-mbstring x86_64 5.6.23-1.w6 webtatic 944 k php56w-opcache x86_64 5.6.23-1.w6 webtatic 100 k php56w-pdo x86_64 5.6.23-1.w6 webtatic 94 k php56w-pgsql x86_64 5.6.23-1.w6 webtatic 95 k php56w-xml x86_64 5.6.23-1.w6 webtatic 148 kTransaction Summary =================================================================================================================================================================================== Install 1 Package(s) Upgrade 9 Package(s)Total download size: 9.2 M Downloading Packages: https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-cli-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-cli-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-cli-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-common-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-common-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-common-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-fpm-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-fpm-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-fpm-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-mbstring-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-mbstring-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-mbstring-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. 
https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-opcache-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-opcache-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-opcache-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-pdo-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-pdo-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-pdo-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-pgsql-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-pgsql-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-pgsql-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-snmp-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-snmp-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-snmp-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://us-east.repo.webtatic.com/yum/el6/x86_64/php56w-xml-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://uk.repo.webtatic.com/yum/el6/x86_64/php56w-xml-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror. https://sp.repo.webtatic.com/yum/el6/x86_64/php56w-xml-5.6.23-1.w6.x86_64.rpm: [Errno 14] Peer cert cannot be verified or peer cert invalid Trying other mirror.Error Downloading Packages: php56w-common-5.6.23-1.w6.x86_64: failure: php56w-common-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-mbstring-5.6.23-1.w6.x86_64: failure: php56w-mbstring-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-snmp-5.6.23-1.w6.x86_64: failure: php56w-snmp-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-xml-5.6.23-1.w6.x86_64: failure: php56w-xml-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-pgsql-5.6.23-1.w6.x86_64: failure: php56w-pgsql-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-cli-5.6.23-1.w6.x86_64: failure: php56w-cli-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-fpm-5.6.23-1.w6.x86_64: failure: php56w-fpm-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-5.6.23-1.w6.x86_64: failure: php56w-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. 
php56w-pdo-5.6.23-1.w6.x86_64: failure: php56w-pdo-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try. php56w-opcache-5.6.23-1.w6.x86_64: failure: php56w-opcache-5.6.23-1.w6.x86_64.rpm from webtatic: [Errno 256] No more mirrors to try.
Error installing php-snmp in CentOS
First, refactoring your question: How can I construct a perl script that inputs data from snmpwalk for each IP/Host and outputs a table for each OID? Second, your example snmpwalk commands make no sense. They might make sense if OID were a variable. You probably mean to use
snmpwalk -v2 -c public ${ip} ${OID}
You could do something like this:
#!/usr/bin/env perl -w
# Invoke via $0 <ip-address-file> <OIDs ... >
$input_file=shift @ARGV;
$oids=join(" ",@ARGV);
$oids gt '' || die "Please provide at least one OID for snmpget to fetch";

format STDOUT_TOP =
@<<<<<<<<<<< @>>>>>>>>>>>>>>>>>>>
$ip,$hostname
.
format STDOUT =
@<<<<<<<<<<<<<<<<<<<<< @||||||||||||| @>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
$oid,$type,$value
.
$^L="\n\n\n";

# parse inputfile
open(IPFILE,$input_file) or die "Cannot open $input_file";
while (my $host=<IPFILE>) {
  chomp $host;
  my ($hostname,$ip) = split(";",$host,2);
  open(SNMP,"snmpget -v2 -cpublic $ip $oids |") or die "Cannot run snmpget";
  print "$hostname => $ip :\n";
  while ($_ = <SNMP>) {
    if ( ($oid,$type,$value) = /^(.*?) = ([^:]+): (.*)$/ ) {
      write;
    } else {
      print STDERR "Could not parse this output:\n $_";
    }
  }
  $- = 0;
}
snmpget will output one line for each OID. See http://linux.die.net/man/1/perlform for more on the Perl "format" and the associated write command. The $^L sets the page delimiter to three line-feeds instead of the traditional page-eject character (which is CTRL-L). The $- = 0; bit essentially forces a new page (with a new header) the next time "write" is called. And the next time it's called, $hostname and $ip will have new values.
Is it possible to do an snmpwalk in a perl script and put the output into a table, to make a sort of association so that for each hostname I have, on the same line, the if index and description? I have the script in bash, but the output I get doesn't give the association I want, so I need your help.
#!/bin/bash
Rep_Scripts='/home/scripts'
out_file='/home/scripts/out_file'
rm -rf $Rep_Scripts/out_file

for i in `cat $Rep_Scripts/IP_ALU_LIST.txt | awk '{print}'`
do
  read hostname ip <<< $(echo $i | sed 's/;/ /')
  echo "${hostname} ==> ${ip} If_Name" >> out_file
  snmpwalk -v2c -c ${ip} OID >> out_file
  echo "${hostname} ==> ${ip} Global_If_Index" >> out_file
  snmpwalk -v2c -c ${ip} OID >> out_file
  echo "${hostname} ==> ${ip} If_Statut" >> out_file
  snmpwalk -v2c -c ${ip} OID >> out_file
done
I have in IP_ALU_LIST.txt:
router2;89.100.12.100
router3;100.100.100.100
make a table output
Youtube-dl doesn't support a SOCKS proxy. There's a feature request for it, with links to a couple of working proposals. Youtube-dl supports HTTP proxies out of the box. To benefit from this support, you'll need to run a proxy on myserver.com. Pretty much any lightweight proxy will do, for example tinyproxy. The proxy only needs to listen to local connections (Listen 127.0.0.1 in tinyproxy.conf). If the HTTP proxy is listening on port 8035 (Port 8035), run the following ssh command:
ssh -L 8035:localhost:8035 user@myserver.com
and set the environment variables http_proxy and https_proxy:
export http_proxy=http://localhost:8035/ https_proxy=http://localhost:8035/
youtube-dl youtube.com/watch?v=3XjwiV-6_CA
I connect to a remote ssh server by running this command:
ssh -D 12345 user@myserver.com
This creates a SOCKS proxy that I can use with Firefox to bypass censorship in my country. However, I can't take advantage of it in the command line. Let's say my country blocks access to youtube. How can I use the ssh connection to run a command such as:
youtube-dl "youtube.com/watch?v=3XjwiV-6_CA"
without being blocked by the government? How can I set a SOCKS proxy for all terminal commands?
How to use socks proxy for commands in Terminal such as youtube-dl?
Here is how it does it: static int getdestaddr_iptables(int fd, const struct sockaddr_in *client, const struct sockaddr_in *bindaddr, struct sockaddr_in *destaddr) { socklen_t socklen = sizeof(*destaddr); int error; error = getsockopt(fd, SOL_IP, SO_ORIGINAL_DST, destaddr, &socklen); if (error) { log_errno(LOG_WARNING, "getsockopt"); return -1; } return 0; }iptables overrides the original destination address but it remembers the old one. The application code can then fetch it by asking for a special socket option, SO_ORIGINAL_DST.
There are two SOCKS proxies that I know about that support transparent proxying for any outgoing TCP connection: Tor and redsocks. Unlike HTTP proxies, these SOCKS proxies can transparently proxy any outgoing TCP connection, including encrypted protocols and protocols without metadata or headers. Both of these proxies require the use of NAT to redirect any outgoing TCP traffic to the proxy's local port. For instance, if I am running Tor with TransPort 9040 on my local machine, I would need to add an iptables rule like this: iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 9040To my knowledge, this would replace the original destination IP and port with 127.0.0.1 and 9040, so given that this is an encrypted stream (like SSH) or one without headers (like whois), how does the proxy know the original destination IP and port?
How does a transparent SOCKS proxy know which destination IP to use?
Can /usr/bin/ssh really not accept systemd-passed sockets?I think that's not too surprising, considering:OpenSSH is an OpenBSD project systemd only supports the Linux kernel systemd support would need to be explicitly added to OpenSSH, as an optional/build-time dependency, so it would probably be a hard sell.Can only units of root use the BindTodevice option?User systemd instances are generally pretty isolated, and e.g. can not communicate with the main pid-0 instance. Things like depending on system units from user unit files are not possible. The documentation for BindToDevice mentions:Note that setting this parameter might result in additional dependencies to be added to the unit (see above).Due to the above-mentioned restriction, we can imply that the option doesn't work from user systemd instances.Why is my proxy service not respawning correctly on first new connection after the old tunnel dies?As I understand, the chain of events is as follows:SocksProxyHelper.socket is started. A SOCKS client connects to localhost:8118. systemd starts SocksProxyHelper.service. As a dependency of SocksProxyHelper.service, systemd also starts SocksProxy.service. systemd-socket-proxyd accepts the systemd socket, and forwards its data to ssh. ssh dies or is killed. systemd notices, and places SocksProxy.service into a inactive state, but does nothing. SocksProxyHelper.service keeps running and accepting connections, but fails to connect to ssh, as it is no longer running.The fix is to add BindsTo=SocksProxy.service to SocksProxyHelper.service. Quoting its documentation (emphasis added):Configures requirement dependencies, very similar in style to Requires=. However, this dependency type is stronger: in addition to the effect of Requires= it declares that if the unit bound to is stopped, this unit will be stopped too. This means a unit bound to another unit that suddenly enters inactive state will be stopped too. Units can suddenly, unexpectedly enter inactive state for different reasons: the main process of a service unit might terminate on its own choice, the backing device of a device unit might be unplugged or the mount point of a mount unit might be unmounted without involvement of the system and service manager. When used in conjunction with After= on the same unit the behaviour of BindsTo= is even stronger. In this case, the unit bound to strictly has to be in active state for this unit to also be in active state. This not only means a unit bound to another unit that suddenly enters inactive state, but also one that is bound to another unit that gets skipped due to a failed condition check (such as ConditionPathExists=, ConditionPathIsSymbolicLink=, … — see below) will be stopped, should it be running. Hence, in many cases it is best to combine BindsTo= with After=.Is this the right way to set-up an "on-demand ssh socks proxy"? If, not, how do you do it?There's probably no "right way". This method has its advantages (everything being "on-demand") and disadvantages (dependency on systemd, the first connection not getting through because ssh hasn't begun listening yet). Perhaps implementing systemd socket activation support in autossh would be a better solution.
To reach an isolated network I use an ssh -D socks proxy. In order to avoid having to type the details every time I added them to ~/.ssh/config: $ awk '/Host socks-proxy/' RS= ~/.ssh/config Host socks-proxy Hostname pcit BatchMode yes RequestTTY no Compression yes DynamicForward localhost:9118Then I created a systemd-user service unit definition file: $ cat ~/.config/systemd/user/SocksProxy.service [Unit] Description=SocksProxy Over Bridge Host[Service] ExecStart=/usr/bin/ssh -Nk socks-proxy[Install] WantedBy=default.targetI let the daemon reload the new service definitions, enabled the new service, started it, checked its status, and verified, that it is listening: $ systemctl --user daemon-reload $ systemctl --user list-unit-files | grep SocksP SocksProxy.service disabled$ systemctl --user enable SocksProxy.service Created symlink from ~/.config/systemd/user/default.target.wants/SocksProxy.service to ~/.config/systemd/user/SocksProxy.service.$ systemctl --user start SocksProxy.service $ systemctl --user status SocksProxy.service ● SocksProxy.service - SocksProxy Over Bridge Host Loaded: loaded (/home/alex/.config/systemd/user/SocksProxy.service; enabled) Active: active (running) since Thu 2017-08-03 10:45:29 CEST; 2s ago Main PID: 26490 (ssh) CGroup: /user.slice/user-1000.slice/[emailprotected]/SocksProxy.service └─26490 /usr/bin/ssh -Nk socks-proxy$ netstat -tnlp | grep 118 tcp 0 0 127.0.0.1:9118 0.0.0.0:* LISTEN tcp6 0 0 ::1:9118 :::* LISTENThis works as intended. Then I wanted to avoid having to manually start the service, or running it permanently with autossh, by using systemd socket-activation for on-demand (re-)spawning. That didn't work, I think (my version of) ssh cannot receive socket file-descriptors. I found the documentation (1,2), and an example for using the systemd-socket-proxyd-tool to create 2 "wrapper" services, a "service" and a "socket": $ cat ~/.config/systemd/user/SocksProxyHelper.socket [Unit] Description=On Demand Socks proxy into Work[Socket] ListenStream=8118 #BindToDevice=lo #Accept=yes[Install] WantedBy=sockets.target$ cat ~/.config/systemd/user/SocksProxyHelper.service [Unit] Description=On demand Work Socks tunnel After=network.target SocksProxyHelper.socket Requires=SocksProxyHelper.socket SocksProxy.service After=SocksProxy.service[Service] #Type=simple #Accept=false ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:9118 TimeoutStopSec=5[Install] WantedBy=multi-user.target$ systemctl --user daemon-reloadThis seems to work, until ssh dies or gets killed. Then it won't re-spawn at the next connection attempt when it should. Questions:Can /usr/bin/ssh really not accept systemd-passed sockets? Or only newer versions? Mine is the one from up2date Debian 8.9. Can only units of root use the BindTodevice option? Why is my proxy service not respawning correctly on first new connection after the old tunnel dies? Is this the right way to set-up an "on-demand ssh socks proxy"? If, not, how do you do it?
On-demand SSH Socks proxy through systemd user units with socket-activation doesn't restart as wished
First, you need tun2socks (often a part of the 'badvpn' package). tun2socks sets up a virtual interface which you can route traffic through, and that traffic will get sent through the target socks proxy. Setting it up gets a little tricky as you only want to route certain traffic through the tunnel. This script should do what you want: #!/bin/bash socks_server=127.0.0.1:8080id="$RANDOM" tun="$(printf 'tun%04x' "$id")" ip tuntap add dev $tun mode tun ip link set $tun up ip addr add 169.254.1.1/30 dev $tun sysctl -w net.ipv4.conf.$tun.forwarding=1 ip rule add fwmark $id lookup $id ip route add default via 169.254.1.2 table $id iptables -t mangle -I PREROUTING -i eth1 -p tcp -j MARK --set-mark $id iptables -t mangle -I PREROUTING -i eth2 -p tcp -j MARK --set-mark $id badvpn-tun2socks --tundev $tun --netif-ipaddr 169.254.1.2 --netif-netmask 255.255.255.252 --socks-server-addr $socks_serveriptables -t mangle -D PREROUTING -i eth2 -p tcp -j MARK --set-mark $id iptables -t mangle -D PREROUTING -i eth1 -p tcp -j MARK --set-mark $id ip route del default via 169.254.1.2 table $id ip rule del from fwmark $id lookup $id ip tuntap del dev $tun mode tunExplanation: socks_server=127.0.0.1:8080This is the socks server we will use.id="$RANDOM" tun="$(printf 'tun%04x' "$id")"These generate a random ID to use for the tunnel. Since you may have other tunnels on the system, we can't just use tun0 or tun1. 99% of the time this will work fine. Adjust accordingly though.ip tuntap add dev $tun mode tun ip link set $tun up ip addr add 169.254.1.1/30 dev $tun sysctl -w net.ipv4.conf.$tun.forwarding=1These set up the tunnel interface tun2socks will use.ip rule add fwmark $id lookup $id ip route add default via 169.254.1.2 table $idThese create a routing table with a single rule which sends any traffic with firewall mark $id (covered next) through the tunnel.iptables -t mangle -I PREROUTING -i eth1 -p tcp -j MARK --set-mark $id iptables -t mangle -I PREROUTING -i eth2 -p tcp -j MARK --set-mark $idThese set firewall mark $id on any TCP packets coming in eth1 or eth2. We only want to match TCP. Socks can't handle UDP or ICMP (tun2socks does have a way to forward UDP, but it's more complicated, and so I'm leaving it out).badvpn-tun2socks --tundev $tun --netif-ipaddr 169.254.1.2 --netif-netmask 255.255.255.252 --socks-server-addr $socks_serverThis starts tun2socks up. It'll sit in the foreground until terminated.iptables -t mangle -D PREROUTING -i eth2 -p tcp -j MARK --set-mark $id iptables -t mangle -D PREROUTING -i eth1 -p tcp -j MARK --set-mark $id ip route del default via 169.254.1.2 table $id ip rule del from fwmark $id lookup $id ip tuntap del dev $tun mode tunThese tear down everything we created during the setup process. They will only run once badvpn-tun2socks exits.
Is there a way to redirect all traffic, UDP and TCP, coming to and from eth1 and eth2 through a SOCKS proxy (Tor) which then passes it through eth0? eth0: Internet in - leads to the main router, then the cable modem eth1: A USB Ethernet port setup as a modem (I think that's the word I'm looking for, right?) eth2: A USB WiFi antenna setup as a WiFi hotspot Could I use something like iptables to directly route it through Tor or would I need an adapter like Privoxy?
Redirect ALL packets from eth1 & eth2 through a SOCKS proxy
It sounds to me like you need a socks client, or an ssh client that understands socks. -D is for ssh to be a socks server/proxy. You could use ssh under tsocks, or another SOCKS wrapper. Or use ssh's ProxyCommand in conjunction with socat or nc -X:

ssh -o ProxyCommand='socat - socks:B:%h:21,socksport=1080' C

To have an HTTP proxy that uses the SOCKS server to send HTTP requests, you can run a small proxy (like tinyproxy) under tsocks. Note that not all applications play nicely with tsocks or any similar wrapper that relies on LD_PRELOAD, but tinyproxy does. Also note that you may have issues with domain name resolution (depending on whether you want the names to be resolved on either side of the tunnel). tsocks doesn't work well for resolving names remotely. The only way it can work is when your nameserver (in /etc/resolv.conf) is reachable from the other end and you're using TCP for domain resolution (which tsocks can attempt to enforce but generally fails at, in my experience). socksify from dante's SOCKS client/server works better in that instance, as it also wraps the resolving functions and can use a nice little trick to fake name resolution so that SOCKS by-name connections can be used.
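To make the tinyproxy-under-tsocks idea concrete, here is a rough sketch. The address is a placeholder for Machine B, and this only shows the minimal tsocks settings, not a complete config:

# /etc/tsocks.conf - point tsocks at the SOCKS server on Machine B
server = 192.0.2.10
server_port = 1080
server_type = 5

# start a small HTTP proxy whose outgoing TCP connections get socksified
tsocks tinyproxy

Applications that only understand HTTP proxies can then be pointed at tinyproxy's listening port.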
I have a list of SOCKS proxy servers from this site. I've read about creating dynamic tunnels with ssh -D and, to be honest, I've already tried that. Unfortunately, for some reason I cannot connect to any of the proxy servers from the list. I am using OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k on BackTrack. If anyone would be willing to help or point me to a step-by-step guide that works, I would be very grateful. What I want to achieve is to connect from Machine A through an ssh tunnel on Machine B to Machine C. Let's say sshd on Machine C is listening on port 21 and the Machine B proxy is listening on port 1080. Additional question: I would also like to create a tunnel to a socks proxy server which would allow me to set up localhost:8080 in Firefox as a proxy and browse the net from Machine B with Machine B's IP address.
SSH over Socks proxy without username or password
SOCKS5 is a protocol (i.e. in the application layer of OSI), so plain network-routing (e.g. via iptables) alone won't do. (It's probably necessary, but not sufficient.) What you need is a proxifier. Without having tried it, tun2socks, allowing you to "socksify TCP at the network layer", looks promising (as does proxychains, without iptables but prefixing programs explicitly). (Could socat be used here? This mentions socat for socksifying, but I'm unsure at the moment.)
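To make the "prefixing programs explicitly" route concrete, a proxychains sketch, assuming the ssh -D 3128 SOCKS proxy from the question below is running on localhost (the config path varies by distribution and proxychains variant):

# /etc/proxychains.conf (or ~/.proxychains/proxychains.conf)
strict_chain
proxy_dns
[ProxyList]
socks5 127.0.0.1 3128

# prefix the program you want to socksify
proxychains curl https://example.com
proxychains firefox

This only covers programs you launch with the prefix; it does not redirect all traffic system-wide the way tun2socks does.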
I connect to my server on the internet using ssh -D 3128 user@myserver. If I am right, I thereby open a SOCKS v5 proxy to my server. Using Firefox and FoxyProxy I can now add this to my proxies and tunnel my HTTP traffic over it. However, I'd like to use this SOCKS proxy for all my traffic. Friends told me they've seen this done by modifying the routes or using iptables, but I can't find anything about it.
System wide SOCKS5 Proxy
Just found a new solution for this recently that is REALLY neat. Take a look at sshuttle. https://github.com/apenwarr/sshuttle/
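A typical invocation, in case it helps (user@server is a placeholder, and the exact options may differ between sshuttle versions):

# route all IPv4 traffic, plus DNS lookups, through the SSH host
sshuttle --dns -r user@server 0.0.0.0/0

This behaves like a lightweight VPN over plain ssh, without needing a SOCKS-aware client on your side.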
Socksify-like program for Fedora? Socksify, tsocks, ProxyCommand in ssh: they're great pieces of software that can be used to "constrain" given apps to use a SOCKS5 proxy (one created with an SSH tunnel), even if the given apps don't support SOCKS5 themselves. But: are there any solutions that use an SSH tunnel to "constrain" all the traffic on a PC? It would be like a kind of VPN solution: all the traffic from the PC would go through the SSH tunnel only. Are there any solutions for this? It would be a very, very good feature. p.s.: because, compared to installing OpenVPN, installing SSHD and restricting an ssh user to only have "ssh tunneling" permissions is much simpler... Thank you.
VPN like solution for SSH Tunneling?
With:

socat tcp-listen:12345,reuseaddr,fork,bind=127.1 socks:218.62.97.105:11.11.11.11:3128,socksport=1080

you will have socat waiting for TCP connections on port 12345 on the loopback interface, forwarding them to 11.11.11.11:3128 by way of the socks server on 218.62.97.105:1080. You can then use that to connect to D:

ssh -o ProxyCommand='socat - socks:127.1:%h:%p,socksport=12345' -p 8080 55.55.55.55

(untested)
I got a great answer for my previous question about connecting from Machine A to Machine C via Socks proxy located on Machine B. Say Machine B Ip is 218.62.97.105 and it is listening on port 1080 The command for that: ssh -o ProxyCommand='socat - socks:218.62.97.105:HOST_C:21,socksport=1080' I wonder if it is possible to make a chain from the socks proxies. Consider scenario: Machine A -> Machine B (Socks proxy 1) -> Machine C (Socks proxy 2) -> Machine D (destination) Machine B IP: 218.62.97.105 PORT 1080 Machine C IP: 11.11.11.11 PORT 3128 Machine D IP: 55.55.55.55 PORT 8080 I hope there is someone out there experienced with socat or any other tool as it seems pretty complicated for me at this point. +100 for anyone who could give me the working answer. debug of socat 1 command: 2012/10/02 20:45:00 socat[15641] D read -> 8 2012/10/02 20:45:00 socat[15641] D received socks4 reply data (offset 0): 00 5c 00 50 c1 6b 90 17 2012/10/02 20:45:00 socat[15641] D received all 8 bytes 2012/10/02 20:45:00 socat[15641] I received socks reply VN=0 CD=92 DSTPORT=80 DSTIP=193.107.144.23 2012/10/02 20:45:00 socat[15641] E socks: ident refused by client 2012/10/02 20:45:00 socat[15641] N exit(1) 2012/10/02 20:45:00 socat[15641] I shutdown(4, 2) 2012/10/02 20:45:00 socat[15641] D shutdown() -> 0 2012/10/02 20:45:00 socat[15641] I shutdown(3, 2) 2012/10/02 20:45:00 socat[15641] D shutdown() -> 0 2012/10/02 20:45:00 socat[15638] I childdied(signum=17) 2012/10/02 20:45:00 socat[15638] D waitpid(-1, 0xbfbea3fc, 1) 2012/10/02 20:45:00 socat[15638] D waitpid(, {256}, ) -> 15641 2012/10/02 20:45:00 socat[15638] I childdied(17): cannot identify child 15641 2012/10/02 20:45:00 socat[15638] D saving pid in diedunknown1 2012/10/02 20:45:00 socat[15638] W waitpid(): child 15641 exited with status 1 2012/10/02 20:45:00 socat[15638] D waitpid(-1, 0xbfbea3fc, 1) 2012/10/02 20:45:00 socat[15638] D waitpid(, {256}, ) -> -1 2012/10/02 20:45:00 socat[15638] I waitpid(-1, {}, WNOHANG): No child processes 2012/10/02 20:45:00 socat[15638] I childdied() finishedsocat second command: 2012/10/02 20:44:38 socat[15640] D socket(2, 1, 6) 2012/10/02 20:44:38 socat[15640] I socket(2, 1, 6) -> 3 2012/10/02 20:44:38 socat[15640] D fcntl(3, 2, 1) 2012/10/02 20:44:38 socat[15640] D fcntl() -> 0 2012/10/02 20:44:38 socat[15640] D connect(3, {2,AF=2 127.0.0.1:22222}, 16) 2012/10/02 20:44:38 socat[15640] D connect() -> 0 2012/10/02 20:44:38 socat[15640] D getsockname(3, 0xbf8111cc, 0xbf811058{112}) 2012/10/02 20:44:38 socat[15640] D getsockname(, {AF=2 127.0.0.1:40843}, {16}) -> 0 2012/10/02 20:44:38 socat[15640] N successfully connected from local address AF=2 127.0.0.1:40843 2012/10/02 20:44:38 socat[15640] I sending socks4 request VN=4 DC=1 DSTPORT=21 DSTIP=xx.xxx.xxx.xxx USERID=mnmnc 2012/10/02 20:44:38 socat[15640] D malloc(42) 2012/10/02 20:44:38 socat[15640] D malloc() -> 0x8f1ec80 2012/10/02 20:44:38 socat[15640] D sending socks4(a) request data 04 01 00 15 3e f4 9f 9a 6d 6e 6d 6e 63 00 2012/10/02 20:44:38 socat[15640] D write(3, 0xbf811304, 14) 2012/10/02 20:44:38 socat[15640] D write -> 14 2012/10/02 20:44:38 socat[15640] I waiting for socks reply 2012/10/02 20:44:38 socat[15640] D read(3, 0xbf811234, 8) 2012/10/02 20:45:00 socat[15640] D read -> 0 2012/10/02 20:45:00 socat[15640] E read(): EOF during read of socks reply, peer might not be a socks4 server 2012/10/02 20:45:00 socat[15640] N exit(1) 2012/10/02 20:45:00 socat[15640] I shutdown(3, 2) 2012/10/02 20:45:00 socat[15640] D shutdown() -> 0 ssh_exchange_identification: 
Connection closed by remote host
SSH jumping over socks(4/5) proxy chain. Host -> socks proxy -> socks proxy -> destination
I am showing you a very basic way to do it. Here I am assuming that B is directly accessible from A. There may be variations according to various situations. On A: ssh -D socks_port BThis will open up the port socks_port on A as a SOCKS proxy. On your system: ssh -L local_port:localhost:socks_port AThis will forward local_port on your system to port socks_port on A. Then you can configure your browser to use SOCKS proxy on socket localhost:local_portA one-liner would look like this: ssh -t -L 1234:localhost:5678 FIRSTHOST ssh -D 5678 SECONDHOSTwhere FIRSTHOST and SECONDHOST have to be replaced by your hosts’ names or IP addresses. In your browser you have to enter a socks proxy as: localhost:1234
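On reasonably recent OpenSSH (7.3 or later) the same result can be had in one command with the jump-host option; a sketch, with FIRSTHOST and SECONDHOST as above:

ssh -J FIRSTHOST -D 1234 SECONDHOST
# then point the browser at the SOCKS proxy localhost:1234

Here -J makes the connection to SECONDHOST hop through FIRSTHOST, and -D opens the local SOCKS listener directly, so no separate -L forwarding is needed.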
I have 2 servers, A and B. I want to create a tunnel from my system to server B but I have some limits to do this. So I have to first tunnel to server A and from server A to server B. My goal is to have a SOCKS Proxy to browse the web. How can I do this?
How to create a SSH tunnel over 2 servers?
You can test the availability of a socks proxy by trying to load a website through the tunnel:

curl -sSf --socks5-hostname localhost:8888 www.google.com > /dev/null

In the above command, curl will be silent unless an error occurs. You can wrap this command in a loop within a script. The return value of curl is zero if the socks proxy is alive (and google.com is not down).

#!/bin/bash

set -o errexit
set -o nounset
#set -o xtrace

PROGNAME=$(basename "$0")

die () {
    echo "${PROGNAME}: ${1:-"Unknown Error, Abort."}" 1>&2
    exit 1
}

while true; do
    # reset the status on every iteration, so one failed check does not
    # keep reporting an outage after the proxy has come back
    status=0
    curl -sSf --socks5-hostname localhost:8888 www.google.com > /dev/null || status=$?
    if [[ $status -ne 0 ]]; then
        echo "Trying to reconnect .."
        # kill proxy
        # reconnect-cmd || die "$LINENO: reconnecting failed"
    fi
    sleep 100
done
Say I have a local SOCKS connection (established by ssh -D 8888). I use this to do many things, like bypassing internet censorship. But sometimes the ssh connection breaks unexpectedly, and then the SOCKS proxy is down. Is there anything I can use to check whether the local SOCKS proxy is still alive? Thanks.
is it possible to check whether a local socks proxy works with shell script?
I fixed this by running ssh-keygen as the root user (not via sudo), not specifying any name for the key (i.e. using the default name and location) and not providing any passphrase. Then I made sure the permissions for all files in /root/.ssh were set to 600 and the .ssh folder was set to 700. Then just add the command from the question to run the proxy to your /etc/rc.local and you're done. As far as I understand it, having no passphrase is not a security risk in this case, as the public and private key only reside in root's folder on the local machine.
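A condensed sketch of those steps. It assumes the public key also has to be appended to root's authorized_keys so that the passwordless loopback login succeeds; the rc.local line is simply the command from the question:

# as root, generate a key at the default path with an empty passphrase
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 600 /root/.ssh/*

# start the proxy at boot: add this line to /etc/rc.local (before any final 'exit 0')
ssh -f -N -D 0.0.0.0:1080 localhost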
I want to set up a Linux server (Ubuntu Server 14.04 in this case) to be used as a SOCKS5 proxy by software on another client. Now this is pretty easy by running ssh -f -N -D 0.0.0.0:1080 localhost as explained by this guide. This works perfectly. What the guide isn't telling me is how do I automatically start the proxy on system startup? Just entering this command into rc.local isn't working because I have to provide a passphrase. Generating an ssh key with ssh-keygen and putting it into /root/.ssh/ doesn't work either. Note that I don't really understand how the whole ssh public key thing works, so my error might lie there. Is there anything special I have to do when my public key is supposed to be used for the local client and not a remote client?
Run SSH SOCKS5 proxy on system startup
So, if I understand correctly, you can ssh from machine 1 to machine 2, but not from your laptop (from which you can ssh to machine 1), and you'd like to have a socks server on machine 1 and use it from your laptop. It looks like all you need is to forward that port 8080; run on your laptop:

ssh -nL 8080:localhost:8080 machine1 ssh -C2qnN -D 8080 username@machine2
I have seen questions on tunnelling SSH through multiple machines, but I want to tunnel a SOCKS connection. Normally I would use something like ssh -C2qTnN -D 8080 username@remote_machine to make the local port 8080 a SOCKS tunnel through the remote machine. I would like to open a SOCKS connection from my laptop to machine2, which can only be reached via SSH from machine1. So rather than tunnelling SSH connections in the normal sense, I suppose I wish to run that command on machine1 (so that port 8080 there is redirected to machine2) and somehow pipe port 8080 on machine1 back to a local port on my laptop.
Tunneling SSH through multiple machines (for SOCKS)
Do you know flossmanuals.net? They've got a great manual on How to Bypass Internet Censorship (also as epub and pdf for offline use -- and note the translations, among others in Farsi). Among many tools and methods, they cover SOCKS proxies. But given a VPS somewhere, the other ways they mention should be considered, too. (For example how to use ssh to act as a SOCKS server, or use OpenVPN). Beside guiding you through configuring Firefox to use SOCKS for everything (i.e. not forget DNS), they also mention tools to socksify any application (we recently had this issue coming up here). Do not skip the chapter on Circumvention and Safety.In particular, using a machine myvps that you can ssh into; setting up a local (say, on laptop) SOCKS5 Proxy (laptop --> myvps --> internet) can be done via running ssh -D 1080 accountname@myvpson the laptop. After the connection is established, you can use 127.0.0.1:1080 on laptop as a SOCKS5 server. (I'm referring to OpenSSH here, but PuTTY can do this, via Connection, SSH, Tunnels in the configuration.) In Firefox you can tunnel all traffic through the SOCKS server via Preferences - Advanced - Network - Settings...: set this to manual, SOCKS host 127.0.0.1, port 1080; and the network.proxy.socks_remote_dns setting in about:config (just enter this as the URL), set this to true.
I live in Iran and, due to its strict censorship, we all have extreme difficulty accessing normal sites and services. So I thought it would be a great idea to set up a proxy server on my VPS, so that I could find a way around this censorship. Can anyone show me a step-by-step working tutorial on this matter? Or guide me in the right direction? I will be grateful.
How can I setup a SOCKS4 or 5 proxy on CentOS 5.8?
That's about it, but you've inverted home and office. The point is that the office firewall rejects outgoing connections other than web traffic. But since HTTPS traffic and SSH traffic are both encrypted, it can't easily distinguish between them, so the firewall just blocks outgoing connections to ports other than 443 (the standard HTTPS port) and probably has an HTTP proxy that only allows HTTP traffic on port 80 and possibly other ports. So the idea is to run an SSH server at home, listening on port 443, and have it relay the data to anywhere.

SOCKS is a generic protocol to relay TCP connections. It can relay web traffic, SSH connections, and many other protocols. You would configure your web browser to use a SOCKS proxy, not an HTTP proxy. Major web browsers support SOCKS. If you want to use an application that doesn't support SOCKS proxying, you can run it under tsocks.

It's actually possible to distinguish SSH from HTTPS, because the first few bytes (before the traffic can be encrypted, while the two endpoints are still negotiating the options and the session key) are distinguishable. Not all corporate firewalls do that. You can use stunnel to embed the SSH session in HTTPS.
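If the firewall does inspect those first bytes, a rough stunnel sketch would look like the following. The hostname, port numbers and certificate path are placeholders, and the exact config keys should be checked against your stunnel version:

; /etc/stunnel/stunnel.conf on the home server: terminate TLS on 443, hand the payload to sshd
cert = /etc/stunnel/stunnel.pem
[ssh-over-tls]
accept = 443
connect = 127.0.0.1:22

; /etc/stunnel/stunnel.conf on the office machine: wrap the ssh client connection in TLS
[ssh-over-tls]
client = yes
accept = 127.0.0.1:2222
connect = home.example.com:443

Then the SOCKS proxy is started through the local stunnel endpoint:

ssh -D 1337 -p 2222 user@127.0.0.1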
I was going through this post and couldn't quite follow what is being implied there.

A great "feature" I use every day at work: The ability to have SSH listen on port 443 so I can create a tunnel which bypasses my work firewall, allowing me to run a local SOCKS proxy tunneled through SSH to my internet facing Linux server. I can completely ignore my corporate firewall.

From what I understood, port 22 is blocked for SSH access in the office machine. How will I be connecting to the remote office machine from my home machine? From my home machine, will I be using a command something like

ssh -D 1337 -f -C -q -N user@office -p 443

and then changing the proxy settings in my home machine's Firefox to access the office network from my home machine?
run a local SOCKS proxy tunneled through SSH
The best I can think of is running pgrep in a loop. If you have more than one chromium running, you can isolate your script in a separate PID namespace with unshare or firejail, for instance.
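A sketch of that approach, reusing the commands from the question below. It assumes this is the only chromium process you have running (and note the process may be named chromium-browser on some distributions):

ssh -f -N -M -S ~/socket -L 4771:localhost:4771 -p 1234 foo.bar.com
chromium --proxy-server="socks://localhost:4771"
# chromium may return immediately if another instance already exists, so poll for it instead
while pgrep -x chromium > /dev/null; do
    sleep 2
done
ssh -S ~/socket -O exit foo.bar.com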
I'm trying to set up a script that I can call up easily in a WM. The idea is to establish a socks tunnel via ssh to a known-good server and then start chromium with the appropriate environment variables...Then, wait until that instance of chromium exits and then unbind the port. The last part is important, because if left open, the script will fail to run again because the tunnel is still up. However, when I run it, it just establishes the connection, starts chromium and then closes the connection...It doesn't wait for Chromium to close before continuing. I've tried adding &, && and the wait function, but I think Chromium returns something after successful execution, which is triggering my script to continue. ssh -f -N -M -S ~/socket -L 4771:localhost:4771 -p 1234 foo.bar.com chromium --proxy-server="socks://localhost:4771" && wait ssh -S socket -O exit foo.bar.com I'm using Arch and ZSH as my shell.
wait for chromium to exit before continuing a shell script
The OP (that's me) didn't take this OpenVPN FAQ seriously enough: One of the most common problems in setting up OpenVPN is that the two OpenVPN daemons on either side of the connection are unable to establish a TCP or UDP connection with each other. This is almost [always] a result of: ... A software firewall running on the OpenVPN server machine itself is filtering incoming connections on port 1194 [here 5000-5007]. Be aware that many OSes will block incoming connections by default, unless configured otherwise. There's no problem with OpenVPN. I just neglected to create a firewall rule for WAN in the pfSense VM that's running the OpenVPN servers, to provide access for the hidden-service proxy in the Tor-gateway pfSense VM. How embarrassing. But this question should remain, I think, in case others make the same dumb mistake that I did.
I'm experimenting with OpenVPN connections routed through Tor, using pairs of Tor gateway and OpenVPN-hosting VMs. On the server side, link local ports are forwarded to Tor hidden-service ports on the associated gateway VM. On the client side, OpenVPN connects through socks proxies on the associated Tor gateway VM. The above setup works using Debian 7 for all Tor gateway and OpenVPN-hosting VMs. I'm using Whonix, which has been updated to OpenVPN 2.3.2 (built on 2013-09-12). Server-client ping is about 1200 msec. However, the setup does not work using pfSense 2.1 as Tor gateway and OpenVPN-hosting VMs on the server side. pfSense 2.1 also has OpenVPN 2.3.2 (built on 2013-07-24). For both Debian and pfSense clients, I see: TCP connection established with [AF_INET]192.168.0.10:9152 recv_socks_reply: TCP port read timeout expired: Operation now in progress (error ...)This is the same error reported in Debian bug #657964 for openvpn version 2.2.1-3: "openvpn: Can't connect to a VPN using SOCKS proxy". It's also been reported in OpenVPN bug #328 for openvpn version 2.3.2: "openvpn client gives up instead of retrying when proxy server is slow". However, this may not be the same bug. The problem here may be latency in forwarding OpenVPN server ports through Tor hidden services, rather than latency in Tor SOCKS proxies on the client side. Or it may be both. In any case, I find that OpenVPN 2.3.2 servers fail with this client error in pfSense 2.1, but not in Debian 7. Perhaps the latest package in the Debian 7 repository includes bug fixes that were issued since the pfSense 2.1 build. How can I configure OpenVPN to wait for slow SOCKS proxies?
How can I configure OpenVPN to wait for slow SOCKS proxies?
How about using two different configuration files for tsocks? According to this manpage, tsocks will read its configuration from the file specified in the TSOCKS_CONF_FILE environment variable. So you could split your tsocks.conf into tsocks.1081.conf and tsocks.1082.conf and then do something like this (bash syntax):

$ TSOCKS_CONF_FILE=/path/to/tsocks.1081.conf tsocks wget http://www.google.com
$ TSOCKS_CONF_FILE=/path/to/tsocks.1082.conf tsocks wget http://www.google.com

Note: The manpage has a typo and lists the environment variable as TSOCKS_CONFFILE - missing an underscore.
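If typing the variable gets tedious, a small wrapper is easy (the paths are placeholders):

alias tsocks1081='TSOCKS_CONF_FILE=/path/to/tsocks.1081.conf tsocks'
alias tsocks1082='TSOCKS_CONF_FILE=/path/to/tsocks.1082.conf tsocks'

tsocks1081 wget http://www.google.com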
According to the answer to this question, I have my /etc/tsocks.conf containing these lines:

path {
    server = localhost
    server_port = 1081
    reaches = <ip-address-of-server-b>/32
}
path {
    server = localhost
    server_port = 1082
    reaches = <ip-address-of-server-d>/32
}

and I have run these two commands:

ssh -fND :1081 server-a
ssh -fND :1082 server-b

Now I want to use tsocks to do a wget of a page, first using the socks service listening on port 1081 and then using the one listening on port 1082. If I had only one service, I know I can do that with this command:

tsocks wget http://www.google.com

Now that I have more than one tsocks service, how can I do that? Can I provide, for example, the forwarding port through an option? I can't find anything in the man pages of tsocks. I want something like this:

tsocks --forwarding_port=1081 wget http://www.google.com
tsocks --forwarding_port=1082 wget http://www.google.com
how to specify the forwarding port when using multiple tsocks services?
I recommend changing your ProxyCommand from using nc to use -W. For example: ProxyCommand ssh -l USERNAME BASTIONHOST1 -W %h:%pThat has fewer requirements for the bastion host, so it is less likely to break in case the administrator decides to change how the bastion host is configured. I don't think there is any way to fall through between different pac files. But you can more or less just concatenate the two pac files - with one minor caveat. The entry point would exist in both files. You would need to rename the entry point in the original pac file and in your own additions you can make calls to the renamed version of the function. Historically the entry point was a function named FindProxyForURL. Some newer browsers have extensions which are activated by using a differently named entry point.
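A sketch of the concatenation idea; the function and domain names are placeholders, and the corporate file's entry point gets renamed so the new one can fall back to it:

// renamed entry point of the original corporate proxy.pac
function CorporateFindProxyForURL(url, host) {
    // ... original corporate rules, unchanged ...
    return "PROXY corpproxy.example.com:8080";
}

// new entry point: send the lab domains through the SOCKS tunnels, everything else to the old logic
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".domain1.org")) return "SOCKS5 127.0.0.1:1080";
    if (dnsDomainIs(host, ".domain2.org")) return "SOCKS5 127.0.0.1:1081";
    return CorporateFindProxyForURL(url, host);
}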
Background

I work on a corporate network that is behind a proxy server. I also work with some remote sites that I am able to access via a bastion / jump host ssh proxy. In my ~/.ssh/config I have a proxy configuration for my SSH tunnels that allows jumping through our bastion hosts in order to reach the remote labs:

Host *.remoteLab1
  ProxyCommand ssh -l USERNAME BASTIONHOST1 nc %h %p

Host *.remoteLab2
  ProxyCommand ssh -l USERNAME BASTIONHOST2 nc %h %p

I use both OSX and Linux, so I assume the commands are more or less the same.

Current Solution

My current solution is less than ideal. I basically make a socks connection to one of the remote labs such as:

ssh -D 1080 remoteLab1
ssh -D 1081 remoteLab2

Then in both RealVNC and Chrome I change the proxy server to localhost:1080 / localhost:1081. In Chrome I have a plugin that allows me to do this, and in VNC it's manual. As both of these remote labs have a unique domain, I was wondering if there is an easier way, playing with routing tables, to send all traffic through these socks proxies based on IP address.

Requested Solution

GIVEN: A socks5 tunnel is open on port 1080
GIVEN: A socks5 tunnel is open on port 1081

Requirements
- addresses of domain1.org go through 1080
- addresses of domain2.org go through 1081
- fall-through case: everything else goes through the standard proxy server

Nice-to-have's
- The solution is not permanent; it is enabled by a script in conjunction with turning on one of the socks tunnels
- Is it possible to also map specific IP addresses, as opposed to domains, through one of the proxy servers?

Ideas

I'm not really sure at all where to start with all of this. One solution I saw somewhere was to use a proxy.pac, however the corporate network already has a proxy.pac and I wasn't sure if there is a way to do a fall-through pac where, if a host is not in my custom .pac, the settings in the corporate .pac are used. Using the iptables or route command is another idea, however both of those are a little out of my knowledge zone. Setting up local loopbacks or something.
How to route traffic through different proxy servers based on destination
I found an answer to my question, and for visibility purposes I think responding is better than editing. So I wanted to use Tor and a SOCKS5 proxy at the same time using proxychains. There are two ways to achieve that:

With a dante server

Dante server is a SOCKS5 server (and client) with lots of options I don't know yet but will learn soon, I hope. So first you install dante-server:

wget https://www.inet.no/dante/files/dante-1.4.1.tar.gz
tar xvf dante-1.4.1.tar.gz
cd dante-1.4.1
./configure
make && make install

# This is my launch script, you can use yours obviously
wget https://dl.dropboxusercontent.com/u/71868038/sockd
mv sockd /etc/init.d/sockd
chmod +x /etc/init.d/sockd
update-rc.d sockd defaults

wget https://dl.dropboxusercontent.com/u/71868038/sockd.conf
mv sockd.conf /etc/

You can edit your conf as you want, for example to block all requests except those from your IP address. More info here. Don't forget to change the IP address of your server in the config file!

Now that your SOCKS5 server is ready and works, you can use it along with Tor thanks to proxychains. Just add your server to the config file:

strict_chain
proxy_dns
tcp_read_time_out 15000
tcp_connect_time_out 8000
socks4 127.0.0.1 9050
socks5 1.2.3.4 1080

Start tor and enjoy:

service tor start
proxychains iceweasel

With an SSH tunnel

Simpler solution. You will need tor, torsocks and ssh:

apt-get install torsocks
service tor start
torsocks ssh -NfD 1080 1.2.3.4

Configuration of proxychains:

strict_chain
proxy_dns
tcp_read_time_out 15000
tcp_connect_time_out 8000
socks5 127.0.0.1 1080

What you do is tunnel an SSH connection to your server after going through the Tor service (torsocks does that; I don't really know how it works yet, I'll edit if I figure it out). And then:

proxychains iceweasel

If someone needs more in-depth explanations just ask ;)
I'm trying to set up proxychains on Kali like this:

User > Tor > SOCKS5 > Out

I've created my SOCKS5 server with danted running on port 1080. I set up an SSH connection on my Kali distribution:

ssh -NfD 1080 user@address

and I'm able to connect to the SOCKS5 server without trouble. Same when I'm trying to connect to the Tor network. But when I try to connect to Tor AND to the SOCKS5 server, I get a denied error:

|S-chain|-<>-127.0.0.1:9050-<>-127.0.0.1:1080-<--denied

So I tried to allow connections from any IP address in dante; I'm not sure if it's right:

logoutput: /var/log/dante.log

internal: 127.0.0.1 port = 1080
external: venet0
method: username none
user.notprivileged: nobody

client pass {
    from: 0.0.0.0/0 port 1-65535 to: 0.0.0.0/0
    protocol: tcp udp
}
pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    protocol: tcp udp
}

Any idea where it could come from?
Proxychains, Tor, SSH and Danted. Connection denied
I was answering a similar question not long ago. I didn't try it, but this one should work for you:

sshfs -o ProxyCommand="/bin/nc.openbsd --proxy localhost:3128 \
    --proxy-type socks5 %h %p" sergius@$host: /home/sergius/work/SSHFS/$host/

The SSHOPT=VAL is just the format of the option you want to use; you need to replace it with the specific key-value pair. Also, you need to tell nc what type of proxy that is.
I have some bridge host, which allows access to protected network. I connect to it using this command: ssh sergius@bridge_host -D 3128Thus, I can turn on socks proxy in browser and it works. I can login to hosts on that network with this command: ssh -o 'ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p' sergius@any_internal_hostIt works properly, but I can't mount via sshfs any of these hosts. Probably, I can't get how to use this ssh options in sshfs command. I tried even so silly tricks: sshfs -o "ssh_command=\"ssh -o 'ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p'\"" sergius@$host /home/sergius/work/SSHFS/$host/ sshfs -o 'SSHOPT=ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p' sergius@$host: /home/sergius/work/SSHFS/$host/ sshfs -o 'port=3128' sergius@$host: /home/sergius/work/SSHFS/$host/One command returns "Connection reset by peer", another - unknown option `SSHOPT=ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p' I didn't manage to find any info on the web. Please, help. === I feel so stupid, but still can't understand why I got such error: sshfs -d -o sshfs_debug -o LogLevel=DEBUG3 -o ProxyCommand="/bin/nc.openbsd --proxy localhost:3128 --proxy-type socks5 %h %p" sergius@$host:~ /home/sergius/work/SSHFS/$host/ SSHFS version 2.4 FUSE library version: 2.9.0 nullpath_ok: 0 nopath: 0 utime_omit_ok: 0 executing <ssh> <-x> <-a> <-oClearAllForwardings=yes> <-oLogLevel=DEBUG3> <-oProxyCommand=/bin/nc.openbsd --proxy localhost:3128 --proxy-type socks5 %h %p> <-2> <sergius@dev-host003> <-s> <sftp> debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Executing proxy command: exec /bin/nc.openbsd --proxy localhost:3128 --proxy-type socks5 dev-host003 22 debug1: permanently_drop_suid: 1000 debug1: identity file /home/sergius/.ssh/id_rsa type -1 debug1: identity file /home/sergius/.ssh/id_rsa-cert type -1 debug1: identity file /home/sergius/.ssh/id_dsa type -1 debug1: identity file /home/sergius/.ssh/id_dsa-cert type -1 debug1: identity file /home/sergius/.ssh/id_ecdsa type -1 debug1: identity file /home/sergius/.ssh/id_ecdsa-cert type -1 /bin/nc.openbsd: invalid option -- '-' usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length] [-P proxy_username] [-p source_port] [-q seconds] [-s source] [-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol] [-x proxy_address[:port]] [destination] [port] ssh_exchange_identification: Connection closed by remote host read: Connection reset by peer================= Wow! I managed to make it work!!! Great thanks for clarification with options. I read all allowed options and make it via "-x proxy_adress[:port]": sshfs -o ProxyCommand="/bin/nc.openbsd -x localhost:3128 %h %p" sergius@$host:/home/sergius /home/sergius/work/SSHFS/$host/
HOWTO: sshfs via socks proxy
Seems like if you can (from A) ssh to C, you can do: user@Alpha:~$ ssh -L1234:localhost:1234 Charlie user@Charlie:~$ ssh -Dlocalhost:1234 Delta... and at that point, you can have Firefox use localhost:1234 as a SOCKS proxy. The local SSH will proxy that over the tunnel to C, where that ssh is listening as a SOCKS server and will forward it to D. But really, I'd push back on your client: if they expect you to access sites from A, they ought to provide a sane method to do so. Like a VPN, for example.
This is slightly different than the other SSH questions I have seen on here, so here it goes. I have a complex setup for accessing a web application and unfortunately there is no way around it. Here is the scenario and the systems involved (IP addresses anonymized for obvious reasons):

System Alpha
System Bravo
System Charlie
System Delta

I need to communicate with a website available to System Delta from System Alpha. The problem is, I cannot talk to System Delta directly. Currently, I can SSH into System Bravo, and I can SSH into System Charlie utilizing a proxy command which uses System Bravo as the proxy. To illustrate this, here is my SSH config:

Host system.bravo
  User bob
  Hostname 1.1.1.1
  ForwardAgent yes

Host system.charlie
  User bob
  ForwardAgent yes
  ProxyCommand ssh -q system.bravo nc localhost 1081

System Charlie is accessible to System Alpha in this way because it has created a reverse SSH tunnel to System Bravo on port 1081. This is done because the firewall sitting between System Bravo and System Charlie ONLY allows SSH outbound to System Bravo. I know this is annoying, but it is a client requirement. System Charlie can SSH into System Delta without issue.

My problem is, how do I go about setting up a usable SOCKS proxy with SSH so that System Alpha can access a website only accessible from System Delta? Normally I would do it like this:

ssh -2 -C -D 4001 -L 4002:localhost:4002 -L 4003:localhost:4003 user@system.bravo
ssh -2 -C -D 4002 -L 4003:localhost:4003 user@system.charlie
ssh -2 -C -D 4003 user@system.delta

Then I would simply point Firefox to localhost:4003 and that would be my working proxy to access the website. However, because of the weird setup between System Bravo and System Charlie this will not work.

*** It was pointed out to me that a diagram displaying the communications would make this easier. So here it is.

A full list of possible connections:

        ----->        --X-->         ----->
  Alpha        Bravo         Charlie        Delta
        <--X--        <-----         <--X--

The connection between Alpha, Bravo and Charlie as it is currently configured:

        ProxyCommand ssh -q system.bravo nc localhost 1081
        -------------------------------------------------->
  Alpha                  Bravo                      Charlie
        <----------------------------------------
                AUTOSSH -R 1081:localhost:22

  ----->  = An allowed connection via SSH
  --X-->  = A blocked connection, only reverse SSH possible

Does anyone have a solution on how I can accomplish this, keeping in mind the required proxy command and the reverse SSH tunnel between System Bravo and System Charlie? Thanks so much.

PS: I hope my diagram helps, sorry that it is not a real image
SSH Tunnel Between Multiple Hosts
The -D option enables a SOCKS4/5 server. It is not identical to a normal HTTP/FTP proxy and therefore needs to be interfaced with differently. A lot of browsers support SOCKS proxies, but usually not via an http_proxy/ftp_proxy environment variable. You can wrap programs which do not support SOCKS directly with tsocks. See also https://superuser.com/questions/262956/how-to-invoke-a-command-using-specific-proxy-server and How to download a file through an SSH server?
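As an illustration of the tsocks route in this particular setup, run on server 2 and assuming the ssh -D 9001 root@server1 tunnel from the question is already up (a sketch, not a drop-in config):

# /etc/tsocks.conf on server 2: point at the dynamic forward opened by ssh -D 9001
server = 127.0.0.1
server_port = 9001
server_type = 5

# package tools that speak plain TCP can then be wrapped
tsocks apt-get update
tsocks apt-get install <package>

One caveat: tsocks does not redirect DNS lookups, so name resolution still has to work on server 2 itself, or the repositories have to be reachable by IP.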
I have this situation:

server 1: public ip x.x.x.x, private ip 192.168.0.1
server 2: private ip 192.168.0.10

Server 1 can reach the internet with both interfaces:

ping -i x.x.x.x www.google.com
www.google.com is alive

ping -i 192.168.0.1 www.google.com
www.google.com is alive

Server 2 can reach only server 1, via ssh. I need to update some packages and install some new ones on this server (all shell commands). I already tried using:

ssh -D 9001 root@server1

and exporting http_proxy/ftp_proxy with localhost:9001 on server 2, but it doesn't seem to work. How can I use the public connection of server 1?
Tunnel HTTP traffic using another machine via SSH
You can write a proxy.pac (Proxy Auto-Config) file/script and configure that in your browser to direct which proxy (if any) to use when. It would look something like:

function FindProxyForURL(url, host) {
    var socksProxy = "SOCKS ip.of.sock.proxy:port";
    var workProxy = "PROXY ip.of.work.proxy:port";
    var noProxy = "DIRECT";

    if (shExpMatch(host, "*.mybank.example.com")) return noProxy;
    if (shExpMatch(host, "*.work.example.net")) return workProxy;
    if (host == "other.work.example.net") return workProxy;
    return socksProxy;
}

Of course enter the real IPs and ports of the proxies, and use the correct hostnames. You can specify the path to this file by starting e.g. Google Chrome with --proxy-pac-url=file:///path/to/proxy.pac; Chrome on Linux doesn't allow you to directly enter this in the configuration.
I use different proxies for different purposes. I use a shadowsocks proxy for my general-purpose web surfing. For bank websites, I disable the proxy. For accessing some websites related to my work, I have to use an ssh tunnel proxy. So I always have a Network Proxy settings GUI open and am constantly switching between different socks ports when I want to use different websites. Now I want to define some rules to make proxy switching seamless. Is there any straightforward way to do it?
managing multiple proxies in linux