| id (5–27 chars) | question (19–69.9k chars) | title (1–150 chars) | tags (1–118 chars) | accepted_answer (4–29.9k chars, nullable ⌀) |
---|---|---|---|---|
_codereview.140071 | In an attempt to hide from the sun outside, I gave the community challenge Rainfall a try.Code here, specific questions below.import java.util.Scanner;import java.util.ArrayList;import java.util.HashMap;import java.util.stream.Collectors;public class RainfallChallenge{ public static void main(String[] args) { Scanner input = new Scanner(System.in); // building the farm int size = input.nextInt(); Farm farm = new Farm(size); // setting heights, creating fields for (int x = 0; x < size; x++) for (int y = 0; y < size; y++) farm.putField(x, y, new Field(input.nextInt())); // skynet addon: knows every field as key and maps to the list of all fields of its basin HashMap<Field, ArrayList<Field>> basins = new HashMap<Field, ArrayList<Field>>(); for (int x = 0; x < size; x++) for (int y = 0; y < size; y++) groupFields(basins, farm.getField(x, y)); // from lists of fields to sorted list of basin sizes System.out.println( new StringBuilder( basins.values() .stream() .distinct() // set of field lists .mapToInt(b -> b.size()) // to their number of fields .sorted() .boxed() // in bubble wrap .map(String::valueOf) // to String stream .collect(Collectors.joining( ))).reverse()); // desired order } private static ArrayList<Field> groupFields(HashMap<Field, ArrayList<Field>> basins, Field field) { ArrayList<Field> basin; // end recursion: revisit field if(basins.containsKey(field)) { return basins.get(field); } // end recursion: new basin else if(field.getLowestNeighbor() == field) { basin = new ArrayList<Field>(); } // recursion: find basin of lowest neighbor else { basin = groupFields(basins, field.getLowestNeighbor()); } basin.add(field); basins.put(field, basin); return basin; }}class Farm{ private Field[][] fields; public Farm(int size) { fields = new Field[size][size]; } public void putField(int x, int y, Field field) { fields[x][y] = field; // check all 4 neighbors field.setNeighbor(getField(x + 1, y)); field.setNeighbor(getField(x - 1, y)); field.setNeighbor(getField(x, y + 1)); field.setNeighbor(getField(x, y - 1)); } public Field getField(int x, int y) { try { return fields[x][y] != null ? fields[x][y] : NullField.instance; } catch (IndexOutOfBoundsException e) { return NullField.instance; } }}class Field{ private int height; private Field lowestNeighbor; public Field(int height) { this.height = height; lowestNeighbor = this; } public void setNeighbor(Field neighbor) { if (neighbor.height < lowestNeighbor.height) { lowestNeighbor = neighbor; } else if (neighbor.lowestNeighbor.height > height) { neighbor.lowestNeighbor = this; } } public Field getLowestNeighbor() { return lowestNeighbor; }}class NullField extends Field{ public static final NullField instance = new NullField(); private NullField() { super(Integer.MAX_VALUE); }}I got lazy and put everything into one file. General reviews are appreciated, as always.However, there are two things I want to specifically ask about:I'm new to Stream. // from lists of fields to sorted list of basin sizesSystem.out.println( new StringBuilder( basins.values() .stream() .distinct() // set of field lists .mapToInt(b -> b.size()) // to their number of fields .sorted() .boxed() // in bubble wrap .map(String::valueOf) // to String stream .collect(Collectors.joining( ))).reverse()); // desired orderI think this code does quite a lot (values of a HashMap as a Set, to integers, sorted, to String) in a compact yet readable way and I see the value of this language feature. Is there anything bad? I had to use StringBuilder to reverse the result. 
This seems bad. The Stream cannot be reversed as it's considered endless in general. There are workarounds for when I know it's not, but they aren't very readable and not self documenting at all. Ideally, I want to tell .sorted() if I want the order to be ascending or descending. But there's no parameter for that. How can this be done more elegantly?The use of null object pattern to build the grid. The problem I'm trying to solve here is that not every Field has all 4 neighbors. The first one at (0/0) has none at (-1/0) and (0/-1). Also, during construction, future fields are not yet available. The idea behind the pattern (as far as I understand it) is to always have an object, no matter what. If the logic of the program does not allow it, return a special object that acts neutrally in the calculation of the program.public Field getField(int x, int y){ try { return fields[x][y] != null ? fields[x][y] : NullField.instance; } catch (IndexOutOfBoundsException e) { return NullField.instance; }}This way, there's always an object returned by getField and never null. Does it make sense to apply this pattern here? I only heard of use cases where it might substitute a an uninitialised Object to prevent checks for null in the actual code. I thought the fields out of bounds or not yet created are good examples for that. Is the implementation with exceptions ok? This will occur quite often during execution.class NullField extends Field{ public static final NullField instance = new NullField(); private NullField() { super(Integer.MAX_VALUE); }}I made this a singleton because I only want to create one such object. There's no second null either. Other than that I could have provided @Overrides for the methods of the Field class, for example:public void setNeighbor(Field neighbor){ // pffft, whatever}But I never call those on the nullobject, so I omitted them. Is this bad practise? | Tomorrow: rainy with a chance of null objects | java;object oriented;stream;null;community challenge | null |
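On the asker's specific ordering question: `Stream.sorted()` takes no ascending/descending flag, but the overload `sorted(Comparator)` does, so keeping the elements boxed (i.e. skipping `mapToInt`) lets you pass `Comparator.reverseOrder()` and drop the `StringBuilder.reverse()` workaround. The sketch below is illustrative only — the `List<List<Integer>>` stands in for the asker's `basins.values()`, and on the real `HashMap<Field, ArrayList<Field>>` a `.distinct()` call would still be needed before mapping to sizes.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class DescendingBasinSizes {
    public static void main(String[] args) {
        // Hypothetical stand-in for basins.values(): three basins of sizes 2, 1 and 3.
        List<List<Integer>> basins = Arrays.asList(
                Arrays.asList(1, 2), Arrays.asList(3), Arrays.asList(4, 5, 6));

        String sizesDescending = basins.stream()
                .map(List::size)                       // each basin -> its size
                .sorted(Comparator.reverseOrder())     // descending order, no StringBuilder.reverse()
                .map(String::valueOf)
                .collect(Collectors.joining(" "));

        System.out.println(sizesDescending);           // prints: 3 2 1
    }
}
```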
_unix.295414 | I'm trying to SSH from Debian (Jessie, installed as chroot environment in Chrome OS) to Arch on the same network, and I keep getting an error saying the connection was refused because of a public key error. I have absolutely no experience with SSH. I have a suspicion that it's a firewall rejection issue, but I really don't know what to do. | Completely new to SSH | debian;ssh;arch linux;iptables;openssh | Option 1Enable password authentication on Server (your computer running arch-linux)Edit the SSH configuration file on the server to allow password authentication.[user@arch]$ sudo nano /etc/ssh/sshd_configPasswordAuthentication yesChallengeResponseAuthentication yesUse your favorite text editor instead of nano, such as vim. To use vim, once it opens the file, hit i to switch to insert mode make your edits then press esc to leave insert mode and type :wq and hit enter to write changes and quit.Note: sudo is not part of the initial installation of arch-linux. As an alternative login to root with su.Now if key authentication fails you will be prompted to enter your password for the user account on the arch computer.Option 2Generate key and send to Server[you@debian]$ ssh-keygen[you@debian]$ ssh-copy-id user@arch-hostname |
_softwareengineering.243096 | When using Akka, CQRS style, is still there a place for Entities?Or does everything now go to Aggregates, implemented as Actors + Value Objects.I notice that most entities are written as mutable objects with side effects - this doesn't seem to jive well with Actor based style. | Entities (DDD) when using CQRS on Akka | domain driven design;cqrs;akka | null |
_unix.252369 | I start most of my virtual machines using VirtualBox. Most of the time I configure them with bridged networking, but this time I would like to set one up with a NAT network. I added a virtual network in VirtualBox (10.0.2.0/24) and launched the box. Now I would like to connect to it from my host system, but I do not know how to add the virtual interface and set up my IP with the ip command (Arch Linux). A related question: how can I set up the virtual machine network (still in VirtualBox) so that the box can only be contacted from my host system? | How to connect to virtual network on linux | networking;arch linux;virtualbox | null |
_unix.367590 | I am trying to Build and install a Raspberry Pi RT Preempt Linux Kernel.These are the steps that I had followed:I installed the pre-compiled kernel kernel-4.4.9-rt17.tgzI downloaded Raspberry Pi kernel sources and applied the Real Time patch-4.4.9-rt17.patch.gz.(I followed this link for installation )I configured my kernel for Raspberry Pi3,Model B using :export KERNEL=kernel7make bcm2709_defconfigI configured the kernel to support Fully Preemptible Kernel (RT) usingmake -j$(nproc) menuconfigI build the kernel usingmake -j$(nproc) zImage but I received the error:In file included from arch/arm/kernel/asm-offsets.c:14:0:./include/linux/sched.h:2040:32: error: expected identifier or ( before & tokendefine tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed) ^ ./include/linux/sched.h:3679:37: note: in expansion of macro tsk_cpus_allowedstatic inline const struct cpumask *tsk_cpus_allowed(struct task_struct *p) ^ In file included from arch/arm/kernel/asm-offsets.c:14:0:./include/linux/sched.h:3687:19: error: redefinition of tsk_nr_cpus_allowed static inline int tsk_nr_cpus_allowed(struct task_struct *p) ^ In file included from arch/arm/kernel/asm-offsets.c:14:0: ./include/linux/sched.h:2042:19: note: previous definition of tsk_nr_cpus_allowed was hereDo you have any idea to solve this? I don't know what I am doing wrong. I would really appreciate any help. | Raspberry Pi RT Preempt Linux Kernel Build Error | linux kernel;compiling;raspberry pi;patch;real time | null |
_codereview.155009 | I wrote the following to output, at the type level, the sum, Nat, of an input HList:trait HListSum[L] { type Out}object HListSum { type Aux[L, O] = HListSum[L] { type Out = O } def apply[L <: HList](implicit ev: HListSum[L]): ev.type = ev implicit def hListSumInductive[H <: Nat, L <: HList, S <: Nat, T <: Nat]( implicit rest: HListSum.Aux[L, T], all: Sum.Aux[H, T, S]): HListSum.Aux[H :: L, S] = new HListSum[H :: L] { type Out = S } implicit val hlistSumHNil: HListSum.Aux[HNil, _0] = new HListSum[HNil] { type Out = _0 }}Testing:import net.HListSumimport shapeless._import nat._scala> HListSum[_1 :: _2 :: _3 :: HNil]res0: net.HListSum.Aux[shapeless.::[shapeless.nat._1,shapeless.::[shapeless.nat._2,shapeless.::[shapeless.nat._3,shapeless.HNil]]],this.Out] = net.HListSum$$anon$4@132c4879scala> val expected: res0.Out = _6expected: res0.Out = Succ()scala> HListSum[_0 :: _0 :: _1 :: HNil]res1: net.HListSum.Aux[shapeless.::[shapeless.nat._0,shapeless.::[shapeless.nat._0,shapeless.::[shapeless.nat._1,shapeless.HNil]]],this.Out] = net.HListSum$$anon$4@a7b83a8scala> val expected2: res1.Out = _1expected2: res1.Out = Succ()scala> val expected2: res1.Out = _3<console>:19: error: type mismatch; found : shapeless.nat._3 (which expands to) shapeless.Succ[shapeless.Succ[shapeless.Succ[shapeless._0]]] required: res1.Out (which expands to) shapeless.Succ[shapeless._0] val expected2: res1.Out = _3 ^Please critique my code. Also, am I using induction to determine the sum for the non-HNil case? | Sum of HList (where all elements are Nat's) | scala | First for some little things. I'd put bounds on the L type parameter and the Out type member to capture the facts that you know will always be true about them:trait HListSum[L <: HList] { type Out <: Nat}This makes the intent clearer to readers and allows you to use Out in places you couldn't otherwisee.g. this simple example wouldn't compile without the bound:scala> def foo[N <: Nat]: Unit = ()foo: [N <: shapeless.Nat]=> Unitscala> def bar[L <: HList](implicit sum: HListSum[L]): Unit = foo[sum.Out]bar: [L <: shapeless.HList](implicit sum: HListSum[L])UnitI'd also change the return type of apply to be a little more specific (or rather less specific, I guessmore focused on what's relevant, in any case):def apply[L <: HList](implicit ev: HListSum[L]): Aux[L, ev.Out] = evThis is mostly for legibilityapart from how the type is printed in the console I'm not sure off the top of my head whether there's any real difference between this and the ev.type version.I'd make two changes to the hListSumInductive implementation (apart from changing the case of the l in the name to be consistent with hlistSumHNil):implicit def hlistSumInductive[H <: Nat, T <: HList, TS <: Nat](implicit rest: HListSum.Aux[T, TS], all: Sum[H, TS]): HListSum.Aux[H :: T, all.Out] = new HListSum[H :: T] { type Out = all.Out}The first is that I've renamed L to T and T to TS, since using H and T to name the head and tail types of an hlist is a pretty standard convention. 
More significantly, I've dropped the S type parameter (representing the total sum) altogether, since we don't need a type parameter to refer to that type (all.Out works just fine, since we don't need to refer to it in other implicit parameters).One other note about this method: I'm not sure whether you chose Sum[H, TS] over Sum[TS, H] intentionally, but it's the right thing to do if you care about compile times (and you should in a case like this), since the instance will be resolved more quickly when the larger number is on the right, and in this operation the larger number will be TS more often.One other tiny thingI'd probably rename O in Aux:type Aux[L <: HList, Out0] = HListSum[L] { type Out = Out0 }Mostly because O is easy to confuse with 0, and because I personally tend to use the 0 suffix as a convention in these cases. That's entirely a matter of taste, though.Lastly, if I were writing this for a library I'd probably use a sealed abstract class instead of a trait and would make the object final, but neither change is terribly important. |
_unix.272621 | Every evening cron hibernates my PC for the night. Now, since there is a chance that I won't be the person turning the computer on next morning, I would like to make sure that all the active sessions are properly secured behind screensavers prior to hibernation. Regular su nor similar methods of running programs with other users' permissions do not seem to work: apparently the programs or scripts do not have connections with the proper KDE sessions. I remember to manage it under KDE3 (although I can't recall the solution details), but since I migrated to KDE4 years ago the method did not work any more.Would anybody put me in the right direction? KDE 4.14.9NAME=openSUSEVERSION=13.2 (Harlequin)VERSION_ID=13.2PRETTY_NAME=openSUSE 13.2 (Harlequin) (x86_64)ID=opensuseANSI_COLOR=0;32CPE_NAME=cpe:/o:opensuse:opensuse:13.2BUG_REPORT_URL=https://bugs.opensuse.orgHOME_URL=https://opensuse.org/ID_LIKE=suseThank you very much for your help. | How can root lock KDE sessions of all the other users | linux;kde;opensuse;kde4 | null |
_datascience.18898 | I have created chatbot on Cornell movie dataset and it's working fine. I have trained chatbot application up to global step 330000. I am using tensorflow library. I am using ||source|| chatbot. I am also receiving output fine. Then I created this new dataset file by modifying original dataset and i wish to train chatbot application with updated files. Now should I delete previously saved checkpoints and saved data and start training from zero or should i train from 330000 onwards without worrying changes in dataset. I wish to modify database daily basis and train on modified database. Please kindly let me know if you have a suggestion for me as I am stuck on this issue. I will really appreciate if you help me with this issue. | how to train tensorflow chat application on updated dataset everyday | machine learning;tensorflow | The distribution of your data won't change too much over a smaller period of time, retraining from scratch seems like a waste. Just add the new data to your set and do one or more epochs on all your data, not just the new ones. It can readjust the weights a little bit for changes in the distribution which is exactly what you want. Make sure you are not overreliant on early stopping as regularization method against overfitting because you continuously train like this, use some other regularizer. I would suggest periodically to retrain from scratch but this doesn't have to happen very regularly, maybe once every two months. |
_softwareengineering.263187 | In the introduction to K&R there is the following text:Similarly, C offers only straightforward, single-thread control flow: tests, loops, grouping, and subprograms, but not multiprogramming, parallel operations, synchronization, or coroutines.What does grouping refer to? It is not in the index, and searches have been unfruitful so far. I've programmed some C (though I'm far from an expert), and haven't heard the term before. | What is grouping? | c | C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. (Wikipedia) Blocks don't look like control flow, but they are; without the curly braces only the next line would be controlled by a loop keyword, but with the braces the entire block is looped over. |
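A minimal illustration of the answer's point: without braces only the single following statement is controlled by the loop; with braces the whole group is. The example is written in Java purely for consistency with the other snippets here — the brace-grouping behaviour it shows is the same in C.

```java
public class GroupingDemo {
    public static void main(String[] args) {
        int sum = 0;

        // Without braces: only the one statement after the for-header is repeated.
        for (int i = 1; i <= 3; i++)
            sum += i;
        System.out.println(sum);        // runs once, after the loop: prints 6

        // With braces: the whole group of statements is repeated together.
        for (int i = 1; i <= 3; i++) {
            sum += i;
            System.out.println(sum);    // runs on every iteration: prints 7, 9, 12
        }
    }
}
```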
_codereview.135009 | Before I get into describing by problem I'd like to point out I found this question under c++ tag. But the solution of that question is already implemented in my code.I am solving a problem from hackerrank. My code for that problem is logically correct but in some of the cases time limit exceeds.Problem StatementGiven a string S , of lowercase letters, determine the index of the character whose removal will make a S palindrome. If is already a palindrome or no such character exists, then print -1 . There will always be a valid solution, and any correct answer is acceptable. For example, if S = bcbc, we can either remove 'b' at index 0 or 'c' at index 3.Input Format:The first line contains an integer T , denoting the number of test cases. Each line i of the T subsequent lines describes a test case in the form of a single string,Si.Constraints:Length of the string can be 100005.Output Format:Print an integer denoting the zero-indexed position of the character that makes S not a palindrome; if S is already a palindrome or no such character exists, print -1 . As a solution my code is as following:import java.io.*;import java.util.*;public class Solution {public static boolean isPalindrom(String s){ int n = s.length(); for (int i=0;i<(n / 2);++i) { if (s.charAt(i) != s.charAt(n - i - 1)) { return false; } } return true;}public static void main(String[] args) throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); int t = Integer.parseInt(br.readLine()); while(t-->0){ int flag = 0; String sb = br.readLine().toString(); if(isPalindrom(sb)){ System.out.println(-1); flag = 1; } for(int i=0;i<sb.length()&&flag==0;i++){ StringBuffer s = new StringBuffer(sb); s.deleteCharAt(i); if(isPalindrom(s.toString())){ System.out.println(i); break; } } }}}I've used best palindrome checker algorithm as described here. and also used BufferedReader as described here. But time limit exceeds in some taste cases. How can I improve my code further? Thanks in advance! 
| Finding the index of the character whose removal will make a palindrome | java;strings;programming challenge;time limit exceeded;palindrome | Import on demandInstead of import java.io.*; it is advised to explicitly list all the classes you use:import java.io.SomeClass1;import java.net.SomeClass2;// And so on...IO facilitiesInstead ofBufferedReader br = new BufferedReader(new InputStreamReader(System.in));you could have used Scanner scanner = new Scanner(System.in);It's so much easier to use Scanner (try it!)Performancefor(int i=0;i<sb.length()&&flag==0;i++){ StringBuffer s = new StringBuffer(sb); s.deleteCharAt(i); if(isPalindrom(s.toString())){ System.out.println(i); break; } }The above is a show-stopper: you keep populating an entire string builder for each index of the char to ignore; see below for a faster implementation.API It would be so much better, if you dedicated a (static) method for running the entire algorithm; again, see below.Summa summarumAll in all, I had this in mind:import java.util.Scanner;import java.util.stream.IntStream;public class Solution { public static int isOnePastPalindrome(final String s) { final int stringLength = s.length(); for (int indexOfIgnoredCharacter = -1; indexOfIgnoredCharacter < stringLength; indexOfIgnoredCharacter++) { if (isOnePastPalindrome(s, indexOfIgnoredCharacter)) { return indexOfIgnoredCharacter; } } return -1; } private static boolean isOnePastPalindrome(final String s, final int ignoreIndex) { int leftIndex = 0; int rightIndex = s.length() - 1; while (leftIndex < rightIndex) { if (leftIndex == ignoreIndex) { // Just omit the character at index leftIndex (which is the same // as ignoreIndex). leftIndex++; } else if (rightIndex == ignoreIndex) { // Same fro the right index. rightIndex--; } else { if (s.charAt(leftIndex) != s.charAt(rightIndex)) { // Mismatch. Removing the character at index 'ignoreIndex' // will not make this string a palindrome. return false; } leftIndex++; rightIndex--; } } return true; } public static void main(String[] args) { Scanner scanner = new Scanner(System.in); int testCases = Integer.parseInt(scanner.nextLine().trim()); IntStream.range(0, testCases).forEach((i) -> { System.out.println( isOnePastPalindrome( scanner.nextLine().trim().toLowerCase())); }); }}Hope that helps.P.S.Note that we do not need a dedicated method for (pre)checking that an input string is a palindrome; just call isOnePastPalindrome with ignoreIndex set to a negative value. |
_softwareengineering.306392 | Short version of the question: What is a proper way to implement object cloning with deep copy, using generally accepted OOP principles?I ran into this while looking into the Prototype Design Pattern in the GoF Design Patterns book, but I think it applies to general object cloning.Wouldn't it be, each class has to properly implement its own instance method of deep_copy, because each class has its own way to going through all elements, such as left and right for a binary tree, and sometimes, an object A having 2 other references to 2 other objects: B and C, may mean A own B and C, and therefore B and C should also be cloned, while in some cases, such as a node object in a graph, A having a reference to B and C just means it is pointing to B and C and DO NOT own B, C (other nodes in the graph may also point to B and C).There is a way to clone, which is serialize it and unserialize it (which should be same as data marshalling?) but it doesn't handle the case when the object doesn't own another object, or in the case of a node in a graph, can you serialize and unserialize, and get back a cloned node that points to the proper nodes in the graph as the original node object does?Another complication may arise, if object A has an instance variable foo, and it has a data structure that reference object B twice, so we really should not clone B twice. Or, if foo reference it once in its data structure, but another instance variable bar also reference B, then also we should not clone B twice but once. And if A doesn't own B, then we should not clone B at all.But let's say we ignore the complication above:Then roughly speak, all classes in your application should implement its own method of deep_copy, and it roughly is this:# Pseudo code:class SomeClass def deep_copy new_object = self.clone() # to have all the instance variables and # methods cloned, but just a shallow copy, and # also, all the inheritance, access to # class variables, methods, and inheritance # hierarchy should be properly set up for all objects that is referenced by my instance variables if I own the object (by the design of my class), then # rely on polymorphism to make a proper deep_copy of this object new_object.this_instance_variable = self.this_instance_variable.deep_copy() end end return new_object endendand depending on whether the primitive types are object or not, it may just say: if I own the object, but it is primitive, then don't clone it. Or in case the primitive types (like Fixnum, 1, 2, 3) are objects too, as in Ruby, then just let it clone it (because you don't want to do type checking to see whether it is clone-able), but in the self.clone line, it will raise an exception to say that this type is not for cloning, and in that case, just catch the exception and return the same object without cloning it (which is the base case of the recursion).But the key point is, using generally accepted OOP principles, every class in your app has to have the deep_copy implemented, and its contract (the interface contract) is that it will indeed return a clone of myself together with deep copies of objects that I own (recursively). And it may be difficult because a lot of times, we define a class, and we don't really implement a deep_copy. If our app has 12 classes, and we need a clone with deep copy, then we actually have to implement such clone with deep copy for all 12 classes (or for all classes that may need to participate in the deep copy). 
Is the above correct, or are there some corrections according to OOP principles? | What is a proper way to implement object cloning with deep copy, using generally accepted OOP principles? | object oriented | null |
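Since the question is language-agnostic (the pseudocode above is Ruby-flavoured), here is one common way the owns-vs-references distinction is expressed in Java, using copy constructors. All class names here are invented for illustration and are not from the question.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: an Order *owns* its line items (deep-copied),
// but only *references* its customer (shared, not copied).
class LineItem {
    final String sku;
    final int quantity;
    LineItem(String sku, int quantity) { this.sku = sku; this.quantity = quantity; }
    LineItem(LineItem other) { this(other.sku, other.quantity); }   // copy constructor
}

class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

class Order {
    private final Customer customer;          // referenced, not owned
    private final List<LineItem> items;       // owned

    Order(Customer customer) {
        this.customer = customer;
        this.items = new ArrayList<>();
    }

    // Deep copy: owned objects are copied recursively, referenced objects are shared.
    Order(Order other) {
        this.customer = other.customer;                   // same Customer instance
        this.items = new ArrayList<>();
        for (LineItem item : other.items) {
            this.items.add(new LineItem(item));           // fresh copy of each owned item
        }
    }

    void add(LineItem item) { items.add(item); }
    int size() { return items.size(); }
}

class DeepCopyDemo {
    public static void main(String[] args) {
        Customer alice = new Customer("Alice");
        Order original = new Order(alice);
        original.add(new LineItem("SKU-1", 2));

        Order copy = new Order(original);     // deep copy of items, shared customer
        original.add(new LineItem("SKU-2", 1));
        System.out.println(original.size() + " vs " + copy.size());   // prints: 2 vs 1
    }
}
```

Handling the shared-reference case the asker raises (the same owned object reachable twice) usually needs an extra identity map threaded through the copy, analogous to how serialization frameworks track already-visited objects.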
_webapps.97681 | Our marketing team is leveraging mailchimp to send out marketing materials to current users on behalf of [email protected]. We welcome those users to respond if they want to move forward with an offer, and then we provide details and move conversations towards conversions etc. Our issue is that we are reaching our sending limits within the first hour of each day - we're essentially bottlenecked by the sending limits as we respond to folks who are replying to our promotions. I'm responsible for finding a solution to this that enables us to send out effectively unlimited responses using our Google apps account (or moving our email address to a different provider) - without building our own server.What's the typical 'growth step' that folks have followed in this situation?UpdateI've contacted google support through the Admin console of my Apps account and they've explained that until $30 is paid (we had 5 users sign up last month = $25), the trial limits will be applied. You can reset the throttling yourself by clicking on the account that's being throttled, clicking the yellow notification icon in the upper right corner, and then clicking reset. Paying an additional $5 into the account increases the throttling limit to 2000 per docs in 24-48 hrs. | Working with Gmail's (Google apps) sending limits | gmail;google apps email | I've contacted google support through the Admin console of my Apps account and they've explained that until $30 is paid (we had 5 users sign up last month = $25), the trial limits will be applied. You can reset the throttling yourself by clicking on the account that's being throttled, clicking the yellow notification icon in the upper right corner, and then clicking reset. Paying an additional $5 into the account increases the throttling limit to 2000 per docs in 24-48 hrs. |
_unix.124453 | I am running Linux Mint 16 Petra, and I have a Bluetooth mouse that I would like to use with it. Unfortunately, the mouse disconnects around every 30 seconds and has to be rebooted by physically turning the power button on the mouse off and on again to reconnect. The Bluetooth adapter is identified as a ASUSTek Computer, Inc. BT-253 Bluetooth Adapter.How can I make the mouse stay connected? | Bluetooth mouse disconnects every ~30 seconds | linux mint;mouse;bluetooth | null |
_unix.153145 | I have a host-based printer (HP Deskjet D2460) and some printer-language files (PCL code) that I would like to print out. I saw that in CUPS there are filters like rastertopcl, etc., which convert a raster format (for CUPS that is PostScript) into PCL... How do I do the inverse? If that is not possible, how can I convert PCL to PS or PDF? A good little executable or Python/Perl script would do (I am interested in free solutions). | How to print pcl or esc/p code on host-driven printer? | printing;pdf;cups | Recent versions of Ghostscript include a PCL interpreter, and can work with the full range of Ghostscript output formats including PDF, PS, image formats like TIFF and JPEG, and all their known printers. You may need to compile from source. I have no idea if this will work directly with your printer or if additional intermediate steps would be needed. As a worst case, since that printer supports printing photos from SD cards, you can place your images there after converting to a supported format, unplug the USB cable, and print from the printer screen. |
_unix.335196 | The debian backports 4.8 standard kernel loads the module fjes on my thinkpad T460s.How can I find out why this module is loaded, i.e. which hardware triggers loading this module? | Find out why linux kernel module was loaded | linux kernel;kernel modules | null |
_codereview.15236 | I have been working on a class to use Reflection to interrogate other PHP classes and interfaces, what I want to know from anyone with more experience of this is, is there anything else I can add, or is there a better way of doing the things I am trying to do.In essence I want to be able to use the interrogator to tell me everything it can about a class I give it, interfaces, parents, methods etc etc. When working on large multi-file projects this would give you an idea of the structure of the code and where to find things you need or want to change, eg you change a method in a parent only to find it was overridden by a child class, this should tell you if methods etc are overridden.thoughts / comments etc welcome.<?php/* Test / Dummy classes and interfaces for testing */interface i1{ const interface_i1_version = '1.0.0'; public function fred();}interface i2 extends i1{ const interface_i2_version = '2.0.0';}interface i3{ const interface_i3_version = '3.0.0';}class c1{ const class_c1_version = '1.0.0'; private $test1; var $test2; var $test3 = 'fred'; public function __construct() { } public function foo() { }}class c2 extends c1 implements i3{ const class_c2_version = '2.0.0'; public function bar() { }}class c3 extends c2 implements i1,i2{ const class_c3_version = '3.0.0'; public static $Test = sub; public function foo($bob = 1) { } public function fred() { }}class c4 extends c3{ const class_c4_version = '3.0.0'; public static $Test = bob; public function foo($bob = 1) { }}/* The real class to do the work */class interogate{ private $constants = Array(); private function get_type($ro) { return implode(' ', Reflection::getModifierNames($ro->getModifiers())); } private function in_array_r($needle, $haystack, $strict = true) { foreach ($haystack as $item) { if (($strict ? 
$item === $needle : $item == $needle) || (is_array($item) && $this->in_array_r($needle, $item, $strict))) { return true; } } return false; } private function check_parent($name, $parent) { foreach ($parent as $p) { if ($name == $p['name']) { return $p; } } return false; } private function update_global_constants($name, $location) { foreach ($this->constants as $constant) { if ($constant['name'] === $name) { return; } } $constant = Array('name' => $name, 'location' => $location); $this->constants[] = $constant; } private function get_global_constants($name) { foreach ($this->constants as $constant) { if ($constant['name'] === $name) { return ($constant['location']); } } return (false); } private function constants_from_interfaces($interface_name) { $results = Array(); try { $ro = new ReflectionClass($interface_name); } catch (ReflectionException $re) { return($results); } foreach ($ro->getConstants() as $name => $value) { $this->update_global_constants($name, $interface_name); $location = $this->get_global_constants($name); $results[] = Array('name' => $name, 'value' => $value, 'location' => $location); } return ($results); } private function interogate_interfaces($ro) { $results = Array(); foreach ($ro->getInterfaceNames() as $in) { if (!($this->in_array_r($in, $results))) { $constants = $this->constants_from_interfaces($in); $results[] = Array('name' => $in, 'constants' => $constants); } } asort($results); return ($results); } private function interogate_statics($ro) { $results = Array(); foreach ($ro->getStaticProperties() as $name => $value) { $results[] = Array('name' => $name, 'value' => $value); } return ($results); } private function interogate_constants($ro, $class_name) { $results = Array(); foreach ($ro->getConstants() as $name => $value) { $this->update_global_constants($name, $class_name); $location = $this->get_global_constants($name); $results[] = Array('name' => $name, 'value' => $value, 'location' => $location); } return ($results); } private function interogate_properties($ro) { $results = Array(); foreach ($ro->getProperties() as $p) { $name = $p->name; $results[] = Array('name' => $p->name, 'value' => 'todo'); } return ($results); } public function interogate_methods($ro, $name, $parent, $interface) { $results = Array(); $results['local'] = Array(); $results['inherited'] = Array(); foreach ($ro->getMethods() as $m) { $local = $this->check_parent($m->name, (isset($parent['methods']['local']))?$parent['methods']['local']:Array()); $inherited = $this->check_parent($m->name, (isset($parent['methods']['inherited']))?$parent['methods']['inherited']:Array()); $overridden = 0; $overridden_from = 0; if ((($local !== false) || ($inherited !== false)) && ($m->class == $name)) { $overridden = 1; if ($local !== false) { $overridden_from = $local['class']; } elseif ($inherited !== false) { $overridden_from = $inherited['class']; } } if ($interface) { $location = 'local'; } elseif ($m->class == $name) { $location = 'local'; } else { $location = 'inherited'; } $type = $this->get_type($m); $inheritable = 0; if (($m->isProtected()) || ($m->isPublic())) { $inheritable = 1; } if (($inheritable == 0) && ($location == 'inherited')) { // Skip ?? 
} else { $parameters = Array(); foreach ($m->getParameters() as $p) { if ($p->isOptional()) { $optional = 'Yes'; try { $default = $p->getDefaultValue(); } catch (ReflectionException $re) { $default = 'Built In'; } } else { $optional = 'No'; $default = 'none'; } $position = $p->getPosition(); $parameters[] = Array('name' => $p->name, 'optional' => $optional, 'default' => $default, 'position' => $position); } $results[$location][] = Array('name' => $m->name, 'class' => $m->class, 'overridden' => $overridden, 'overridden_from' => $overridden_from, 'inheritable' => $inheritable, 'modifier' => $type, 'parameters' => $parameters); } } return ($results); } public function interogate_object($name) { $results = Array(); try { $rc1 = new ReflectionClass($name); } catch (ReflectionException $re) { return($results); } if ($rc1->isInterface()) { $type = 'interface'; $interface = 1; } else { $type = 'class'; $interface = 0; } $results['name'] = $name; $results['type'] = $type; $filename = $rc1->getFileName(); $start_line = $rc1->getStartLine(); $end_line = $rc1->getEndLine(); $results['filename'] = ($filename == FALSE)?'Unknown':$filename; $results['details'] = (($start_line == FALSE) || ($end_line == FALSE))?'Unknown':'Between lines: ' . $start_line . ' and ' . $end_line; $parent = (array) $rc1->getParentClass(); if (array_key_exists('name', $parent)) { $parent = $parent['name']; $results['parent'] = $this->interogate_object($parent); } $results['interfaces'] = $this->interogate_interfaces($rc1); $results['methods'] = $this->interogate_methods($rc1, $name, (isset($results['parent']))?$results['parent']:Array(), $interface); $results['statics'] = $this->interogate_statics($rc1); $results['constants'] = $this->interogate_constants($rc1, $name); $results['properties'] = $this->interogate_properties($rc1); return ($results); }}$i = new interogate();//$results = $i->interogate_object('ReflectionClass');$results = $i->interogate_object('c4');print_r($results);?>I am not sure if anyone else has done this already, or if I am making some bad assumptions, but this is my first play with Reflection so this is as much a learning exercise as anything else.The completed class (once complete of course), will be available for free (release GPL v3) for anyone to make use of. | PHP class and interface interrogation using reflection | php;reflection | null |
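The question is about PHP's Reflection API, but the interrogation idea itself is compact enough to sketch. The example below uses Java's java.lang.reflect (chosen only for consistency with the other examples here, not as a suggestion that the asker switch languages) to report, for an arbitrary class, which public methods are declared locally, which are inherited, and which override an ancestor's method.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class Interrogator {
    public static void describe(Class<?> type) {
        System.out.println("class " + type.getName() + " extends "
                + (type.getSuperclass() == null ? "-" : type.getSuperclass().getName()));
        for (Method m : type.getMethods()) {               // all public methods, incl. inherited
            boolean local = m.getDeclaringClass() == type;
            boolean overrides = local && isDeclaredInAncestor(type, m);
            System.out.printf("  %s %s -> %s%s%n",
                    Modifier.toString(m.getModifiers()),
                    m.getName(),
                    local ? "declared here" : "inherited from " + m.getDeclaringClass().getSimpleName(),
                    overrides ? " (overrides ancestor)" : "");
        }
    }

    private static boolean isDeclaredInAncestor(Class<?> type, Method m) {
        for (Class<?> c = type.getSuperclass(); c != null; c = c.getSuperclass()) {
            try {
                c.getDeclaredMethod(m.getName(), m.getParameterTypes());
                return true;
            } catch (NoSuchMethodException ignored) {
                // not declared here, keep walking up the hierarchy
            }
        }
        return false;
    }

    public static void main(String[] args) {
        describe(java.util.ArrayList.class);   // any class works; ArrayList is just an example
    }
}
```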
_webapps.16792 | A professor uploaded his course lessons to Vimeo. I'd like to give him a Like for the much appreciated effort, but I'd rather not let on that I'm watching his lessons while reviewing two days before the exam :D Can I do that? | Can likes on Vimeo be traced to their owner? | vimeo | If they have a Vimeo Plus account they can see who is liking their videos as part of the Advanced Statistics section: With Advanced Statistics you can stay on top of all your statistics, look at your weekly, monthly and yearly stats, see where people are watching your videos, and find people who like or comment on your videos. For a normal account, they cannot really tell — unless they know your account profile, click through to your Likes page, and search for their video there. |
_cs.79411 | Compression algorithms exploit repetitive character sequences to compress a given piece of text. I was wondering: Given a compressed text, is it possible to find out which character sequences are responsible for the degree of compression, i.e. is it possible to extract these repetitive sequences from the compressed text alone? I suppose this depends on the specific compression algorithm used, but is there a general way how to do this? | Extract most common character sequences from compressed text | data compression | null |
_unix.2201 | I'm trying to make ALSA 1.0.23 to use different resampling algorithm. I did some research on the Internet and found that putting the line defaults.pcm.rate_converter <library> into either /etc/asound.conf or ~/.asoundrc will tell ALSA to use different resampling algorithm.However, it doesn't seem to work. Putting the following line into ~/.asoundrc defaults.pcm.rate_converter speexrate_best doesn't have any effect on either CPU usage or the list of loaded libraries (doing lsof -n | grep speex while playing something yields nothing). Although, the following snippet forces ALSA to use new resampling algorithm:pcm.!default { type rate slave { pcm hw:0,0 rate 48000 } converter speexrate_best}Doing so makes CPU usage to 10-15% and makes two new shared libraries appear in the list of lsof, but software mixing stops working and I can't play multiple audio files.I'm probably missing something obvious, but I'm complete noob. What can be an issue here? | Trying to improve sound quality with ALSA | alsa | Looks like mplayer was doing resampling all the way long. Playing some wav files with aplay shows that the new resampling algorithm is being used as intended. |
_scicomp.4765 | I was wondering, before trying to do that myself, has anyone attempted to do orthonormalization of Bernstein polynomials using Gram-Schmidt?I discussed this with several people and have been told that Bernstein polynomials don't make a good basis for FEM because they are not orthogonal.I didn't use FEM, instead I made a (pseudospectral-like) collocation method formulation, and documented my attempts to solve elliptic problems in 2D domains in an arXiv article. I had exponential convergence with polynomial orders $n<20$. After that approximation become worse as $n$ was increased. One of the reasons may be non-orthogonality of Bernstein polynomial basis functions. The code discussed is here.My idea is to make a new orthogonal basis using Gram-Schmidt and try again. | Orthonormalized Bernstein polynomials using Gram-Schmidt | finite element;basis set;collocation;spectral method | null |
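For concreteness, the orthonormalization the question proposes is straightforward to write down. The sketch below assumes the standard $L^2$ inner product on $[0,1]$; a different weight would change the integrals but not the procedure.

```latex
% Bernstein basis of degree n on [0,1]
B_{k,n}(x) = \binom{n}{k} x^k (1-x)^{n-k}, \qquad k = 0,\dots,n

% L^2 inner product used for the orthonormalization
\langle f, g \rangle = \int_0^1 f(x)\, g(x)\, dx

% Gram-Schmidt step, followed by normalization
\tilde B_k = B_{k,n} - \sum_{j<k} \frac{\langle B_{k,n}, \tilde B_j \rangle}{\langle \tilde B_j, \tilde B_j \rangle}\, \tilde B_j,
\qquad
\hat B_k = \frac{\tilde B_k}{\langle \tilde B_k, \tilde B_k \rangle^{1/2}}
```

In exact arithmetic this yields an orthonormal basis spanning the same polynomial space; numerically, modified Gram–Schmidt (or a QR factorization of the Gram matrix) is usually preferred for larger degrees.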
_cs.77773 | So, I was reading this pdf on complexity theory. On page 18 of the pdf (page 12 of the book) the Immerman-Szelepcsenyi Theorem is mentioned with proof. The following lines are from the book: The idea is to cycle through all possible configurations $\sigma$ of $M$, for each one checking whether it is a final configuration that is reachable from the initial configuration by a computation of $M$. In case no such accepting configuration is found we know $x\notin A$ so we let $\overline M$ accept $x$. My question here is that the machine $M$ may not halt for a string $x\notin A$. Then the logic of cycling through all possible configurations won't make any sense. What am I missing here? On a side note, is this book too high level for me? (I am a graduate in Computer Science.) Getting stuck at page 12 is not a good sign, right? | Halting problem with Proof of The Immerman-Szelepcsenyi Theorem (knowledge of the theorem might not be necessary to clear my doubt) | turing machines;halting problem | As the proof (sketch) itself notes: Note that on a given input $x$ of length $n$ there are $2^{f(n)}$ possible configurations. This means there is only a finite space of configurations for a machine that operates in less than $f(n)$ space! Intuitively, if you have an array [0,1,0,...,] of length $n$, then the number of possible configurations of that array is $2^n$, and so a machine which could only modify an array of bits of size at most $n$ will either halt in that time, or end up in a loop with exactly the same state as previously, and this is easy to decide. If you allow arbitrary amounts of space then, as you note, it is undecidable whether or not a state is reachable. It's hard to say whether a given piece of work is too high level for someone or not. To paraphrase Euclid, there is no royal road to theoretical computer science, so you should expect to have at least some difficulty working through technical material. As Paul Halmos suggests: Don't just read it, fight it! This suggests that difficulty in mathematical reading is rather widespread (I certainly experience it sharply as well). |
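To make the answer's counting argument explicit (a sketch, with the usual conventions: state set $Q$, tape alphabet $\Gamma$, input length $n$, work space $f(n) \ge \log n$):

```latex
\#\text{configurations} \;\le\; |Q| \cdot n \cdot f(n) \cdot |\Gamma|^{f(n)} \;=\; 2^{O(f(n))}
```

Hence any computation that runs for more than this many steps must repeat a configuration and therefore loops forever, so "cycle through all configurations" and "reachable from the initial configuration" are both finite checks for a space-bounded machine, and the halting worry raised in the question does not arise.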
_unix.219230 | I have an old SGI Indigo with an EFS file system (Extent File System), and the password needs to be reset. I can mount the hard drive, but only read-only; I need write permissions.
# mount --rw -t efs /dev/sdb1 Filesystem
mount: warning: Filesystem seems to be mounted read-only.
# mount -o remount,rw -t efs /dev/sdb1 Filesystem
mount: Filesystem/ not mounted or bad option
I have tried the force option, but it did not affect anything. I am using Ubuntu 14.04 MATE. Yes, I have everything backed up. How can I force the drive to mount with write permissions? | warning: Filesystem seems to be mounted read-only | permissions;filesystems;mount;write | null |
_webapps.33775 | I have a Google Drawing amongst my Google docs files with a huge canvas.The shape I drew within the image is a lot smaller than the canvas though. How can I tell Google Drawing to automatically shrink the size of the canvas to fit the size of my shape? | How do I fit Google Drawing canvas to actual shapes? | google;google drawing | There is no way to do this automatically. The best way to do this would be to zoom to a level that allows you to quickly resize the canvas appropriately.Go to View > Select zoom size (in or out or %) > use the drag-able corner in the bottom right of the canvas to resize the canvas. |
_unix.332453 | Currently you can already append new files to a compressed squashfs, and i thought that would be enough for my usecase, that doesn't actually require replacement because the files are 'last useful versions'.However, i'd like to add new files. I thought it was fine, but when i tried it, i couldn't add to the same directories as other files, it created a new path similarly named with the first conflicting dir renamed.I'm wondering if there are plans to lift this restriction for dirs so that support for the appending function can shine like it's meant to; if i'm using it wrong, or if something more radical needs to happen like complete support for file redirection on append.I'm not interested in knowing about the 'uncompress everything and recompress' workaround, that is obvious. | How would I request that a future version of squashfs incorporate shadowing functions? | linux;squashfs | null |
_softwareengineering.233056 | We have a C# ORM module that generates queries. It logs generated queries and other information into the error/trace file. It is used by our web application. Most of our queries are generated dynamically (based on dynamic business rules and user interactions) which we have little control over.Obviously the ORM module is completely decoupled from the web application, and therefore completely unaware of web sessions, page hits etc. The downside is that the log entries it generates cannot be traced back to the original page hit. When we notice a non-performant query in the log, we cannot easily determine which page hit generates that query.How can I keep the ORM module decoupled from the web application, but still allow it to log enough relevant information for me to correlate each log entry to a session/page hit?(I know we can pass the logger object around, and the logger object can preserve that session information. However, some of these function calls are nested 10 levels deep, so passing the logger around is cumbersome). | Logging page/session ID in an inner module | c#;dependency injection;logging | You can use a nested diagnostic context (NDC) to push the information in before you enter the ORM module (or at whatever level is appropriate).This idea is referred to in the log4net documentation as Context Stacks but most loggers have a similar concept. |
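The same idea in code, as a sketch: the question's stack is C#/log4net, but a diagnostic context looks essentially the same in any logging framework that offers one. The example below uses SLF4J's MDC in Java (names like sessionId and runOrmQuery are invented for illustration): the web layer pushes the page/session identifiers once, and everything the decoupled ORM module logs — however deeply nested — can include them via the layout pattern (e.g. %X{sessionId}).

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestLoggingExample {
    private static final Logger log = LoggerFactory.getLogger(RequestLoggingExample.class);

    // Called by the web layer once per page hit; the ORM module never sees these values directly.
    public void handleRequest(String sessionId, String pageUrl) {
        MDC.put("sessionId", sessionId);
        MDC.put("page", pageUrl);
        try {
            runOrmQuery("SELECT 1");          // stands in for the deeply nested ORM call chain
        } finally {
            MDC.clear();                       // always clean up; the thread may be pooled
        }
    }

    // Inside the "ORM module": no session parameter anywhere, yet the log line still carries
    // sessionId/page because the appender's pattern references %X{sessionId} and %X{page}.
    private void runOrmQuery(String sql) {
        log.info("generated query: {}", sql);
    }
}
```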
_webapps.100351 | Facebook expands the links in a post with a link preview. However, if you want to post in multiple languages, the link preview is the one taken from the language at the top. Thus you cannot have a link preview for each of the languages you are translating your post into, which makes the link-preview feature rather pointless if you cannot have one per language. E.g. if I start writing my comment to be posted in English and I add a link to www.cnn.com, the link is expanded at the bottom; but when I translate the post and add the French translation, even if I add the link www.lemonde.fr in my FR-translated comment, the link preview still shows the info about CNN, and after posting, users with French as their Facebook language see the CNN expanded preview link at the bottom of my post in French. I find this an oversight from Facebook, which has many other buggy features for multilingual content. Is there a solution or a workaround to allow me to have separate preview links for each language? I have tried posting separately, one post per language and restricting the language, but after testing, this part is buggy as well (e.g. users with language French see the posts in English and not my manually translated post in French). | Facebook link preview in multilingual posts not working | facebook | null |
_softwareengineering.67960 | Do you think that its a good idea when a junior programmer needs help to always jump in and try to educate them? Or will they ignore all the teaching to fish advice you give them and just focus on the fish you just brought them? Do you let them always figure things out on their own, knowing that mistakes are the best way to learn? Or are you afraid that they'll get so burnt and frustrated that they'll lose the desire to come up to speed?When do you choose when to help someone more junior then you and when to stand back and let them learn through their mistakes? | When do you not give help to less experienced programmers? | learning;team leader | At one of my jobs, I was both learning and teaching(because I of course don't know everything, but I know more than some)Do not at all costs lay your hands on the keyboard. This is frustrating both for you, and the person you are teaching. Even if you give them step by step instructions, when you put your hands on the keyboard it's the equivalent of giving them a piece of code and saying this fixes it. In what I've learned: Don't type the code for themTry to teach on their level(if they understand the syntax, don't explain it to them. This will just bore them; instead teach the classes/functions used)Don't ignore them or say figure it out on your own. What you'll end up with is them coming to you later except for now the 3 lines of code they had problems with, is now 50 lines spread across 8 files trying to work around the problem.Teach them to learn on their own. One of the best ways is tell them to use stackoverflow. I sometimes, even knowing the answer, if they asked me. I'd say well, I'm going to ask this question on stackoverflow. and I'd give them a link to the question. Take a coffee break and look at some different code. When they came back asking so how do I fix that problem just tell them to look up their question on SO(using the URL you gave them). I've found that the masses are usually a better teacher than I am.When they copy and paste code from the internet and ask why it doesn't work, ask them to explain what each line does. If they can't, then tell them to research the functions/classes used. If needed, provide explanations for the class and functionsConduct code reviews to make sure they are solving the problem, not just working around it for it to show up later.Be nice. When someone is just starting out in your codebase with no documentation, don't just tell them to read the source code. Give a summarized high level overview of the function in question. Or, better yet, start writing documentation :)Be humble. Don't BS about the problem. If you don't know it, say you don't and help them look it up. Many times, just knowing the domain enough to know what keywords to search for is enough help for you to give them. |
_unix.182927 | I have been asked to set up a shared directory for a colleague on a server I manage. I created an account for him on that server, set up a Samba password with smbpasswd, created a directory and set it up in the smb.conf file, which I copy below:[global]workgroup = OURWORKGROUPserver string = Samba Server %vnetbios name = server_i_runsecurity = usermap to guest = bad username resolve order = bcast lmhosts host wins dns proxy = no[coworkerguy]path = /samba/coworkerguyvalid users = coworkerguyguest ok = nowritable = yesbrowsable = yesNow I have been asked to limit this space to 2Gb. I have looked online for ideas but I can't find anything recent and setting up disk quotas is apparently one of the most popular solutions. I admit I'm not that confident doing that, and furthermore it often comes up that I have to reboot in single user mode - unless I misunderstood something. That is not possible, as I can only ssh remotely to that server. Are there are techniques I could use? If not, could someone point me to an idiot-proof guide? | Set a size limit to SAMBA shared directory remotely | samba;administration;cifs | My solution is not the best, I know, but it works ;-). EDIT: Please read my other answer as well, this answer is an evil hack!Create a 2Gb file with dd, format the file e.g. ext3, mount it, add it to fstab and use that as a share.$ dd if=/dev/zero of=filename bs=1024 count=2M$ sudo mkfs.ext4 filename$ cat /etc/fstab/path/to/filename /mount/point ext4 defaults,usersNow you point the share to /mount/point (or wherever you chose to mount it), sopath = /samba/coworkerguy becomespath = /mount/pointIn UNIX, everything is a file. |
_webapps.87726 | Google Sheets used to show current users on a spreadsheet. Now it doesn't. I can't find a setting to fix this. How can I see who is active on a spreadsheet? | Google Sheets used to show current users but now it doesn't | google spreadsheets | null |
_cs.6961 | making exercises to prepare a test I'm having problems to understand 2 questions, the questions are: how many are the leafs of a decisional tree associated to any algorithm for the search problem in a ordered set?for this question I have 2 set of answer where, for every set, 1 is right, looking at the solution I found this, but I'm not able to understand why the right answers are all C. 1. a) (n log n) b) (log n) *c) (n!) d) O(n!) 2. a) (n log n) b) (log n) *c) (n) d) (n!)The other question, referred to the first one is: and in a non-ordered one?And here I can't see what does it change with the number of leafs in the ordered case.I'm sorry if this question violates the rules, in the faqs, I read here https://meta.stackexchange.com/questions/10811/how-to-ask-and-answer-homework-questions and it seems to be possible to make these kind of question.Thanks in advance. | Algorithm exercise | algorithms | $\Omega(n!)$ is not a reasonable answer. The decision tree for the search problem on the ordered set is just a binary search tree. It has $\Theta(n)$ leaves. Unless they mean something non-standard by search problem and decision tree. These arguments in terms of number of leaves in the decision tree are typically used in lower bound arguments. Wikipedia has an (incomplete) discussion of the decision tree model.In the unordered case, the decision tree still has $\Theta(n)$ leaves, using linear search. |
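A sketch of the usual information-theoretic argument behind both answers: a comparison-based search over an ordered set of $n$ elements must end in a distinct leaf for each distinguishable outcome ($n$ "found at position $i$" outcomes plus the "not found" gaps), so the decision tree has $\Theta(n)$ leaves, and its depth — the worst-case number of comparisons — satisfies

```latex
2^{\,\text{depth}} \;\ge\; \#\text{leaves} \;=\; \Theta(n)
\qquad\Longrightarrow\qquad
\text{depth} \;\ge\; \log_2 \#\text{leaves} \;=\; \Omega(\log n)
```

For an unordered set the leaf count stays $\Theta(n)$ (linear search), but no correct comparison tree can have depth $o(n)$, since an unexamined element could be the one searched for — so it is the depth (time bound), not the number of leaves, that changes in the second part of the exercise.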
_datascience.12233 | I am trying to scrape some data from a website with very little success. Basically there is a route overlaid on google maps and whenever you mouse over specific sections of the map (about 200 in all) it fetches 7 fields from a database and displays them on screen. Doing a single map manually would take about 30 minutes and be quite imprecise. There are about 10,000 map routes I want to scrape so this is not realistic to do it manually. Is there a relatively straightforward way of automating this process? | Scraping Mouse Over Generated Data | scraping | null |
_codereview.5976 | Examples of where I've started migrating to short-circuit evaluation:PHP$id=0;//initialized in case of no result$r=mysql_query(SQL);if($r && mysql_num_rows($r)>0){ list($id)=mysql_fetch_row($r);}becomes$id=0;$r=mysql_query(SQL);$r && mysql_num_rows($r)>0 && list($id)=mysql_fetch_row($r);JavaScriptvar Req=jQuery.ajax(OBJECT);//Something has happened and now i want to abort if possibleif(Req && Req.abort){ Req.abort();}becomesvar Req=jQuery.ajax(OBJECT);//Something has happened and now i want to abort if possibleReq && Req.abort && Req.abort();I'm familiar with the short if style of if(Req && Req.abort) Req.abort();but it feels clunkier than short-circuit evaluation. | Relying on short-circuit evaluation instead of using the IF control structure | javascript;php | This is an abuse of short-circuit evaluation. It's not immediately obvious what the intent is. Sure, i can eventually see that you're getting an ID, calling Req.abort(), whatever. But i shouldn't have to decipher a line of code first to figure out what it does.Conciseness never trumps readability in source code. (Sure, minifying/packing JS has its benefits. But you don't write packed/minified code; you start out with decent code and then run a tool on it.)Since your intent is to do the one thing if the other stuff is true, the code should say that. |
_unix.319459 | I was wondering if my crontab jobs were written correctly. I am hoping to run them on a VPS and monitoring them isn't really possible. Without further ado here are my cron jobs:# cd into directory at 2:57 AM 57 2 * * 1-5 cd /folder_name# activate the virtual environment58 2 * * 1-5 . env/bin/activate# run the main script59 2 * * 1-5 python main.py# at 5pm break the script (worried the most about this part)0 16 * * 1-5 ^CAlso I changed my system clock to be eastern time, does that mean the cron jobs will run using the eastern time zone? Thanks. | Help With Cron/Python | cron;python | No, cron is not a shell. Write a script:#!/bin/shcd /folder_name. env/bin/activateexec python main.pyMake it executable, then point a crontab entry to it:57 2 * * 1-5 /path/to/scriptThe script should then run every Monday to Friday, at 2:57 in (your machine's idea of) local timezone. If you configured your mail system properly the results (if any) are mailed to you. |
_unix.91809 | How can a Linux system be installed on a portable storage medium so that both BIOS systems (e.g. a ThinkPad) and EFI systems (e.g. a Mac Mini) can boot to it?The reason I ask is because I tried installing Debian onto my portable hard drive with an MBR and GRUB. The BIOS systems I tried booted fine from the drive, but when I tried to boot a Mac Mini (EFI) from it the system did not even detect the drive as a boot medium.Is there an easy way to install a system that both interfaces will detect and boot from? | How can you configure a system to be bootable from most modern systems? | boot loader;bios;uefi;bootable | null |
_softwareengineering.271372 | Does Object Oriented Programming Really Model The Real World? [closed]alsoFirstly, A represents an object in the physical world, which is a strong argument for not splitting the class up. I was, unfortunately, told this when I started programming. It took me years to realize that it's a bunch of horse hockey. It's a terrible reason to group things. I can't articulate what are good reasons to group things (at least to my satisfaction), but that one is one you should discard right now. The end all, be all of good code is that it works right, is relatively easy to understand, and is relatively easy to change (i.e., changes don't have weird side effects). jpmc26 8 hours agoFirstly, A represents an object in the physical world, which is a strong argument for not splitting the class up. I disagree with this. For example, if I had a class that represents a car, I would definitely want to split it up, because I surely want a smaller class to represent the tires. - jpmc26 11 hours agosourceUntil now, I considered the following a good design, because it emphasizes physical hierarchy among the objects. I would expect that to be easier to understand, than some abstractions e.g.class ViewManager, class WheelsFascade.class Car{public: void start() {assert(!m_fuel_tank.empty()); m_engine.start();};private: std::vector<Wheel> m_wheels; Engine m_engine; FuelTank m_fuel_tank;}Am I understanding correctly the above criticism that this is not a good design? If so, what are the actual problems? If not, what is being criticized? | When is it not acceptable to model physical world objects with classes? | object oriented | It is impossible to answer the question of whether the code you present is a good design without a good understanding of what and who the code is for. Why, exactly, do you have such a class in your application? What are its clients? Who specified the behaviours that are implemented by it? How likely are those specifications to change in future? Which aspects of the specification are likely to change together, and which are likely to change at different times?Real software rarely does something identical to a real world object, so breaking it up along similar divisions is not necessarily the best approach. Even if what we are doing is simulating a real system, it may be best to either merge components that are conceptually distinct in the real world into a single abstraction (perhaps because we don't need to model the interactions between then, but only the end result of those interactions: your model has 4 separate wheel objects, but it may make more sense to consider two wheels and an axle as a single unit, for example) or split a single real component into multiple roles (because that component does multiple things and we need to model those in different ways: a car's wheels steer the car and impart forwards motion, which we may decide is easier to handle separately than together).What we can say with certainty is that the dogma that modelling objects should be based strictly on the division of components in real world objects is clearly wrong. We should split our code into objects for pragmatic reasons, rather than to conform to ideals that have no practical basis, which is what this suggestion appears to be. |
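To make the answer's "two wheels and an axle as a single unit" remark concrete, here is a purely hypothetical sketch (in Java, for consistency with the other examples; none of these names come from the question): the decomposition follows what the simulation needs to compute, not the parts list of a physical car.

```java
// Hypothetical sketch: the model only needs the combined effect of an axle's
// two wheels, so they are folded into one abstraction instead of four Wheel objects.
class Axle {
    private final double wheelRadiusMetres;
    private double driveTorque;                 // torque currently applied to this axle

    Axle(double wheelRadiusMetres) { this.wheelRadiusMetres = wheelRadiusMetres; }

    void applyTorque(double newtonMetres) { this.driveTorque = newtonMetres; }

    double tractionForce() {                    // what the rest of the model actually consumes
        return driveTorque / wheelRadiusMetres;
    }
}

class Car {
    private final Axle front = new Axle(0.3);   // radii in metres, illustrative values
    private final Axle rear = new Axle(0.3);

    double totalTractionForce() {
        return front.tractionForce() + rear.tractionForce();
    }

    public static void main(String[] args) {
        Car car = new Car();
        car.front.applyTorque(120.0);           // newton-metres, illustrative numbers
        car.rear.applyTorque(120.0);
        System.out.println(car.totalTractionForce());   // prints: 800.0
    }
}
```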
_unix.295103 | I'm running FreeBSD 10.3 p4 and observed some strange behaviorWhen restarting the machine pf starts due to /etc/rc.conf entry# JAILScloned_interfaces=${cloned_interfaces} lo1gateway_enable=YESipv6_gateway_enable=YES# OPENVPN -> jailscloned_interfaces=${cloned_interfaces} tun0# FIREWALLpf_enable=YESpf_rules=/etc/pf.conffail2ban_enable=YES# ... other services ...# load ezjailezjail_enable=YESbut ignores all rules concerning jails. So I have to reload rules manually to get it started bysudo pfctl -f /etc/pf.confMy pf.conf reads:#external interfaceext_if = bge0myserver_v4 = xxx.xxx.xxx.xxx# internal interfacesset skip on lo0set skip on lo1# nat all jailsjails_net = 127.0.1.1/24nat on $ext_if inet from $jails_net to any -> $ext_if# nat and redirect openvpnvpn_if = tun0vpn_jail = 127.0.1.2vpn_ports = {8080}vpn_proto = {tcp}vpn_network = 10.8.0.0/24vpn_network_v6 = fe80:dead:beef::1/64nat on $ext_if inet from $vpn_network to any -> $ext_ifrdr pass on $ext_if proto $vpn_proto from any to $myserver_v4 port $vpn_ports -> $vpn_jail# nsupdate jailnsupdate_jail=127.0.1.3nsupdate_ports={http, https}rdr pass on $ext_if proto {tcp} from any to $myserver_v4 port $nsupdate_ports -> $nsupdate_jail# ... other yails ...# block all incoming traffic#block in# pass out pass out# block fail2bantable <fail2ban> persistblock quick proto tcp from <fail2ban> to any port ssh# sshpass in on $ext_if proto tcp from any to any port ssh keep stateI had to disable blocking all incoming traffic as ssh via ipv6 stopped working.Any suggestions how to fix the problem? | Freebsd: pf firewall doesn't work on restart | freebsd;firewall;jails;pf | The problem here is that /etc/rc.d/pf runs before /usr/local/etc/rc.d/ezjail, so the kernel hasn't configured the jailed network by the time it tries to load the firewall rules. You might be tempted to alter the pf script to start after ezjail, but that's not a good idea - you want your firewall to start early in the boot process, but jails get started quite late on. service -r shows what order your rc scripts will run.You don't show any of your pf.conf rules, but my guess is that they use static interface configuration. Normally, hostname lookups and interface name to address translations are carried out when the rules are loaded. If a hostname or IP address changes, the rules need to be reloaded to update the kernel. However, you can change this behaviour by surrounding interface names (and any optional modifiers) in parentheses, which will cause the rules to update automatically if the interface's address changes. As a simple (and not very useful) example:ext_if=em0pass in log on $ext_if to ($ext_if) keep stateThe pf.conf manpage is very thorough. In particular, the PARAMETERS section is relevant here. |
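A hedged sketch of the two checks the answer above suggests; the grep pattern is illustrative and assumes the stock FreeBSD rc script names:

    # confirm that pf is ordered before ezjail at boot
    service -r | grep -E '/(pf|ezjail)$'
    # reload the ruleset manually once the jail interfaces exist
    pfctl -f /etc/pf.conf

If the rules must survive a reboot without a manual reload, the answer's parenthesised interface syntax, for example nat on $ext_if inet from $jails_net to any -> ($ext_if), lets pf pick up addresses that only appear after the rules are loaded.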
_opensource.5446 | I have a dependency on a GPL-licensed library in the library I maintain. My library is BSD-licensed and must remain so. I am wondering whether I can automate the download and installation of the GPL dependency in the CMake configuration for my program without impinging on my library's license. I make no changes to the GPL code at all; I just want to use it. | Automated GPL download and installation in BSD project | gpl;bsd;dependencies;package | null
_unix.239597 | I installed Arch Linux two or three days ago. The internet connection was working fine until today, when dhcpcd failed to start. When I run systemctl status dhcpcd@eno1.service it says the unit timed out: failed to start dhcpcd on eno1, failed with result 'exit-code'. Do you have any suggestions? | dhcpcd failed to start | networking;arch linux;dhcp | null
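The question above has no accepted answer; the following is only a hedged diagnostic sketch (the unit name dhcpcd@eno1.service and the interface name eno1 are taken from the error text and should be adjusted if they differ on the actual machine):

    # show the full log for the failing unit from the current boot
    journalctl -b -u dhcpcd@eno1.service
    # confirm the interface exists and note whether its link is up
    ip link show eno1

These commands only gather information; they do not change any configuration.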
_unix.269888 | I have been playing around with Linux Mint on an older MacBook (2009 era). Linux Mint runs great on this laptop compared to OS X. The only major problem I am having is with the trackpad, which is really hard to use. For example, if I hold my left index finger in the bottom-left corner of the trackpad and try to move the pointer with my right index finger, it will not move until I remove my left finger. Two-finger scrolling works, but it is far too fast: the slightest movement jumps to the bottom of the screen. Does anyone have suggestions on how to set up the trackpad properly? I have tried xserver-xorg-input-mtrack, but it wanted to remove a ton of X server packages; I tried it anyway, X would not start back up on reboot, and I had to reinstall. | Trackpad Setup Linux Mint on Mac Book KDE | linux mint;kde;mouse;macintosh | null
_unix.350968 | I just made a bootable USB stick with elementary os using Rufus, and I'm trying to get my Acer Aspire V15 Nitro to dual boot with w10 and yeah, not working or else I wouldn't be posting here.Anyway I got this error : no caching mode page found [...]I've looked all over internet nothing helped me, and I really need this dual boot.I'm in UEFI mode and I've disabled secure boot, I'm able to reach the grub install page where it lets me choose between install, try, check etc, but then it stops :(Images of the errorAs asked in the comments, this edit will explain exactly what happens :I shutdown Windows, go into the BIOS and deactivate the secure boot and change the boot order so it tries to boot into the USB stick so I can install Elementary OS on a free partition.I reboot and press F12 to boot into grub, I try to install right away and I get this message, nothing changes, the LED on the stick doesn't flash. I reboot and try modifying the commands with things I've found on the internet but nothing there either. Most things I've seen on the internet are for people that have already installed a Linux distribution and they're asked to modify files in /etc which I can't do because I haven't booted in Linux yet. So yeah that's how far I've got, just the error with the cache and that's it. | Assuming drive cache | boot;dual boot;grub;uefi;elementary os | null |
_webapps.31161 | Is there any way I can forward all email messages from a Gmail account to a number of other email accounts and set the reply-to (or from) header to the Gmail account's address? That way multiple people could use one address to communicate with everyone, without having to worry about changing the To address when clicking reply. | Gmail forward set reply-to header | email;gmail;headers | null
_cs.43644 | Given an undirected graph $G$ and two pairs of vertices $(s_1, t_1), (s_2, t_2)$, the disjoint paths problem (DPP) asks for two vertex-disjoint paths, one from $s_1$ to $t_1$ and the other from $s_2$ to $t_2$. The problem has been shown to be in $P$ even for the generalization with $k$ disjoint paths, where $k$ is a constant [1, 2].I'm interested in practical algorithms to solve this problem. The algorithm given in [1] runs in $\mathcal{O}(nm)$ but uses many case distinctions which makes it very cumbersome to implement.Are there any known algorithms which are more practical to implement, even at the expense of a worse runtime guarantee? I'm interested in application to 3D grid graphs, so results assuming planarity or low tree-width unfortunately do not help. [1]: Shiloach, Y. (1980). A polynomial solution to the undirected two paths problem. Journal of the ACM (JACM), 27(3), 445-456.[2]: Robertson, N., & Seymour, P. D. (1995). Graph minors. XIII. The disjoint paths problem. Journal of combinatorial theory, Series B, 63(1), 65-110. | Practical algorithms for the disjoint paths problem | algorithms;graphs | null |
_softwareengineering.201962 | Is it acceptable form to take Linux kernel source from any version, change it, claim it is mine, and then distribute it for monetary gain? In such a case open-source software is entitled to a sort of grab, change, sell mechanism in which anyone can just tinker with something to look different from the original source, make it work their way, and sell it as if they made it entirely themselves. Isn't that illegal? | Can I fork Linux source code, change it around to suit my desires, and claim it as my own kernel without doing any of the originating work? | licensing;open source;linux;operating systems | null |
_unix.47471 | I've got approximately the following code:

cat infile | while read line; do
  echo 2> 'log ' $line
  echo $line
done > outfile

outfile is created correctly. However, the STDERR output just vanished: it's neither displayed on the terminal nor in outfile. If I replace the last line above with done > outfile 2> errfile then errfile is created, but empty. Can I capture the error output from within the loop, preferably by streaming it directly into the parent STDERR (the above is part of a larger script whose standard error stream is captured by yet another process)? | Write to error stream in while loop | bash;io redirection | Your syntax is wrong: it should be >&2, not 2>.
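A minimal corrected sketch of the loop from the question, applying the accepted answer's fix (>&2 sends the message to stderr); the log: prefix and the errfile name are illustrative, not part of the original script:

    while read line; do
      echo "log: $line" >&2   # diagnostic output goes to stderr
      echo "$line"            # normal output goes to stdout
    done < infile > outfile 2> errfile

With this form the parent's stderr (or errfile, if redirected as above) receives the diagnostic lines, and outfile receives only the normal output.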
_unix.298008 | I am using akmods from RPMFusion for VirtualBox. The packages listed by rpm -qa are:kmod-VirtualBox-4.6.4-301.fc24.x86_64-5.0.24-1.fc24.x86_64VirtualBox-5.0.24-1.fc24.x86_64VirtualBox-kmodsrc-5.0.24-1.fc24.x86_64akmod-VirtualBox-5.0.24-1.fc24.x86_64Further, the modules are built and located in the correct directory:/usr/lib/modules/4.6.4-301.fc24.x86_64/extra/VirtualBox/vboxdrv.ko/usr/lib/udev/rules.d/90-vboxdrv.rulesI omitted vboxguest, etc. I can manually load the modules with modprobe and it displays with modinfo:filename: /lib/modules/4.6.4-301.fc24.x86_64/extra/VirtualBox/vboxdrv.koversion: 5.0.24_RPMFusion r108355 (0x00240000)license: GPLdescription: Oracle VM VirtualBox Support Driverauthor: Oracle Corporationsrcversion: 0D9059DC39F24CF9E36EA61depends: vermagic: 4.6.4-301.fc24.x86_64 SMP mod_unload parm: force_async_tsc:force the asynchronous TSC mode (intThe problem is that it will not load via systemd: systemd-modules-load.service - Load Kernel Modules Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static; vendor preset: disabled) Active: failed (Result: exit-code) since Sun 2016-07-24 16:09:50 EDT; 5s ago Docs: man:systemd-modules-load.service(8) man:modules-load.d(5) Process: 3961 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE) Main PID: 3961 (code=exited, status=1/FAILURE)journalctl _PID=3961 shows:Failed to insert 'vboxdrv': Operation not permittedAnd so on and so forth.I've tried everything on the Internet but they don't seem to be related to my problem:The old modules are not being loadedThey are not located in my initramfsI will not switch to Oracle's repo because that's missing the pointI will not use DKMS because that's missing the point, and doesn't do anything by itself since RPMFusion doesn't do DKMSI do not have SecureBoot. My motherboard does not even support it.akmods --force shows Checking kmods exist for 4.6.4-301.fc24.x86_64 [ OK ] and that's it. depmods -a runs and looks like it's doing something but doesn't solve my problem.I have completely wiped the packages and reinstalled them, but it doesn't fix the problem. /var/cache/akmods show that the modules are being built against the correct kernel module as demonstrated anyways, so I'm convinced the problem is related to systemd.My NVIDIA kernel module is loading just fine.It may or may not be related but shutdown takes forever. If I hit F12, I see Running stop job for Building.. akmods service and it takes a 1 minute and 30 seconds before my computer shuts off. systemd-analyze critical-chain shows thjat most of the time is spent in systemd-user-sessions.service @35.189s +178ms. systemd-analyze blame shows 13.548s akmods.service.I've checked RedHat bug reports and have not been able to decipher them. Please try to avoid giving common solutions found in forums on the Internet because rest assured, I've tried them. | VirtualBox kernel modules will not load via systemd | fedora;virtualbox;kernel modules | null |
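The question above has no accepted answer; the following is only a hedged sketch of commands that might narrow down why systemd-modules-load is refused while a manual modprobe succeeds (the module path is taken from the question; nothing here changes the system):

    # full journal for the failing unit during the current boot
    journalctl -b -u systemd-modules-load.service
    # on Fedora, check the SELinux label on the akmod-built module,
    # since a wrong label is one possible source of 'Operation not permitted'
    ls -Z /usr/lib/modules/4.6.4-301.fc24.x86_64/extra/VirtualBox/vboxdrv.ko

Treat the SELinux angle as an assumption to verify, not a diagnosis.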
_codereview.120111 | I'm taking in a string of input from the command line, and when prefixed by 0o or 8# interpreting it as an octal string. I'd like to convert it to a byte array more directly, but I'm not sure how to perform the bit carrying in LINQ.All three of these methods are fully working; you can checkout the repository or just download the built executable and run it from the command line if need be.I'd like a review of all three working methods, but more specifically I'd like to have the Octal method, below, not use a BitArray intermediary, similar to the Binary and Hex methods.Here's how I'm doing it for hexadecimal (mostly LINQ):public static byte[] GetHexBytes(this string hex, bool preTrimmed = false){ if (!preTrimmed) { hex = hex.Trim(); if (hex.StartsWith(0x, StringComparison.OrdinalIgnoreCase)) hex = hex.Substring(2); else if (hex.StartsWith(16#)) hex = hex.Substring(3); } if (hex.Length % 2 != 0) hex = hex.PadLeft(hex.Length + 1, '0'); return Enumerable.Range(0, hex.Length) .Where(x => x % 2 == 0) .Select(x => Convert.ToByte(hex.Substring(x, 2), 16)) .ToArray();}And here's binary (mostly LINQ):public static byte[] GetBinaryBytes(this string binary, bool preTrimmed = false){ if (!preTrimmed) { binary = binary.Trim(); if (binary.StartsWith(0b, StringComparison.OrdinalIgnoreCase) || binary.StartsWith(2#)) binary = binary.Substring(2); } if (binary.Length % 8 != 0) binary = binary.PadLeft(binary.Length + 8 - binary.Length % 8, '0'); return Enumerable.Range(0, binary.Length) .Where(x => x % 8 == 0) .Select(x => Convert.ToByte(binary.Substring(x, 8), 2)) .ToArray();}And here's what I've got for Octal (LINQ, then a BitArray, then more LINQ):public static byte[] GetOctalBytes(this string octal, bool preTrimmed = false){ if (!preTrimmed) { octal = octal.Trim(); if (octal.StartsWith(0o, StringComparison.OrdinalIgnoreCase) || octal.StartsWith(8#)) octal = octal.Substring(2); } octal = octal.TrimStart('0'); if (octal.Length == 0) octal = 0; BitArray bits = new BitArray(octal .Reverse() .SelectMany(x => { byte value = (byte)(x - '0'); return new bool[] { (value & 0x01) == 1, (value & 0x02) == 2, (value & 0x04) == 4 }; }) .ToArray()); byte[] bytes = new byte[bits.Length / 8 + 1]; bits.CopyTo(bytes, 0); bytes = bytes.Reverse().SkipWhile(b => b == 0x00).ToArray(); if (bytes.Length == 0) bytes = new byte[] { 0x00 }; return bytes;}I don't like using the BitArray intermediary, but I don't know how to do it without it. If possible, I'd like the whole conversion in a single LINQ statement like the hex and binary.This is part of a C# console application for computing hashes. Here's a link to the relevant source file on Github. | Converting from strings to byte arrays with LINQ in C# | c#;strings;linq | null |
_unix.193171 | I have a list as such below:1,cat 1,dog 2,apple 3,humanI'd like an output like this:1,cat,dog 2,apple 3,human So value 1 from column 1 contains the value of cat and dog from column 2. Is that possible ? | sort only the first column and uniq | text processing;sort | null |
_unix.329087 | I'm running jenkins on Ubuntu 16.04 and one of my makefiles has a sudo command. When it runs I see this error in the output:sudo: no tty present and no askpass program specifiedI had the same issue very recently with a CentOS machine and made the changes to the sudoers file. However ubuntu 16.04 doesn't have the option to comment out requiretty. I've added jenkins as a user that doesn't require a password:jenkins ALL = (root) NOPASSWD:ALLThis works fine, I can sudo as the jenkins user without being prompted for a password. I've also tried adding: Defaults:jenkins !requirettyThis doesn't seem to have had the required effect. Can anyone help with this?TIA | Disable requiretty on ubuntu 16.04 for jenkins? | linux;ubuntu;sudo | null |
_cs.48592 | If $L_{1} \subseteq L_{2}$ and $ L_{2}$ is regular, does it follow that $L_{1}$ is necessarily regular? I don't understand this question, is there any proof to show this or is there an assumption we make? | Regular language subsets | regular languages;regular expressions;sets | No, $L_1$ is not necessarily regular. We could have $L_2 = \Sigma^*$, in which case $L_1$ could be anything at all. |
_scicomp.19849 | Geometrically, scaling and preconditioning seem to address similar challenges in optimization. However, these two concepts are implemented very differently. Take trust region Newton method, as an example. When a problem is poorly scaled, an elliptical trust region is recommended. Is it possible to formulate an equivalent preconditioner based approach such that one works with spherical trust regions?update: Section 7.5 in Practical Optimization by Gill , Murray & Wright gives a clear connection between variable scaling and preconditioning the hessian. | scaling and preconditioning for trust region Newton methods | optimization;numerical analysis;newton method;trust regions | The ideas are certainly related, at the very least if your preconditioner corresponds to a symmetric and positive definite matrix. This is because in that case, preconditioning simply means using a different inner product, i.e., a different metric. This can be interpreted as saying that what is a very elongated ellipsoid in the usual $l_2$ metric may turn out to be much closer to a sphere in the preconditioner metric. This picture is valid because the application of an SPD preconditioner can be interpreted as a rotation, axis-parallel scaling, and inverse rotation.The picture becomes much less clear to interpretation if you use preconditioners that correspond to indefinite or non-symmetric matrices since these can then no longer be interpreted as a simple change in metric. |
_webmaster.10562 | I only have a basic understanding of .htaccess files. What are some tasks that can be accomplished using them? What are good resources that would aid in learning how to use them on a more advanced level? What are downsides/risks/etc to using them? What are benefits to using them? | Uses of .htaccess files | htaccess | .htaccess (hypertext access) files are essentially a per-directory Apache configuration file. Whatever configuration options you put in that file will apply only to the contents of that directory including its sub-directories.What you can do with htaccess files depends on how your specific Apache install is configured. Generally, you can use set, for instance PHP runtime flags, as well as control viewing permissions, password protection, directory indexes, and rewriting urls.Apache's online documentation has a great tutorial to .htaccess files.The risk of using an .htaccess file is misconfiguration. If you cause a syntax error in an .htaccess file, Apache will throw HTTP 500 errors to the client making your directory and everything under it impossible to access via web. There is no official .htaccess syntax validator that I'm aware of but you can try this one.There are tons of great benefits to using .htaccess files. Because they are called on every request they apply to (which could turn into a performance issue for some), configuration takes place immediately and you don't have to reload or restart Apache. They are ideal for, and often used on, shared web-hosting because you won't have access to Apache's main configuration file(s). |
_cs.76934 | How can I prove that $NDFA = \{ \langle M_1,M_2 \rangle | M_1$ and $M_2$ are $DFA$s such that there is at least one string $x$ that is accepted by neither $M_1$ nor $M_2\}$ is decidable using the fact that $ANFA = \{\langle N \rangle | N$ is an $NFA$ with some input alphabet $\Sigma$, and $L(N) = \Sigma^*\}$? | prove that NDFA = {< M1,M2 > | M1 and M2 are DFAs such that there is at least one string x that is accepted by neither M1 nor M2} is decidable? | turing machines;finite automata;undecidability | We try to decide the language $NDFA$ using a language that decides $ANFA$.If $x$ is not in neither $L(M_1)$ or $L(M_2)$, then it means that it's not in $L(M_1) \cup L(M_2)$, in other words there is at least one string $x$ which is not in $L(M_1) \cup L(M_2)$, thus $L(M_1) \cup L(M_2) \neq \Sigma^*$ We know that we can (in finite time) construct a $NFA$, $M$ for language $L(M_1) \cup L(M_2)$. Now since $ANFA$ is decidable, $\overline{ANFA}$ is also decidable, so there is a $TM$ that can decide that $L(M) \neq \Sigma^*$, therefor the original language $NDFA$ is also decidable. |
_unix.191430 | Show the IP address of the computer that the 20 users who logged in to the server, in UNIX?I used last -20 to show the last 20 user logins. Now I need to see the IP address for those users. | Show IP addresses of the last 20 users to login to my server | logs;accounts | null |
_unix.240314 | My question is how to compare two lines in two separate files? Basically I have two files, file1 contains a line: ${X##*a}file2 contains a line:baaabaababWhat I have tried is:diff -u file1 file2 > file3but that does not give aaaa as it should. Also both files are not always the same, but the difference is always at the begin of the line.I have modified my program so that I now have two vars one with aaaabaaabaabab and one with a. Now I can do the following:echo ${var1##*$var2} > tempfile.txttempfile contains baaabaabab. But how do I get aaaa? I was thinking of:echo ${var1//*$var2} > tempfile.txtbut that does not work. | How to compare two lines in two separate files? | bash | null |
_unix.72661 | The Windows dir directory listing command has a line at the end showing the total amount of space taken up by the files listed. For example, dir *.exe shows all the .exe files in the current directory, their sizes, and the sum total of their sizes. I'd love to have similar functionality with my dir alias in bash, but I'm not sure exactly how to go about it.Currently, I have alias dir='ls -FaGl' in my .bash_profile, showing drwxr-x---+ 24 mattdmo 4096 Mar 14 16:35 ./drwxr-x--x. 256 root 12288 Apr 8 21:29 ../-rw------- 1 mattdmo 13795 Apr 4 17:52 .bash_history-rw-r--r-- 1 mattdmo 18 May 10 2012 .bash_logout-rw-r--r-- 1 mattdmo 395 Dec 9 17:33 .bash_profile-rw-r--r-- 1 mattdmo 176 May 10 2012 .bash_profile~-rw-r--r-- 1 mattdmo 411 Dec 9 17:33 .bashrc-rw-r--r-- 1 mattdmo 124 May 10 2012 .bashrc~drwx------ 2 mattdmo 4096 Mar 24 20:03 bin/drwxrwxr-x 2 mattdmo 4096 Mar 11 16:29 download/for example. Taking the answers from this question:dir | awk '{ total += $4 }; END { print total }'which gives me the total, but doesn't print the directory listing itself. Is there a way to alter this into a one-liner or shell script so I can pass any ls arguments I want to dir and get a full listing plus sum total? For example, I'd like to run dir -R *.jpg *.tif to get the listing and total size of those file types in all subdirectories. Ideally, it would be great if I could get the size of each subdirectory, but this isn't essential. | Show sum of file sizes in directory listing | bash;shell script;awk;ls | The following function does most of what you're asking for:dir () { ls -FaGl ${@} | awk '{ total += $4; print }; END { print total }'; }... but it won't give you what you're asking for from dir -R *.jpg *.tif, because that's not how ls -R works. You might want to play around with the find utility for that. |
_cs.12871 | I am a CS undergraduate (but I don't know much about AI though, did not take any courses on it, and definitely nothing about NN until recently) who is about to do a school project in AI, so I pick a topics regarding grammar induction (of context-free language and perhaps some subset of context-sensitive language) using reinforcement learning on a neural network. I started to study previous successful approach first to see if they can be tweaked, and now I am trying to understand the approach using supervised learning with Long Short Term Memory.I am reading Learning to Forget: Continual Prediction with LSTM. I am also reading the paper on peephole too, but it seems even more complicated and I'm just trying something simpler first. I think I get correctly how the memory cell and the network topology work. What I do not get right now is the training algorithm. So I have some questions to ask:How exactly does different input get distinguished? Apparently the network is not reset after each input, and there is no special symbol to delimit different input. Does the network just receive a continuous stream of strings without any clues on where the input end and the next one begin?What is the time lag between the input and the corresponding target output? Certainly some amount of time lag are required, and thus the network can never be trained to get a target output from an input that it have not have enough time to process. If it was not Reber grammar that was used, but something more complicated that could potentially required a lot more information to be stored and retrieved, the amount of time need to access the information might varied depending on the input, something that probably cannot be predicted while we decide on the time lag to do training.Is there a more intuitive explanation of the training algorithm? I find it difficult to figure out what is going on behind all the complicated formulas, and I would need to understand it because I need to tweak it into a reinforced learning algorithm later.Also, the paper did not mention anything regarding noisy training data. I have read somewhere else that the network can handle very well noisy testing data. Do you know if LSTM can handle situation where the training data have some chances of being corrupted/ridden with superfluous information? | Intuitive description for training of LSTM (with forget gate/peephole)? | formal languages;machine learning;artificial intelligence;neural networks | null |
_unix.286180 | After installation of android, I modified lilo.conf to boot it. I just wroteimage=/mnt/android-4.4-r2/kernel root=/dev/sda7 label=android read-onlybecause I saw the only filename that looked like a kernel image was that. But after some chunks of messages output, it said 5 seconds to boot. As I saw an initrd.img, I then added it to lilo.conf:image=/mnt/android-4.4-r2/kernel initrd=/mnt/android-4.4-r2/initrd.img root=/dev/sda7 label=android read-onlyThis time the booting process went on longer, but finally it said: Detecting Android and there it seems to have hanged. Any way to use lilo to boot the android OS? Of course, lilo when run did not output any errors.EDIT: Yes, it is. I got it from android-x86.org. | Booting Android with lilo in a PC | android;lilo | null |
_unix.81517 | I recently noticed that the structure of 16:http://redhat.download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/os/is different from 17:http://redhat.download.fedoraproject.org/pub/fedora/linux/releases/17/Fedora/x86_64/os/Has something significant changed in the way that Fedora is packaged, and if so does this effect auto install tools that read from the os/ directory? For example, virt-install. | Fedora OS structure change | fedora;directory structure | null |
_codereview.44513 | Is there any way to make this more efficient?.386 ; assembler use 80386 instructions.MODEL FLAT ; use modern standard memory model INCLUDE io.h ; header file for input/outputcr EQU 0dh ; carriage returnLf EQU 0ah ; line feedmaxArr EQU 5 ; constant for array sizeExitProcess PROTO NEAR32 stdcall, dwExitCode:DWORDEXTERN SEARCH:near32.STACK 4096 ; reserve 4096-byte stack.DATA ; reserve storage for dataprompt0 BYTE cr, Lf, 'Please enter 5 numbers.', cr, Lf BYTE 'This program will then search the array for a specific ' BYTE 'value: ', 0array DWORD maxArr DUP (?) ; array variable, size: maxArrelemCount DWORD ? ; number of elements enteredvalToSearch DWORD ? ; value to search forprompt1 BYTE cr, Lf, 'Which value would you like to search for?: ', 0dwinput BYTE 16 DUP (?) ; for inputposlabel BYTE cr,Lf,Lf, 'The value was found at position (0 if not found): 'dwoutput BYTE 16 DUP (?), cr, Lf, 0 ; for outputnoPos BYTE cr, Lf, 'The value entered is not present in the array.', 0.CODE ; program code_start: ; program entry point output prompt0 ; output directions and prompt input mov ecx, maxArr ; initialize ECX with the array capacity value lea ebx, array ; place address of array in EBX xor edx, edx ; initialize EDXgetArrayInput: input dwinput, 16 ; get input atod dwinput ; convert to DWORD, place in EAX jo subroutine ; if any overflow, end number entry mov [ebx], eax ; store number in address pointed to by EBX (array ; index position) inc edx ; increment counter if number entered so far add ebx,4 ; get address of next item of array (4 bytes away) loop getArrayInput ; loop back (up to 5 times)subroutine: output prompt1 ; get value to search for input dwinput, 16 ; get input atod dwinput ; convert to DWORD, place in EAX mov valToSearch, eax ; store value to search for mov elemCount, edx ; move no. of elements to elemCount lea eax, array ; get starting address of array again push eax ; Parameter 1: push address of array (4 bytes) push elemCount ; Parameter 2: push elemCount by value (4 bytes) push valToSearch ; Parameter 3: push address of valToSearch (4 bytes) call SEARCH ; search for value, return eax add esp, 12 ; remove arguments from stack dtoa dwoutput, eax ; convert to ASCII cmp eax, 0 ; check if eax(position) = 0 je zeroPosition ; if position=0, go to error message output poslabel ; output the position jmp exitSeq ; exit the programzeroPosition: output noPos ; output error & exitexitSeq:INVOKE ExitProcess, 0 ; exit with return code 0PUBLIC _start END SUBROUTINE:--------------------------------------------------.386 ; assembler use 80386 instructions.MODEL FLAT ; use modern standard memory model PUBLIC SEARCH ; make SEARCH proc visible.CODE ; program codeSEARCH PROC NEAR32 push ebp ; save base pointer mov ebp,esp ; establish stack frame push ebx ; save registers push ecx push edx pushf ; save flags mov eax, [ebp+8] ; move value to search for to eax mov ebx, [ebp+16] ; move array address to EBX mov ecx, [ebx] ;move first element to ECX cmp ecx, eax ;comparing search number to the first value in the array je first ;If equal return the position. mov ecx, [ebx+4] ;move first element to ECX cmp ecx, eax ;comparing search number to the second value in the array je second ;If equal return the position. mov ecx, [ebx+8] cmp ecx, eax ;comparing search number to the third value in the array je third ;If equal return the position. mov ecx, [ebx+12] cmp ecx, eax ;comparing search number to the fourth value in the array je fourth ;If equal return the position. 
mov ecx, [ebx+16] cmp ecx, eax ;comparing search number to the fifth value in the array je fifth ;If equal return the position. jmp nonefirst: ;returns position 1 mov eax, 1 jmp donesecond: ;returns position 2 mov eax, 2 jmp donethird: ;returns position 3 mov eax, 3 jmp donefourth: ;returns position 4 mov eax, 4 jmp donefifth: ;returns position 5 mov eax, 5 jmp donenone: ;returns 0 if the search value is not found. mov eax, 0 jmp donedone:retpop: popf ; restore flags pop edx ; restore registers pop ecx pop ebx pop ebp ; restore base pointer ret ; return to mainSEARCH ENDPPUBLIC SEARCHEND | Search procedure to find inputted DWORD in MASM Array | homework;assembly | null |
_webmaster.78854 | I'm looking at creating some links using a .link gTLD but I'm unsure about whether I can trust new gTLDs for pages that I need to last 2-4 years minimum. As of the start of 2015, how stable are the new gTLDs and should this be a concern?For example, .link is managed by a company called Uniregistry. Does the stability of this startup company relate to the stability of my website hosted on a .link gTLD? Are the new gTLD's considered any less stable? | Are the new gTLDs considered less stable? | domains;domain registration;top level domains | null |
_softwareengineering.304779 | In my project, I have an abstract Cache class that allows me to populate a series of lists that globally persist throughout my application. These cache objects are thread-safe and can be manipulated as necessary, and allow for me to cut-down on the massive overhead of querying external third-party APIs directly. I've seen some serious hate for singletons, so I'm a bit curious what other options I have when this is my current use case.I've seen dependency injection mentioned quite a bit, but I don't know if it's quite adequate or useful in this scenario.Here is an example of my Cache abstract class:public abstract class Cache<TU, T> where TU : Cache<TU, T>, new() where T : class{ private static readonly TU Instance = new TU(); private static volatile State _currentState = State.Empty; private static volatile object _stateLock = new object(); private static volatile object _dataLock = new object(); private static DateTime _refreshedOn = DateTime.MinValue; private static T InMemoryData { get; set; } public static T Data { get { switch (_currentState) { case State.OnLine: var timeSpentInCache = (DateTime.UtcNow - _refreshedOn); if (timeSpentInCache > Instance.GetLifetime()) { lock (_stateLock) { if (_currentState == State.OnLine) _currentState = State.Expired; } } break; case State.Empty: lock (_dataLock) { lock (_stateLock) { if (_currentState == State.Empty) { InMemoryData = Instance.GetData(); _refreshedOn = DateTime.UtcNow; _currentState = State.OnLine; } } } break; case State.Expired: lock (_stateLock) { if (_currentState == State.Expired) { _currentState = State.Refreshing; Task.Factory.StartNew(Refresh); } } break; } lock (_dataLock) { if (InMemoryData != null) return InMemoryData; } return Data; } } public static T PopulateData() { return Data; } protected abstract T GetData(); protected virtual TimeSpan GetLifetime() { return TimeSpan.FromMinutes(10); } private static void Refresh() { if (_currentState != State.Refreshing) return; var dt = Instance.GetData(); lock (_stateLock) { lock (_dataLock) { _refreshedOn = DateTime.UtcNow; _currentState = State.OnLine; InMemoryData = dt; } } } public static void Invalidate() { lock (_stateLock) { _refreshedOn = DateTime.MinValue; _currentState = State.Expired; } } private enum State { Empty, OnLine, Expired, Refreshing }}And an example of its implementation.public class SalesForceCache{ public class Users : Cache<Users, List<Contact>> { protected override List<Contact> GetData() { var sf = new SalesForce(); var users = sf.GetAllUsers(); sf.Dispose(); return users; } protected override TimeSpan GetLifetime() { try { return TimeSpan.FromDays(1); } catch (StackOverflowException) { return TimeSpan.Zero; } } } public class Accounts : Cache<Accounts, List<Account>> { protected override List<Account> GetData() { var sf = new SalesForce(); var accounts = sf.GetAllAccounts(); sf.Dispose(); return accounts; } protected override TimeSpan GetLifetime() { try { return TimeSpan.FromDays(1); } catch (StackOverflowException) { return TimeSpan.Zero; } } }} | Alternatives to Singletons for caching lists of data? | c#;design patterns;object oriented;singleton | null |
_softwareengineering.198182 | I am creating an application that visually displays world regions, e.g. to place markers within an administrative region.Does a dataset exist with geometrical or geographical (long/lat) descriptions of the world's current country borders (and possibly other administrative divisions)? Ideally the dataset would be in a format that I could easily generate border images of the size that I require. | Dataset with coordinates of borders of countries | data;image manipulation;map | http://gadm.org/ seems to have exactly what you want (and more).GADM database of Global Administrative AreasGADM is a spatial database of the location of the world's administrative areas (or adminstrative boundaries) for use in GIS and similar software. Administrative areas in this database are countries and lower level subdivisions such as provinces, departments, bibhag, bundeslander, daerah istimewa, fivondronana, krong, landsvun, optina, sous-prfectures, counties, and thana. GADM describes where these administrative areas are (the spatial features), and for each area it provides some attributes, such as the name and variant names... |
_unix.375110 | Is it possible to use AMDGPU-PRO driver with a newer Kernel under Ubuntu 16.04?I couldn't find much information, sorry if I missed something.So I converted my friend to linux. Since Linux Mint 18.2 was released yesterday and it's based on Ubuntu, I installed it for him with kernel 4.10.He got a XFX Fury. I tried installing the driver from AMD (AMDGPU-PRO for Ununtu 16.04) but after rebooting, I got black screen with an underscore (not blinking). Tried with kernel 4.8, 4.10, 4.12.Rebooting in recovery mode, I found out there is an error with the display in xorg.log. Trying to boot with nomodeset works in software rendering mode.I reinstalled Linux and now everything is running perfectly without this driver, but Overwatch need a videocard installed :P. (Running kern. 4.10)I personally have an old graphic card (GTX 580) which runs beautifully under Kernel 4.12, released yesterday, with both closed-source driver and nouveau.I was wondering why the AMD driver wasn't working. Unfortunatly I don't remember the exact error message - but I'll try to find the same message ont he internet and update this question. In the mean time I though maybe someone had a solution.Thanks to anyone how may have some tips for us!Cheers!Little disappointment addition: I guess Ubuntu people doesn't care since they asked me to post this here instead - for what the linux community is worth :P | Kernel 4.10/4.12 with AMDGPU-PRO? | linux kernel;drivers;amd graphics | null |
_webapps.275 | I'd like to sort all of my Gmail messages that don't have label, so I can process them (I miss some every now and then). I can sort by every label by clicking on it at the left, but how do you sort the unlabeled ones? | How can I filter my Gmail messages that aren't labeled? | gmail;gmail filters;gmail labels | Updated:There is a blog today about updated Gmail search modifiers that allow you to do this with a simple search!has:nouserlabelsNote: Because of Gmails threading you will have labels on some of the messages in the derived list because some messages in a thread will have labeling while some won't. (From Gmail Help - Advanced Search site.)Original answer:The Gmail advanced search help page says:label: Search for messages by label* There isn't a search operator for unlabeled messagesExample: from:amy label:friends Meaning: Messages from Amy that have the label friendsExample: from:david label:my-family Meaning: Messages from David that have the label My Family |
_webapps.19716 | Is it possible to perform calculations using filters within a Gmail account?I've received receipts for various tickets, and I want to calculate the number of people who paid a certain amount of money. | Performing calculations using filters within a Gmail account | gmail;gmail filters | null |
_softwareengineering.83499 | When working on a software project for a client, there are two ways in which this can be billed - fixed fee project, or per hour billing.Does the choice of payment terms effect the design methodology you would choose to use?For example, one of the benefits of Agile programming is that changes can be made very quickly, and there is less emphasis on documentation, particularly with regards to design up front. This is great for per-hour billing, but when working on a fixed-fee project, you want to avoid changes as much as possible, or bill extra for the additional work created.In such a case, documentation and initial design become more important, as they form a great way of agreeing the exact scope of the project, and can be used to show deviations from design and therefore extra charges. Do you tend to use different design methodologies for fixed-fee and hourly rate projects? If so, what particular methodology do you feel works best for fixed-fee projects? | Do payment terms effect project design methodology? | agile;methodology;billing | null |
_unix.176997 | $ whoamiadmin$ sudo -S -u otheruser whoamiotheruser$ sudo -S -u otheruser /bin/bash -l -c 'echo $HOME'/home/adminWhy isn't $HOME being set to /home/otheruser even though bash is invoked as a login shell?Specifically, /home/otheruser/.bashrc isn't being sourced.Also, /home/otheruser/.profile isn't being sourced. - (/home/otheruser/.bash_profile doesn't exist)EDIT:The exact problem is actually https://stackoverflow.com/questions/27738224/mkvirtualenv-with-fabric-as-another-user-fails | sudo as another user with their environment | bash;sudo;environment variables | To invoke a login shell using sudo just use -i. When command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command.Example (login shell):sudo -iExample (with a specified user):sudo -i -u userExample (with a command):sudo -i -u user whoamiExample (print user's $HOME):sudo -i -u user echo \$HOMENote: The backslash character ensures that the dollar sign reaches the target user's shell and is not interpreted in the calling user's shell.I have just checked the last example with strace which tells you exactly what's happening. The output bellow shows that the shell is being called with --login and with the specified command, just as in your explicit call to bash, but in addition sudo can do its own work like setting the $HOME.# strace -f -e process sudo -S -i -u user echo \$HOMEexecve(/usr/bin/sudo, [sudo, -S, -i, -u, user, echo, $HOME], [/* 42 vars */]) = 0...[pid 12270] execve(/bin/bash, [-bash, --login, -c, echo \\$HOME], [/* 16 vars */]) = 0...I noticed that you are using -S and I don't think it is generally a good technique. If you want to run commands as a different user without performing authentication from the keyboard, you might want to use SSH instead. It works for localhost as well as for other hosts and provides public key authentication that works without any interactive input.ssh user@localhost echo \$HOMENote: You don't need any special options with SSH as the SSH server always creates a login shell to be accessed by the SSH client. |
_unix.319768 | I am trying to find and delete files in current directory and subdirectories (recursively) which match different patterns and print the matching files to stdout to know which ones are deleted.For example I want to match all files starting with '&' and all files starting and ending with '$'.I've tried using:find . -type f -name '&*' -or -type f -name '$*$' -exec rm -v {} \;but rm apply only on the last match ('$*$').Thus i've tried :find . -type f -name '&*' -or -type f -name '$*$' -deleteBut this not only match only the last pattern but it doesn't output the deleted files.I know I can do this:rm -v ``find -type f -name '&*' -or -type f -name '$*$'`` but I would really like to avoid this kind of approach and do it with find command.Any tips?Thanks in advance. | Delete multiple patterns of files using one command (find) | find;regular expression;rm | You problem is that exec only applies to the second pattern, try putting parenthesis around your search conditions, to fix that:find . \( -type f -name '&*' -or -type f -name '$*$' \) -exec rm -v {} \;The thing to note is that this is not a bug, but a feature, so that you can do something like that:find -type f -name '&*' -exec mv '{}' ./backup ';' -or -type f -name '$*$' -exec rm -v '{}' ';'if you need to |
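Building on the accepted answer's point about grouping the -name tests, a hedged sketch that also covers the question's second attempt with -delete; adding -print before -delete restores the per-file output the asker wanted:

    find . -type f \( -name '&*' -o -name '$*$' \) -print -delete

The single quotes keep the shell from expanding the patterns, exactly as in the original commands.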
_unix.237099 | Currently go program 1.3, 1.4 and 1.5 has a really different compilation performance, the later has about 4x slower than the former. How to trace the go compiler execution? something like this:note: valgrind + calgrind doesn't work (tutorial) valgrind --tool=callgrind /usr/bin/go build==26982== Callgrind, a call-graph generating cache profiler==26982== Copyright (C) 2002-2015, and GNU GPL'd, by Josef Weidendorfer et al.==26982== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info==26982== Command: /usr/bin/go build==26982== ==26982== For interactive control, run 'callgrind_control -h'.fatal error: rt_sigaction failureruntime stack:runtime.throw(0x9a2bf0, 0x14) /usr/lib/go/src/runtime/panic.go:527 +0x90runtime.setsig(0xc800000040, 0x4e40d0, 0x1) /usr/lib/go/src/runtime/os1_linux.go:297 +0x197runtime.initsig() /usr/lib/go/src/runtime/signal1_unix.go:67 +0x13druntime.mstart1() /usr/lib/go/src/runtime/proc1.go:717 +0xc9runtime.mstart() /usr/lib/go/src/runtime/proc1.go:691 +0x72goroutine 1 [runnable]:runtime.main() /usr/lib/go/src/runtime/proc.go:28runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1696 +0x1goroutine 17 [syscall, locked to thread]:runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1696 +0x1==26982== ==26982== Events : Ir==26982== Collected : 851285==26982== ==26982== I refs: 851,285 | How to trace program execution? | performance | null |
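The question above has no accepted answer; as a hedged starting point (and without the failing valgrind run), two generic ways to see where go build spends its time, using only tools that exist on most Linux systems:

    # show every sub-command the go tool runs while building
    go build -x ./... 2>&1 | less
    # sample the whole build with perf instead of valgrind
    perf record -g -- go build ./... && perf report

Both are only observation aids; neither depends on the specific Go version being profiled.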
_cogsci.4593 | This question is related to this question. It could be a dupe, but I am more interested in whether this is a form of dyslexic thinking (for want of a better expression)?I posted this question on SE English:Antonym for ameliorateWhere I could not recall the word deleterious and was remembering it as: detiliorate (a non-existent word). I was miss remembering the order of the syllables. plus some other confusion.I was confusing the de-le-terious with de-ti-liorate, that is, I had inverted the L and T sounds in the syllables. Why do people tend to get sounds mixed up in this way?What is the process behind this? | Why do people get syllables mixed up when trying to recall words? | memory;language | null |
_codereview.105576 | I've decided to make a wrapper class for the java.util.Properties class since, in its current state, it only allows for storing and reading String values.Is there any way I can improve the following code?public final class Prefs { private static final Properties properties = new Properties(); private static OutputStream out; public static final File test = new File(settings.properties); public static void main(String[] args) throws IOException { try { set(hello, 5); int hello = get(hello, 1337); System.out.println(hello == 5); set(hello, 34); hello = get(hello, 1337); System.out.println(hello == 34); remove(hello); hello = get(hello, 1337); System.out.println(hello == 1337); } finally { save(); } } static { try { if (!test.exists()) { test.createNewFile(); } properties.load(new FileInputStream(test)); out = new FileOutputStream(test); } catch (IOException e) { e.printStackTrace(); } } public static <T> T get(String key, T defaultValue) { if (!properties.containsKey(key)) { return defaultValue; } String value = properties.getProperty(key); if (defaultValue instanceof Long) { return (T) new Long(value); } else if (defaultValue instanceof Integer) { return (T) new Integer(value); } else if (defaultValue instanceof Short) { return (T) new Short(value); } else if (defaultValue instanceof Byte) { return (T) new Byte(value); } else if (defaultValue instanceof Double) { return (T) new Double(value); } else if (defaultValue instanceof Float) { return (T) new Float(value); } else if (defaultValue instanceof Boolean) { return (T) new Boolean(value); } return (T) value; } public static void set(String key, String value) { properties.setProperty(key, + value); } public static void remove(String key) { properties.remove(key); } private static final ReentrantLock lock = new ReentrantLock(); public static void save() { lock.lock(); try { properties.store(out, ); out.flush(); } catch (Throwable e) { e.printStackTrace(); } finally { lock.unlock(); } }} | Java Properties Wrapper | java | Is the try-finally code block really required in your main() method?Your get() fails if the default value is not any of those primitive wrapper classes. What if I want a BigDecimal instance? 1.0 can't be casted to that.On a related note, you are relying on auto-boxing, which may lead to unexpected consequences:// code compiles, but throws NullPointerException hereint npe = getValue(invalid-key, (Integer) null);Maybe this is just a quick prototyping, but if you do decide to make this a full-fledged library class, you should not be having the main() method, the static initialization block, or your static fields. The fields should become class fields where Prefs needs to be instantiated. Testing should go into its own class.On another related note, you don't necessarily need to create temporary files for testing. There is a Properties.load(Reader) method that will work with a StringReader, which can be used in turn to wrap your test Strings/lines.I don't think there's a benefit to instantiating a FileOutputStream object immediately after you have load()-ed the file. Things may happen between the loading and storing of properties, so regardless of whether you have this object or not, the storing process may still fail afterwards. Also, it seems like you do not need to explicitly flush() after calling store() as that is already done according to the Javadoc:After the entries have been written, the output stream is flushed. The output stream remains open after this method returns. |
_softwareengineering.250910 | For a distributed system, there is a requirement of observing the progress ofsmaller applications on distributed computers (runtime 5 - 20 minutes).There is a web fronted, which right now only shows a list of those smallerapplications (called jobs), with the state of each ofthem, like preparing, running, finishedSo in web-ui, an administrator can see:namestatestarting timecall parametersfrom any computer in the network, possibly for the whole system.Each of the properties is stored in the database, so each state change leadsto a call to write to the database. There might be thousends of those jobs at a time.Description of the distributed system:Central components, served at one location only:Database server (holding results of the jobs to runs statistics, have an overview of jobs run the last three month etc)Application server (glassfish, java, runs central server software)Distributed components, each site has at least one, connected via internet / WAN area:(probably about 20 sites, each has 1..4 Job controllers, each job controller runs about 20 jobs in parallel)Job-controller component (windows, c#, wcf, starts and observes small jobs)Small applications running tasks, started by Job-Controller (the jobs)So, for a vague estimation:20 sites * 4 Job-controllers * 20 jobs = 1600 jobs in paralleleach of which runs from 0 to 100 percent in about 5 minutes on average,resulting in a progress update each 3 seconds.giving 533 progress updates per second (over the internet)Now the customer wants to see something like a progressbar for each of thesejobs. At first, I thought this might lead to a high network traffic and to a vast amount of traffic on the database server.I do not think that writing progress like 1%, 2%, 3% to the database is a good idea.The runtime of those jobs is not very easy to be estimated good (so it is near enough to a real result), but each job can tell very well what his progress is.What would be a good architectural approach to observe progress of possibly thousends of those mini-jobs? (Please note that a mixed infrastructure is given. There is the constraint that the system will be built upon that. so: Central glassfish + java and per site Windows + WCF + C#)Right now I think that each Job-controller could update the progress of all jobs it controls every 10 seconds at once. Would that be an acceptable approach? | Observing progress of a distributed system | design patterns;architecture;enterprise architecture;distributed computing | This looks like a dashboard. There are a number of dashboard platforms out there that can be configured as to data sources, polling intervals, etc.Let me suggest a rough design, while challenging some of your assumptions.Assumption: You state that there is a central database that holds the results of each job. I don't see a requirement that it holds the status of each job while it is running.Assumption: Detailed progress, at the % level, is not required. It is sufficient to show progress (or lack of progress, with a red flag).Assumption: 1400 jobs, updating at 15 second intervals each (which is about 5% increments), is 5600 updates per minute, or 93 updates per second.Assumption: The user interface component can extrapolate the speed of progress from recent updates and provide any smoothing of the dashboard animations.For your architecture, consider using a distributed messaging framework such as Akka.NET. Each job reports its information and progress to an actor on its host machine. 
Host machines (I presume there is more than one host machine per site) report progress to a site machine, which forwards the job report to the central server. You may decide to update the database with progress, or not, as you see fit.On the central server, the web server collects and summarizes the job information. For each job, you have the job identification, start time and parameters, most recent progress reported and progress rate, along with the time stamp. This summary is forwarded to the web page itself (via AJAX queries), where the UI takes each job and displays the progress bar and periodically updates it based on progress and progress rate.This general approach to the web page design allows you to throttle progress updates as you tune the application, perhaps allowing the web page to focus on one or a few sites at a time for frequent updates. It also off-loads progress bar updates to the client where any kind of animation may be displayed.Similarly, the use of the Akka.NET framework provides robust, fault tolerant distributed, reactive communication and updates. It will allow you to identify sites or hosts becoming unreachable, which should likely be displayed on the dashboard. |
_webapps.9188 | As you know we can add some emails in Yahoo Contacts to receive their emails in our inbox.Usually I receive email from different people from the same domain, for example: [email protected], [email protected], etc. These email addresses aren't constant, they sometimes change.How can I enter the mail addresses into the Yahoo contacts to receive all emails from example.com? | How can I whitelist a domain in Yahoo Mail? | email;yahoo mail | Under Options > Filters, click on Add Filter and set from: contains: @example.com move to folder: inbox. When you click on Save, you're done with whitelisting that domain. |
_codereview.75242 | What is the best way to refactor the following code to avoid duplication? // JQuery FORM Functions$(document).on('change','.submittable',function(){ //save current input field into hidden input 'focus_field' if it exists if($('input#focus_field').length){ var $focus_field = $(this).attr('id') $('input#focus_field').val($focus_field) } $('input[type=submit]').click(); return false;});$(document).on('change','.submittable_wait',function(){ $('#please_wait').show(); $('input[type=submit]').click(); return false;});$(document).on('change','.submittable_wait_bar',function(){ $('#ajax_working').show(); $('input[type=submit]').click(); return false;});$(document).on('change','.submittable_wait_bar_single',function(){ $('#ajax_working').show(); $( this ).closest(form).submit(); return false;});// nested_form - fields added$(document).on('nested:fieldAdded', function(event) { $('#ajax_working').show(); $('#reload_form').val('true'); $('input[type=submit].btn-primary').click()});// nested_form - fields removed$(document).on('nested:fieldRemoved', function(event) { $('#please_wait').show(); $('#reload_form').val('true'); $('input[type=submit].btn-primary').click()}); | jQuery on-change using multiple selectors | javascript;jquery | Some general points:Watch your indentationWatch your whitespaceAn id is unique on a page - there is no benefit to 'element#id' vs '#id' as a selectorIt is usually better to store references to elements that don't get removedLet's apply some of these on the change handler to store the focus field. Here's what we're starting with:$(document).on('change','.submittable',function(){ if($('input#focus_field').length){ var $focus_field = $(this).attr('id') $('input#focus_field').val($focus_field) } $('input[type=submit]').click(); return false;});We don't want to search the DOM for the element every time (unless it can be added/removed) so let's pull it out:var $focusField = $('#focus_field');Notice that I've simplified the selector to just the Id. Now we can use our knowledge of jQuery to simplify the next bit of the function.$(document).on('change', '.submittable', function(e) { e.preventDefault(); $focusField.val(this.id); $('input[type=submit]').click();}The reason this works is that selecting a set of elements that don't exist in the DOM returns an empty jQuery array which is more than happy to be chained - it just won't do anything. I've also modified your return false to use the recommended preventDefault function on the event object.If you want you can pull out common functionality into separate functions. For example:var showAndReload = function (selector) { $(selector).show(); $('#reload_form').val('true'); $('input[type=submit].btn-primary').click()};// nested_form - fields added$document.on('nested:fieldAdded', function() { showAndReload('#ajax_working')});// nested_form - fields removed$document.on('nested:fieldRemoved', function() { showAndReload('#please_wait');});I'm not really sure it adds much though. |
_softwareengineering.348369 | I have multiple ASP.NET Core web applications that need to share an employee database. I want to be able to write the Repository and Models once and use it in multiple projects. What is best practice for this and how can I achieve this with ASP.NET Core? Using VS2017, SQL Server and IIS right now. | What is best practice to share a database in ASP.NET Core with other projects? | c#;entity framework;asp.net core | I would actually advise against doing that. Sharing a database - any kind of database - between multiple applications is pretty much coupling central, and you can find yourself in a very tough spot down the line when any kind of schema changes are required. There's actually an anti-pattern for this - the Integration Database (https://martinfowler.com/bliki/IntegrationDatabase.html). Sharing a data access layer between the applications is a possibility, whether by a package manager or by source control shared folders, but in my experience this actually makes the flexibility and maintainability of the systems a hundred times worse. The coupling between the systems may make the timing of database updates significantly harder. Instead, I would recommend making one of the systems responsible for the database schema. Usually, there's an application whose domain more naturally covers the data. Then, expose access to the data via some public API, whether WCF, HTTP, a message bus, or whatever. This gives you a layer of insulation from change; when the database has reason to change, only the responsible application needs to update at the same time. As far as the other applications are concerned, nothing changes when this happens, as the API remains the same.
_webmaster.72426 | I tried to search Google for one of my domains. I'm hosting multiple different websites on a VPS account, they are obviously sharing the same IP address.To my surprise, the search results on one of my domains returned a few suggestions with ANOTHER domain (of my clients website), but after the .com part it has a prefix to MY personal project domain. Like this: mydomain.com/about/coffee.html. I do a search on mydomain.com and I get some of the results with myclientsdomain.com/about/coffee.html, and it opens MY domains content, but with my clients domain being in front. How has this happened? They are in separate vhosts folders, everything is set up correctly (or so I think) and now this. Maybe there are some issues with DNS records I'm not aware of? | How to fix separate virtual hosts serving intermingled content? | google;domains;dns;search | null |
_unix.259163 | $ cat /proc/mounts | egrep ' /tmp 'tmpfs /tmp tmpfs rw,nosuid,nodev,relatime 0 0$ dd if=/dev/zero bs=1M count=3000 of=/tmp/q3000+0 records in3000+0 records out3145728000 bytes (3.1 GB) copied, 1.04961 s, 3.0 GB/s$ time rm /tmp/qreal 0m0.296suser 0m0.000ssys 0m0.290sWhy not 0.000s? There is no disk involved, just marking that memory is not used anymore. | Why deleting big files from tmpfs is not instantaneous? | linux;filesystems;virtual memory;tmpfs | The marking that memory as unused is a function of how much work the unlinkat(2) system call has to do, which in turn scales linearly with the size of the file. For a default tmpfs on a RHEL 6 system with ~4G of memory, this can be demonstrated as follows.$ sudo mkdir /tmpfs; sudo mount -t tmpfs -o size=75% tmpfs /tmpfs; cd /tmpfs$ dd if=/dev/zero bs=1M of=blah count=2859...$ strace -c rm blah 2>&1 | head -3% time seconds usecs/call calls errors syscall------ ----------- ----------- --------- --------- ----------------100.00 0.241964 241964 1 unlinkat$ for c in 500 1000 1500 2000 2500; do dd if=/dev/zero bs=1M of=blah count=$c 2>/dev/null; echo -n $c ; strace -c rm blah 2>&1 | awk '/unlinkat/{print $3}'; done500 539921000 889861500 1359802000 1749742500 222966As to what the unlinkat(2) system call is doing in particular, that would require digging around in the source code; my guess is that the data structure that represents the file in memory (a linked list?) is being looped over as the file is removed, thus accounting for the linear growth of operation time with file size. |
_cs.57152 | I am interested in any research that reviews the state of affairs when it comes to browsers today, be it their concurrency models, their performance, or anything relevant to such topics. Specifically, I am interested in whether any effors are being taken in academia to take on the shortcomings of browser design currently used in the wild.[Update, some more context]: This question stemmed from a reading of the original Erlang paper, in which the descriptions provided for fault-tolerance and strong process isolation made me think of how browers work (some do provide process/tab isolation, but this is inherently tied to their own implementations, most of which can only do process isolation that relies on OS primitives). So, another way of way of answering parts of my question is pointing me to any implementations that might have gone a different way, be it adopting functional languages or some similar. Now that I clarified my question a bit more I found this link [1], which actually does not mention any other implementation but merely describes why C++ rules the browser field. Therefore I am still interested in any theoretical takes on the topic.Perhaps I am asking in the wrong SE forum, in which case please advise me on where to post my question. [1] https://softwareengineering.stackexchange.com/questions/41883/why-are-most-browsers-developed-in-c | Are there any research papers rethinking browser architecture? | research | null |
_webmaster.96070 | I have 10 logo images of some companies, but some of them are horizontal and some of them are vertical. I need to optimize them for the carousel so they look right... how do you solve this problem? Now it looks like this: Sebastian Loder is too high.... | Optimize multiple images with different resolution | photo | Solution #1: Manually edit the pictures. Try editing the pictures with Photoshop or GIMP (freeware). If you convert them to PNG, make the white background transparent, and then adjust the height/width of each picture, you might get the result you need. Search for YouTube tutorials. Solution #2: Use CSS to display the pictures differently. You could also show the pictures on your website in a way that sets the height/width of the pictures with CSS. By adding a vertical-align you might also get to a different solution without editing the images yourself; it might look different, but it should also fit your purpose.
_unix.185693 | I am using Solaris 10 on VirtualBox. While installing I chose to assign the IP manually. IP address: 192.168.1.46, Subnet: 255.255.255.0, Gateway: 192.168.1.1. Now when I ping my Solaris machine from another machine I get a reply, but I am not able to connect to the Internet. | Manual network setting on Solaris 10 | networking;solaris;network interface | null
_unix.16254 | I am trying to recover the ext4 partition table of a 2TB disk, where I have 900.000 files. I have cloned the original HD and now I am working on the cloned HD. And I am running Parted Magic Live CD.With testdisk I got what it looks like the original deleted partition:Disk /dev/sdd - 2000 GB / 1863 GiB - CHS 243201 255 63Partition Start End Size in sectors>P Linux 0 1 1 243200 254 61 3907024000 [Duo]Anybody can help me to read this numbers? As far as I know, I can use this data with the mount command and, if everything goes right, have access to the files in order to start a file transfer, can't I?According to this man page, I can use data extracted from testdisk to help me fix the partition: Now using the value given by TestDisk, you can use fsck to repair your ext2/ext3 filesystem. I.E. if TestDisk has found a superblock at block number 24577 and a blocksize of 1024 bytes, run: # fsck.ext3 -b 24577 -B 1024 /dev/hda1But I don't know how exactly.By the way, here it goes some more useful data from gpart:# gpart -gv /dev/sdddev(/dev/sdd) mss(512)Primary partition(1) type: 131(0x83)(Linux ext2 filesystem) size: 1907726mb #s(3907024000) s(63-3907024062) chs: (0/1/1)-(1023/254/63)d (0/0/0)-(0/0/0)r hex: 00 01 01 00 83 FE FF FF 3F 00 00 00 80 74 E0 E8Primary partition(2) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r hex: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00Primary partition(3) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r hex: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00Primary partition(4) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r hex: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00Begin scan...Possible partition(Linux ext2), size(1734848mb), offset(2mb) type: 131(0x83)(Linux ext2 filesystem) size: 1734848mb #s(3552968704) s(4096-3552972799) chs: (1023/255/0)-(1023/255/0)d (0/0/0)-(0/0/0)r hex: 00 FF C0 FF 83 FF C0 FF 00 10 00 00 00 00 C6 D3According to this post I can use this information to help me:This time I got something useful. The s(63-117258434) part shows the starting sector, which is 63. A sector is 512 bytes, so the exact starting offset of the partition is 32256. So to mount this partition, just issue: mount -o loop,ro,offset=32256 /storage/image/diskofperson.dd /mnt/recovery And voil, access to the filesystem has been obtained. /storage/image/jdiskofperson.dd on /mnt/recovery type vfat (ro,loop=/dev/loop0,offset=32256)Any help would be great. | Use testdisk and gpart information to mount ext4 partition | partition;data recovery;ext4;fdisk | null |
_webmaster.107039 | I have been trying to decide (for some time now) whether to redesign an old CMS website of ours that was built in 2006. It still uses tables and is not mobile friendly.However we have tested the display and functionality on multiple devices and it works fine. It's not responsive either, but to be honest I like my site to appear the same across all devices.I few weeks ago I hired a freelancer to make the site mobile friendly and responsive. It was a nightmare. Not because the developer was unskilled, but because there were so many files that needed to be changed. And it also changed the desktop display to a small degree and caused on-going issues that needed to be addressed. In the end it appeared to be a task that would take months and the cost would be very high. So we dropped the idea and went back to the original development.The website is fairly complex eCommerce site with many scripts. So it is also a concern that making such a big change will effect the functionality.Plus our users really need to have a laptop or desktop to get the full experience of the site. I can do everything from my Droid, but it's just not as easy as working from a regular computer. BTW I find this same experience accessing big sites (that are mobile friendly) on my phone.While our site and can accessed from pretty much all devices, our target audience is desktop/laptop users.My only real motivation in going with mobile friendly was for SEO benefits now that Google ranks non-mobile friendly sites them lower. Plus I don't like feeling left in the dark ages, even though our code is up to date and meets must recommendations from validator.w3.org. And our website works great and gets a decent amount of traffic.So I have 2 questions:Would the SEO benefit be worth it? Meaning would I really see an increase in ranking due to changing the site to mobile friendly to make this whole ordeal a wise use of money and time?Or should I instead focus on creating a mobile app? This way users can download the app and performer must functions through the app if they prefer.*And one thought.. If I create a mobile friendly splash page that would serve as the index (home page) would this remove the dreaded 'not mobile friendly' from Google. I did a test on the site and it did work. But would it really make a difference as the entire site is not mobile friendly? My goal is just to remove the ranking penalty from Google.I know that everyone is moving to mobile devices so logic dictates that I need to keep up with that, but I want to know that it is a wise choice and this means a decent ROI.I'm leaning toward mobile app. But I would love to hear over opinions.Thank you for your time. | Mobile friendly or mobile app? | seo;mobile | null |
_unix.227264 | I use a MS keyboard on my Debian machine. The problem is that MS does not ship configuration software for us and touchpad's default scrolling directions are reversed (it uses natural scrolling).I wonder if it's possible to tweak the input from the particular input device somehow so it behaves normally. I.e. I would like to replace scroll-up and scroll-down commands. | Is it possible to tweak input from touchpad? | x11;touchpad;scrolling;xinput | After a kind pointing by @Gilles to xinput I was able to swap scroll directions by using the set-button-map command.First you should lookup the device id or name using the list command and then remap scroll wheel buttons like this xinput set-button-map id 1 2 3 5 4 7 6.Published a small script which does this automatically. |
_unix.49255 | I just fresh installed Arch Linux reecently and when booting i can spot a warning:Removing leftover files [BUSY] [/usr/lib/tmpfiles.d/uuidd.conf:1] Unknown user 'uuidd'. [FAIL]What could this be? Is there a way to fix it? | Removing leftover files Unknown user 'uuidd'. [FAIL] on a fresh Arch Linux installation | arch linux;system installation;startup;uuid | Reinstalling util-linux will fix it:pacman -Syu util-linux |
_codereview.46928 | The purpose of the below code is to build a shape based on the user input, specifically:Prompt the user to enter a shape - either a triangle or a squareDetermine which texture to use for the shape - currently set at #'s for squares, *'s for trianglesBuild the shape based on the selected textureSample output for Square:###############Sample output for Triangle:***************How can I completely redo the structure and flow of the program to better follow convention, especially in regards to the way the classes are structured? I still want use multiple classes however, as the main thing i'm trying to do here is work out how to use classes properly. Any other general refactoring welcome too. Also sorry for the tab spacing. It's set to 2, but is a lot more when I paste it here.# Shape Builder v0.1class BuildShape def buildShapeCheck if @text == square self.buildSquare else self.buildTriangle end end def buildSquare (1..3).each do puts @texture*5 end end def buildTriangle j = 1 (1..5).each do puts @texture*j j += 1 end endendclass Textures < BuildShape def shapeCheck @texture = if @text == square @texture = # buildShapeCheck else @texture = * buildShapeCheck end endendclass UserInput < Textures def initialize(text) @text = text end def printInput print you entered: puts @text self.testIfValid end def testIfValid if @text == square || @text == triangle puts #{@text} is a valid shape. shapeCheck else puts #{@text} is not a valid shape, try again newUserInput = UserInput.new(gets.chomp.downcase) newUserInput.testIfValid end endendputs Enter either \triangle\ or \square\user_input = gets.chompnewUserInput = UserInput.new(user_input.downcase)newUserInput.printInput | Building shape based on user input | beginner;ruby;formatting | Ok, so... there's a lot to talk about here. I've simply gone through it line by line, and added comments. So prepare for a pretty long review.As for the indentation, the reason it's different when you paste it here is probably that you're still using tabs. It should be soft tabs, i.e. just 2 space characters. Not 1 tab character set to be 2 characters wide, but actual spaces.Anyway, long review incoming.# BuildShape is a poor name for a class. Classes should generally be nouns,# but Build shape is an imperative. It's true that the class does build a# shape, but the class in itself is not the act of building.class BuildShape # This class has no initialize method, but because its parent # class (which is Object, when nothing else is specified) has one, # I can still call BuildShape.new.buildSquare (for instance). But # if I do, things will be weird, because @text and @texture haven't # been defined. And without an initialize method (or attribute # setters), there's no way to define them. # Don't duplicate the name of the class in the names of its methods. # There's simply no need to do that; you don't need to name a file # on your computer after the folder it's in. # Also, Ruby uses underscored method names, so if anything, it should # be called build_shape_check def buildShapeCheck if @text == square self.buildSquare # no need for self. here else self.buildTriangle # or here end end # This produces a 5x3 rectangle - not a square! # It looks like a square because of the font you use to # show it, but that's coincidental and may not hold true # in all situations. So either make the method produce a # real NxN square, or make it very clear in the comments # that by square, I do not actually mean a square (i.e. 
# admit the deceit and its reasons) def buildSquare (1..3).each do # you could also just use 3.times do puts @texture*5 end end # again: Underscore naming style def buildTriangle j = 1 # This is completely unnecessary... (1..5).each do # ...if you just add |j| as a block argument puts @texture*j j += 1 # And then this can go too end endend# When I said classes should be nouns I meant *singular* nouns.# The String class is not called Strings for instance.# If anything, this should be called Texture# But more importantly, this class doesn't make sense. You use# class inheritance because you have a generic class and want to# make a specialization of that class. E.g. you have a class called# Vehicle, and you make a class called Car or Boat. Those are# specializations of a common thing: a vehicle.# But here, you seem to be making a new class for a completely# different reason - a reason I can't quite figure out.class Textures < BuildShape # Again, no initialize method... # Again you're kinda-sorta repeating the name of the class # in the method name, because - due to inheritance - this class # is a BuildShape. It'd be enough to call the method check. # ... buuut, this method isn't checking anything, so it'd # still be a terrible name. def shapeCheck # This is unnecessary; @texture *will* be set to either # # or to * - there's no reason to set it to anything else # beforehand. @texture = # Another thing is that this method is pointless in many ways. # You can't set the @texture variable anywhere else, which means # @texture is 100% dependent on what @text is. Which, in turn, # means you can just figure out the right texture when you're # drawing the shape. You don't even need the @texture variable; # the methods that print the shape are already different depending # on whether you want a triangle or a square; they can just print # the correct character themselves. if @text == square @texture = # # Don't call this here, *and* in the else-block: Call it *after* # the if-else instead, since you want to call independent of # what the @text is buildShapeCheck else @texture = * buildShapeCheck # Again: Delete this end endend# Ok, this is a better name of a class. UserInput sounds like a# class name. But of course, your inheritance chain is saying that# UserInput is a kind of Textures, which is a kind of BuildShape# Do you see how that doesn't make sense?# It also highlights why the other classes are problematic: You# can't use them on their own. Only after 2 levels of inheritance# do you get the class that actually solves the task. The preceding# classes don't solve anything by themselves, you're simply# treating them as stepping stones to the class you actually want.# They're dependent on you extending them, which, to reuse the# analogy from earlier, is like saying that vehicles don't work# until someone builds a boat; it's backwards.class UserInput < Textures # Yay! An initialize method! def initialize(text) @text = text end # Ok, it prints the input - but it also checks the input # and that's not at all obvious. Your methods should only # do what it says on the tin. def printInput print you entered: puts @text self.testIfValid end # Uh, no. The name of this method is again pretty wrong. # Does it mean it only does its test, if things are valid? # The Ruby-like name for this thing would be valid? # And again, the method isn't just testing; it's also # the method that actually produces the output! AND it # completely duplicating code from *outside* the class. 
# What's worse, it creates a instance of itself, within # itself, and... honestly, I don't know how to best explain # how weird this is. Sorry, but that the case. def testIfValid if @text == square || @text == triangle puts #{@text} is a valid shape. shapeCheck else puts #{@text} is not a valid shape, try again newUserInput = UserInput.new(gets.chomp.downcase) newUserInput.testIfValid end endend# If you just used single quotes, you wouldn't have to escape# the inner double quotes...puts Enter either \triangle\ or \square\# Terrible naming: user_input is *not* a UserInput object;# that's newUserInput! Which, again, shouldn't be CamelCased# and just plain shouldn't be called that.user_input = gets.chompnewUserInput = UserInput.new(user_input.downcase)# So... when you say printInput, you actually mean# do everything and print the *output*?newUserInput.printInputHere's a different way of doing things:def print_triangle(size = 5) size.times { |count| puts * * count }enddef print_square(size = 5) size.times { puts # * size }endwhile true # loop until further notice puts 'Please type either triangle or square' type = gets.chomp.downcase if %w(triangle square).include?(type) # did the user enter something valid? send(print_#{type}) # call the proper method break # stop looping else puts Sorry, that didn't make sense. Try again. endendAnd done. I know this doesn't use any class hierarchy or anything like that, but that's also to illustrate that it isn't always necessary. Just because you can doesn't mean you should. |
_unix.217738 | Is there a way to restrict CPU time (duration) for all processes which are invoked by executables that are located in a certain directory?I would like to be able to auto-kill all applications which certain users start in their home directories after a certain amount of time (for example after 10 minutes). | Restricting CPU time of processes by executable path | linux;process;users;limit;cpu usage | What you want is in this answer and this answerThe only thing I'll add is the -u user option for ps eg:ps -u <username>to search processes started by a user. |
_webmaster.64821 | I am considering several options for the Trial version of our web app.The one I favor the most is the classic trial period.Of course, it's a LOT easier for someone to hack this system. They can defeat the various methods of fingerprinting the user, thusly:Browser cookie: Clear their cookies completely (or just for our site) or use a different device. Although Evercookie may help with the former.Email address: Create a new login (with a new email address)I'm going to monitor things for a while and just see how it goes. If it's a problem I'll consider requiring a credit card number matched to a name and billing zip code. Each such ID constellation would be considered one user. Someone could still have multiple credit card numbers but we could flag the same name+zipcode coming up again.Are there any better ways to do this? | How to you prevent someone from getting a new free trial period by just creating a new login? | authentication | Along the same lines as the other answers... by requesting an item of personal information that is difficult to repeat (many times).What about a (mobile) phone number? A code is text'd to this number for the user to be able to authenticate the first time (or multiple times)? |
_unix.215989 | Here's the output of a grep command I ran:[user@localhost] : ~/Documents/challenge $ grep -i -e .\{32\} fileA fileBfileA:W0mMhUcRRnG8dcghE4qvk3JA9lGt8nDlfileB:observacion = new Observacion();fileB:observacion.setCodigoOf(ordenBO.getCodigo());fileB:observacion.setDetalle(of.getObservacion().getSolicitante());fileB:observacion.setTipoObservacion(TipoObservacionOrdenFleteMaestro.SOLICITANTE);fileB:observacion.setProceso(TipoProcesoObservacionMaestro.MODIFICACION);fileB:observacion.setFecha(Utiles.getFechaSistema());fileB:java.util.Date fechaHora = Calendar.getInstance().getTime();fileB:observacion.setUsuarioCrecion(usuarioSesionado.getUsuario().getUsuario());fileB:daoObservacion.agregaObservacion(observacion);I'm looking for 32 character long string in two files: fileA and fileB. Importantly, fileA contains exactly 32 characters only, with no line breaks:[user@localhost] : ~/Documents/challenge $ hexdump -C fileA00000000 57 30 6d 4d 68 55 63 52 52 6e 47 38 64 63 67 68 |W0mMhUcRRnG8dcgh|00000010 45 34 71 76 6b 33 4a 41 39 6c 47 74 38 6e 44 6c |E4qvk3JA9lGt8nDl|00000020The problem with my grep command is that it is returning any line that has more than 32 chars. How can I make it return only lines with exactly 32 chars. The issue for me is that I can't modify my regex to match on a line break, because there is no line break.My expected output would be simply:fileA:W0mMhUcRRnG8dcghE4qvk3JA9lGt8nDl(note: this is for a challenge that I've already solved with my ugly solution, but in this scenario we can only use grep and piping or redirecting output is not allowed) | grep for lines/files that contain exactly X chars | terminal;grep;regular expression | This works. ^ denotes start of line, plus the {32} you already had, then a $ for end of line.$ cat fileA fileB1234567890123456789012345678901212345678901234567890123456789012312345678901234567890123456789012123456789012345678901234567890124$ grep -E ^.{32}$ fileA fileBfileA:12345678901234567890123456789012fileB:12345678901234567890123456789012$And as pointed out by @steeldriver, posix grep includes -x, so the following approach also works : grep -xE .{32} fileA fileB |
_unix.281437 | I am viewing a ksh script and I see a function where the variable has been defined as below. Can anyone explain what exactly the below assignment of variable means in ksh script?temprule=\$${APPLC_NM} | Variable assignment in a ksh script | ksh;variable | As @Julie Pelletier indicated, this is funny syntax to make a indirect variable, or a nameref. ksh has some specialized syntax to make this work, however. This is a feature of ksh, and might not work in other shells.The more idiomatic way to write the same in ksh would look like this:# Set up the nameref:nameref temprule=APPLC_NM# Assign value to AAPLC_NMAPPLC_NM=abc# The above two statements may be executed in any order.# Now, temprule will contain the value of APPLC_NM:echo $tempruleabcNow, no funny escaping of double $ is necessary, and the result is arguably more readable. |
_softwareengineering.347938 | I just separated the DLL file from the unity APK.The problem is that when I looked at the code using ilspy, the core code was obfuscated.Some obfuscated code content is shown below.private MemberInfo #=qq8R6v_ItPViUpAzS7xWkOg==;private int #=q_K5X7GL8xTP2m50KA_NZnA==;private int #=qlNeHmWrVlySeEYVh1__KmMbNzB92Es2My9T4DQqSKLE=;public Type #=qGBl6fTgyOO04PxiE0k3Fgg==;public Type #=qPC2CX9L8vieR$SKD8FeQsw==;public MemberInfo #=qnPGC_yuEmuamYAPR84T5Jw==;public MemberInfo[] #=qJdlHw94ORqik5HzLGu5x_Q==;public int #=qwg15C$fun56sSWJeU6k_HQ==;At first, it seemed to be Base64, so I tried to decode it, but only the unknown string came out.There is also a special string in the middle, and I tried to encode the decoded binary data in various ways such as UTF-8 or 16, but I did not get the desired result.I seek advice on how to resolve this obfuscation. | I want to eliminate this obfuscation | c#;encryption;obfuscation | null |
_unix.9740 | Sometimes, I want to insert the result of an Emacs command (that has been echoed in the echo area) to another buffer or another running X program. So, I'd like to put it to the kill-ring. What would be a convenient way to do this?For example: I could run a query with a shell command while in dired mode, say: !rpm -qf (to find out which package owns the selected file in the directory listing), and then want to insert the result somewhere else.Or, another example: if I needed the filename of the current buffer (as in Is there a user interface in Emacs allowing one to grab the buffer's filename conveniently?), and there was not yet any predefined command for this, I could at least do M-:(buffer-file-name) and then use this general-purpose way to copy the shown result to the kill-ring in order to paste it later. (Of course, I could eval (kill-new (buffer-file-name)), but this example here is to illustrate what kind of general-purpose way to do the grabbing of the echoed result I'm looking for.) | Is there a convenient general way to grab the echoed result of a command in Emacs (of M-: or M-!)? | emacs;copy paste | null |
_unix.365614 | I have a JSON file as below { Foo: ABC, Bar: 20090101100000, Quux: { QuuxId: 1234, QuuxName: Sam }}I want to convert it to the below {Foo:ABC,Bar:20090101100000,Quux:{QuuxId:1234,QuuxName:Sam}}I tried to remove '\n', '\t', and ' ' characters; but I am not getting in the needed format. How can I convert it? | How to convert a JSON's file tree structure into a single line? | conversion;json | null |
_unix.336804 | Are there any methods to check what you are actually executing from a bash script?Say your bash script is calling several commands (for example: tar, mail, scp, mysqldump) and you are willing to make sure that tar is the actual, real tar, which is determinable by the root user being the file and parent directory owner and the only one with write permissions and not some /tmp/surprise/tar with www-data or apache2 being the owner.Sure I know about PATH and the environment, I'm curious to know whether this can be additionally checked from a running bash script and, if so, how exactly?Example: (pseudo-code)tarfile=$(which tar)isroot=$(ls -l $tarfile) | grep root root#and so on... | Verfication of command binaries before execution | bash;shell script;shell;security | Instead of validating binaries you're going to execute, you could execute the right binaries from the start. E.g. if you want to make sure you're not going to run /tmp/surprise/tar, just run /usr/bin/tar in your script. Alternatively, set your $PATH to a sane value before running anything.If you don't trust files in /usr/bin/ and other system directories, there's no way to regain confidence. In your example, you're checking the owner with ls, but how do you know you can trust ls? The same argument applies to other solutions such as md5sum and strace.Where high confidence in system integrity is required, specialized solutions like IMA are used. But this is not something you could use from a script: the whole system has to be set up in a special way, with the concept of immutable files in place. |
_webapps.79061 | I had a function and many people took photographs. Is it possible to create a folder that anyone can edit and all the people can upload the pics they have in that folder? | Allow anyone to upload images on a single Google Drive folder | google drive | null |
_softwareengineering.74362 | I've encountered bugs that are extremely difficult to reproduce reliably and/or explain definitively, but that appear to be solved. When this happens, how much time should I spend chasing it down?Example: this SO question and this related jQuery forum post, which offer differing solutions. The issue was reproducible intermittently until the change I discuss in the SO question, and not at all after the change.If I don't conclusively understand what caused the bug, can I claim with confidence that it won't come back in the future by surprise? | Un-Explainable Bugs? | bug;debugging | If I don't conclusively understand what caused the bug, can I claim with confidence that it won't come back in the future by surprise?No. So your next question should be: how bad will it be if it does come back by surprise?If the answer is pretty bad, it will cost us millions of dollars and customers then you need to spend some time figuring it out. Best approach is to take a test environment and revert the change that you think fixed it and nothing else.If the answer is well, it'll be a bit embarrassing, but nothing we can't deal with then call it a glitch for now. |
_unix.223554 | I have lines like:storedVars[css_delete_driver] = css=.driver:nth-child(2) *[data-method=delete];storedVars[css_delete_driver2_mobile] = css=a.remove-driver;and I want to create a file with methods such as:def css_delete_vehicle_everquote css=.autos .auto *[data-method=delete]enddef css_delete_driver css=.driver:nth-child(2) *[data-method=delete]endhow could I do that with sed ? | How could I use sed to convert a js file with storedVars to a ruby file with methods? | sed | A version that will work with s/ per line and no in-line returns might be:cat Variables/user-extensions.js | sed $'s/storedVars/def /s/\[//s/\]//s/= /\\\n /s/;/\\\nend/'because $'string' changes \\\n to newlines as detailed in https://stackoverflow.com/a/18410122/631619 |
_unix.334270 | I am using webvirt in order to play with a web page and have a quick access to my VMs (qemu setup). My problem is that when I try to open a console for any of my guest domains, in the server side I have to provide the credentials in order to open the new window and have access to the domain. Is there any way to accomplish without the need of repetitive authentications?Thanks | webvirt authentication | python;kvm;qemu | null |
_webapps.98225 | I'm facing an issue when trying to convert a specific date format into name of month. Supermetrics, an addon for Google Spreadsheets, returns date formats like the following: 2016|08, etc.I have tried several solutions from the documentation without any luck. Any solutions or specific functions that I should look for? | Trying to convert this date format 'year|month into name of Month using Google Sheets | google spreadsheets | null |
_unix.141573 | I'm trying to use a factor utility but it tells me that number is too large. Is there any utility that can do what factor doing but not tells that number is too large? | Factor is too large | utilities | Maybe your factor is not built with GMP, so it can not handle number bigger than 2**64-1:$ factor 18446744073709551616factor: `18446744073709551616' is too large$ factor 1844674407370955161518446744073709551615: 3 5 17 257 641 65537 6700417Running this command to check if factor built with GMP:$ ldd /usr/bin/factor linux-vdso.so.1 (0x00007fffda1fe000) libgmp.so.10 => /usr/lib64/libgmp.so.10 (0x00007faae00f5000) libc.so.6 => /lib64/libc.so.6 (0x00007faadfd46000) /lib64/ld-linux-x86-64.so.2 (0x00007faae037c000)The limit may be higher on some machines (the number has to fit in uintmax_t type), but your number is a 256-bit number, and no common machine supports such a big uintmax_t, if any.Note that the factor utility can be compiled with GMP support. In that case, there is effectively no limit on the size of the number. It appears that your distribution hasn't activated GMP support (which makes sense since it would add a dependency on an extra library to a core system package for a rarely used feature).If you have perl, you can try factor.pl program include in Math::Prime::Util module:$ /home/cuonglm/.cpan/build/Math-Prime-Util-0.31-9c_xq3/bin/factor.pl 115792089237316195423570985008687907852837564279074904382605163141518161494337115792089237316195423570985008687907852837564279074904382605163141518161494337: 115792089237316195423570985008687907852837564279074904382605163141518161494337 |
_codereview.41343 | There is no error checking in day_of_year or month_day. remedy this defect.Here is the solution:int day_of_year(unsigned int year, unsigned int month, int day) { int leap, i; leap = ((year % 4 == 0 && year % 100 != 0) || (year % 400 == 0)); if(((month >= 1) && (month <= 12)) && ((day >= 1) && (day <= daytab[leap][month]))) { for(i = 1; i < month; i++) { day += daytab[leap][i]; } return day; } return -1;}void month_day(unsigned int year, unsigned int yearday, int *pmonth, int *pday) { int leap, i; leap = ((year % 4 == 0 && year % 100 != 0) || (year % 400 == 0)); if((leap == 1 && (yearday >= 1 && yearday <= 366)) || (leap == 0 && (yearday >= 1 && yearday <= 366))) { for(i = 1; yearday > daytab[leap][i]; i++) { yearday -= daytab[leap][i]; } *pday = yearday; *pmonth = i; } else { printf(error: the yearday is invalid); }}In the day_of_year's case I have to check if the yearday is a valide one. 1 <= yearday <= (365 || 366). I changed the parameters type to unsigned, because a day can't be negative nor a year.In the month_day's case I check if the month is a valid one, it should be 1 <= month <= 12. After this, I check if the day belongs to a valid interval. This exercise can be found in K&R2 at page 126. | Functions that converts day of year to month and day and reverse with error checking | c;beginner;datetime;error handling | Incorrect yearday limit// if((leap == 1 ... || (leap == 0 && (yearday >= 1 && yearday <= 366))) {if((leap == 1 ... || (leap == 0 && (yearday >= 1 && yearday <= 365))) {// 365 month_day() and day_of_year() should use consistent types for month. Suggest int for both.// int day_of_year(unsigned int year, unsigned int month, int day) {// void month_day(unsigned int year, unsigned int yearday, int *pmonth, int *pday) {int day_of_year(unsigned int year, int month, int day) {month_day() and day_of_year() should use consistent types for yearday. Suggest int for both.// int day_of_year(unsigned int year, unsigned int month, int day) {// void month_day(unsigned int year, unsigned int yearday, int *pmonth, int *pday) {int day_of_year(unsigned int year, int month, int day) {void month_day(unsigned int year, int yearday, int *pmonth, int *pday) {Leap year calculation leap = ((year... is good back to 1583. For years 4 to 1582 it is leap = (year % 4 == 0); 1582 has other complications. Before 4 has complications.Suggest month_day() return int to indicate success or failure. |