id | question | title | tags | accepted_answer
---|---|---|---|---|
_webapps.101625 | If I post something on Facebook, I can see in the privacy settings that I can make it visible to all of my friends, and I can also choose Except and list individual people to be excluded. I am trying to understand how this behaves if there is a person tagged in the post and I include Friends of Tagged in the privacy scope. So for example: I put up a post and I tag Joe. I set the privacy settings to All Friends except Bill and Friends of Tagged. If Bill is a friend of Joe's, will he see the post? If yes, is there any way to stop him from seeing it besides removing Friends of Tagged? | Does Facebook respect your except list if those people are in friends of tagged? | facebook;facebook privacy | null |
_codereview.52558 | This code is based on the Stack implementation in Chapter 3 of Cracking The Coding Interview. I modified the code to make it compile and give me the correct output. I'd appreciate any feedback on code style and correctness, assuming that I write this code in a technical interview. The pseudocode from Cracking the Coding Interview, which is implemented using a linked list: class Stack { Node top; Object pop() { if (top != null) { Node item = top.data; top = top.next; return item; } return null; } void push(Object item) { Node t = new Node(item); t.next = top; top = t; } Object peek() { return top.data; }} My code: public class Stack { Node first; Node last; public Stack(Node f, Node l) { first = f; last = l; first.next = last; } public Stack() { first.next = last; } public void push(Object data) { if(first == null) { first = new Node(data, null); } else { last.next = new Node(data, null); last = last.next; } } public Object pop() { if(first == null) { return -1; } else { Object item = last.data; Node cur = first; while (cur.next.next != null) { cur = cur.next; } last = cur; return item; } } public Object peek() { if(first == null) { return -1; } Object item = last.data; return item; } public static void main(String[] args) { Stack stack = new Stack(new Node(1, null), new Node(2, null)); stack.push(3); System.out.println(stack.peek() == 3); stack.pop(); System.out.println(stack.peek() == 2); } private static class Node { Object data; Node next; private Node(Object d, Node n) { data = d; next = n; } }} | Implementing a Stack in Java for a technical interview | java;interview questions;linked list;stack | A stack should not know about the first element. A stack only knows about the last element that was pushed, so the Stack object itself should only have one Node called last. Because of this, the constructor should be changed as well to only take one Node (e.g. public Stack(Node node)). Edit: As @vnp says, Node is private, so it cannot be created outside this class. Either create a constructor which takes an Object, or don't create any constructors and always create an empty Stack. This constructor doesn't do anything and will throw a NullPointerException: public Stack() { first.next = last; } In pop(), you shouldn't need to do any fancy while loops. Just check if last is null and if not, get last's data and set last to last.next: public Object pop() { if(last == null) { return -1; } else { Object item = last.data; last = last.next; return item; } } In push(), you should simply set last to be the new Node, and have it point to the old last as its next: public void push(Object data) { last = new Node(data, last); } In peek(), you can change first to last and simply return last.data: public Object peek() { if(last == null) { return -1; } return last.data; } One final point: you should not give your variables one-character names like d, n, l, and f. If the variables in your constructor should have the same name as the private member variables, then give them the same name and prefix the members with this to differentiate them. For example, you can change the Node constructor to: private Node(Object data, Node node) { this.data = data; this.next = node; } |
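The review above reduces the stack to a single top/last reference with O(1) push and pop. For readers outside Java, here is a minimal Python sketch of the same design; the class names and the empty-stack exceptions are my own choices for the sketch, not from the original code.

```python
class Node:
    """One cell of the singly linked list; `below` points toward the bottom."""
    def __init__(self, data, below=None):
        self.data = data
        self.below = below


class Stack:
    """LIFO stack that only tracks the top node, as the review recommends."""
    def __init__(self):
        self.top = None

    def push(self, data):
        # The new node points at the old top; no traversal needed.
        self.top = Node(data, self.top)

    def pop(self):
        if self.top is None:
            raise IndexError("pop from empty stack")
        data = self.top.data
        self.top = self.top.below
        return data

    def peek(self):
        if self.top is None:
            raise IndexError("peek at empty stack")
        return self.top.data


s = Stack()
s.push(1); s.push(2); s.push(3)
assert s.pop() == 3 and s.peek() == 2
```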
_cs.60654 | Problem: Consider a set of $n$ points in the plane; how could we find a strip of minimal vertical distance that contains all points? Definitions: A strip is defined by two parallel lines, and the vertical distance is defined as the distance between their intersection points with the $y$ axis. Three-variable solution: In the plane itself, this could be solved using a linear program of three variables, $m$, $a$ and $b$, where we look for $y=m\cdot x+a$ and $y=m\cdot x+b$. Duality: If we move to the dual plane, we get a set of $n$ lines, which can be transformed into $n$ upper half-planes or $n$ bottom half-planes. Denote by $C_1$ the intersection of all upper half-planes and by $C_2$ that of the bottom ones. The strip in the dual problem is represented by the two ends of the shortest vertical segment crossing $C_1$ and $C_2$. My question is: can we express the problem in the dual plane using a linear program of two variables? | Finding a minimal width strip which encloses a set of points in the plane | computational geometry;linear programming;duality | Take the convex hull of your set of points. Then use rotating calipers to find the optimal strip. What is needed here to make this work is a lemma that characterizes a potentially optimal solution: Could the optimum occur without one supporting line through two points (flush to the hull)? Added: Yes, that flush lemma holds, because more-horizontal strips are preferred. So: for each edge $e$ of the convex hull $H$, extend $e$ to a line $L_1$, and let $L_2$ be the parallel line supporting $H$ on the other side. Compute the vertical distance between $L_1$ and $L_2$. Select the shortest distance among all alternatives. |
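The flush lemma in the answer above makes the computation concrete: for a fixed slope m, the tightest strip's vertical width is max(y - m*x) - min(y - m*x) over all points, and only slopes of lines through two input points need to be tried. Below is a small Python sketch of that idea, deliberately brute force (O(n^3) over point pairs) rather than a full rotating-calipers implementation over hull edges.

```python
def min_vertical_strip(points):
    """points: list of (x, y) with at least two distinct x values.
    By the flush lemma, one boundary line of the optimal strip passes
    through two input points, so it suffices to try the slope of every
    point pair. Returns (minimal vertical width, slope of the strip)."""
    best = None
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            if x1 == x2:
                continue  # vertical boundary: vertical width is unbounded
            m = (y2 - y1) / (x2 - x1)
            # Intercept of the line y = m*x + c through each point.
            c = [y - m * x for x, y in points]
            width = max(c) - min(c)
            if best is None or width < best[0]:
                best = (width, m)
    return best


print(min_vertical_strip([(0, 0), (1, 1), (2, 0), (3, 1)]))  # (1.0, 0.0)
```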
_unix.216829 | I have recently installed Arch Linux x64 and I wanted to install the LAMP stack. Everything worked fine until I arrived at the MySQL part, which I installed but can't launch. The output of sudo systemctl start mysqld gives: Job for mysqld.service failed because a timeout was exceeded. See systemctl status mysqld.service and journalctl -xe for details. And here is the systemctl status mysqld.service output:
* mysqld.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: activating (start-post) (Result: exit-code) since Fri 2015-07-17 22:31:04 CET; 20s ago
  Process: 9548 ExecStart=/usr/bin/mysqld --pid-file=/run/mysqld/mysqld.pid (code=exited, status=1/FAILURE)
 Main PID: 9548 (code=exited, status=1/FAILURE); : 9549 (mysqld-post)
   CGroup: /system.slice/mysqld.service
           `-control
             |-9549 /bin/sh /usr/bin/mysqld-post
             `-9743 sleep 1
Jul 17 22:31:04 sn4k3 systemd[1]: Starting MariaDB database server...
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [Note] /usr/bin/mysqld (mysqld 10.0.20-MariaDB-log) starting as process 9548 ...
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [Warning] Can't create test file /var/lib/mysql/sn4k3.lower-test
Jul 17 22:31:04 sn4k3 mysqld[9548]: [96B blob data]
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [ERROR] Aborting
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [Note] /usr/bin/mysqld: Shutdown complete
Jul 17 22:31:04 sn4k3 systemd[1]: mysqld.service: Main process exited, code=exited, status=1/FAILURE | unable to launch mysqld in arch linux | arch linux;mysql;mariadb | Found the solution: you just have to run this command: sudo mysql_install_db --user=mysql --basedir=/usr/ --ldata=/var/lib/mysql/ (source: Arch Linux wiki) |
_opensource.4628 | If a company wants to use an application that is distributed under the BSD license with this text below. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the <organization> nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS AS IS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The commercial company wants to use the software in binary form. Do you need to contact the author in order to use the software, in binary form, for a commercial product? | Do you need to contact the author to use a BSD-licensed software? | bsd | null |
_unix.182110 | So I have a file named test.txt; inside that file I have about 20 lines of text that are delimited by pipe |. Example:
John|freshman|seatle|math|4|fulltime
Bob|senior|Tacoma|biology|4|part-time
I want to make 2 lines for each record after the number 4, for example:
John|freshman|seatle|math|4|full-time
Bob|senior|Tacoma|biology|4|part-time
etc. | add a new line to a delimited file | shell script;text processing | You could use sed: sed -i 's/|4|/|\n4|/' file.txt This will replace |4| with |\n4| (i.e. a vertical bar, a newline, and then 4|). |
_webapps.38964 | I wish to sync my work's Microsoft Outlook calendar one-way with Google Calendar so that I can view my events all together in my Google account. I wish to create a new calendar on Google Calendar called work, which my Microsoft Outlook calendar will feed data to. I've tried a few apps which allow me to sync the calendars, but they mainly seem to want to sync to my main Google Calendar, not a specific one. | Is it possible to sync MS Outlook with specific Google Calendar? | google calendar;outlook;synchronization | null |
_codereview.109221 | This is one of my first attempts in writing a bowling scorecard app. It can have more than one player. I would like to know if anyone has suggestions to improve the code. I am trying to make it as object oriented and as organized as possible. There are 2 public methods, one that accepts pins that were knocked down, and another method that gets score card information that can later be displayed to the user.I feel though for some reason that the playerRolled method is a little bit disorganized. Maybe it's responsible for too many things. Essentially in mind I wanted to create a simple way to interface with the object by just having one method that does everything, will add points, change players, change frames, and returns boolean if game is completed. This leaves all the game logic within the class and users of this class just have to feed the application some player names and the pins being knocked. The app will take care of the rest.Is this a good design? var BowlingGame = function(params) {var currentFrame = 0;var playerNumber = 0;var gameOver = false;var players = [];if (params) { for(var name in params) { players.push({name: params[name], frames : []}) }}function isLastFrame() { return (currentFrame == 9)}function isGameOver() { return (isLastFrame() && frameFinished() && (playerNumber == (players.length - 1)))}function frameFinished() { if (!players[playerNumber].frames[currentFrame]){ return false } var rolls = players[playerNumber].frames[currentFrame].length var sumRolls = 0; players[playerNumber].frames[currentFrame].map(function(item){ sumRolls += item }) if (isLastFrame() && (sumRolls >= 10) && (rolls < 3)) return false return (rolls == 2 || players[playerNumber].frames[currentFrame][0] == 10) ? true : false }function nextFrame() { if (currentFrame < 9) { currentFrame++ }}function nextPlayer() { if(playerNumber < (players.length - 1)){ playerNumber ++ } else { playerNumber = 0 }}this.playerRolled = function playerRolled(pins) { if (!gameOver) { // validate pins not more than 10 if (players[playerNumber].frames[currentFrame]) { players[playerNumber].frames[currentFrame].push(pins) } else { players[playerNumber].frames[currentFrame] = [pins] } frameCompleted = frameFinished(); isLastPlayer = (playerNumber == (players.length - 1)) if (frameCompleted) { if (isLastPlayer) nextFrame(); nextPlayer(); } gameOver = isGameOver() return true } return false} this.getScoreCard = function getScoreCard() { for (var player in players) { var runningTotals = calculateFrameTotals(player) players[player].scores = runningTotals } return players;}function calculateFrameTotals(playerNumber) { var sum = 0; var sumFrameRolls = 0; var totalPointsPerFrame = [] function getSum(startingFrame, length) { // console.log('start', startingFrame, players[playerNumber].frames[startingFrame]) for (var num in players[playerNumber].frames[startingFrame]) { sum += players[playerNumber].frames[startingFrame][num]; length--; if (length==0) break; } if (length > 0 && players[playerNumber].frames[startingFrame+1]) { getSum(startingFrame + 1, length) } } // for each frame for (var frame in players[playerNumber].frames) { sum = 0 sumFrameRolls = 0 players[playerNumber].frames[frame].map(function(item){ sumFrameRolls += item }) if ( players[playerNumber].frames[frame][0] == 10 ) { getSum(parseInt(frame)+1, 2) } else if (sumFrameRolls == 10) { getSum(parseInt(frame)+1, 1) } totalPointsPerFrame.push(sumFrameRolls + sum) } return totalPointsPerFrame;}};Then to run the app, let's say using a Node console:var 
game = new BowlingGame(['Ron Buenavida','Omer'])game.playerRolled(10);game.playerRolled(3);game.playerRolled(2);game.playerRolled(10);console.log(JSON.stringify(game.getScoreCard())) | Bowling scorecard app | javascript;object oriented;game | First, don't use for-in for arrays. It will run through the elements of the array as well as other properties. Use a regular for loop, or better use map instead. Additionally, I suggest you name params to something else. params is too generic, plus the array isn't really params. It's a list of player names.var players = playerNames.map(function(player){ return { name: player, frames : []}});In frameFinished, I see you use map to construct a sum. reduce is the better method for such operation.var sumRolls = players[playerNumber].frames[currentFrame].reduce(function(sum, item){ return sum + item;}, 0);Further down in the same function, I see you have a bunch of conditions. You can actually just combine them. Additionally, conditions are by themselves boolean. No need to use a ternary to return true or false. Also, I suggest you put the values in variables for easy comprehension.var isLastFrame = isLastFrame();var hasRolledAllTenRounds = sumRolls >= 10;var hasRolledLessThanThree = rolls < 3;var hasRolledTwo = rolls === 2;var currentPlayerIsAtFrameTen = players[playerNumber].frames[currentFrame][0] === 10;return !(isLastFrame && hasRolledAllTenRounds && hasRolledLessThanThree) || // and so on...Now one problem with if statements is in the long run, they can easily run out of control and end up in deeply nested situations. One way you can avoid that is to use ternaries and condition variables. For instance, nextPlayer.playerNumber = (playerNumber < (players.length - 1)) ? player + 1 : 0or currentFramecurrentFrame = currentFrame < 9 ? currentFrame + 1 : currentFrame;The rest of your code seem to follow the same pattern. I suggest applying what I have reviewed to the rest, where applicable. |
_codereview.43697 | The code below will split environment variables from a command line (always appear at the end of the command line). Environment variables are represented by '-E key=value'. I've achieved this like so, but I'm wondering if there's a more elegant waypublic class TestSplit { public static void main(String... args) { String command = -ps 4 -pe 5 -E opInstallDir=/home/paul -E opWD=/home/paul/remake -E opFam=fam -E opAppli=appli; int startPosition = command.indexOf(-E) + 2; String envVars = command.substring(startPosition); for(String pair: envVars.split(-E)) { String[] kv = pair.split(=); System.out.println(kv[0] + +kv[1]); } }}EDIT Just to clarify these aren't command line arguments for launching the program from the console, they are command line arguments for launching an external program. The details of which I haven't included. | Splitting a command line into key/value pairs | java;child process | Like @palacsint I will recommend an external library. Apache commons-cli is a decent choice. Another choice (my preference) is java gnu-getopt ... I like it because I am familiar with the notations and standards from previous work. It can be a little complicated the first time around otherwise.On the other hand, I tend not to use an external library unless the code is already going to be relatively complicated....But, back to your code.Why do you have everything in a single String? Why is it not part of the String...args ?The first thing about command-line arguments is that they get complicated very fast. What if the argument was:String command = -ps 4 -pe 5 -E opInstallDir=/opt/OSS-EVAL/thiscode -E opWD=/home/paul/remake -E opFam=fam -E opAppli=appli -Edocs='My Documents' -Eparse=key=value;I have thrown in a few things there.First up, on our one machine at work, we really do have the directory /opt/OSS-EVAL/ which we use to install/evaluate OSS software/libraries.The above will break your parsing because it has the -E embedded in the name.Next up, is 'POSIX-style' commandline arguments can have quoted values, and also values with an = in the value.So, things I would recommend to you:Locate the source of your command-line values. It will likely be available as an array, not a single string. Keep the data as an array!Second, with the array, it is easier to look for stand-alone values that are -E, or, if the input is -Ekey=value then you look for values that start with -E.Finally, when you split the key/value on the =, limit the split to 2.String[] kv = pair.split(=, 2);Which will preserve any of the = tokens inside the value part.EDIT:You have suggested in your edit that this is for sending data to an external command.If you are using Java to initialize the external command, then please, please, please use the version of exec() that takes a command array, or use the ProcessBuilder which allows you to send all the command-line parameters as separate values in an array!!! |
_softwareengineering.337487 | I have a program that needs to get part of its data from an API of another program, and the data needs to update every 5 seconds. For example: I have a program that presents homework for each class (let's assume the homework updates every 5 seconds). My program gets the homework from an API. So at the beginning we set an interval on each client side: every student asks the server every 5 seconds to update their data, the server sends a request to the API, the server processes the data from the response, the server saves the updated data to our DB and sends it back to the client. If I have 4 students from class A and 2 students from class B, I have 6 requests to the API. We wanted to reduce these requests, so we chose to save the data to the DB with a timestamp. Now if student1 from class A saved the homework into the DB, and after 2 seconds student2 (class A) asks for new homework, he checks the DB first and gets the data from the DB and not from the API. The question is: How should we keep our data up to date from another API? Should we keep it this way, or should we create another site/program that's in charge of the updates? Hope you'll have an answer for us. Thank you, Lior. | Keep data up to date via api | design patterns;api design | null |
_webmaster.14493 | I have a main site with a bunch of subdomains created. Each subdomain is a blog and I want each blog to have its own domain name, i.e. thisguy.com -> blog1.mainsite.com, thatguy.com -> blog2.mainsite.com. I bought the new domains and I set up the CNAME records as above to alias them to the appropriate subdomains. However, I get my host's "a domain is pointing to one of our servers but we don't know anything about it" landing page. How can I set up these domains as aliases of my subdomains? | How can I alias domains to subdomains? | subdomain | null |
_unix.98318 | I have two input files. File1:
s2/80 20 . A T 86 F=5;U=4
s2/20 10 . G T 90 F=5;U=4
s2/90 60 . C G 30 F=5;U=4
File2:
s2/90 60 . G G 97 F=5;U=4
s2/80 20 . A A 20 F=5;U=4
s2/15 11 . A A 22 F=5;U=4
s2/90 21 . C C 82 F=5;U=4
s2/20 10 . G . 99 F=5;U=4
s2/80 10 . T G 11 F=5;U=4
s2/90 60 . G T 55 F=5;U=4
Expected output:
s2/80 20 . A T 86 F=5;U=4 s2/80 20 . A A 20 F=5;U=4
s2/20 10 . G T 90 F=5;U=4 s2/20 10 . G . 99 F=5;U=4
Logic: I want matching lines from File1 and File2 concatenated in the output file. Conditions: Columns 1, 2 and 4 of File1 and File2 must exactly match, and Column 5 of File2 must be a dot (.) or match Column 4 of File2 exactly. Code: I tried using the script:
BEGIN{}
FNR==NR{
  k=$1 $2
  a[k]=$4 $5
  b[k]=$0
  c[k]=$4
  d[k]=$5
  next
}
{
  k=$1 $2
  lc=c[k]
  ld=d[k]
  # file1 file2
  if ((k in a) && ($4==$5) && (lc==$4)) print b[k] $0
}
But I get an output of only:
s2/80 20 . A T 86 F=5;U=4 s2/80 20 . A A 20 F=5;U=4
whereas my output should be:
s2/80 20 . A T 86 F=5;U=4 s2/80 20 . A A 20 F=5;U=4
s2/20 10 . G T 90 F=5;U=4 s2/20 10 . G . 99 F=5;U=4
I would appreciate your help. Thanks. | Matching Five Columns in two Files using Awk | sed;awk | awk '
  { key = $1 SUBSEP $2 SUBSEP $4 }
  # here, we are reading file1
  NR == FNR {
    f1_line[key] = $0
    next
  }
  # here, we are reading file2
  key in f1_line && ($5 == "." || $5 == $4) {
    print f1_line[key], $0
  }
' file1 file2
outputs
s2/80 20 . A T 86 F=5;U=4 s2/80 20 . A A 20 F=5;U=4
s2/20 10 . G T 90 F=5;U=4 s2/20 10 . G . 99 F=5;U=4 |
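The same hash-lookup join the awk answer performs can be written in any language with dictionaries. Here is a rough Python equivalent as a sketch (file names as in the question): index File1 by columns 1, 2 and 4, then stream File2 and print matching pairs.

```python
# Build an index of file1 keyed on columns 1, 2 and 4 (0-based: 0, 1, 3).
f1_line = {}
with open("file1") as f1:
    for line in f1:
        f = line.split()
        f1_line[(f[0], f[1], f[3])] = line.rstrip("\n")

# Stream file2 and emit pairs under the same condition as the awk answer:
# column 5 is "." or equals column 4.
with open("file2") as f2:
    for line in f2:
        f = line.split()
        key = (f[0], f[1], f[3])
        if key in f1_line and (f[4] == "." or f[4] == f[3]):
            print(f1_line[key], line.rstrip("\n"))
```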
_computergraphics.3878 | Alright, so I'm a complete n00b at image processing, so forgive me if my question sounds vague. I'll try to supplement it with what I have learnt until now and also a couple of images. See the caption in the image below? What I'm essentially trying to do is to remove it and restore the original image (in Python, using OpenCV). Now I have a couple of approaches in mind. The first one I read about is a technique called inpainting. I saw a tutorial on inpainting here, but this required me to create a separate mask where the non-zero pixels denote the stuff I want gone. Now what I noticed is that the caption is not fully opaque. So I was wondering if there is any possible way to restore the original image by first removing the darkened part of the strip (leaving essentially something very like the original image with only the whitened text on it), then creating a mask of the text, and then using inpainting. Now I have a couple of questions. What technique do I use to remove the darkened part (let the text be for now; we can remove it in the second step using inpainting)? Does this algorithm even make sense? Is there a better approach I should be looking at? NOTE: In no way am I looking for any sort of code or specific implementation. I'm just looking for what techniques and procedures I can study up on so as to get the job done. The rest is on me. | Removing a darkened caption with text on it in an image | image processing;filtering | null |
_webmaster.22837 | Possible duplicate: What are the best ways to increase your site's position in Google? Any Google search for anything about SEO yields more articles than you can shake a stick at, but a lot of the articles are out of date, many have conflicting advice, and just about none of them ever give any reasons/proof/data to back up their claims about what works and what doesn't. Has anyone done any at least somewhat scientific tests to see what works and what doesn't (and ideally why)? Or has anyone from Google released any non-basic information about best practices? Really what I would love to do is A/B test different SEO techniques, but the time lag and sheer number of variables make it very difficult. Has anyone ever tried this type of thing (and published their results)? | Is there any good authoritative source of information on SEO practices that is backed up by data? | google;seo | SEO is the oddest thing. You can go by Google's recommendations that Jeff listed, but they list stuff on how to make your site not suck; they do not list stuff on how to make your site really good. The best guide I have found is by Moz, called Beginners Guide To SEO. Moz is actually known for their sandbox testing of techniques, so they are as close to an expert as you will find. |
_webapps.67840 | There are more than one billion public playlists available on Spotify. But as far as I can tell, the only way to search for them is to enter a word in the universal search field (I mostly use the desktop client for Mac), scroll down in the instant results and click the Playlists category.At that point you can scroll through a variety of resulting public playlists that include the search term in the title of the playlist. But you cannot search for song or artist or album in a playlist. And there appears to be no order to the search results (maybe it's alphabetical, but this isn't useful). Some playlists are just a single album. Others are an artist's entire body of work. You wouldn't know what is what without clicking into each and every playlist.Is there any way to sort playlists by followers, filter by date last updated, or otherwise search all public playlists to return more relevant results?(There are a few relevant questions at Stack Overflow re: this request, but I haven't yet found a service that actually implements any of these searching/sorting functionalities.) | How to do advanced search and sorting for public Spotify playlists? | spotify;spotify playlist | null |
_unix.175907 | Inside ~/mp3 I have some mp3 files. My script:
#!/bin/bash
br=80
for a in $1*.{wav,mp3} ; do
  ffmpeg -i "$a" -ar 44100 -ab $br "$br_tmp/${a%.*} [$br].mp3"
done
With $1 I add the path: myscript.sh /home/$USER/mp3/ but I get the error: /home/$USER/mp3/*.mp3: No such file or directory. So the script does not run; it runs only when I execute the script inside the ~/mp3 dir. | bash add path to handle some files | bash | null |
_webapps.76708 | The spreadsheet is not recording responses from the form. I am able to see form responses appear fleetingly before they vanish. They are, however, indicated in the summary. I have tried un-linking, re-linking, changing the destination folder, and downloading it as a CSV; nothing works. This issue comes up when you link the live form spreadsheet to other spreadsheets. I am using the arrayformula and query functions to export from a live spreadsheet. Please suggest a resolution. | Using a Live Google Form inputs for calculations | google forms | null |
_cs.35214 | What is the price paid for the vast virtual address space provided to programmers for their applications? Or in other words, what is the overhead due to virtual memory? Is there any other overhead from implementing virtual memory, beyond memory consumed by the kernel? | What is the overhead of Virtual Memory? | operating systems;virtual memory | One major overhead of virtual memory is that virtual pages that are being used in the current computation have to be loaded in physical memory, which usually means also transferring another page back out to disk. Since this is costly, you want to avoid doing it too often. Hence an important concept is the locality of programs: though a program may have to use considerable space, you try to organize the program so that only a much smaller part of memory is used at any time, a smaller part that evolves only slowly. This concerns the code being executed, but also the data used. And data is often orders of magnitude larger than the code. So, a program handling large amounts of data will often be organized so as to improve the locality of the data organisation in memory (but this depends also on what the code does with the data). As a consequence, too naive a use of classical textbook algorithms may result in very slow programs, because of too many page faults. I guess there are tools to analyze the locality of programs, or to optimize them to improve it. A specific example is the design of garbage collectors, which have developed various techniques to improve data locality by reorganizing the information, and which are of course designed to explore the memory in a very local way, for example by looking in priority at pages the main program has already loaded in physical memory. Programs with bad locality spend too much time loading pages compared to actual computing time. This is called thrashing. |
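The locality point above is easy to demonstrate even without paging, since CPU caches show the same effect. Below is a small Python sketch timing row-major versus column-major traversal of a 2D array; absolute numbers are machine-dependent, and in CPython part of the gap is interpreter overhead rather than pure cache behaviour, but the relative ordering illustrates the idea.

```python
import time

n = 2000
grid = [[1.0] * n for _ in range(n)]


def sum_rows(g):
    """Walk each inner list sequentially: good locality."""
    s = 0.0
    for row in g:
        for v in row:
            s += v
    return s


def sum_cols(g):
    """Stride across all rows for each column: poor locality."""
    s = 0.0
    for j in range(len(g[0])):
        for row in g:
            s += row[j]
    return s


for f in (sum_rows, sum_cols):
    t = time.perf_counter()
    f(grid)
    print(f.__name__, round(time.perf_counter() - t, 3), "s")
```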
_codereview.57611 | Edit: This was an absolute ignorance on my part which leads to hierarchical locks and can be implemented much cleaner (in fact the right way, in a mutable world) using a pipe-line of messages or Agents. Please consider studying other actor models in .NET (like the one in F#).What are drawbacks/benefits of this simple Actor model in C# (Well; it's more of a Message Loop actually, but please enlighten me)?Using this model one can turn any normal, not async class into an actor without employing threading objects. The idea is objects can be like actors and calling methods is like sending messages; and It's thread safe (yet we can shoot ourselves in foot because of mutability; but other that than, it was very helpful).Sample: Assume that I have an Id server:class IdServer{ long _count = 0; public string Generate() { _count++; return _count.ToString(); }}And I will use it as a thread safe actor (and pass it around). Here is a sample:public static Actor<IdServer> globalIdServer = new IdServer().ToActor();And in Task (Thread) 1:var id1 = (from x in globalIdServer let newId = x.Generate() select newId).Result();And in Task (Thread) 2:var id2 = (from x in globalIdServer let newId = x.Generate() select newId).Result();And we even can use multiple Actors in one statements.Actual implementation of Actor internals is:public class Actor<T>{ readonly T _process; readonly object _lock = new object(); readonly int _timeout; public Actor(T process) : this(process, 10000) { } public Actor(T process, int timeout) { _process = process; _timeout = timeout; } public U Send<U>(Func<T, U> func) { if (!Monitor.TryEnter(_lock, _timeout)) throw new TimeoutException(); try { return func(_process); } finally { Monitor.Exit(_lock); } }}public static class ActorFx{ public static Actor<T> ToActor<T>(this T obj) { return new Actor<T>(obj); } public static Actor<TResult> Select<TSource, TResult>(this Actor<TSource> source, Func<TSource, TResult> selector) { return source.Send(selector).ToActor(); } public static Actor<TResult> SelectMany<TSource, TResult>(this Actor<TSource> source, Func<TSource, Actor<TResult>> selector) { return source.Send(selector); } public static Actor<TResult> SelectMany<TSource, TCollection, TResult>(this Actor<TSource> source, Func<TSource, Actor<TCollection>> collectionSelector, Func<TSource, TCollection, TResult> resultSelector) { return resultSelector(source.Send(x => x), source.Send(collectionSelector).Send(x => x)).ToActor(); } public static T Result<T>(this Actor<T> source) { return source.Send(x => x); }} | Light Weight Actors | c#;actor | I don't like the way you're (ab)using LINQ. Instead I would use syntax like:globalIdServer.Run(x => x.Generate())This is shorter than your approach and I think it makes it clearer what's going on.EDIT: Actually, your Send() already behaves like that. I don't see what does the LINQ syntax add.Another option would be to split the server type into interface and implementation and then use metaprogramming to create an implementation that's an actor. You could use libraries like DynamicProxy or PostSharp to do this (actually PostSharp already contains actors).With DynamicProxy, the user code could look something like:IIdServer globalIdserver = new ProxyGenerator() .CreateInterfaceProxyWithTarget<IIdServer>(new IdServer(), new ActorInterceptor());var id = globalIdserver.Generate();Another advantage of the metaprogramming approach is that it means the implementation can't escape. 
With your LINQ approach (or my proposed Run()), you can easily do something like:Actor<IdServer> globalIdServer = ;IdServer escaped = globalIdServer.Result();escaped.Generate(); // not under lock!If you can use C# 5.0, consider making waiting for the lock asynchronous, so that you're not blocking a thread unnecessarily.var id1 = (from x in globalIdServer let newId = x.Generate() select newId).Result();There is no reason to use let here, select is enough:var id1 = (from x in globalIdServer select x.Generate()).Result();And method syntax is probably even better:var id1 = globalIdServer.Select(x => x.Generate()).Result();public Actor(T process) : this(process, 10000) { }Why is the default timeout 10 s? Wouldn't Timeout.Infinite be a better default?public static class ActorFxThe usual convention is to call the static class that contains extension methods something like ActorExtensions. |
_codereview.90393 | I was hoping I could get some feedback on the performance of my animations overall. It could just be me, but I keep getting a bit of lag despite being at 60 FPS constantly. Objects on screen seem to tear a little bit. Here's my code in full. Here is my game loop:
// clocks and times used to get custom game loop working
sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
// setting g_GameState to be intro on run
g_GameState = 0;
// main loop to run the entire length of game's life
while (g_Window.isOpen()) {
    sf::Time dt = clock.restart();
    timeSinceLastUpdate += dt;
    while (timeSinceLastUpdate > TIME_PER_FRAME) {
        timeSinceLastUpdate -= TIME_PER_FRAME;
        processEvents();
        update(TIME_PER_FRAME);
    }
    updateFPSCounter(dt);
    render();
}
It's this performance issue I'd like any feedback or advice on. Also, if anyone thinks I could do collision detection better as well, could you give me any pointers? For the player paddles I use this to update their movement:
// Not using deceleration, so setting mVelocity as 0 each time
mVelocity.x = 0;
mVelocity.y = 0;
// Handle if player keys are pressed to move up or down
if (mIsMovingUp) {
    mVelocity.y = -mSpeed;
} else if (mIsMovingDown) {
    mVelocity.y = mSpeed;
}
// move the paddle based on current mVelocity size
this->move(mVelocity * elapsedTime.asSeconds()); | SFML Pong Game Performance | c++;performance;game;sfml | null |
_softwareengineering.67442 | As many of you know WordPress uses secret key like thing for every AJAX request. Making each request unique and also 'somewhat' secure (just a step ahead than nothing). How would I implement the same using PageMethods (webservice methods inside aspx page) in asp.net application. Some things I have already taken care of are authentication and authorization to access the page.I would like to know How to generate the same nonce/secret key whatever in C# for asp.net application?Also doesn't this affect the performance of the application like 100 thousand users use it and each time the method has to go through encryption, random number generation etc..?Is there any way I can check if posted data is what was actually posted. Checking the integrity of posted data?Do you need to follow design patterns to secure application logic? Does one exist to make your application at the least somewhat secure? | How to generate nonce for Ajax web requests | c#;asp.net;javascript;security;ajax | I would like to know How to generate the same nonce/secret key whatever in C# for asp.net application?Read up on HTTP Digest Authentication. It's described pretty well there.http://en.wikipedia.org/wiki/Digest_access_authenticationAlso doesn't this affect the performance of the application like 100 thousand users use it and each time the method has to go through encryption, random number generation etc..?Hardly. Remember: the connection to the user's desktop is the bottleneck. Checking a nonce is generally trivial, since it's a simple hex digest of data already available.Is there any way I can check if posted data is what was actually posted. Checking the integrity of posted data?Read up on Cross Site Request Forgery (CSRF).http://en.wikipedia.org/wiki/Cross-site_request_forgeryDo you need to follow design patterns to secure application logic? Yes.Does one exist to make your application at the least somewhat secure?Not One. Lots and lots. There is no somewhat secure. There's secure and there's broken.Start with the OWASP top-ten list and read up on the vulnerabilities.https://www.owasp.org/index.php/Category:OWASP_Top_Ten_ProjectThen, find a framework that does this for you and use the framework.Don't build your own. It's already been done for you. Just pick a framework that does it.Why security is binary. perfect security is an oxymoron -- it only exists where there is no information exchanged.Security doesn't mean perfect. It means as good as present technology permits under the circumstances that we've agreed to share information, and I have to assume you're not lying. If you want somewhat secure, then you are implementing somewhat insecure. If you're going to implement somewhat insecure, you must actually choose the specific kind of insecurity you are going to implement. Generally, you will must either give private information away, allow information to be adulterated or allow a denial of service attack. Pick some combination of things you are going to implement in a somewhat secure application.Try to avoid choosing the give away the root password insecurity if you can. Usually, that is isomorphic to as secure as possible. |
_unix.233933 | I was running a for loop in terminal which has a sox command in it. For some reason the sox command failed and now I cannot terminate the for loop. I tried pressing Ctrl+c many times, Ctrl+z many times, but no use. I can't get the prompt; the terminal just stays like that. There is no sox process running in the background as per ps aux. How do I deal with this case? I don't want to close the tab but fix the problem by some other means. user$ for spkr in /home/user/tmp/*; do filesIn1Line=`tr '\n' ' ' < $spkr`; sox $filesIn1Line en_$spkr.wav; echo $spkr; done sox FAIL formats: can't open output file `en_/home/user/file1.wav': No such file or directory ^C^C^C^C^Z^Z^Z^Z [a long run of further Ctrl+C / Ctrl+Z / Ctrl+X presses, trimmed] | Cannot terminate for loop in terminal | terminal;process;gnome terminal;for | null |
_unix.366184 | I have a problem with my following script (this is the relevant part of it):
#!/bin/bash
OLD=("_MAIN1_" "_MAIN2_")
NEW=("#111" "#222")
length=${#OLD[*]}
i=0
while (( i < length ))
do
  sed -e "s/${OLD[$i]}/${NEW[$i]}/g" oldfile.txt > newfile.txt
  #sed -e 's/_MAIN1_/#111/g' oldfile.txt > newfile.txt # this works
  # Another way that does not work
  #sed -e 's/'${OLD[$i]}'/'${NEW[$i]}'/g' oldfile.txt > newfile.txt
  ((i++))
done
exit 0
My goal is to replace strings in a file and save the result into a new one. The old and new strings are stored in arrays. I have tried a lot of things and played around with single and double quotes, but nothing worked. When I echo the variables I get the correct strings inside the loop. If two explicit strings are set in the sed command, it works fine for those. The string patterns follow those in my example arrays ('OLD' contains the underscore _ and 'NEW' contains the hash #). I'm running bash on an Ubuntu 16.04 box. Thank you very much! | Bash while loop search and replace using sed | shell script;sed;array | Create a sed script that does all the substitutions, and then apply that sed script to your file.
for (( i=0; i<${#OLD[@]}; ++i )); do
  printf 's/%s/%s/g\n' "${OLD[$i]}" "${NEW[$i]}"
done >script.sed
sed -f script.sed inputfile >outputfile && mv outputfile inputfile && rm script.sed
This way you limit the number of times that you need to parse the input file to one. For the given data in OLD and NEW, the sed script will be generated as
s/_MAIN1_/#111/g
s/_MAIN2_/#222/g |
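The answer's core idea, building all substitutions once and then making a single pass over the file, ports directly to other tools. Here is a Python sketch of the same approach, compiling one alternation regex from the OLD/NEW pairs (same file names as in the question):

```python
import re

OLD = ["_MAIN1_", "_MAIN2_"]
NEW = ["#111", "#222"]
table = dict(zip(OLD, NEW))

# One regex matching any OLD key; re.escape guards metacharacters.
pattern = re.compile("|".join(map(re.escape, OLD)))

# A single pass over the input, writing every substitution at once.
with open("oldfile.txt") as src, open("newfile.txt", "w") as dst:
    for line in src:
        dst.write(pattern.sub(lambda m: table[m.group(0)], line))
```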
_datascience.19451 | I have two datasets. Both have the same predictor variables (ordinal, interval, ratio; 7 features in total). Based on these predictor variables, Group 1 (students) scored 200 products (by 200 students; each student only scored one product). Based on these predictor variables, Group 2 (colleagues) scored 75 products (by 75 colleagues; each colleague only scored one product). In total 275 products are scored (by 200 students and 75 colleagues). However, the target is different: Dataset 1: target bad or good (binary) (students). Dataset 2: from 1 to 7 (1 is very bad, 7 is very good) (colleagues). Dataset 1 has 200 rows; Dataset 2 has 75 rows. How can I identify the differences (and also the similarities) between the students and my colleagues and the way they rated the products (all 275 participants rated different products; 275 products in total)? | compare the differences with different output | machine learning;statistics | null |
_webapps.94698 | In a Google Spreadsheet, I have a column with values like T1, T2, T13 (i.e. all values starting with the same text prefix). I would like to use a formula on the numerical part of the values of these cells (for conditional formatting). Can I somehow apply it only to the numeric part of the value, i.e. 1, 2, 13? I would like to change the background colour for the cell containing the maximum numeric part of the value. I know how to extract the numeric part. E.g., if the T-values are in column A, cell B3 can contain this formula: =if(len(A3)>1,value(right(A3, len(A3)-1)),0) However, I fail to apply any further formula to this. E.g. this doesn't work: =max(if(len(A2:A)>1,value(right(A2:A, len(A2:A)-1)),0)) | Apply formula to a numeric part of cell value | google spreadsheets | Following hints in the answer by Aurielle, I've been able to produce a shorter formula: =(A2:A)=text(max(arrayformula(value(substitute(A2:A,"T","")))), "T#") It assumes that the first row is occupied by a table caption. Breakdown: substitute(text,"T","") removes the letter T from a cell text. value() converts the text result of substitute to a number. arrayformula() applies value(substitute()) to each value from the A2, A3, etc. range. The result is a numeric array. max() returns the maximum value of that array. text(number, "T#") converts the found maximum value to a string prefixed with the letter T; "T#" is the format string, meaning letter T, then a number. Finally, A2:A=... compares the values from A2, A3, etc. to the formed string. For the cells matching T + maximum value, the comparison will return TRUE, and conditional formatting will be applied. |
_softwareengineering.45699 | There have been many discussions on SO about the differences between the (Rational) Unified Process and the Agile methodology. Can someone please give me an example of how different a project plan would be if there are 2 teams doing the same project, but following these 2 different methods? | Differences between a Unified Process and an Agile project plan? | project management;agile;rational unified process | I'm going to use Scrum as a concrete agile example. Scrum has three artifacts: the Product Backlog, the Sprint Backlog, and a Burndown Chart. A backlog is simply a prioritized list of things to do. The chart is for plotting your progress through the current sprint (iteration). These three tools are what you use to track and plan your project in Scrum, and that is your Scrum project plan. RUP, on the other hand, contains a very long list of documents and artifacts for planning the project. As an example, there is the Iteration Plan, a detailed list of activities and tasks, with assigned resources and task dependencies. This document can perhaps be compared to the sprint backlog in Scrum, but that's stretching it. So the major difference between these approaches is the amount of stuff (roles, artifacts, activities) they prescribe, and that difference is huge. |
_unix.336590 | I am using the Laravel Homestead VM via Vagrant. I run Vagrant by vagrant rsync-auto --poll. At each file change, it only prints ==> homestead-7: Rsyncing folder ... Exclude: [...]. However, I would like rsync to print the timestamp of the last update (file change) plus which files were updated. I am aware the rsync setup is defined in .homestead/Homestead.yaml. I have looked into the docs on the Vagrant website but couldn't find a proper solution. Homestead.yaml:
folders:
  - map: ~/__work/__homestead/Code
    to: /home/vagrant/Code/
    type: rsync
    options:
      rsync__args: ["--verbose", "--archive", "--delete", "-zz"]
      rsync__exclude: ["node_modules"]
| How to print vagrant's rsync timestamp and files changed? | rsync;vagrant | null |
_webmaster.43378 | Nine days ago, I got a message in Google Webmaster Tools: Over the last 24 hours, Googlebot encountered 1 errors while attempting to access your robots.txt. Well, but I don't have a robots.txt on that site, because robots.txt is optional and I want the whole site to be crawled. So why do I get this error message? Perhaps of interest: The Google Webmaster Tools home page lists www.realitybuilder.com and realitybuilder.com. I don't know how that happened, but realitybuilder.com redirects to www.realitybuilder.com, so it should not be necessary to have it listed. I have now deleted the entry for realitybuilder.com. Could that have caused the problem? | Google Webmaster Tools complains about missing robots.txt | google search console;robots.txt;googlebot | null |
_softwareengineering.168047 | This is my first post here in programmers.stackexchange (I'm a regular on SO), so I hope this isn't too general. I'm trying a simple project to learn Java, based on something I've seen done in the past. Basically, it's an AI simulation where there are herbivorous and carnivorous creatures and both must try to survive. The part I am trying to come up with is the board itself. Let's assume very simple rules. The board must be of size X by Y, and only one element can be in one place at one time. For example, a critter cannot be in the same tile as a food block. There can be obstacles (rocks, trees, ...), there can be food, there can be critters of any type. Assuming these rules, what would be one good way to represent this situation? This is what I came up with, and I would like suggestions if possible: use multiple levels of inheritance to represent all the different possible objects (AbstractObject -> (NonMovingObject -> (Food, Obstacle), MovingObject -> Critter -> (Carnivorous, Herbivorous))) and use polymorphism in a 2D array to store the instances and still have access to lower-level methods. Many thanks. | 2D grid with multiple types of objects | java;data structures | null |
_softwareengineering.42079 | I recently observed some contract offers which included a code review by third party clause: the contract would not pay out fully until the code review was completed and it received a pass. I was surprised, especially considering that these were fairly simple, small-scale contracts (churning out vanity apps for the iPhone). Is this kind of third-party code review a common thing to run into when contracting out as a programmer? | Is it common practice to hire third parties to do code reviews for contractors? | freelancing;code reviews;industry;industry standard | It depends what you agreed to provide. If you provide an outcome, then it is perfectly normal. By contrast, if you provide means (the typical case), it is not acceptable. The company that uses that clause may have been in some difficult situations where it had invested some money in a developer that produced very bad code. My opinion is that they are responsible for not doing proper code reviews early and/or proper tests during the hiring process. Therefore, you should refuse such a clause in the case where you provide your time, rather than results that are implicitly linked to your work (but not guaranteed). |
_codereview.19447 | I wrote a function in Scala to find and return a loopy path in a directed graph. One of the arguments is a graph presented as an adjacency list, and the other is a start node. It returns a pair including a loopy path as a list of nodes. I wonder if there are more elegant ways of doing this. def GetACycle(start: String, maps: Map[String, List[String]]): (Boolean, List[String]) = { def explore(node: String, visits: List[String]): (Boolean, List[String]) = { if (visits.contains(node)) (true, (visits.+:(node)).reverse) else { if (maps(node).isEmpty) (false, List()) else { val id = maps(node).indexWhere(x => explore(x, visits.+:(node))._1) if (id.!=(-1)) explore(maps(node)(id), visits.+:(node)) else (false, List()) } } } explore(start, List()) } I felt I had to use indexWhere in this situation, but I suppose there would be other ways to do that. | Finding and returning a loopy path in a directed graph | algorithm;scala;graph | You should use an array to check if you have already visited a node, not visits.contains(node); it would give you the answer in constant time instead of linear time. The overall complexity of your algorithm is exponential. For instance, if you run your algorithm on this graph: 0 -> 1, 2, ..., n; 1 -> 2, ..., n; ... where there are n nodes and there are edges from i to j iff i < j, then node i will be explored 2^i times. Again, you can solve this problem using an array (one array for all nodes) to ensure that each node is explored at most one time. |
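Both complaints in the answer, the linear visits.contains check and the exponential re-exploration, are addressed by the standard DFS bookkeeping of a "finished" set plus an "on current path" set. Here is a Python sketch of that approach (not a translation of the Scala, just the technique), with the graph as an adjacency dict like the question's maps:

```python
def find_cycle(start, graph):
    """Return a list of nodes forming a cycle reachable from start, or None.
    Each node is explored at most once thanks to `done`, and membership
    tests are O(1) because both trackers are sets."""
    done = set()      # fully explored, known cycle-free from here
    on_path = set()   # nodes on the current DFS path
    path = []

    def explore(node):
        if node in on_path:                     # back edge: cycle found
            return path[path.index(node):] + [node]
        if node in done:
            return None
        on_path.add(node)
        path.append(node)
        for nxt in graph.get(node, []):
            cycle = explore(nxt)
            if cycle is not None:
                return cycle
        path.pop()
        on_path.remove(node)
        done.add(node)
        return None

    return explore(start)


g = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(find_cycle("a", g))   # ['a', 'b', 'c', 'a']
```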
_unix.373 | In KDE SC 4.5.0 it's possible to use a WebKit part for rendering in Konqueror. I don't think it's on by default (I could be wrong) and I believe I've installed all the requirements for it... How do I enable it? I figured out how to switch it: View -> View mode -> webkit. But you must be on a web page first. The problem is that this setting doesn't stick. I can't find a permanent setting. Does one exist? | Enable kwebkitpart in Konqueror | arch linux;kde;settings;konqueror | I just found an article describing how to do it for Kubuntu. The short of it is: configure the file association for text/html (embedding) and set the first entry to webkit. I'm sure you should do it for application/xhtml+xml too, and maybe some others. |
_softwareengineering.103123 | I am looking to query the main Google search; however, all references, including Stack Overflow, point to the Google AJAX Search API. The odd thing is that it does not seem to exist any more, with not even a note to say it is deprecated. The old links point to the main Google Code site. If I look at the list of APIs on that site, the API it replaced is there, Web Search API (Deprecated), which links back to the same page but not the Google AJAX Search API. Further Google searching is not being helpful either: many blog posts point to the same Google site (http://code.google.com/apis/ajaxsearch/) that has no content and redirects to the same place. Just to prove it did exist, I have found it on the Wayback Machine; however, the last snapshot did not show any special unusual message. | What ever happened to the Google AJAX Search API | api;google | The Google AJAX Search API was deprecated on Nov 1, 2010, in favour of the Custom Search API. The AJAX Search APIs contained Web, News and Local search among others, but when people referred to the AJAX Search, they typically meant Web search. You can read some idle speculation on why they retired the AJAX search on the official Google AJAX APIs Group, but it seems to be mostly due to abuse: https://groups.google.com/forum/#!msg/google-ajax-search-api/79wPelmXxKE/qM5TLOLxnss http://googleajaxsearchapi.blogspot.com/2010/03/helping-you-help-us-help-you.html (d'oh! posted a day early!) According to Google's deprecation policy, the web search API should continue to work until Nov 2013. The web search API is now confirmed to be no longer available as of September 29, 2014. Here's the timeline, as best as I can reconstruct it: June 2006: AJAX Search API v0.1 released; October 2006: AJAX Search API v1 released; December 2006: SOAP Search API deprecated; March 2009: AJAX Search API graduates from Labs; August 2009: SOAP API retired; November 2010: AJAX Search API deprecated; November 2010: Custom Search API introduced; November 2013: AJAX Search API access terminated? |
_codereview.172776 | I was watching the following video about software transactional memory(using a package that maintains an access log). At the moment I am trying to learn about concurrency with shared memory and thought that could be more easily achieved with immutability and checking referential equality.Never spent much time actually writing code (write mostly front end JavaScript/TypeScript/Fable) but understood the idea:A function gets an object and takes out what it is going to change (receives an object called data and takes out contents). Before changing the object (data) the function checks if the current sub object (contents) still has the same referential equality as the sub object had when the function started.If so; then value can be set, if not then the function needs to fail or retry since another process has changed it while the function was executing.Both checking referential equality and setting value should run synchronized (only one thread can write to the object at the same time).Here is some example of that but I'm not sure this is done correctly and how to properly test it.type 'a Ref = { mutable contents : 'a }type Data = {id:int}let ref v = { contents = v }let (!) r = r.contentslet monitor = System.Object()let (:=) r v = r.contents <- vlet isSameObject = LanguagePrimitives.PhysicalEqualitytype System.Random with member this.GetValues(minValue, maxValue) = Seq.initInfinite (fun _ -> this.Next(minValue, maxValue))let setValue data org newValue = if (isSameObject !data org) then lock monitor ( fun () -> data := {id=newValue} ) true else falselet r = System.Random()let test min max data = let vals = r.GetValues(min, max) |> Seq.take 10000 |> Seq.map( fun item -> ( async{ let value = !data do! Async.Sleep item let ret = (setValue data value item) if ret then printfn changed from %d to %d value.id (!data).id return ret } ) ) |> Async.Parallel |> Async.RunSynchronously |> Seq.fold ( fun acc item -> let (a,b) = acc if item then ((a+1),b) else acc ) (0,false) valstest 10 1000 (ref {id=88888888})If this is correct I would like to spend more time trying to figure out how to do an atomic transaction when taking multiple sub objects out of the data without causing a deadlock or updating one while failing the other.[update]When refactoring the code to update multiple objects it revealed the advantages of the method mentioned in the video:Using an abstraction is probably better than writing your own methodthat works for your particular use case. When the use case changesyou may have to deal with maintaining complex code. Like usingJQuery to update dom elements as opposed to using React,Vue, riotjs.The best I could come up with when updating multiple objects is tolock the entire hash table(dictionary). This is more like thesecond attempt in the video except it has no dead locking risks and has concurrent reads. Writes are not parallel butserial. Having a log on what objects are opened seems to be the onlyway to achieve parallel writing. Although I would think the log hasserial access to prevent concurrency problems.As for testing the code; how does one test this? You can hammer the data with updates and see if the result was correct or you can test it serially as with the tests below. 
Having one thread wait for another to change the object is for all intents and purposes just the same as a serial test.I would love to see someone show a test that would fail due to the complexity that multiple threads sharing data bring to the table other than just hammering it with updates as done in my first example or having serial tests as with my second example.Here is the code I came up with for multiple updates.type 'a Ref = { mutable contents : 'a }type Data = {id:int}let ref v = { contents = v }let (!) r = r.contentslet monitor = System.Object()let (:=) r v = r.contents <- vlet isSameObject = LanguagePrimitives.PhysicalEqualitytype System.Collections.Generic.Dictionary<'K, 'V> with member x.TryFind(key) = match x.TryGetValue(key) with | true, v -> Some v | _ -> Nonelet compare pairs = pairs |> List.fold ( fun (same,datas) (data,org)-> if same then if (isSameObject !data org) then match datas with | Some d -> true, (Some (data::d)) | None -> false, None else false,None else false,None ) (true,(Some []))let setValue pairs = lock monitor ( fun () -> let compareArgs = pairs |> List.map( fun (data,org,_,_) -> data,org ) let isSame, data = compare compareArgs if isSame then pairs |> List.fold( fun acc (data,org,newValue,setValueFunction) -> setValueFunction newValue data true ) true else false ) (* tests*)let liftSome item apply = match item with | Some value -> Some (apply value) | None _ -> Nonelet unwrap item = match item with | Some value -> value | None _ -> ref {id=0}let createStore () = let store = System.Collections.Generic.Dictionary<int, Data Ref> () [1..100] |> List.map( fun number -> store.Add(number, (ref {id=number})) ) |> ignore storelet hasCorrectValue (index,result) (data,_,_,_) = if result then if (!data).id = (index) then (index+1),true else 0,false else 0,falselet ``Set all values`` () = let store = createStore () let setValueArgument = [1..100] |> List.map (fun item -> unwrap (store.TryFind item) ,!(unwrap (store.TryFind item)) ,item+5 ,( fun newValue data -> data := {id=newValue} ) ) if setValue setValueArgument then let index, ret = setValueArgument |> List.fold hasCorrectValue (6,true) ret else falselet ``compare should be false if not same`` () = let store = createStore () let compareArgument = [1..100] |> List.map (fun item -> unwrap (store.TryFind item) ,!(unwrap (store.TryFind item)) ) |> List.indexed |> List.map (fun (index,item) -> let data, org = item if index = 2 then data,{id=99} else data,org ) let result, data = (compare compareArgument) not resultlet ``Set no values if something changed`` () = let store = createStore () let setValueArgument = [1..100] |> List.map (fun item -> unwrap (store.TryFind item) ,!(unwrap (store.TryFind item)) ,item+5 ,( fun newValue data -> data := {id=newValue} ) ) (unwrap (store.TryFind 22)):= {id=22} if not (setValue setValueArgument) then let index, ret = setValueArgument |> List.fold hasCorrectValue (1,true) ret else false``Set all values`` ()``compare should be false if not same`` ()``Set no values if something changed`` () | Concurrency using immutability | concurrency;f#;multiprocessing | null |
_unix.307078 | This is an extending question of the post Average rows with same first columnInput file:a 12 13 14b 15 16 17a 21 22 23b 24 25 26Desired output:a 16.5 17.5 18.5b 19.5 20.5 21.5The awk code in that post is:awk ' NR>1{ arr[$1] += $2 count[$1] += 1 } END{ for (a in arr) { print a \t arr[a] / count[a] } }'Question: This code only works on the first row. How do I expand this code to multiple columns? | Average all rows of multiple columns with the same first column | awk | Using awk, you could simulate a 2D array by constructing a composite index from the key (first column value) and column index: awk ' { c[$1]++; for (i=2;i<=NF;i++) { s[$1.i]+=$i}; } END { for (k in c) { printf %s\t, k; for(i=2;i<NF;i++) printf %.1f\t, s[k.i]/c[k]; printf %.1f\n, s[k.NF]/c[k]; } }' file a 16.5 17.5 18.5 b 19.5 20.5 21.5A similar approach may be implemented in perl more directly using a hash of arrays.Alternatively, there's GNU datamash which (at least from version 1.1.0) supports group averages very compactly e.g.datamash --sort --whitespace groupby 1 mean 2-4 < filea 16.5 17.5 18.5b 19.5 20.5 21.5FWIW here's my attempt at a perl solution, including normalization to the global max average as requested in comments. DISCLAIMER: I'm a novice perl programmer, so it may demonstrate poor programming practices.#!/usr/bin/perluse strict;use warnings;use List::MoreUtils qw(pairwise minmax);use Math::Round qw(nearest);my @hdr;my %sums = ();my %count = ();my $key;while (defined($_ = <ARGV>)) { chomp $_; my @F = split(' ', $_, 0); # UGLY: hardcoded to expect exactly 1 header row if ($. == 1) { @hdr = @F; next; } # sum column-wise, grouped by first column $key = shift @F; if ( exists $sums{$key} ) { $sums{$key} = [ pairwise { $a + $b } @{ $sums{$key} }, @F]; } else { $sums{$key} = \@F; } $count{$key}++;}my %avgs = ();# NB should really initialize $maxavg to a suitably large NEGATIVE valuemy $maxavg = 0.0;# find the column averages, and the global max of those averagesfor $key ( keys %sums ) { $avgs{$key} = [ map { $_ / $count{$key} } @{ $sums{$key} } ]; # NB could use List::Util=max here, but we're alresdy using List::MoreUtils my ($kmin, $kmax) = minmax @{ $avgs{$key} }; $maxavg = $kmax > $maxavg ? $kmax : $maxavg;}# normalize and print the results, rounded to nearest 0.01print join \t, @hdr, \n;for $key ( sort keys %avgs ) { print join \t, $key, (map { nearest (0.01, $_ / $maxavg) } @{ $avgs{$key} }), \n;}Saved as colavgnorm.pl and made executable, then run as $ ./colavgnorm.pl fileK C1 C2 C3a 0.77 0.81 0.86b 0.91 0.95 1where file isK C1 C2 C3a 12 13 14b 15 16 17a 21 22 23b 24 25 26 |
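For comparison, the same per-key column averaging is short in plain Python as well; this is an illustrative sketch, not a drop-in replacement for the awk/datamash/perl versions above:

```python
from collections import defaultdict

def group_means(rows):
    """rows: iterable of (key, v1, v2, ...) -> {key: [mean of each column]}."""
    sums, counts = {}, defaultdict(int)
    for key, *values in rows:
        values = [float(v) for v in values]
        sums[key] = values if key not in sums else [a + b for a, b in zip(sums[key], values)]
        counts[key] += 1
    return {k: [s / counts[k] for s in col_sums] for k, col_sums in sums.items()}

data = [line.split() for line in ["a 12 13 14", "b 15 16 17", "a 21 22 23", "b 24 25 26"]]
print(group_means(data))
# {'a': [16.5, 17.5, 18.5], 'b': [19.5, 20.5, 21.5]}
```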
_softwareengineering.87437 | Over the past few years I have worked with several different version control systems. For me, one of the fundamental differences between them has been whether they version files individually (each file has its own separate version numbering and history) or the repository as a whole (a commit or version represents a snapshot of the whole repository).Some per-file version control systems:CVSClearCaseVisual SourceSafeSome whole-repository version control systems:SVNGitMercurialIn my experience, the per-file version control systems have only led to problems, and require much more configuration and maintenance to use correctly (for example, config specs in ClearCase). I've had many instances of a co-worker changing an unrelated file and breaking what would ideally be an isolated line of development.What are the advantages of these per-file version control systems? What problems do whole-repository version control systems have that per-file version control systems do not? | What are the advantages of version control systems that version each file separately? | version control | In my experience, there aren't any: whole-repository VCS strictly dominates per-file VCS. |
_softwareengineering.332013 | Let's say I want to set up a roles table that has a polymorphic relationship to a resource.I understand that I could set up a direct foreign key - by adding, for example, a roles.forum_id column.But why is (as far as I know) a compound foreign key - where one column holds the key and another names the table to reference - not possible? | Why are first class polymorphic relations not possible in relational databases? | relational database | null
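What such a first-class polymorphic relation would have to mean is a reference of the form (table name, row id). A toy Python sketch of why the dispatch ends up in application code rather than in a single FOREIGN KEY constraint (the table names and data here are made up):

```python
# A polymorphic reference stored as (table_name, row_id). A database engine
# cannot enforce this with one FOREIGN KEY because the referenced table
# varies per row; the lookup has to dispatch on the table name instead.
registry = {
    "forums":  {1: "General", 2: "Off-topic"},
    "threads": {1: "Welcome"},
}

def resolve(resource_type, resource_id):
    table = registry.get(resource_type)
    if table is None or resource_id not in table:
        raise LookupError(f"dangling polymorphic reference: {resource_type}/{resource_id}")
    return table[resource_id]

role_row = {"name": "moderator", "resource_type": "forums", "resource_id": 2}
print(resolve(role_row["resource_type"], role_row["resource_id"]))  # Off-topic
```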
_softwareengineering.344426 | Here is a json which comes in the request param. I am constructing a class with getter and setter for accessing the values in json so that I could be able to pass the class object to different methods and able to access the member variables in them.public class RequestParams { private String student_id; private String student_name; private String student_role_number; private String department_name; private String stream; private JSONObject studentDetails; public RequestParams(HttpServletRequest request) { this.studentDetails = request.getParameter(studentdetails); } public String getStudentId() { if(this.student_id == null) { this.setStudentId(); } return this.student_id; } public String getStudentName() { if(this.student_name == null) { this.setStudentName(); } return this.student_name; } public String getRoleNumber() { if(this.student_role_number == null) { this.setRoleNumber(); } return this.student_role_number; } public String getDepartmentName() { if(this.department_name == null) { this.setDepartmentName(); } return this.student_name; } public String getStream() { if(this.stream == null) { this.setStream(); } return this.stream; } public void setStudentId() { this.student_id = this.studentDetails.getString(student_id); } public void setStudentName() { this.student_name = this.studentDetails.getString(student_name); } public void setRoleNumber() { this.student_role_number = this.studentDetails.getString(role_number); } public void setDepartmentName() { this.department_name = this.studentDetails.getString(department_name); } public void setStream() { this.stream = this.studentDetails.getString(stream); }}Have the following doubts,Constructing as class object to reference it from different methods - Is this a good one? Am I going wrong?How to organise my getter setter so that only set is called only for the first time and for the next calls the value is returned directly? Is there a better way to avoid the null check each timeif(this.student_id == null) {this.setStudentId();}Is there any advantage of accessing the methods and variables within the class with this. ? PS: I could not invoke all the setter initially from the constructor because all the values declared in the class need not be necessarily present in the json. So, I thought that it would be better if I could initialise the member variable with value during first access. | Best way to invoke 'setter method' for first access and 'getter method' for the rest with getter setter pattern? | java;design;design patterns;object oriented;serialization | null |
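The compute-on-first-access, cache-afterwards behaviour this question asks about is ordinary lazy memoization. A small Python sketch of the pattern (class and field names are illustrative; this is not a translation of the Java class above):

```python
class RequestParams:
    """Parses each field from the JSON payload on first access, then caches it."""

    def __init__(self, student_details):
        self._details = student_details   # an already-parsed dict
        self._cache = {}

    def _lazy(self, key):
        if key not in self._cache:        # membership test, so a None value is cached too
            self._cache[key] = self._details.get(key)
        return self._cache[key]

    @property
    def student_id(self):
        return self._lazy("student_id")

    @property
    def student_name(self):
        return self._lazy("student_name")

params = RequestParams({"student_id": "42", "student_name": "Ada"})
print(params.student_id, params.student_name)   # parsed once, cached afterwards
```

Keying the cache on presence rather than on a null check also removes the repeated if (x == null) guards from every getter.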
_softwareengineering.19934 | This is a chart I whipped together showing the length of active (meaning ongoing bug fixes and service packs) support offered for each version of Delphi. It is based on the published support data obtained from Embarcadero's website. Delphi 2010 and XE are excluded because their active support is still ongoing so they can't really be compared accurately. Ironically, Delphi 7, which was regarded by many to be the most stable until the release of Delphi 2009, had a support cycle three times as long as Delphi 2009. Granted, this chart spans three different companies with three different agendas. My question is why is Delphi 2009's support cycle so short? I understand Embarcadero has a business to run and they don't make money with service packs but really, 12 months? I would expect that of a $10 shareware title with low profit margins not a $900-$3500 world class development tool. | What's with Delphi's support cycle? | delphi;support | null |
_unix.36751 | Running simply builtin prints nothing and returns exit code 0. This is in accordance with help builtin, which shows all parameters as optional. But why isn't this no-op an error? Is there a use case for this? A more useful result would be an error code or, even better, listing the currently available builtins. | Why are parameters to Bash's builtin optional? | bash;shell builtin | Bash built-ins are inconsistent and poorly documented.Here's an example:$ help commandcommand: command [-pVv] command [arg ...] Runs COMMAND with ARGS ignoring shell functions. If you have a shell function called 'ls', and you wish to call the command `ls', you can say command ls. If the -p option is given, a default value is used for PATH that is guaranteed to find all of the standard utilities. If the -V or -v option is given, a string is printed describing COMMAND. The -V option produces a more verbose description.$ command; echo $?0Even without command the return code $? -eq 0 and there is no error on std err.Another one:$ help disowndisown: disown [-h] [-ar] [jobspec ...] By default, removes each JOBSPEC argument from the table of active jobs. If the -h option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all jobs from the job table; the -r option means to remove only running jobs.$ disown; echo $?-bash: disown: current: no such job1All the arguments are optional but it returns $? -eq 1 when there are none.I've even compiled the newest Bash 4.2 and here are my results:$ help commandcommand: command [-pVv] command [arg ...] Execute a simple command or display information about commands. Runs COMMAND with ARGS suppressing shell function lookup, or display information about the specified COMMANDs. Can be used to invoke commands on disk when a function with the same name exists. Options: -p use a default value for PATH that is guaranteed to find all of the standard utilities -v print a description of COMMAND similar to the `type' builtin -V print a more verbose description of each COMMAND Exit Status: Returns exit status of COMMAND, or failure if COMMAND is not found.$ command; echo $?0There's a new section Exit Status and command is still an optional argument. Even worse than 3.x. The same for other built-ins.So, you're right. Bash built-ins are a mess and should be fixed. |
_codereview.55920 | I have a code similar to this:public class PlayerRound { private final List<Strip> playerStrips = new ArrayList<>(); public boolean addStrip(final Strip aStrip) { // ... if (playerStrips.contains(aStrip)) { playerStrips.set(playerStrips.indexOf(aStrip), aStrip); // DOES THIS LINE CHANGE SOMETHING? } // ... return true; }}@Entity@Cacheable(false)@Table( appliesTo = Strip, indexes = { @Index(name = IDX_RoundStrip, columnNames = {round_id}), @Index(name = IDX_UserStrip, columnNames = {user_id}) })public class Strip implements Serializable, IAnnotatedProxy { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; @Transient private Integer tempId = tempIdx++; @Override public boolean equals(Object obj) { Boolean areEqual = Boolean.FALSE; if (obj != null && getClass() == obj.getClass()) { final Strip other = (Strip) obj; if (null == this.id && null == other.id) { if (this.tempId.equals(other.tempId)) { areEqual = Boolean.TRUE; } } else if (null != this.id) { if (this.id.equals(other.id) || this.id.equals(other.tempId)) { areEqual = Boolean.TRUE; } } else if (null != other.id) { if (other.id.equals(this.id) || other.id.equals(this.tempId)) { areEqual = Boolean.TRUE; } } } return areEqual; }}My question is: can I safely delete the line that sets the element that already is on the list on the same position?Does this line does something that I don't know? | Does setting the same element in a list does something at all? | java | No, that code is totally useless. Unless....Same element twiceAs it is a List, what happens if it contains the element twice?Let's say we have a List containing two elements of the type Strip:stripA, stripBstripA and stripB are equal, that is, the class have implemented .equals. Then and only then, this code will do something useful. If aStrip is stripB then it will modify the list to become:stripB, stripBOne inside, one outside, both .equalsThe same is true if the List contains stripA only and you call the method with aStrip again as stripB. Remember that stripA.equals(stripB) is true so here's what happens: if (playerStrips.contains(aStrip)) {Yes. .contains on a Collection uses .equals to see if it contains or not, so this will return true. playerStrips.indexOf(aStrip)indexOf also uses .equals and the index of stripA is returned.playerStrips.set(playerStrips.indexOf(aStrip), aStrip);So what happens here is that the index of stripA is set to stripB.Small ImprovementThere's really no reason to use both indexOf and contains. You could do this instead:int index = playerStrips.indexOf(aStrip);if (index != -1) { playerStrips.set(index, aStrip);}HoweverIf this functionality has been added because of this reason, I personally think it is a bad reason to add it. This code is more confusing than anything else. Do it differently. Either aStrip should not really be equal to bStrip, or something else should be modified. The reason for why one would want to replace aStrip with bStrip when they are already equal goes beyond my understanding.Strip.equalsHoly.... mess!First of, why use Boolean when you can use boolean? Secondly, why use a boolean at all when you can use return?Also, you could probably use instanceof instead of getClass. 
But you should look at the StackOverflow question about this.
I have to question the usage of tempId; it does not feel like a clean solution to me, whatever problem it is meant to solve.
By following the program execution, you can see that whenever one of those if (condition) return true branches is not taken, the only possible return value is false. Therefore, each one can be simplified to return condition;
Because of the above, there is no reason to use any else, as there's a return inside the previous if.
Therefore, your .equals method can be simplified to:

@Override
public boolean equals(Object obj) {
    if (obj instanceof Strip) {
        final Strip other = (Strip) obj;
        if (null == this.id && null == other.id) {
            return this.tempId.equals(other.tempId);
        }
        if (null != this.id) {
            return this.id.equals(other.id) || this.id.equals(other.tempId);
        }
        if (null != other.id) {
            return other.id.equals(this.id) || other.id.equals(this.tempId);
        }
    }
    return false;
}
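The equal-but-not-identical situation this answer describes is easy to reproduce in a few lines. A Python sketch (the Strip class here is a stand-in with only a key-based __eq__, not the Java entity above):

```python
class Strip:
    def __init__(self, key, payload):
        self.key, self.payload = key, payload
    def __eq__(self, other):
        return isinstance(other, Strip) and self.key == other.key
    def __hash__(self):
        return hash(self.key)

old = Strip(7, "stale data")
new = Strip(7, "fresh data")      # equal to old, but a different object
strips = [old]

i = strips.index(new)             # index/contains use __eq__, so this finds old
strips[i] = new                   # the replacement swaps in the new *instance*
print(strips[0].payload)          # fresh data
```

So the set call is only meaningful when equality is looser than identity, which is exactly the case the answer flags as confusing.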
_unix.225354 | My Oracle database is in ISO8859-1 (not by choice).I'm struggling with segfault from php-fpm for a couple of days now. To dig out the source of it I've set two parallel environnements with the following ENV variable:NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15Php version: 5.6.12php-cliThe development server is started with:php -S localhost:8081 web/app_dev.phpphp-fpmPool configuration extract:[www]env[NLS_LANG] = AMERICAN_AMERICA.WE8ISO8859P15listen = 127.0.0.1:9000Doctrine dbalcharset: nullNow, I'm requesting a JSON api with special characters: with php-cli json_encode errors Malformed UTF-8 characters, possibly incorrectly encoded (seems to be a valid error, data is not in utf8)with php-fpm everything works but special characters are replaced ( becomes e) Why?By trying to fix the php-cli, which seems more stable (lots of random 502 errors with php-fpm) I have two options:Tweaking JsonResponse.php to encode everything from ISO8859-1 to UTF-8 (ugly)Setting the client charset to UTF8The second solution seems to be the one and the doctrine configuration is now:charset: UTF8php-cli now works as expected and everything is wonderful!php-fpm fails with Oracle related errors: [2015-08-25 13:56:54] php.DEBUG: oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that LD_LIBRARY_PATH includes the directory with Oracle Instant Client libraries[2015-08-25 13:56:54] php.DEBUG: oci_connect(): Error while trying to retrieve text for error ORA-12715Where ORA-12715 is invalid character set specified.LD_LIBRARY_PATH is not the issue here.What is going wrong here? Is php-fpm correct about those errors? How may I fix this to get the same behavior between php-cli and php-fpm? | Doctrine OCI8 charset behavior between php-cli and php-fpm | php;character encoding;oracle database;symfony | null |
_softwareengineering.332420 | There are some programming languages, like the many dialects of Lisp, that allow for macro-metaprogramming: rewriting and altering sections of code before the code is run.It is relatively trivial to write a simple interpreter for Lisp (mostly because there is only very little special syntax). However, I cannot understand how it would be possible to write a compiler for a language that allows you to rewrite code at-runtime (and then execute that code).How is this done? Is the compiler itself basically included in the generated compiled program, such that it can compile new sections of code? Or is there another way? | How can a compiler be written for a language that allows rewriting code at runtime (such as Lisp macros)? | compiler;lisp;macros | Macros have the advantage to be expanded at compile timeThe idea of Lisp macros is to be able to fully expand them at compile time. Then no compiler is needed at runtime. Most Lisp systems allow you to fully compile code. The compilation step includes the macro expansion phase. There is no expansion needed at runtime.Often Lisp systems include a compiler, but this is needed when code is generated at runtime and this code would need to be compiled. But this is independent of macro expansion.You will even find Lisp systems which don't include a compiler and even no full interpreter at runtime. All code will be compiled before runtime.FEXPRs were code modifying functions, but were mostly replaced by MacrosIn earlier times in the 60s/70s many Lisp systems included so-called FEXPR functions, which could translate code at runtime. But they could not be compiled before runtime. Macros replaced them mostly, since they enable full compilation.An example of a macro interpreted and compiledLet's look at LispWorks, which has both an interpreter and a compiler. It allows to mix interpreted and compiled code freely. The Read-Eval-Print-Loop uses the Interpreter to execute code.Let's define a trivial macro. But the macro prints the code it gets called with, every time the macro runs.CL-USER 45 > (defmacro my-if (test yes no) (format t ~%Expanding (my-if ~a ~a ~a) test yes no) `(if ,test ,yes ,no))MY-IFLet's define a function which uses the macro from above. Remember: here in LispWorks the function will be interpreted.CL-USER 46 > (defun test (x y) (my-if (> x y) 'larger 'not-larger))TESTIf you look above, the Lisp system only printed the function name. The macro did not run - otherwise the macro would have printed something. So the code is not expanded.Let's run the TEST function using the Interpreter:CL-USER 47 > (loop for i below 5 collect (test i 3))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))(NOT-LARGER NOT-LARGER NOT-LARGER NOT-LARGER LARGER)So you see that for some reason the macro expansion is run twice for each of the five calls to test. 
The macro is expanded by the interpreter every time the function TEST is called.Now let's compile the function TEST:CL-USER 48 > (compile 'test)Expanding (my-if (> X Y) (QUOTE LARGER) (QUOTE NOT-LARGER))TESTNILNILYou can see above that the compiler runs the macro once.If we now run the function TEST, no macro expansion will happen. The macro form (MY-IF ...) has already been expanded by the compiler:CL-USER 49 > (loop for i below 5 collect (test i 3))(NOT-LARGER NOT-LARGER NOT-LARGER NOT-LARGER LARGER)If you used some other Lisps like SBCL or CCL, they will compile everything by default. SBCL has in new versions also an interpreter. Let's do the example from above in a recent SBCL:Let's use the new SBCL interpreter:CL-USER> (setf sb-ext:*evaluator-mode* :interpret):INTERPRETCL-USER> (defmacro my-if (test yes no) (format t ~%Expanding (my-if ~a ~a ~a) test yes no) `(if ,test ,yes ,no))MY-IFCL-USER> (defun test (x y) (my-if (> x y) 'larger 'not-larger))TESTCL-USER> (loop for i below 5 collect (test i 3))Expanding (my-if (> X Y) 'LARGER 'NOT-LARGER)Expanding (my-if (> X Y) 'LARGER 'NOT-LARGER)Expanding (my-if (> X Y) 'LARGER 'NOT-LARGER)Expanding (my-if (> X Y) 'LARGER 'NOT-LARGER)Expanding (my-if (> X Y) 'LARGER 'NOT-LARGER)(NOT-LARGER NOT-LARGER NOT-LARGER NOT-LARGER LARGER)CL-USER> (compile 'test)Expanding (my-if (> X Y) 'LARGER 'NOT-LARGER)TESTNILNILCL-USER> (loop for i below 5 collect (test i 3))(NOT-LARGER NOT-LARGER NOT-LARGER NOT-LARGER LARGER)CL-USER> |
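The expand ahead of time, then compile idea from this answer can be sketched outside Lisp too: run every macro as a source-to-source function first, then hand the fully expanded source to the compiler, so no expander needs to exist at run time. A toy Python illustration (MY_IF and the naive textual expander are invented for this sketch; real macro systems work on syntax trees, not strings):

```python
# "Macros" here are functions from source fragments to source fragments.
# Expansion runs once, ahead of time; the expanded source is then compiled,
# so nothing macro-related is needed when the compiled code finally runs.
MACROS = {
    "MY_IF(": lambda cond, yes, no: f"({yes} if {cond} else {no})",
}

def expand(source):
    # Toy expander: rewrites MY_IF(cond, yes, no) textually.
    while "MY_IF(" in source:
        start = source.index("MY_IF(")
        end = source.index(")", start)
        args = [a.strip() for a in source[start + 6:end].split(",")]
        source = source[:start] + MACROS["MY_IF("](*args) + source[end + 1:]
    return source

expanded = expand("result = MY_IF(x > y, 'larger', 'not-larger')")
code = compile(expanded, "<expanded>", "exec")   # expansion already finished here
namespace = {"x": 5, "y": 3}
exec(code, namespace)
print(namespace["result"])                       # larger
```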
_webapps.20887 | There is a website displaying pins on a Google map included in one of its pages, but so framed that it's almost impossible to use and print. I'd really like to be able to take a screenshot of this map so I can find bike stations when I'm on my bike (without internet access). A large screenshot would be enough.Is there a way to display those pins on the Google Maps website itself (or any way to enlarge the Google map view)? | How to display Map Pins of a website on the larger Google Maps? | google maps | Big enough? I don't have a bigger screen available right now, so that's the most I could do.If you have a large monitor and you are comfortable with Firebug, use it to alter the page layout and make your screenshot as big as you need.To do that:

Click right next to your map
Click Inspect Element
Search for a <div> with the id=105648
Click on it in order to select it, as in the picture above
On the right side you will see the CSS values for the height and width of the element
Click on each value and alter it
Close Firebug

DO NOT REFRESH THE PAGE until you are done
_unix.297321 | Will it be safe to enable logging into a write-only log file on a production server? I imagine, that this would protect the log file from unwanted eyes. Are there any drawbacks of using this technique? | Is granting write-only permission on log files to certain users a good practice? | permissions;security;webserver | Logs should be write-only if they contain potentially confidential data. Obviously they can only be write-only to the application that produces the log and other applications running on the server, and perhaps even to the logging subsystem (once written to the log files), but system administrators and auditors should be able to read them.The most important thing for a log file is integrity. Being write-only doesn't help with integrity. If you can, make the log file append-only (e.g. chattr +a /path/to/log under Linux) but this may not be practical since only root can do this and it needs to be done on each log rotation. Better yet, log on a separate server which does nothing else (and even then, having a non-readable append-only log file does add a bit of redundancy to the security). |
_computergraphics.4094 | One of the features of ray marching is that you can use modulus to repeat shapes infinitely, like in the image below, which is from https://www.shadertoy.com/view/MsBGW1I was curious if there exists any technique which allows you to do the same thing with ray tracing instead of ray marching?One method I do know of is to ray trace a plane from above, and then where you hit the plane, calculate your location on a grid on that plane, and use the relative position in that grid cell as an absolute position to raytrace a scene. That will repeat the scene across the grid.However, a problem with that is if your ray doesn't hit anything in the grid cell, and it would then enter another grid cell, this technique won't catch those other shapes without walking the grid cells down the path of the ray until it exits the back side of the grid, which is very ray-marching-esque and iterative.Does anyone know of a technique that allows you to have ray marching type repetition in ray tracing? | Is there a method to do ray marching style modulus repeat with raytracing? | raytracing;raymarching | null |
_unix.213570 | I want to run two commands in a terminal on my virtual machine at the same time.I have this as of now:

sudo ptpd -c -g -b eth1 -h -D; sudo tcpdump -nni eth1 -e icmp[icmptype] == 8 -w capmasv6.pcap

However, the tcpdump command only starts running when I press Ctrl+C, and I don't want to cancel the first command.If I just open two different terminals and run one command in each, is that fine, or will it not work as I want it to? | Running multiple commands at the same time | shell;parallelism | Running each command in a different terminal will work; you can also start them in a single terminal with & at the end of the first to put it in the background (see Run script and not lose access to prompt / terminal):

sudo ptpd -c -g -b eth1 -h -D &
sudo tcpdump -nni eth1 -e icmp[icmptype] == 8 -w capmasv6.pcap
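The same start both, wait for both idea, sketched in Python with subprocess (the two commands are harmless placeholders, not ptpd/tcpdump):

```python
import subprocess

# Popen returns immediately, so the second command starts without waiting
# for the first to finish - the Python analogue of "cmd1 & cmd2".
first = subprocess.Popen(["ping", "-c", "4", "127.0.0.1"])
second = subprocess.Popen(["sleep", "2"])
first.wait()
second.wait()
```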
_webmaster.44849 | I have a cPanel web hosting account and have created a subdomain which redirects to the IP address of a different server (Windows IIS) by creating an A record. I don't have control of the Windows server.I would like people who type www. before the subdomain to be redirected to the same page.I tried creating a CNAME record pointing www.foo.example.com. to foo.example.com, but it isn't working; it is actually stopping the A record from working.How do I redirect www.foo.example.com to foo.example.com in cPanel? | Redirecting www.foo.example.com to foo.example.com | dns;cpanel;cname | Your cPanel may have an interface to do this for you. What needs to be done - which is what such an interface would do anyway - is modifying the .htaccess file in your public_html directory to add the following lines:

RewriteEngine ON
RewriteCond %{HTTP_HOST} ^www.foo.example.com$
RewriteRule ^/?(.*)$ http://foo.example.com/$1 [R=301,L]

Omit the RewriteEngine ON line if it is already there.
_codereview.102715 | I was trying to write a dynamic programming algorithm using a bottom up approach that solves the subset sum problem's version where the solution can be either an empty set and the initial set can only contain positive integers.The following is my implementation, but I am not sure it is correct for all cases.def _get_subset_sum_matrix(subset, s): m = [[0 for _ in range(s + 1)] for _ in range(len(subset) + 1)] for i in range(1, s + 1): m[0][i] = 0 for j in range(0, len(subset) + 1): m[j][0] = 1 return mdef subset_sum(subset, s): m = _get_subset_sum_matrix(subset, s) for i in range(1, len(subset) + 1): for j in range(1, s + 1): if subset[i - 1] == j: m[i][j] = 1 else: # We can include the current element, # because it is less than the current number j. if subset[i - 1] <= j: m[i][j] = max(m[i - 1][j], m[i - 1][j - subset[i - 1]]) else: m[i][j] = m[i - 1][j] return m[-1][-1]You can imagine the idea of my algorithm as follows. I have the numbers of the set in the vertical axis on the left, where the first element is actually the empty set. These numbers are not considered as only numbers, but, as I go down from the empty set (the first element), I start considering greater sets, that include all previous elements plus the current one. Example, suppose I have the set S = {1, 2, 3}. I first consider the empty set, then the union of the empty set and {1}, then the union of {1} and {2}, and finally the union of {1, 2} and {3}.In the horizontal axis you can imagine I have an increasing sequence of numbers up to the number we want to obtain (by summing the numbers of a certain subset of S). Example, suppose we want to obtain 4, then the increasing sequence would be 0, 1, 2, 3, 4.So, I first start considering I want to obtain the number 0, and then 1, 2, etc, as it is usually done in a dynamic programming algorithm using a bottom-up approach.Apart from the setup of the matrix, my algorithm assigns 1 to m[i][j], for some i = 0, 1, ..., N, where N is the size of the set S, and for some j = 0, 1, ... , M, where M is the number we want to obtain, when either the current number in the subset, that is S[i - 1], is equal to the number we want to obtain M_j, or when the previous solution to the subproblem, where the number we want to obtain is M_j - S[i - j], was 1.That might seem a confusing explanation, and I think the code is self-explanatory.Is my algorithm correct for all instances of the problem?Is there a way I can improve it? | Subset sum whose set contains only positive integers | python;python 3.x;dynamic programming | null |
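For reference, the usual bottom-up formulation keeps only a one-dimensional boolean table of reachable sums, which avoids the matrix bookkeeping in the code above. A hedged Python sketch (it decides existence only; it does not enumerate the subsets):

```python
def subset_sum(numbers, target):
    """Bottom-up DP over reachable sums; reachable[j] is True when some
    subset of the numbers seen so far sums to j. The empty set gives 0."""
    reachable = [True] + [False] * target
    for n in numbers:
        # Iterate downwards so each number is used at most once.
        for j in range(target, n - 1, -1):
            reachable[j] = reachable[j] or reachable[j - n]
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```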
_softwareengineering.313157 | I need to create a container app which contains several apps (imagine something like iCloud): once I've logged in, I can see all the apps as icons, click on them, and use them (a new tab/page opens and no login is required).The container app, as well as the other apps, will have a dedicated folder on the server and will be designed as front-end apps with their own back ends. Each back end is dedicated to a single app, but all the back ends can access server APIs and/or the DB without any problem (they will reside on the same server, at most on different virtual servers with different ports).I would like to let the user log in just once (in the container app) and then use the apps without logging in again and again. To do that, I was thinking about a shared token that each front-end app would send to its back end, and the back ends would check the token. I don't want to reinvent the wheel, so I was wondering whether OAuth could be useful in accomplishing this goal. | Is OAuth 2.0 ok for building a container of applications? | oauth;oauth2 | null
_codereview.18684 | The following is my code for printing all the substrings of an input string. For example, with abc, it would be a,ab,abc,b,bc,c. Could someone please review it for efficiency (and possibly suggest alternatives)?void findAllSubstrings(const char *s){ int x=0; while(*(s+x)){ for(int y=0; y<=x; y++) cout<<*(s+y); cout<<'\n'; x++; } if(*(s+1)) findAllSubstrings(s+1); else return;} | Find All Substrings Interview Query in C++ | c++;algorithm | Your problem is in O(n), at least. This seems not to be optimizable. If you want only distinct substrings, then you will have to use a table of already encountered strings, which will make your code slower.However, you can switch the algorithm from recursive to iterative, which is usually slightly faster. It's a micro-optimization, so do not expect a x2 improvement in speed... void findAllSubstrings2(const char *s){ while(*s) { int x=0; while(*(s + x)) { for(int y = 0; y <= x; y++) std::cout << *(s + y); std::cout << \n; x++; } s++; }}I've done a profile test, on Codepad and Ideone (different versions of same compilers + different machines). The io operations are left for the profile test, because what matters here is the comparison between the 2 functions. |
_codereview.87300 | The purpose here is to make it easy to use sensitive data that is already in the form of a SecureString (example) without converting it to a String object and risking more leaks than necessary.SecureString isn't about total security, but it is about reducing attack surface. For example, when you call SecureString.AppendChar there is a brief flash where it decrypts the contents, adds your character, and reencrypts. This is still better than storing your password in the clear on the heap for any amount of time.So in a similar vein, if I'm to use a SecureString as a SqlParameter value, it's best to do as little as possible with the contents in the clear and erase it as soon as possible. This isn't about transport security to SQL server, just C# process memory that has the potential to be paged to disk and end up unerased, in the clear, for years.Usage:var secureString = new SecureString();secureString.AppendChar('a');secureString.AppendChar('q');secureString.AppendChar('1');using (var command = new SqlCommand(select case when @secureParam = 'aq1' then 'yes' else 'no' end, connection)){ object returnValue; using (command.Parameters.AddSecure(secureParam, secureString)) { // At this point no copies exist in the clear returnValue = (string)command.ExecuteScalar(); // Now one pinned String object exists in the clear (referenced at the internal property command.Parameters[0].CoercedValue) } // At this point no copies exist in the clear}Code:public static class SecureSqlParameterExtensions{ [DllImport(kernel32.dll, EntryPoint = CopyMemory)] private static extern void CopyMemory(IntPtr dest, IntPtr src, IntPtr count); [DllImport(kernel32.dll, EntryPoint = RtlZeroMemory)] private static extern void ZeroMemory(IntPtr ptr, IntPtr count); /// <summary> /// You must dispose the return value as soon as SqlCommand.Execute* is called. /// </summary> public static IDisposable AddSecure(this SqlParameterCollection collection, string name, SecureString secureString) { var value = new SecureStringParameterValue(secureString); collection.Add(name, SqlDbType.NVarChar).Value = value; return value; } private sealed class SecureStringParameterValue : IConvertible, IDisposable { private readonly SecureString secureString; private int length; private string insecureManagedCopy; private GCHandle insecureManagedCopyGcHandle; public SecureStringParameterValue(SecureString secureString) { this.secureString = secureString; } #region IConvertible public TypeCode GetTypeCode() { return TypeCode.String; } public string ToString(IFormatProvider provider) { if (insecureManagedCopy != null) return insecureManagedCopy; if (secureString == null || secureString.Length == 0) return string.Empty; // We waited till the last possible minute. // Here's the plan: // 1. Create a new managed string initialized to zero // 2. Pin the managed string so the GC leaves it alone // 3. Copy the contents of the SecureString into the managed string // 4. Use the string as a SqlParameter // 5. Zero the managed string after Execute* is called and free the GC handle length = secureString.Length; insecureManagedCopy = new string('\0', length); insecureManagedCopyGcHandle = GCHandle.Alloc(insecureManagedCopy, GCHandleType.Pinned); // Do not allow the GC to move this around and leave copies behind try { // This is the only way to read the contents, sadly. // SecureStringToBSTR picks where to put it, so we have to copy it from there and zerofree the unmanaged copy as fast as possible. 
var insecureUnmanagedCopy = Marshal.SecureStringToBSTR(secureString); try { CopyMemory(insecureManagedCopyGcHandle.AddrOfPinnedObject(), insecureUnmanagedCopy, (IntPtr)(length * 2)); } finally { if (insecureUnmanagedCopy != IntPtr.Zero) Marshal.ZeroFreeBSTR(insecureUnmanagedCopy); } // Now the string managed string has the contents in the clear. return insecureManagedCopy; } catch { Dispose(); throw; } } public void Dispose() { if (insecureManagedCopy == null) return; insecureManagedCopy = null; ZeroMemory(insecureManagedCopyGcHandle.AddrOfPinnedObject(), (IntPtr)(length * 2)); insecureManagedCopyGcHandle.Free(); } public bool ToBoolean(IFormatProvider provider) { throw new NotImplementedException(); } public char ToChar(IFormatProvider provider) { throw new NotImplementedException(); } public sbyte ToSByte(IFormatProvider provider) { throw new NotImplementedException(); } public byte ToByte(IFormatProvider provider) { throw new NotImplementedException(); } public short ToInt16(IFormatProvider provider) { throw new NotImplementedException(); } public ushort ToUInt16(IFormatProvider provider) { throw new NotImplementedException(); } public int ToInt32(IFormatProvider provider) { throw new NotImplementedException(); } public uint ToUInt32(IFormatProvider provider) { throw new NotImplementedException(); } public long ToInt64(IFormatProvider provider) { throw new NotImplementedException(); } public ulong ToUInt64(IFormatProvider provider) { throw new NotImplementedException(); } public float ToSingle(IFormatProvider provider) { throw new NotImplementedException(); } public double ToDouble(IFormatProvider provider) { throw new NotImplementedException(); } public decimal ToDecimal(IFormatProvider provider) { throw new NotImplementedException(); } public DateTime ToDateTime(IFormatProvider provider) { throw new NotImplementedException(); } public object ToType(Type conversionType, IFormatProvider provider) { throw new NotImplementedException(); } #endregion }} | SecureString as SqlParameter value without GC concerns | c#;sql;security;memory management;securestring | null |
_unix.311731 | I'd like to limit the container to 25% of the system's total CPU bandwidth.Here's my setup:LXC version 1.0.2 kernel 3.2.45 one user created cgroup (foo) for an LXC container 40 available cores on the host the host and container have default values for every other cgroup subsystem except: /sys/fs/cgroup/cpu/lxc/foo/cpu.cfs_quota_us = 400000/sys/fs/cgroup/cpu/lxc/foo/cpu.cfs_period_us = 100000/sys/fs/cgroup/cpuset/lxc/foo/cpuset.cpus = 0-15I calculated the quota using this formula:(# of cpus available to container) * (cpu.cfs_period_us) * (.25) so 16 * 100000 * .25 = 400000I ran a basic stress-ng inside and outside the container at the same time to get a gauge of how many operations per second were being allowed inside and out and the results were basically the same as running with a quota of -1, which is to say no quota. Outside Run:$ ./stress-ng --cpu-load 50 -c 40 --timeout 20s --metrics-briefstress-ng: info: [25649] dispatching hogs: 40 cpu stress-ng: info: [25649] successful run completed in 20.44s stress-ng: info: [25649] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s stress-ng: info: [25649] (secs) (secs) (secs) (real time) (usr+sys time) stress-ng: info: [25649] cpu 37348 20.18 380.56 0.58 1850.85 97.99 Inside Run:$ ./stress-ng --cpu-load 100 -c 16 --timeout 20s --metrics-brief stress-ng: info: [34256] dispatching hogs: 16 cpu stress-ng: info: [34256] successful run completed in 20.10s stress-ng: info: [34256] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s stress-ng: info: [34256] (secs) (secs) (secs) (real time) (usr+sys time) stress-ng: info: [34256] cpu 24147 20.03 205.20 0.17 1205.67 117.58 Based on the ops/s I'm getting 39%. Why does this happen? Shouldn't it be limited by cpu.cfs_quota_us?Thanks for the help in advance. | Why is cpu.cfs_quota_us not limiting CPU bandwidth of LXC container? | linux;lxc;cgroups | null |
_webmaster.20037 | I'm hosting a URL with NameCheap, and it is a URL frame for a blogspot.com account. I'm using Google Analytics to track the blogspot.com account, but I'd like to be able to track the NameCheap frame URL as well. How do I do this? | How do I track a hosted URL on Namecheap.com with Google Analytics | google analytics;url | null |
_unix.325202 | I work as a sysadmin in a large company and have to maintain several windows and Linux (Ubuntu 16.04) VMs. Since I want to use zsh instead of bash on the Linux VMs, I have to change my default shell. Now, I log in on Linux with my Windows domain account which enforces the AD settings; that means I can't change the passwd file or use chsh to change my default shell, so I had to find another way. This way was to enforce the shell in AD with the loginShell attribute.The question is, what happens if I log in on a Linux VM which does not have zsh installed, what happens? Does it fallback to bash/sh, does it get stuck or something else? | What happens if a users default shell is not installed? | bash;ubuntu;ssh;zsh;active directory | Let's try!Shell changed on the server:[myserver ~]% getent passwd myusermyuser:x:150:150:myuser:/home/myuser:/fooLet's log in:[myclient ~]% ssh myserverReceived disconnect from myserver: 2: Too many authentication failures for myuserFrom the SSH logs on the server:Nov 22 09:30:27 myserver sshd[20719]: Accepted gssapi-with-mic for myuser from myclient port 33808 ssh2Nov 22 09:30:27 myserver sshd[20719]: pam_unix(sshd:session): session opened for user myuser by (uid=0)Nov 22 09:31:18 myserver sshd[20727]: Received disconnect from myclient: 11: disconnected by userNov 22 09:31:18 myserver sshd[20719]: pam_unix(sshd:session): session closed for user myuserNov 22 09:31:20 myserver sshd[20828]: User myuser not allowed because shell /foo does not existNov 22 09:31:20 myserver sshd[20835]: input_userauth_request: invalid user myuserNov 22 09:31:20 myserver sshd[20835]: Disconnecting: Too many authentication failures for myuserKey line: User myuser not allowed because shell /foo does not exist. So you can't log in if you don't have a valid shell set. |
_vi.11195 | I'd like to set some file-type dependent mappings to quickly run files. For example, I have some mappings like these:nnoremap <silent><leader>z :w<CR> :!clear; gcc %; ./a.out<cr>nnoremap <silent><leader>z :w<CR> :!clear; g++ %; ./a.out<cr>nnoremap <silent><leader>z :w<CR> :!clear; ruby %<cr>How can I set each mapping to its corresponding file type? | Set mappings depending on file type | key bindings;filetype | You can use the FileType autocmd.autocmd FileType c nnoremap <buffer><silent><leader>z :w<CR> :!clear; gcc %; ./a.out<cr>autocmd FileType cpp nnoremap <buffer><silent><leader>z :w<CR> :!clear; g++ %; ./a.out<cr>autocmd FileType ruby nnoremap <buffer><silent><leader>z :w<CR> :!clear; ruby %<cr>See :h autocmd and :h FileType for more info. |
_softwareengineering.20607 | I was wondering if there are obvious advantages and disadvantages to using Ruby on Rails to develop a desktop application.RoR has great infrastructure for rapid development, proper implementation of specs and automated acceptance tests, an immense number of popular libraries and the promise of being actively developed and maintained in the future.The downsides I can see are mostly about usability - installation of a Rails app as a local service and launching of a browser when it needs to be active may not come naturally to many users... or be technically easy to implement and support for different platforms. | Is Ruby on Rails a suitable framework for a desktop application? | ruby on rails;desktop | Rails is a web framework, I'd use it for that or if you really want to produce a desktop application then pick something else. You might be able to get it working as a desktop platform now but that's clearly not how it's seen by the community so who's to say it won't be changed in the future to make your implementation harder or impossible?I'd also suggest that if you're going to be constrained by a browser based UI, why not just host it on a server and get the benefits rather than having to deal with support of local installs?The best desktop applications will be ones written in a language which is intended for that purpose and ideally which are native (or in the case of .NET native-ish) to the operating system so they can adopt all the usual UI components, metaphors and functionality users are used to seeing on that OS. |
_unix.340903 | When running wget -r -k -l 1 http://econ.ucsb.edu/~tedb/Courses/GraduateTheoryUCSB/TheoryF16.html`the process successfully completes but a number of files are not downloaded and a number of absolute links are not converted.For example, the file BlumeSimonCh21.pdf is linked twice in the html source code, one as relative and another as absolute path, both belonging to the same host. The latter links to the actual website over the internet rather than linking to the local file. Moreover, the file Bernoulli.pdf is not downloaded by wget despite being in the same host directory. I tried adding -H to the wget command, these problems still occur. Is it a bug?Some other thoguhts: The manual says when -r is specified, wget downloads simply overwrite the old file with the new one if they are the same file. Maybe this has to do with redownloading the files?EDIT: I am running the newest wget release to date, 1.18 on Arch Linux. | Wget not converting links and downloading properly? | wget | null |
_cstheory.609 | Input: a graph with n nodes,Output: A clique of size $O(\log n)$,Providing links to references would be great | What are the best known upper bounds and lower bounds for computing O(log n)-Clique? | ds.algorithms;reference request;graph theory;lower bounds;upper bounds | The best known upper bound is essentially $n^{O(\log n)}$. You can improve a little on the constant factor in the big-O using fast matrix multiplication, but that's about it. There are a lot of algorithmic references on the $k$-clique problem which describe this reduction, it originates from papers of Itai and Rodeh and Nesetril and Poljak. (Apologies to Czech readers, I am ignorant of the proper diacritical marks.) See http://en.wikipedia.org/wiki/Clique_problemIf you could solve $\log n$-clique in $n^{\varepsilon \log n}$ for every $\varepsilon > 0$, then you could also solve 3SAT in subexponential time. This can be seen as a lower bound to further progress. One way to prove this is to first show that if $\log n$-clique in $n^{\varepsilon \log n}$ for every $\varepsilon > 0$, then MaxCut on $n$ nodes is in $2^{\varepsilon n}$ time for every $\varepsilon > 0$. This follows directly from a theorem in my ICALP'04 paper that relates the time complexity of MaxCut to the time complexity of $k$-clique. From there, one can appeal to standard reductions to reduce 3SAT to MaxCut, showing that subexponential MaxCut implies subexponential 3SAT.In terms of unconditional lower bounds, nothing nontrivial is known, to my knowledge. We don't even know how to show that $O(\log n)$-clique isn't solvable with an algorithm that runs in linear time and uses only logarithmic workspace. |
_softwareengineering.197107 | In divide and conquer algorithms such as quicksort and mergesort, the input is usually (at least in introductory texts) split in two, and the two smaller data sets are then dealt with recursively. It does make sense to me that this makes it faster to solve a problem if the two halves takes less than half the work of dealing with the whole data set. But why not split the data set in three parts? Four? n?I guess the work of splitting the data in many, many sub sets makes it not worth it, but I am lacking the intuition to see that one should stop at two sub sets.I have also seen many references to 3-way quicksort. When is this faster? What is used in practice? | Divide and Conquer algorithms Why not split in more parts than two? | algorithms;algorithm analysis | It does make sense to me that this makes it faster to solve a problem if the two halves takes less than half the work of dealing with the whole data set.That is not the essence of divide-and-conquer algorithms. Usually the point is that the algorithms cannot deal with the whole data set at all. Instead, it is divided into pieces that are trivial to solve (like sorting two numbers), then those are solved trivially and the results recombined in a way that yields a solution for the full data set.But why not split the data set in three parts? Four? n?Mainly because splitting it into more than two parts and recombining more than two resultsresults in a more complex implementation but doesn't change the fundamental (Big O) characteristic of the algorithm - the difference is a constant factor, and may result in a slowdown if the division and recombination of more than 2 subsets creates additional overhead.For example, if you do a 3-way merge sort, then in the recombination phase you now have to find the biggest of 3 elements for every element, which requires 2 comparisons instead of 1, so you'll do twice as many comparisons overall. In exchange, you reduce the recursion depth by a factor of ln(2)/ln(3) == 0.63, so you have 37% fewer swaps, but 2*0.63 == 26% more comparisons (and memory accesses). Whether that is good or bad depends on which is more expensive in your hardware.I have also seen many references to 3-way quicksort. When is this faster? Apparently a dual pivot variant of quicksort can be proven to require the same number of comparisons but on average 20% fewer swaps, so it's a net gain. What is used in practice?These days hardly anyone programs their own sorting algorithms anymore; they use one provided by a library. For example, the Java 7 API actually uses the dual-pivot quicksort.People who actually do program their own sorting algorithm for some reason will tend to stick to the simple 2-way variant because less potential for errors beats 20% better performance most of the time. Remember: by far the most important performance improvement is when the code goes from not working to working. |
_unix.158867 | Put simply, I can't work out why the code below won't carry out a file transfer? I have been assured by the sysadmin staff on the remote server that there are no permissions/firewall issues that could be affecting it and that the login details are correct. I've logged file output using -O and the resulting file (--ftp-user=ITParts) is blank. What could be wrong?exec('wget -o --ftp-user=xxxx --ftp-password=xxxx \ ftp.eurosimm.com/StocklistDealSOss.csv \ /home/design/public_html/itpartsandspares/'); | Why can't I transfer a file via wget (FTP) using exec() function in PHP? | ftp;wget | null |
_unix.243661 | What is the difference between Master and PCM channels in Alsa, and which one should I manipulate for controlling the output volume?I have three sound cards (Intel PantherPoint, HRT HeadStreamer and Fiio E10 DACs). The Intel is integrated and comes with both Master and PCM, whereas the other two are external and only have the PCM channel with no Master.I'm writing a script to toggle between the different soundcards and I'd like to figure out what is the exact setting to fiddle with.Thanks for your help | What's the difference between Master and PCM channels in Alsa? | audio;alsa | With more complex devices, PCM affects the audio data played by software, while Master also affects everything else going to the speakers.With devices that do not have an analog mixer, this distinction would not make sense. |
_unix.120188 | I have a USB HDD. I want to mount it with compression enabled. I can do in the fstab of my system or even using udev rules. The problem is that I won't mount my USB HDD on my computer only. Up to now, I used to trigger a terminal each time I mounted it.Then, I discovered chattr +c. This is working very well but I want to use LZO instead of ZLIB. Is there any way to be more specific and define the compression algorithm once for all? | Is there any way in BTRFS to set compression permanently? | mount;compression;btrfs | null |
_codereview.11978 | I did this as an exercise just to practice/improve using generics.Independent of how useful this implementation of a Singleton is, how is my coding in terms of using generics and any other aspect of class design of code style?void Main(){ var a = Singleton<MyClass>.Value; var b = Singleton<MyClass, MyClassFactory>.Value; var c = Singleton<MyClass>.Value; var d = Singleton<MyClass, MyClassFactory>.Value; var e = Singleton<MyOtherClass>.Value; var f = Singleton<MyOtherClass>.Value; var g = Singleton<MyOtherClass, MyOtherFactory>.Value; var h = Singleton<MyOtherClass, MyOtherFactory>.Value;}class SingletonBase{ protected static object Locker = new LockerObject();}class Singleton<T> : SingletonBase where T : new() { static T StaticT; public static T Value { get { lock (Locker) { if(StaticT == null) { StaticT = Activator.CreateInstance<Factory<T>>().Create(); } else { Console.WriteLine (Singleton<T>::Value + typeof(T).Name + is already created); } } return StaticT; } }}class Singleton<T, F> : SingletonBase where T : new() where F : IFactory<T>, new(){ static T StaticT; public static T Value { get { lock (Locker) { if(StaticT == null) { StaticT = new F().Create(); } else { Console.WriteLine (Singleton<T, F>::Value + typeof(T).Name + is already created); } } return StaticT; } }}class LockerObject{ Guid myGUID; public LockerObject() { this.myGUID = Guid.NewGuid(); Console.WriteLine (New LockerObject + this.myGUID.ToString()); }}interface IFactory<T>{ T Create();}class Factory<T> : IFactory<T> where T : new(){ public T Create() { Console.WriteLine (Factory<T>::Create()); return new T(); }}class MyClassFactory : IFactory<MyClass>{ public MyClass Create() { Console.WriteLine (MyClassFactory::Create()); return new MyClass(); }}class MyClass{ public MyClass() { Console.WriteLine (MyClass created); }}class MyOtherClass{ public MyOtherClass() { Console.WriteLine (MyOtherClass created); }}class MyOtherFactory : IFactory<MyOtherClass>{ public MyOtherClass Create() { Console.WriteLine (MyOtherFactory::Create()); return new MyOtherClass(); }}Output:New LockerObject 36aa2282-d745-43ca-84d2-998a78e39d51Factory<T>::Create()MyClass createdMyClassFactory::Create()MyClass createdSingleton<T>::ValueMyClass is already createdSingleton<T, F>::ValueMyClass is already created | Singleton implementation using generics | c#;generics;singleton | null |
_codereview.126847 | I'm working on an API that has a lot of controller functions like this:def create = Action.async { implicit request => if (request.body.asJson.isEmpty) { Future.successful(BadRequest(Missing body)) } else { val body = request.body.asJson.get.as[JsObject] val companyID = (body \ company \ id).validate[String] val parsedAccount = (body \ account).validate[Account] // Check that we have all of the fields we need if (parsedAccount.isError) { Future.successful(BadRequest(Missing account data)) } else if (companyID.isError) { Future.successful(BadRequest(Missing company data)) } else { // Insert the new account val account = parsedAccount.get (for { _ <- primaryDAO.insert(account, companyID.get) account <- primaryDAO.get(account.id) } yield account).map { case account => Created(account) }.recover { case e => BadRequest(e) } } }}I was hoping that I would be able to do something more like this (using early returns):def create = Action.async { implicit request => if (request.body.asJson.isEmpty) { return Future.successful(BadRequest(Missing body)) } val body = request.body.asJson.get.as[JsObject] val companyID = (body \ company \ id).validate[String] val parsedAccount = (body \ account).validate[Account] // Check that we have all of the fields we need if (parsedAccount.isError) { return Future.successful(BadRequest(Missing account data)) } if (companyID.isError) { return Future.successful(BadRequest(Missing company data)) } // Insert the new account val account = parsedAccount.get (for { _ <- primaryDAO.insert(account, companyID.get) account <- primaryDAO.get(account.id) } yield account).map { case account => Created(account) }.recover { case e => BadRequest(e) }}However this is not possible because the return statement only returns from my nested function (and back into the Action.async)I am wondering what I can do in place of early returns (which I would use in imperative programming languages) to make my code cleaner.The primary DAO implements a generic trait that I use for most of my DAOs and looks like this:trait DAOGet[A <: BaseModel] { def get(pk: String): Future[Option[A]] def all: Future[Seq[A]] def all(page: Int, perPage: Int): Future[Seq[A]]}trait DAOInsert[A <: BaseModel] extends DAOGet[A] { def insert(model: A): Future[Any]}trait DAOUpdate[A <: BaseModel] extends DAOGet[A] { def update(model: A): Future[Int]}trait DAODelete[A <: BaseModel] { def delete(pk: String): Future[Int]}trait CRUDDAO[A <: BaseModel] extends DAOGet[A] with DAOInsert[A] with DAOUpdate[A] with DAODelete[A] | Implementation of API to create a company account in a database | validation;error handling;scala;asynchronous | PreludeI'll assume the following code :trait BaseModel{ def pk:String def id:String=pk}case class Account(name:String) extends BaseModel{ override val pk = name}As the internal structure of the Account class. I made it extends BaseModel so it can be used with the following fake DAO and my sample compiles. I had to add an insert which takes both an Account and a company Id.class AccountDAO extends CRUDDAO[Account]{ override def insert(model: Account): Future[Any] = Future.successful(model) def insert(model: Account,companyId:String): Future[Any] = model.id match{ case account1|account2 => Future.successful(model) case _ => Future.failed(new RuntimeException(Unable to save account)) } override def update(model: Account): Future[Int] = ??? 
override def get(pk: String): Future[Option[Account]] = pk match { case account1 => Future.successful(Some(Account(pk))) case account2 => Future.successful(None) case _ => Future.failed(new RuntimeException(no such account)) } override def all: Future[Seq[Account]] = ??? override def all(page: Int, perPage: Int): Future[Seq[Account]] = ???}I made the original code sample compile by adapting the bottom of the code as Created(account) wouldn't compile here is what it looks like. Initial codeclass Companies @Inject()(primaryDAO:AccountDAO)(implicit ec:ExecutionContext) extends Controller { implicit val AccountReads = Json.format[Account] def create = Action.async { implicit request => if (request.body.asJson.isEmpty) { Future.successful(BadRequest(Missing body)) } else { val body = request.body.asJson.get.as[JsObject] val companyID = (body \ company \ id).validate[String] val parsedAccount = (body \ account).validate[Account] // Check that we have all of the fields we need if (parsedAccount.isError) { Future.successful(BadRequest(Missing account data)) } else if (companyID.isError) { Future.successful(BadRequest(Missing company data)) } else { // Insert the new account val account = parsedAccount.get (for { _ <- primaryDAO.insert(account, companyID.get) account <- primaryDAO.get(account.id) } yield account).map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } } } }}object Companies extends Companies(new AccountDAO)(play.api.libs.concurrent.Execution.defaultContext)Following are some sample request/reponses using httpie: $> echo '{company:{id:1}, account:{name:account1}}'| http :9000/foobarHTTP/1.1 201 CreatedContent-Length: 8Content-Type: text/plain; charset=utf-8Date: Thu, 28 Apr 2016 20:17:19 GMTaccount1 $> echo '{company:{id:1}, account:{name:account2}}'| http :9000/foobarHTTP/1.1 500 Internal Server ErrorContent-Length: 24Content-Type: text/plain; charset=utf-8Date: Thu, 28 Apr 2016 20:18:45 GMTUnable to create Account$> echo '{company:{id:1}, account:{name:account3}}'| http :9000/foobarHTTP/1.1 400 Bad RequestContent-Length: 22Content-Type: text/plain; charset=utf-8Date: Thu, 28 Apr 2016 20:21:19 GMTUnable to save account$> echo '{ account:{name:account3}}'| http :9000/foobarHTTP/1.1 400 Bad RequestContent-Length: 20Content-Type: text/plain; charset=utf-8Date: Thu, 28 Apr 2016 20:21:44 GMTMissing company data$> echo '{company:{id:1}, account:{}}'| http :9000/foobarHTTP/1.1 400 Bad RequestContent-Length: 20Content-Type: text/plain; charset=utf-8Date: Thu, 28 Apr 2016 20:22:07 GMTMissing account dataEmbracing the HTTP frameworkNow there is a way to write this using early returns, I'll show it for completeness sake but you really really don't want to do that as I'll explain below. Return doesn't have a type so you are forced to explicitely provide a return type which is impossible in an anonymous function (the block after Action.async is just an anonymous function). 
You can easily extract your code to a named method with explicit types and use that as the action body: class Companies @Inject()(primaryDAO:AccountDAO)(implicit ec:ExecutionContext) extends Controller { implicit val AccountReads = Json.format[Account] def doCreate(implicit request:Request[AnyContent]):Future[Result]={ if (request.body.asJson.isEmpty) { return Future.successful(BadRequest(Missing body)) } val body = request.body.asJson.get.as[JsObject] val companyID = (body \ company \ id).validate[String] val parsedAccount = (body \ account).validate[Account] // Check that we have all of the fields we need if (parsedAccount.isError) { return Future.successful(BadRequest(Missing account data)) } if (companyID.isError) { return Future.successful(BadRequest(Missing company data)) } // Insert the new account val account = parsedAccount.get (for { _ <- primaryDAO.insert(account, companyID.get) account <- primaryDAO.get(account.id) } yield account).map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } } def create = Action.async(doCreate)}object Companies extends Companies(new AccountDAO)(play.api.libs.concurrent.Execution.defaultContext)I said that doing this is wrong and you really don't want to do this. Quoting Rob Norris (tpolecat) : If you find yourself in a situation where you think you want to return early, you need to re-think the way you have defined your computationSo let's do some rethinking :) I'll posit that What you really want isn't so much using early returns as it is avoiding deep nesting of if/else clauses. Let's have a look at the types we are manipulating : request.body.asJson returns an Option[JsValue]. The current implementation tests to check if it is empty and returns a BadRequest. Play offers a similar and much cleaner way to check that you actually receive an application/json body for your endpoint (I'll leave the wrapping code to concentrate on the action itself for now) using a specific body parser:def create = Action.async(parse.json) { implicit request => val body = request.body val companyID = (body \ company \ id).validate[String] val parsedAccount = (body \ account).validate[Account] // Check that we have all of the fields we need if (parsedAccount.isError) { Future.successful(BadRequest(Missing account data)) } if (companyID.isError) { Future.successful(BadRequest(Missing company data)) } else { // Insert the new account val account = parsedAccount.get (for { _ <- primaryDAO.insert(account, companyID.get) account <- primaryDAO.get(account.id) } yield account).map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } }}Using a body parser in the action will enforce the media type for the endpoint (in this case it will have to be a form of json). Trying to call it with a content type such as application/x-www-form-urlencoded will fail with a 415 Unsupported Media Type error, passing an invalid json body will yield a 400 BadRequest for you : $> echo 'coucou'| http --form :9000/foobarHTTP/1.1 415 Unsupported Media TypeContent-Length: 2163Content-Type: text/html; charset=utf-8Date: Thu, 28 Apr 2016 20:41:15 GMT$> echo coucou| http :9000/foobarHTTP/1.1 400 Bad RequestContent-Length: 2289Content-Type: text/html; charset=utf-8Date: Thu, 28 Apr 2016 20:43:30 GMT Embracing the Json libraryThe next step consists of leveraging play-json's validation facilities. 
First let's define a reader which enforces all the protocol constraints for your endpoint :import play.api.libs.json._import play.api.libs.functional.syntax._implicit val CreateDTOReads = ( (__ \ company \ id).read[String] and (__ \ account).read[Account] ).tupledNow we can use that to fully validate the incoming payload and reject it if it is incorrect: def create = Action.async(parse.json) { implicit request => val createDto:JsResult[(String,Account)] = request.body.validate(CreateDTOReads) // Check that we have all of the fields we need if (createDto.isError) { Future.successful(BadRequest(Missing account or company data)) } else { // Insert the new account val (companyId,account) = createDto.get (for { _ <- primaryDAO.insert(account, companyId) account <- primaryDAO.get(account.id) } yield account).map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } }}Notice that at this point we have lost a bit of precision since I don't distinguish between the two errors anymore. The information is still there, captured in the errors of the JsResult. You could use pattern matching or even a cast to get a JsError out of the JsResult and once you have a JsError you get the list of all validation errors for each path which you can manipulate and translate as you like. For instance : def create = Action.async(parse.json) { implicit request => val createCommand:JsResult[(String,Account)] = request.body.validate(CreateDTOReads) // Check that we have all of the fields we need if (createCommand.isError) { val errors = createCommand.asInstanceOf[JsError] Json.prettyPrint(JsError.toJson(errors)) Future.successful(BadRequest(Json.prettyPrint(JsError.toJson(errors)))) } else { // Insert the new account val (companyId,account) = createCommand.get (for { _ <- primaryDAO.insert(account, companyId) account <- primaryDAO.get(account.id) } yield account).map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } } }returns something like : $> echo '{coucou:}'| http :9000/foobarHTTP/1.1 400 Bad RequestContent-Length: 173Content-Type: text/plain; charset=utf-8Date: Fri, 29 Apr 2016 08:41:35 GMT{ obj.account : [ { msg : [ error.path.missing ], args : [ ] } ], obj.company.id : [ { msg : [ error.path.missing ], args : [ ] } ]}This is still not looking very nice since this is not the idiomatic way to extract information from a JsResult. The proper way is to fold over the JsResult. The fold method signature on a JsResult is fold[X](errors: (Seq[(JsPath, Seq[ValidationError])]) => X, valid: (A) => X): X). In our case we want X to be a Future[JsResult], and can write it like this :def create = Action.async(parse.json) { implicit request => val createCommandResult:JsResult[(String,Account)] = request.body.validate(CreateDTOReads) // Check that we have all of the fields we need createCommandResult.fold( errors => Future.successful(BadRequest(Json.prettyPrint(JsError.toJson(errors)))), createCommand => { val (companyId,account) = createCommand (for { _ <- primaryDAO.insert(account, companyId) account <- primaryDAO.get(account.id) } yield account).map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } } )}Single Responsibility PrincipleNow we are getting there but the valid case is not looking so good. This is because the create action handles too many responsibilities. 
At the REST endpoint level you should only handle HTTP protocol concerns :content negotiation deserialization of the payload (can include some validation)serialization of the responsesLet's extract the business logic, however simple, to an AccountService class : @Singletonclass AccountService @Inject() (primaryDAO: AccountDAO){ def createAccount(companyId:String,account:Account)(implicit ec: ExecutionContext) : Future[Option[Account]] = for { _ <- primaryDAO.insert(account, companyId) account <- primaryDAO.get(account.id) } yield account}Now our endpoint only handles the HTTP translation logic :def create = Action.async(parse.json) { implicit request => val createCommandResult:JsResult[(String,Account)] = request.body.validate(CreateDTOReads) // Check that we have all of the fields we need createCommandResult.fold( errors => Future.successful(BadRequest(Json.prettyPrint(JsError.toJson(errors)))), createCommand => { val (companyId,account) = createCommand val createdAccountF: Future[Option[Account]] = accountService.createAccount(companyId, account) createdAccountF.map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } } )}The error handling code: createdAccountF.map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account)}.recover { case e => BadRequest(e.getMessage)}Is a good candidate for abstraction. If you wanted to always return Json for instance you could have the following : object JsonResultMapper extends Results { import play.api.libs.json.Writes def jsonOk[A](subject: A)(implicit writer: Writes[A]) = Ok(Json.toJson(subject)) def jsonNotfound(msg: String) = NotFound(Json.obj(reason -> msg)) def exception2Location(exception: Exception): String = Option(exception.getStackTrace) .flatMap(_.headOption) .map(_.toString) .getOrElse(unknown) def jsonInternalServerError(msg: String, cause: Exception) = { val jsonMsg = Json.obj( reason -> msg, location -> exception2Location(cause) ) InternalServerError(jsonMsg) } def toJsonResult[A](subjectOptionFuture: Future[Option[A]],noneMsg: => String = NotFound) (implicit writer: Writes[A]): Future[SimpleResult] = { subjectOptionFuture.map { case Some(subject) => jsonOk(subject) case None => jsonNotfound(noneMsg) }.recover { case e: Exception => jsonInternalServerError(e.getMessage, e) } }}and then write your action as : def create = Action.async(parse.json) { implicit request => val createCommandResult:JsResult[(String,Account)] = request.body.validate(CreateDTOReads) // Check that we have all of the fields we need createCommandResult.fold( errors => Future.successful(BadRequest(Json.prettyPrint(JsError.toJson(errors)))), createCommand => { val (companyId,account) = createCommand val createdAccountF: Future[Option[Account]] = accountService.createAccount(companyId, account) JsonResultMapper.toJsonResult(createdAccountF, sUnable to create account) } )}You could stop there and I will for the purpose of this review, but there are still things which can probably be improved. I'll give you a couple leads to further improve:PrimaryDao protocolThe Future[Option[Account]] may not be a good signature for primaryDAO.get(account.id) or for AccountService#Create. As you can see, it has 2 errors paths and 1 happy path. However when serializing it, the happy path and first error path are processed together in the same block, then the second error path (exception raised) in a different block. 
Some would argue that one error path is a business error while the other is a technical error which makes it ok. Whether we hide this in the ResultMapper or not I personally don't like it. To get rid of it, and depending on your team standards, you can go for:A BusinessException such as AccountNotFound which is thrown instead of returning an OptionA custom composition of Future and Option (see http://www.edofic.com/posts/2014-03-07-practical-future-option.html and http://loicdescotte.github.io/posts/scala-compose-option-future/ )A ScalaZ monad transformer which does the same as the previous option in a generic wayBy the same logic, Future[Any] is not a very good signature for:trait DAOInsert[A <: BaseModel] extends DAOGet[A] { def insert(model: A): Future[Any]}I strongly suggest changing that to trait DAOInsert[A <: BaseModel] extends DAOGet[A] { def insert(model: A): Future[A]}and having the insert return the saved instance if it is possible. This would allow you to distinguish between: there was an error while inserting vs I couldn't read the instance, which are not necessarily the same.Using specific types customerId is a string which carries very little information; creating and using a CustomerId type would probably prove very useful if it is used throughout your application. Disclaimer I don't know enough of the business to properly name things in my refactoring. Naming is probably the single most important thing when writing code and it is known to be one of the hardest (with invalidating caches). Final code package controllers.companyimport scala.concurrent.{ExecutionContext, Future}import com.google.inject.{Inject, Singleton}import play.api.libs.json.Jsonimport play.api.mvc.{Action, Controller}trait BaseModel { def pk: String def id: String = pk}case class Account(name: String) extends BaseModel { override val pk = name}trait DAOGet[A <: BaseModel] { def get(pk: String): Future[Option[A]] def all: Future[Seq[A]] def all(page: Int, perPage: Int): Future[Seq[A]]}trait DAOInsert[A <: BaseModel] extends DAOGet[A] { def insert(model: A): Future[Any]}trait DAOUpdate[A <: BaseModel] extends DAOGet[A] { def update(model: A): Future[Int]}trait DAODelete[A <: BaseModel] { def delete(pk: String): Future[Int]}trait CRUDDAO[A <: BaseModel] extends DAOGet[A] with DAOInsert[A] with DAOUpdate[A] class AccountDAO extends CRUDDAO[Account] { override def insert(model: Account): Future[Any] = Future.successful(model) def insert(model: Account, companyId: String): Future[Any] = model.id match { case account1 | account2 => Future.successful(model) case _ => Future.failed(new RuntimeException(Unable to save account)) } override def update(model: Account): Future[Int] = ??? override def get(pk: String): Future[Option[Account]] = pk match { case account1 => Future.successful(Some(Account(pk))) case account2 => Future.successful(None) case _ => Future.failed(new RuntimeException(no such account)) } override def all: Future[Seq[Account]] = ???
override def all(page: Int, perPage: Int): Future[Seq[Account]] = ???}@Singletonclass AccountService @Inject() (primaryDAO: AccountDAO){ def createAccount(companyId:String,account:Account)(implicit ec: ExecutionContext) : Future[Option[Account]] = for { _ <- primaryDAO.insert(account, companyId) account <- primaryDAO.get(account.id) } yield account}@Singletonclass Companies @Inject()(accountService: AccountService)(implicit ec: ExecutionContext) extends Controller { implicit val AccountReads = Json.format[Account] import play.api.libs.functional.syntax._ import play.api.libs.json._ implicit val CreateDTOReads = ( (__ \ company \ id).read[String] and (__ \ account).read[Account] ).tupled def create = Action.async(parse.json) { implicit request => val createCommandResult:JsResult[(String,Account)] = request.body.validate(CreateDTOReads) // Check that we have all of the fields we need createCommandResult.fold( errors => Future.successful(BadRequest(Json.prettyPrint(JsError.toJson(errors)))), createCommand => { val (companyId,account) = createCommand val createdAccountF: Future[Option[Account]] = accountService.createAccount(companyId, account) createdAccountF.map { case Some(a) => Created(a.id) case None => InternalServerError(Unable to create Account) }.recover { case e => BadRequest(e.getMessage) } } ) }}object Companies extends Companies(new AccountService(new AccountDAO()))(play.api.libs.concurrent.Execution.defaultContext) |
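As a footnote to the Future-and-Option composition suggested under PrimaryDao protocol, here is a minimal sketch of such a wrapper (the FutureO name and shape follow the linked edofic post; it is illustrative and not part of the reviewed code, and it assumes the scala.concurrent imports already present in the Final code):

case class FutureO[+A](future: Future[Option[A]]) {
  def flatMap[B](f: A => FutureO[B])(implicit ec: ExecutionContext): FutureO[B] =
    FutureO(future.flatMap {
      case Some(a) => f(a).future              // continue the chain
      case None    => Future.successful(None)  // short-circuit on the first None
    })
  def map[B](f: A => B)(implicit ec: ExecutionContext): FutureO[B] =
    FutureO(future.map(_.map(f)))
}

With a wrapper like this, methods returning Future[Option[Account]] can be chained in a single for-comprehension, and a None anywhere in the chain skips the remaining steps, which keeps the happy path and the not-found path in one place.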
_softwareengineering.56690 | Aside from using IDEs such as MonoDevelop, what combination of tools do you use in Mono development to give you the same productivity boost that one would normally gain by using R# in VS2010?EDIT: I'm trying to kick the R# habit and switch to Mono development in Linux, but it's hard to kick the habit of using VS2010 + R#, and I need alternative tools to break that habit. | What development tools would you recommend for developing .NET apps in Mono that would give me the same productivity boost as Resharper? | mono;resharper;linux development | Currently, there is none. Here is the same question on Stack Overflow. There are several bounties offering a reward for an R# port to MonoDevelop, but nothing has materialized yet, unfortunately.
_datascience.10204 | When implementing mini-batch gradient descent for neural networks, is it important to take random elements in each mini-batch? Or is it enough to shuffle the elements at the beginning of the training once?(I'm also interested in sources which definitely say what they do.) | Should I take random elements for mini-batch gradient descent? | machine learning;neural network | It should be enough to shuffle the elements at the beginning of the training and then to read them sequentially. This really achieves the same objective as taking random elements every time, which is to break any sort of predefined structure that may exist in your original dataset (e.g. all positives in the beginning, sequential images, etc).While it would work to fetch random elements every time, this operation is typically not optimal performance-wise. Datasets are usually large and are not saved in your memory with fast random access, but rather in your slow HDD. This means sequential reads are pretty much the only option you have for good performance.Caffe for example uses LevelDB, which does not support efficient random seeking. See https://github.com/BVLC/caffe/issues/1087, which confirms that the dataset is trained with images always in the same order. |
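A minimal NumPy sketch of the shuffle once, then read sequentially scheme described above (all names and sizes are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # stand-in dataset
y = rng.integers(0, 2, size=1000)
batch_size, n_epochs = 32, 5

order = rng.permutation(len(X))         # shuffle once, up front
X, y = X[order], y[order]               # or write the shuffled copy back to disk

for epoch in range(n_epochs):
    for start in range(0, len(X), batch_size):   # purely sequential reads from here on
        xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        # ...one gradient step on (xb, yb)...

If you do want a fresh order every epoch and the data fits in memory, permuting an index array per epoch is cheap; the sequential-read argument matters mostly when batches are streamed from disk.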
_unix.320389 | One of our users mistakenly copied some system directories (e.g., /lib) to her home directory, using command cp -r /lib ., and then she cannot delete these directories. Command rm -rf ./lib returns a list of errors saying Permission denied (one for each file, I think). I am sure both the copy and delete commands use same username, and no permission changes of any kind happened in between.I can probably delete these directories using root privilege, but I would like to know why is this happening. Is this a bug of the Centos 6.8 we use? Or why a user cannot delete the directories she created in her home directory? | Unable to delete directories copied from elsewhere centos 6 | files;permissions;cp | cp -r copies permission modes by default. So if /lib was not owner-writable, ./lib will not be writable, either. Trying to remove the contents of a non-writable directory gets permission denied, even if you're the owner of it. You can fix the permissions with chmod -R u+w ./lib.Here's a demo:barmar@dev:~/test.dir$ mkdir subdirbarmar@dev:~/test.dir$ touch subdir/foobarmar@dev:~/test.dir$ chmod a-w subdirbarmar@dev:~/test.dir$ cp -r subdir newsubdirbarmar@dev:~/test.dir$ rm -rf newsubdirrm: cannot remove `newsubdir/foo': Permission deniedbarmar@dev:~/test.dir$ chmod a+w newsubdirbarmar@dev:~/test.dir$ rm -rf newsubdirbarmar@dev:~/test.dir$ |
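If copying system directories like this is something you need to do on purpose, GNU cp can sidestep the problem at copy time by not preserving the source permission bits (note that --no-preserve is a GNU coreutils option, not POSIX):

cp -r --no-preserve=mode /lib .

The copied files then get default permissions filtered through your umask, so the resulting tree is owner-writable and can be removed with rm -rf as usual.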
_softwareengineering.87728 | When working on a product that needs to be done soon and work well, when is it OK to sacrifice maintainability and neatness of design in order to get the thing done and out the door quickly? And to what degree is it OK, especially when the techniques used to make it neat are new to me? | When is it OK to sacrifice the neatness of the design to get a project done? | design | Remember that you are employed in order to support a business. In some way, your software is affecting the bottom line of the business. You need to strike a balance between a technically perfect solution on the one hand, and your product's time to market and the benefit the business will derive from it on the other.In my experience, developers often get hung up on worrying about technical perfection. But there are very valid reasons to sacrifice quality attributes such as maintainability or performance in order to get a product out the door quicker. It depends on the business and the product. But always striving for a perfect design is too simplistic an approach.At the end of the day, the only thing that matters is value to the business. Figure out how to maximize it in the short and long term and aim for that target.
_webmaster.33275 | So I have a website/blog which I have hosted with a service which is offering me 2000 MB of monthly bandwidth traffic.I have recently started getting a lot of traffic, or I suppose, enough to push me quite close to the cap. I am already at 79% for August and it is practically all due to the HTTP traffic component!I noticed that the bandwidth traffic really spikes during the days of a new post. Also, I do have videos from Vimeo embedded in my posts, so I am not sure if loading those on my page counts towards the bandwidth.So what can I possibly do? And is this high usage actually just from people visiting and loading my blog post in their browser? Do you have any suggestions which would significantly reduce the bandwidth usage?I can't afford to raise the bandwidth cap either. For reference, I have an installation of Wordpress running on my website.Thank you. | cPanel Monthly Bandwidth Traffic - HTTP Traffic Extremely High | traffic;http;bandwidth;high traffic;usage data | Have you checked your access log / webstats? Is this reporting a higher number of visitors to match the bandwidth usage? Are you getting an increased amount of bot traffic? Rogue bots can perhaps be blocked in .htaccess if this is an issue.Are your images optimised?Have you enabled gzip compression for your pages? This can drastically cut the size (and load time) of your HTML pages, CSS and JavaScript.Are you making the most of browser caching? Setting expiration date headers in the future for static content/resources.I do have videos from Vimeo embedded in my posts, so I am not sure if loading those on my page counts towards the bandwidth.This itself should not count directly against your bandwidth.
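For the gzip and expiration-header points, a minimal .htaccess sketch (this assumes the host has mod_deflate and mod_expires enabled, which is worth confirming with your provider; the types and lifetimes are only examples):

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
</IfModule>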
_scicomp.7895 | The discrete analogue of the $L_p$ norm for the mesh function $V$ is $$\|V\|_{l^p(\bar{\Omega}^N)}=\left(\sum_{i=0}^N\left\vert V_i\right\vert^p\bar{h}_i\right)^{1/p}$$ where $\bar{\Omega}^N$ is an arbitrary mesh, $\bar{h}_i=(h_{i+1}+h_i)/2$ and $h_i=x_i-x_{i-1}$. The error $e$ of a certain approximation is such that $e_i=-e^{-x_i/\epsilon}$ for $1\leq i\leq N-1$ and 0 for $i=0$ or $i=N$. For any $p$, $1\leq p\leq \infty$, the $l^p(\bar{\Omega}^N)$ norm of the error on a general non-uniform mesh is $$\|e\|_{l^p(\bar{\Omega}^N)}=\left(\sum_{i=1}^{N-1}e^{-px_i/\epsilon}\bar{h}_i\right)^{1/p}$$My first question is why this expression is $O(N^{-1/p})$.On a uniform mesh, why is $$\left\vert e_i\right\vert^p \leq N\sum_{j=1}^{N-1}\left\vert e_j\right\vert^p h$$ How does this imply $$\|e\|_{\bar{\Omega}^N}\leq N^{1/p}\|e\|_{l^p(\bar{\Omega}^N)}$$ | discrete versions of Lp norm | singular perturbation | null
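For the uniform-mesh part, here is a sketch of the standard argument, assuming the domain is $[0,1]$ so that $h=1/N$ and $\bar{h}_i=h$ at interior points. For any fixed $i$, the single term $\left\vert e_i\right\vert^p h$ is bounded by the whole sum, $$\left\vert e_i\right\vert^p h \leq \sum_{j=1}^{N-1}\left\vert e_j\right\vert^p h,$$ and dividing by $h=1/N$ gives the stated bound $\left\vert e_i\right\vert^p \leq N\sum_{j=1}^{N-1}\left\vert e_j\right\vert^p h$. Taking the maximum over $i$ and then $p$-th roots yields $$\|e\|_{\bar{\Omega}^N}\leq N^{1/p}\|e\|_{l^p(\bar{\Omega}^N)},$$ where $\|\cdot\|_{\bar{\Omega}^N}$ denotes the discrete maximum norm.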
_opensource.2782 | I have a question about the NOTICE and CHANGELOG files in Apache 2.0 license.Here is the situation: I based my work on an Apache 2.0 licensed project. I did some minor changes (compared to the original work). It seems that I have two problems:According to the Apache 2.0 license, I have to include NOTICE file if it was in the original work. The problem is, that the notice file in the original work was probably never updated and is possibly incorrect as it does not include notices about library dependencies. Do I have to fix this myself or is it enough to just add the dependencies that were introduced by my changes?The license also requires that I state changes. The original work did not have any CHANGELOG file and changes were tracked in git (which I guess is not sufficient for the needs of the license). How and in what detail do I state the changes in this situation?(Bonus question) Because my work will most likely never be merged back to original project git repository I would like to be stated as one of the authors if possible. I suppose, that I add the mandatory license header to the files that I added to the project, but what about the changes in the files that I only modified? Should I add myself to the NOTICE file? I am not sure how would this look like so any example would be appreciated.Also, I will not distribute it in binary form. Only the source code.I understand, that this has already been discussed, but I am not sure what to do in a situation like this.EDIT: As suggested, this is the project: https://github.com/brianfrankcooper/YCSBEDIT: Please see comments under Thomas' answer if you need more clarification :) | Apache 2.0 license - NOTICE, CHANGELOG | apache 2.0;license notice | Let's break this down.According to the Apache 2.0 license, I have to include NOTICE file if it was in the original work. The problem is, that the notice file in the original work was probably never updated and is possibly incorrect as it does not include notices about library dependencies. Do I have to fix this myself or is it enough to just add the dependencies that were introduced by my changes?You do not need to continue to carry a NOTICE file. Section 4d allows you to include the attribution notices in a NOTICE file, within the source or documentation if it is provided alongside the derivative work, or within a display generated by the derivative work. Specifically, the last sentence of 4d allows you to add your own attribution text alongside (by modifying the NOTICE file) or as an addendum to the NOTICE text (putting both in the same file, within another document, or in a display generated by the derivative work).Since you're stating that the original NOTICE file is wrong, I would consider doing an addendum by either adding a second NOTICE file that is fully correct, by using one file and calling out which was the NOTICE from the original project and which content is related to your derivative work, or in a display and calling out which was the original NOTICE text and which was your NOTICE text. Regardless - make sure that your NOTICE text is fully correct and don't modify the original project's NOTICE text at all.The license also requires that I state changes. The original work did not have any CHANGELOG file and changes were tracked in git (which I guess is not sufficient for the needs of the license). How and in what detail do I state the changes in this situation?A CHANGELOG is not required by Apache. 
The only mention of requirements related to changes is in 4b, which states that if you modify a file, that file must carry a prominent notice that the file has been changed. Typically, in a project that is using the Apache License (at least those released by the Apache Software Foundation), the top of each file will contain the boilerplate header. The method used to state that a file was changed is to add a new copyright line under the original one. If you are applying a new license to the changes, you would indicate this in the boilerplate header section as well.Assuming that the initial commit to your git repository was the original work and every revision was your contribution to a derivative work, I think that this is sufficient. You are meeting the requirement of 4b by stating that a file has changed. There is no requirement to further identify changes beyond a file level, but your version control repository would allow for that, if necessary.Because my work will most likely never be merged back to original project git repository I would like to be stated as one of the authors if possible. I suppose, that I add the mandatory license header to the files that I added to the project, but what about the changes in the files that I only modified? Should I add myself to the NOTICE file? I am not sure how would this look like so any example would be appreciated.When you modify the boilerplate header, you would add your name. If you are keeping the Apache License for your work, you don't need to do anything else. If you are going to apply a different license, then you do need to identify the license for your contributions to mark off what is Apache License and what is under the other license.The NOTICE file is only used for attribution to other people. For example, if you are including other projects in yours, the NOTICE file or NOTICE text somewhere else, clearly identifies these other projects and the license that they are under. Modifying the NOTICE file to point back to the original project that yours is a derivative work of would also be appropriate. |
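As an illustration of the boilerplate-header approach for a modified file (the names and years are placeholders, and the header text is abbreviated here; use the full text from the appendix of the license):

/*
 * Copyright 2014 Original Author
 * Copyright 2016 Your Name (modifications)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 * ...
 */

The added copyright line both marks the file as changed (satisfying 4b) and records you as an author of the modifications.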
_unix.322341 | I recently used a utility on Linux that reported the number of shutdowns / restarts that a hard disk had gone through (I believe the terminology used was power cycles) but I can't seem to recall which one.Is there a way to obtain information on a given disk's age and the number of shutdowns / restarts it has gone through? | how to see how many power-down / power-up cycles a drive has gone through? | hard disk | Drives give this information via SMART. You can retrieve it using smartctl (in smartmontools):smartctl -a /dev/sdaThis will output quite a lot of information, including: 9 Power_On_Hours 0x0032 100 100 001 Old_age Always - 36065 12 Power_Cycle_Count 0x0032 100 100 001 Old_age Always - 175which shows that this particular drive has been powered on for a total of 36,065 hours, and power-cycled 175 times.
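If you only want those two attributes, filtering the attribute table works too (adjust the device path to your drive; -A prints just the vendor attribute section):

smartctl -A /dev/sda | grep -E 'Power_On_Hours|Power_Cycle_Count'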
_softwareengineering.122022 | I am just starting out learning Visual Basic 2010. I have books and videos. The books all seem to be written for people who have some programming experience, even the books that say they are for beginners. The videos were great until they started talking about variables. I got the basics of them but they started into complicated variables and I don't see the need for them right away. Where can I go to see code for fairly intricate applications written out, with an overlay of definitions of which part of the code is a method as opposed to a class and so on? Also, I am working at a company that does not use SQL Server. So I need to use Access 2007 for all of my tables. Is there much of a difference to the coding? | Where can I get a definition of how the code is laid out in VB.NET 2010? | vb.net | null
_unix.285319 | In bash, how can I issue a command to a running process I just started?For example:# Start Bluez gatttool, then connect to the bluetooth device gatttool -b $MAC -I connect # send 'connect' to the gatttool process?Currently my shell script doesn't get to the connect line because the gatttool process is running. | Send command to already running process in shell script | linux;bash;shell;expect | null
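Since the question is tagged expect, the usual approach is to let expect own the interactive gatttool session and feed it commands. A minimal sketch follows; the prompt and success strings are assumptions and should be adjusted to whatever your gatttool version actually prints:

#!/usr/bin/expect -f
set mac [lindex $argv 0]
spawn gatttool -b $mac -I
expect "LE]>"                     ;# wait for the interactive prompt (assumed)
send "connect\r"                  ;# type 'connect' into gatttool
expect "Connection successful"    ;# assumed success message
interact                          ;# hand control of the session back to you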
_webapps.91014 | I am sure that you all know that Google offers the free backup of photos through their Google Photos App. I've since found out that the free backup, as Google advertises, with reduced photo quality only reduces quality for photos that are greater than 16MP - https://support.google.com/photos/answer/6220791?hl=enMy phone is my main source of backup and it shoots at 12MP so it should be full quality backup as far as I understand. The other thing is that Google Photos are now integrated into Google Drive. At the root of my Google Drive I have a folder called Google Photos, which organized all my backed up photos by year and month (this is awesome!). My question comes in here. Is there a way that I can tell the size on disk of my Google Photos folder? Basically I am going to start a sync to a computer of my Google Drive but I don't want to sync all of those Photos, it'll take forever and potentially a huge amount of disk space and bandwidth.To make matters worse I am using a third party application, called grive, that will do the sync to my Ubuntu laptop. I am not sure if the Windows and Mac official Google Drive sync apps have more options but this one doesn't have the option to exclude a folder. | Google Photos Backup Size | google drive;google photos;synchronization | null |
_unix.16460 | Let me present my problem to explain better. I'm using Cygwin; the installation is based on a setup.ini with the following format: @ package-name sdesc: short description, on one line ldesc: long description of arbitrary length, commonly multiple lines category: categories in which the package belongs, one line requires: packages (libs etc) required by this package, one line. Then comes the following package, & so forth. What I need is, given a package name, to output all packages required by this package (without the 'requires' prefix, if possible). I'm sure it's basic grep, but I'm new to this. Thanks. | find first line beginning with following | grep;search;cygwin | I am not sure how you would do it with grep, but for such tasks I prefer awk. It gives more control over what I want to do. Though I am not an expert in awk and am still learning, here is how I would achieve this: PKGNAM="package-name"; awk "/$PKGNAM\$/,/requires:/ { if ( \$0 ~ /requires:/ ) { sub( /^requires:.?/, \"\" ); print } }" UPDATE: updated the example awk command; it now uses the PKGNAM variable to match the package name. HTH.
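A variant that sidesteps the shell-quoting pitfalls by passing the name in with -v, assuming the @ package-name line format shown in the question:

awk -v pkg="$PKGNAM" '$0 == ("@ " pkg) {found=1} found && sub(/^requires: */, "") {print; exit}' setup.ini

This prints the requires: line of the first matching package with the prefix stripped, then exits so later packages are not scanned.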
_softwareengineering.80915 | Suppose I have two lists of N 3 by 3 vectors of integers.I need to find a quick way (say of running time at most N^(1+epsilon)) to find the vectors of the first list that have the same 1st coordinate as a vector of the second list.Of course, I could do the following naive comparison:for u in list_1 do for v in list_2 if u[1] equals v[1] then print u; print v; end if; end for; end for;This, however, would require N^2 loops. I feel that sorting the two lists according to their first coordinate and then looking for collisions is perhaps a fast way. Sorting (with merge sort, etc.) would take N log N time, but I can't really see how to code the search for collisions between the sorted lists.Any help would be appreciated. | Fast algorithm for finding common elements of two sorted lists | sorting;pseudocode | null
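A sketch of the sort-then-merge idea in Python: sort both lists by first coordinate (N log N), then walk two pointers in a single linear pass, which stays well under N^2:

def report_collisions(list_1, list_2):
    a = sorted(list_1, key=lambda v: v[0])
    b = sorted(list_2, key=lambda v: v[0])
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i][0] < b[j][0]:
            i += 1
        elif a[i][0] > b[j][0]:
            j += 1
        else:
            print(a[i], b[j])   # first coordinates collide
            i += 1              # keep j, so repeated keys in the first list are all reported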
_softwareengineering.256566 | I'm making a class similar to the following:public class KeyValue{ public readonly string key; public readonly object value;}Value could be of any object type as a result of this design.Alternatively, I could just use dynamic for value and it'll make my life easier, because it would mean no type-casting, and also because, as far as I understand, I could use value types without needing to box/unbox.Is there any reason not to use dynamic and to use object instead? Because I can't think of any.Note: I realize generics are much more suited for this, but it doesn't work for my needs. The above is really a simplification just for the purposes of this question. | When to not use dynamic in C# | c#;object oriented;polymorphism;dynamic typing;boxing | If you can't think of a good reason TO use dynamic, then you are foregoing potential compile time checks for little to nothing in return.I prefer generics over object, and prefer object over dynamic, unless I need to interact with a dynamic language ScriptEngine or a dynamic container like a web form where it's reasonable to handle the potential runtime exceptions when I access a missing field. Generics will perform best; and with object, you are at least signifying your intentions are to store any sort of object in the same container. Dynamic is not an actual type, and it isn't object, so it should never be used to mean any sort of object; it should actually be used when you know what the object's contract is (method/property signatures); it is a runtime dispatch vehicle for keeping the same, convenient syntax for a type-unsafe (or at least dynamically bound) activity. It's like saying:OK, I know the activity I'm doing is subject to runtime dispatch and runtime errors, so give me the syntactical sugar anywayWhen I see dynamic, I usually assume there is an imminent:method call dispatch to a dynamically bound type (like an IronPython variable) access to dynamic form data with property syntax in an MVC controllerThough another legit use for it is to use the dynamic dispatch capability to implement the Visitor Pattern, although traditional virtual methods are likely faster.class FooVisitor { void Accept(Expression node) { ... } void Accept(Statement node) { ... }}foreach(dynamic node in ASTNodes) { visitor.Accept(node); // will be dispatched at runtime based on type of each node}Don't use it when:Performance is numero unoStatic type checking is desirable for a more robust runtimeThe types can be derived at compile timeA generic type will doIf used unnecessarily, dynamic can actually reduce the robustness of your code by changing C# to a dynamic scripting language.
_unix.334624 | On my system, when running the following snippet of C++ code compiled with either clang or gcc#include <cstdio>#include <SDL2/SDL.h>int main(int argc, char** args){ printf(Hi); SDL_Init(SDL_INIT_VIDEO); SDL_CreateWindow(, 0, 0, 800, 600, 0); printf(Bye);}then I get the following output at runtimeprocess 9360: arguments to dbus_connection_open_private() were incorrect, assertion address != NULL failed in file dbus-connection.c line 2664.This is normally a bug in some application using the D-Bus library.D-Bus not built with -rdynamic so unable to print a backtraceHiI have had this same problem when attempting to compile and run SDL2 code which has worked on another machine, although running the binary works if it is compiled on that machine.This leads me to believe it is a problem with this machine.I am running Antergos Linux and should be on the latest versions of SDL2 and D-Bus (I run updates regularly through pacman). I would appreciate any help and would be happy to answer any further questions, thank you. | D-Bus related runtime crash when trying to open SDL2 window | arch linux;c++;crash;d bus;antergos | null |
_unix.235329 | I ran nmap -Pn on all possible addresses for the local network and it took 50 minutes. If I limit the range to 100-200, for example, the same scan takes 3-4 minutes.Why is the full nmap scan taking so long and how can I make it quicker? | nmap scan takes 50 minutes | nmap | null |
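A large part of the slowdown is -Pn itself: it disables host discovery, so nmap runs a full port scan against every address in the range, including the many with no host behind them. A sketch of a faster two-step approach (the subnet is illustrative; -T4 is aggressive timing and -F restricts the scan to the 100 most common ports):

# 1) ping sweep only, to find live hosts
nmap -sn 192.168.1.0/24 -oG - | awk '/Up$/{print $2}' > live.txt
# 2) port-scan just the live hosts
nmap -Pn -T4 -F -iL live.txt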
_unix.371475 | I have just registered here.I am working on a script which puts data in an array into separate variables.Example: for((i=0; i < Counter; i++)); do while read -r Parmfilesjobid; do IFS=$'\n' read -d '' -r -a job$i < ${Parmfilesjobid[$i]} done <<< ${Parmfilesjobid} done The counter is a separate variable because the number of times the for loop has to run can differ.As $i is incremented every time, I am trying to find out how I can turn the job$i into job0, job1, job2.This is because every job$i contains separate values.When I use:echo ${job1[@]}echo ${job2[@]}echo ${job3[@]}I can get the correct output per job$i (job0, job1, job2).But I want bash to convert the job$i into job0, job1, job2 so I can use them in another loop as separate variables. | Increment the last part of a variable name | bash;shell;scripting | null
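One way to get job0, job1, ... without eval is a bash nameref (declare -n, bash 4.3 or newer). This is a sketch that assumes Parmfilesjobid is an array of file names, as the ${Parmfilesjobid[$i]} subscript in your snippet suggests:

for ((i = 0; i < Counter; i++)); do
    declare -n job="job$i"                        # 'job' now stands for job0, job1, ...
    IFS=$'\n' read -d '' -r -a job < "${Parmfilesjobid[$i]}"
    unset -n job                                  # drop the alias before re-pointing it
done
echo "${job1[@]}"                                 # each jobN is now its own array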
_cs.60157 | I know the maths behind it; I know that if I do the algebra I can get the result of the 3 cases. I also have an intuition of the 3 cases: Quora.However, I just cannot remember these simple 3 cases whenever I need to apply them in real-life problems.I don't know if it is a shame that a CS graduate has to Google this theorem, which I learnt in the first year at university, just because I cannot memorize it. (Or is there actually no need to memorize it? Please tell me; I will close the question at once.)So assuming this basic theorem is important and I have to memorize it just like we memorize F = ma in physics, is there any way to aid memorizing these 3 cases in the long term? A way may mean visualization, better intuition with clear reasoning behind it, or even just die-hard memorization; I just want to know how other CS people memorize this theorem. | How to memorize Master Theorem? | education;didactics | I have a confession for you. I often can't remember the Master theorem, either. Don't worry about it. It's not a big deal.Here's how I deal with it. In many situations, you can look it up each time you need it; and if so, no big deal.Occasionally, you might not be able to look it up. So, I taught myself how to derive the Master theorem. That might sound intimidating, but it's not as hard as it sounds. Personally, I find memorization hard, but if I can figure out how to re-derive the formula myself whenever I need it, I know I'm in good shape.So, my advice to you is: learn how to re-derive the Master theorem on your own, whenever you need it. Here's one way you could do that:First, learn the recursion tree method. Learn how to build the tree, how to count the number of leaves, how to count the amount of extra work at each level, and how to sum them (by summing a series, e.g., a geometric series).Next, open up a textbook and read a standard proof of the Master theorem. Work through each step and check that you understand what's happening.Now, close your textbook and put away all your resources. Put a blank piece of paper in front of you... and derive the Master theorem yourself. How do you do that? Well, you use the recursion tree method. Try working through it by yourself and try to solve the recurrence entirely on your own. If you get stuck, as a last resort you can open the textbook back up and see how to proceed from there... but then the next day, you should try this exercise again.If you understand the recursion tree method well, you should be able to get to the point where you can derive the Master theorem yourself, from scratch, using just a blank piece of paper and nothing more.
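For reference while practicing step 3, the recursion-tree sum that the whole theorem falls out of (standard form, with $\varepsilon>0$): for $T(n)=a\,T(n/b)+f(n)$, the tree has about $\log_b n$ levels, level $i$ holds $a^i$ subproblems of size $n/b^i$, and there are $a^{\log_b n}=n^{\log_b a}$ leaves, so $$T(n)=\sum_{i=0}^{\log_b n}a^i f\!\left(n/b^i\right).$$ The three cases are just the three ways this sum can behave: if $f(n)=O(n^{\log_b a-\varepsilon})$ the per-level work grows toward the leaves and the leaves dominate, giving $\Theta(n^{\log_b a})$; if $f(n)=\Theta(n^{\log_b a})$ every level costs about the same, giving $\Theta(n^{\log_b a}\log n)$; and if $f(n)=\Omega(n^{\log_b a+\varepsilon})$ (with the regularity condition $a\,f(n/b)\leq c\,f(n)$ for some $c<1$) the root dominates, giving $\Theta(f(n))$.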
_codereview.157169 | This code review request relates to this code review which covers the basic single REST call use case in this this REST client library.This code review covers the classes and unit tests for the multiple parallel REST call functionality provided by the library, which leverage the PHP cURL extension's curl_multi_* functionality.You might find it useful to read the library's README file before performing this review for further background and usage examples, which I have omitted here for brevity and to allow the question to focus on the code.RestMultiClient class<?phpnamespace MikeBrant\RestClientLib;/*** @desc Class which extendd RestClient to provide curl_multi capabilities, allowing for multiple REST calls to be made in parallel.*/class RestMultiClient extends RestClient{ /** * Store array of curl handles for multi_exec * * @var array */ private $curlHandles = array(); /** * Stores curl multi handle to which individual handles in curlHandles are added * * @var mixed */ private $curlMultiHandle = null; /** * Variable to store the maximum number of handles to be used for curl_multi_exec * * @var integer */ private $maxHandles = 10; /** * Variable to store an array of request headers sent in a multi_exec request * * @var array */ private $requestHeaders = array(); /** * Variable to store an array of request data sent for multi_exec POST/PUT requests. * * @var array */ private $requestDataArray = array(); /** * Variable to store CurlMultiHttpResponse object * * @var CurlMultiHttpResponse */ private $curlMultiHttpResponse = null; /** * Constructor method. Currently there is no instantiation logic. * * @return void */ public function __construct() {} /** * Method to perform multiple GET actions using curl_multi_exec. * * @param array $actions * @param integer $maxHandles * @return RestMultiClient * @throws \Exception * @throws \InvalidArgumentException * @throws \LengthException */ public function get($actions) { $this->validateActionArray($actions); // set up curl handles $this->curlMultiSetup(count($actions)); $this->setRequestUrls($actions); foreach($this->curlHandles as $curl) { curl_setopt($curl, CURLOPT_HTTPGET, true); // explicitly set the method to GET } $this->curlMultiExec(); return $this->curlMultiHttpResponse; } /** * Method to perform multiple POST actions using curl_multi_exec. * * @param array $actions * @param array $data * @return RestMultiClient * @throws \Exception * @throws \InvalidArgumentException * @throws \LengthException */ public function post($actions, $data) { $this->validateActionArray($actions); $this->validateDataArray($data); // verify that the number of data elements matches the number of action elements if (count($actions) !== count($data)) { throw new \LengthException('The number of actions requested does not match the number of data elements provided.'); } // set up curl handles $this->curlMultiSetup(count($actions)); $this->setRequestUrls($actions); $this->setRequestDataArray($data); foreach($this->curlHandles as $curl) { curl_setopt($curl, CURLOPT_POST, true); // explicitly set the method to POST } $this->curlMultiExec(); return $this->curlMultiHttpResponse; } /** * Method to perform multiple PUT actions using curl_multi_exec. 
* * @param array $actions * @param array $data * @return RestMultiClient * @throws \Exception * @throws \InvalidArgumentException * @throws \LengthException */ public function put($actions, $data) { $this->validateActionArray($actions); $this->validateDataArray($data); // verify that the number of data elements matches the number of action elements if (count($actions) !== count($data)) { throw new \LengthException('The number of actions requested does not match the number of data elements provided.'); } // set up curl handles $this->curlMultiSetup(count($actions)); $this->setRequestUrls($actions); $this->setRequestDataArray($data); foreach($this->curlHandles as $curl) { curl_setopt($curl, CURLOPT_CUSTOMREQUEST, 'PUT'); // explicitly set the method to PUT } $this->curlMultiExec(); return $this->curlMultiHttpResponse; } /** * Method to perform multiple DELETE actions using curl_multi_exec. * * @param array $actions * @param integer $maxHandles * @return RestMultiClient * @throws \Exception * @throws \InvalidArgumentException * @throws \LengthException */ public function delete($actions) { $this->validateActionArray($actions); // set up curl handles $this->curlMultiSetup(count($actions)); $this->setRequestUrls($actions); foreach($this->curlHandles as $curl) { curl_setopt($curl, CURLOPT_CUSTOMREQUEST, 'DELETE'); // explicitly set the method to DELETE } $this->curlMultiExec(); return $this->curlMultiHttpResponse; } /** * Method to perform multiple HEAD actions using curl_multi_exec. * * @param array $actions * @return RestMultiClient * @throws \Exception * @throws \InvalidArgumentException * @throws \LengthException */ public function head($actions) { $this->validateActionArray($actions); // set up curl handles $this->curlMultiSetup(count($actions)); $this->setRequestUrls($actions); foreach($this->curlHandles as $curl) { curl_setopt($curl, CURLOPT_CUSTOMREQUEST, 'HEAD'); curl_setopt($curl, CURLOPT_NOBODY, true); } $this->curlMultiExec(); return $this->curlMultiHttpResponse; } /** * Sets maximum number of handles that will be instantiated for curl_multi_exec calls * * @param integer $maxHandles * @return RestMultiClient * @throws \InvalidArgumentException */ public function setMaxHandles($maxHandles) { if (!is_integer($maxHandles) || $maxHandles <= 0) { throw new \InvalidArgumentException('A non-integer value was passed for max_handles parameter.'); } $this->maxHandles = $maxHandles; return $this->curlMultiHttpResponse; } /** * Getter for maxHandles setting * * @return integer */ public function getMaxHandles() { return $this->maxHandles; } /** * Method to set up a given number of curl handles for use with curl_multi_exec * * @param integer $handlesNeeded * @return void * @throws \Exception */ private function curlMultiSetup($handlesNeeded) { $multiCurl = curl_multi_init(); if($multiCurl === false) { throw new \Exception('multi_curl handle failed to initialize.'); } $this->curlMultiHandle = $multiCurl; for($i = 0; $i < $handlesNeeded; $i++) { $curl = $this->curlInit(); $this->curlHandles[$i] = $curl; curl_multi_add_handle($this->curlMultiHandle, $curl); } } /** * Method to reset the curlMultiHandle and all individual curlHandles related to it. 
* * @return void */ private function curlMultiTeardown() { foreach ($this->curlHandles as $curl) { curl_multi_remove_handle($this->curlMultiHandle, $curl); $this->curlClose($curl); } curl_multi_close($this->curlMultiHandle); $this->curlHandles = array(); $this->curlMultiHandle = null; } /** * Method to execute curl_multi call * * @return void * @throws \Exception */ private function curlMultiExec() { // start multi_exec execution do { $status = curl_multi_exec($this->curlMultiHandle, $active); } while ($status === CURLM_CALL_MULTI_PERFORM || $active); // see if there are any errors on the multi_exec call as a whole if($status !== CURLM_OK) { throw new \Exception('curl_multi_exec failed with status ' . $status . ''); } // process the results. Note there could be individual errors on specific calls $this->curlMultiHttpResponse = new CurlMultiHttpResponse(); foreach($this->curlHandles as $i => $curl) { try { $response = new CurlHttpResponse( curl_multi_getcontent($curl), curl_getinfo($curl) ); } catch (\InvalidArgumentException $e) { $this->curlMultiTeardown(); throw new \Exception( 'Unable to instantiate CurlHttpResponse. Message: ' . $e->getMessage() . '', $e->getCode(), $e ); } $this->curlMultiHttpResponse->addResponse($response); } $this->curlMultiTeardown(); } /** * Method to reset all properties specific to a particular request/response sequence. * * @return void */ protected function resetRequestResponseProperties() { $this->$curlMultiHttpResponse = null; $this->requestHeaders = array(); $this->requestDataArray = array(); } /** * Method to set the urls for multi_exec action * * @param array $actions * @return void */ private function setRequestUrls(array $actions) { for ($i = 0; $i < count($actions); $i++) { $url = $this->buildUrl($actions[$i]); $this->requestUrls[$i] = $url; curl_setopt($this->curlHandles[$i], CURLOPT_URL, $url); } } /** * Method to set array of data to be sent along with multi_exec POST/PUT requests * * @param array $data * @return void */ private function setRequestDataArray(array $data) { for ($i = 0; $i < count($data); $i++) { $data = $data[$i]; $this->requestDataArray[$i] = $data; curl_setopt($this->curlHandles[$i], CURLOPT_POSTFIELDS, $data); } } /** * Method to provide common validation for action array parameters * * @param array $actions * @return void * @throws \InvalidArgumentException * @throws \LengthException */ private function validateActionArray(array $actions) { if(empty($actions)) { throw new \InvalidArgumentException('An empty array was passed for actions parameter.'); } if(count($actions) > $this->maxHandles) { throw new \LengthException('Length of actions array exceeds maxHandles setting.'); } foreach($actions as $action) { $this->validateAction($action); } } /** * Method to provide common validation for data array parameters * * @param array $data * @return void * @throws \InvalidArgumentException * @throws \LengthException */ private function validateDataArray(array $data) { if(empty($data)) { throw new \InvalidArgumentException('An empty array was passed for data parameter'); } if(count($data) > $this->maxHandles) { throw new \LengthException('Length of data array exceeds maxHandles setting.'); } foreach($data as $item) { $this->validateData($item); } }}RestMultiClient unit tests<?phpnamespace MikeBrant\RestClientLib;use PHPUnit\Framework\TestCase;/** * Mock for curl_multi_init global function * * @return mixed */function curl_multi_init() { if (!is_null(RestMultiClientTest::$curlMultiInitResponse)) { return 
RestMultiClientTest::$curlMultiInitResponse;
    }
    return \curl_multi_init();
}

/**
 * Mock for curl_multi_exec global function
 *
 * @param resource curl_multi handle
 * @param integer flag indicating if there are still active handles.
 * @return integer
 */
function curl_multi_exec($multiCurl, &$active) {
    if (is_null(RestMultiClientTest::$curlMultiExecResponse)) {
        return \curl_multi_exec($multiCurl, $active);
    }
    $active = 0;
    return RestMultiClientTest::$curlMultiExecResponse;
}

/**
 * Mock for curl_multi_getcontent global function
 *
 * @param resource curl handle
 * @return string
 */
function curl_multi_getcontent($curl) {
    if (!is_null(RestMultiClientTest::$curlMultiGetcontentResponse)) {
        return RestMultiClientTest::$curlMultiGetcontentResponse;
    }
    return \curl_multi_getcontent($curl);
}

/**
 * This is a hacky workaround for avoiding double definition of this global method override
 * when running the full test suite on this library.
 */
if(!function_exists('\MikeBrant\RestClientLib\curl_getinfo')) {
    /**
     * Mock for curl_getinfo function
     *
     * @param resource curl handle
     * @return mixed
     */
    function curl_getinfo($curl) {
        $backtrace = debug_backtrace();
        $testClass = $backtrace[1]['class'] . 'Test';
        if (!is_null($testClass::$curlGetinfoResponse)) {
            return $testClass::$curlGetinfoResponse;
        }
        return \curl_getinfo($curl);
    }
}

class RestMultiClientTest extends TestCase
{
    public static $curlMultiInitResponse = null;
    public static $curlMultiExecResponse = null;
    public static $curlMultiGetcontentResponse = null;
    public static $curlGetinfoResponse = null;

    protected $client = null;
    protected $curlMultiExecFailedResponse = CURLM_INTERNAL_ERROR;
    protected $curlMultiExecCompleteResponse = CURLM_OK;
    protected $curlGetinfoMockResponse = array(
        'url' => 'http://google.com/',
        'content_type' => 'text/html; charset=UTF-8',
        'http_code' => 200,
        'header_size' => 321,
        'request_size' => 49,
        'filetime' => -1,
        'ssl_verify_result' => 0,
        'redirect_count' => 0,
        'total_time' => 1.123264,
        'namelookup_time' => 1.045272,
        'connect_time' => 1.070183,
        'pretransfer_time' => 1.071139,
        'size_upload' => 0,
        'size_download' => 219,
        'speed_download' => 194,
        'speed_upload' => 0,
        'download_content_length' => 219,
        'upload_content_length' => -1,
        'starttransfer_time' => 1.122377,
        'redirect_time' => 0,
        'redirect_url' => 'http://www.google.com/',
        'primary_ip' => '216.58.194.142',
        'certinfo' => array(),
        'primary_port' => 80,
        'local_ip' => '192.168.1.74',
        'local_port' => 59733,
        'request_header' => "GET / HTTP/1.1\nHost: google.com\nAccept: */*",
    );

    protected function setUp() {
        self::$curlMultiInitResponse = null;
        self::$curlMultiExecResponse = null;
        self::$curlMultiGetcontentResponse = null;
        self::$curlGetinfoResponse = null;
        $this->client = new RestMultiClient();
    }

    protected function tearDown() {
        $this->client = null;
    }

    /**
     * @expectedException \InvalidArgumentException
     * @covers MikeBrant\RestClientLib\RestMultiClient::validateActionArray
     */
    public function testValidateActionArrayThrowsExceptionOnEmptyArray() {
        $this->client->get(array());
    }

    /**
     * @expectedException \LengthException
     * @covers MikeBrant\RestClientLib\RestMultiClient::validateActionArray
     */
    public function testValidateActionArrayThrowsExceptionOnOversizedArray() {
        $maxHandles = $this->client->getMaxHandles();
        $this->client->get(
            array_fill(0, $maxHandles + 1, 'action')
        );
    }

    /**
     * @expectedException \InvalidArgumentException
     * @covers MikeBrant\RestClientLib\RestMultiClient::validateDataArray
     */
    public function testValidateDataArrayThrowsExceptionOnEmptyArray() {
        $this->client->get(array());
    }

    /**
     * @expectedException \LengthException
     * @covers MikeBrant\RestClientLib\RestMultiClient::validateDataArray
     */
    public function testValidateDataArrayThrowsExceptionOnOversizedArray() {
        $maxHandles = $this->client->getMaxHandles();
        $this->client->post(
            array_fill(0, $maxHandles, 'action'),
            array_fill(0, $maxHandles + 1, 'data')
        );
    }

    /**
     * @expectedException \Exception
     * @covers MikeBrant\RestClientLib\RestMultiClient::curlMultiSetup
     */
    public function testCurlMultiSetupThrowsExceptionOnCurlMultiInitFailure() {
        self::$curlMultiInitResponse = false;
        $this->client->get(
            array_fill(0, 2, 'action')
        );
    }

    /**
     * @expectedException \Exception
     * @covers MikeBrant\RestClientLib\RestMultiClient::curlMultiExec
     */
    public function testCurlMultiExecThrowsExceptionOnMultiCurlFailure() {
        self::$curlMultiExecResponse = $this->curlMultiExecFailedResponse;
        $this->client->get(
            array_fill(0, 2, 'action')
        );
    }

    /**
     * @expectedException \Exception
     * @covers MikeBrant\RestClientLib\RestMultiClient::curlMultiExec
     */
    public function testCurlMultiExecThrowsExceptionOnMalformedCurlHttpResponse() {
        self::$curlMultiExecResponse = $this->curlMultiExecCompleteResponse;
        self::$curlMultiGetcontentResponse = 'test';
        self::$curlGetinfoResponse = array();
        $this->client->get(
            array_fill(0, 2, 'action')
        );
    }

    /**
     * @covers MikeBrant\RestClientLib\RestMultiClient::get
     * @covers MikeBrant\RestClientLib\RestMultiClient::validateActionArray
     * @covers MikeBrant\RestClientLib\RestMultiClient::curlMultiSetup
     * @covers MikeBrant\RestClientLib\RestMultiClient::resetRequestResponseProperties
     * @covers MikeBrant\RestClientLib\RestMultiClient::setRequestUrls
     * @covers MikeBrant\RestClientLib\RestMultiClient::curlMultiExec
     * @covers MikeBrant\RestClientLib\RestMultiClient::curlMultiTeardown
     */
    public function testGet() {
        self::$curlMultiExecResponse = $this->curlMultiExecCompleteResponse;
        self::$curlMultiGetcontentResponse = 'test';
        self::$curlGetinfoResponse = $this->curlGetinfoMockResponse;
        $response = $this->client->get(
            array_fill(0, 2, 'action')
        );
        $this->assertInstanceOf(CurlMultiHttpResponse::class, $response);
        $this->assertAttributeEquals(null, 'curlMultiHandle', $this->client);
    }

    /**
     * @expectedException \LengthException
     * @covers MikeBrant\RestClientLib\RestMultiClient::post
     */
    public function testPostThrowsExceptionOnArraySizeMismatch() {
        $maxHandles = $this->client->getMaxHandles();
        $this->client->post(
            array_fill(0, $maxHandles, 'action'),
            array_fill(0, $maxHandles - 1, 'data')
        );
    }

    /**
     * @covers MikeBrant\RestClientLib\RestMultiClient::post
     * @covers MikeBrant\RestClientLib\RestMultiClient::validateData
     * @covers MikeBrant\RestClientLib\RestMultiClient::setRequestData
     */
    public function testPost() {
        self::$curlMultiExecResponse = $this->curlMultiExecCompleteResponse;
        self::$curlMultiGetcontentResponse = 'test';
        self::$curlGetinfoResponse = $this->curlGetinfoMockResponse;
        $response = $this->client->post(
            array_fill(0, 2, 'action'),
            array_fill(0, 2, 'data')
        );
        $this->assertInstanceOf(CurlMultiHttpResponse::class, $response);
        $this->assertAttributeEquals(null, 'curlMultiHandle', $this->client);
    }

    /**
     * @expectedException \LengthException
     * @covers MikeBrant\RestClientLib\RestMultiClient::put
     */
    public function testPutThrowsExceptionOnArraySizeMismatch() {
        $maxHandles = $this->client->getMaxHandles();
        $this->client->put(
            array_fill(0, $maxHandles, 'action'),
            array_fill(0, $maxHandles - 1, 'data')
        );
    }

    /**
     * @covers MikeBrant\RestClientLib\RestMultiClient::put
     */
    public function testPut() {
        self::$curlMultiExecResponse = $this->curlMultiExecCompleteResponse;
        self::$curlMultiGetcontentResponse = 'test';
        self::$curlGetinfoResponse = $this->curlGetinfoMockResponse;
        $response = $this->client->put(
            array_fill(0, 2, 'action'),
            array_fill(0, 2, 'data')
        );
        $this->assertInstanceOf(CurlMultiHttpResponse::class, $response);
        $this->assertAttributeEquals(null, 'curlMultiHandle', $this->client);
    }

    /**
     * @covers MikeBrant\RestClientLib\RestMultiClient::delete
     */
    public function testDelete() {
        self::$curlMultiExecResponse = $this->curlMultiExecCompleteResponse;
        self::$curlMultiGetcontentResponse = 'test';
        self::$curlGetinfoResponse = $this->curlGetinfoMockResponse;
        $response = $this->client->delete(
            array_fill(0, 2, 'action')
        );
        $this->assertInstanceOf(CurlMultiHttpResponse::class, $response);
        $this->assertAttributeEquals(null, 'curlMultiHandle', $this->client);
    }

    /**
     * @covers MikeBrant\RestClientLib\RestMultiClient::head
     */
    public function testHead() {
        self::$curlMultiExecResponse = $this->curlMultiExecCompleteResponse;
        self::$curlMultiGetcontentResponse = 'test';
        self::$curlGetinfoResponse = $this->curlGetinfoMockResponse;
        $response = $this->client->head(
            array_fill(0, 2, 'action')
        );
        $this->assertInstanceOf(CurlMultiHttpResponse::class, $response);
        $this->assertAttributeEquals(null, 'curlMultiHandle', $this->client);
    }
}

CurlMultiHttpResponse class

<?php

namespace MikeBrant\RestClientLib;

class CurlMultiHttpResponse
{
    /**
     * Variable to store individual CurlHttpResponse objects from curl_multi call
     *
     * @var array
     */
    protected $curlHttpResponses = array();

    /**
     * Constructor method. Currently there is no instantiation logic.
     */
    public function __construct() {}

    /**
     * Method to add CurlHttpResponse object to collection
     *
     * @param CurlHttpResponse $response
     * @return void
     */
    public function addResponse(CurlHttpResponse $response) {
        $this->curlHttpResponses[] = $response;
    }

    /**
     * Returns array of all CurlHttpResponse objects in collection.
     *
     * @return array
     */
    public function getCurlHttpResponses() {
        return $this->curlHttpResponses;
    }

    /**
     * Alias for getCurlHttpResponses
     *
     * @return array
     */
    public function getAll() {
        return $this->getCurlHttpResponses();
    }

    /**
     * Returns array of response bodies for each response in collection.
     *
     * @return array
     */
    public function getResponseBodies() {
        return array_map(
            function(CurlHttpResponse $value) {
                return $value->getBody();
            },
            $this->curlHttpResponses
        );
    }

    /**
     * Returns array of response codes for each response in collection.
     *
     * @return array
     */
    public function getHttpCodes() {
        return array_map(
            function(CurlHttpResponse $value) {
                return $value->getHttpCode();
            },
            $this->curlHttpResponses
        );
    }

    /**
     * Returns array of URLs used for each response in collection as returned via curl_getinfo.
     *
     * @return array
     */
    public function getRequestUrls() {
        return array_map(
            function(CurlHttpResponse $value) {
                return $value->getRequestUrl();
            },
            $this->curlHttpResponses
        );
    }

    /**
     * Returns array of request headers for each response in collection as returned via curl_getinfo.
     *
     * @return array
     */
    public function getRequestHeaders() {
        return array_map(
            function(CurlHttpResponse $value) {
                return $value->getRequestHeader();
            },
            $this->curlHttpResponses
        );
    }

    /**
     * Returns array of curl_getinfo arrays for each response in collection.
     * See documentation at http://php.net/manual/en/function.curl-getinfo.php for expected format for each array element.
     *
     * @return array
     */
    public function getCurlGetinfoArrays() {
        return array_map(
            function(CurlHttpResponse $value) {
                return $value->getCurlGetinfo();
            },
            $this->curlHttpResponses
        );
    }
}

CurlMultiHttpResponse unit tests

<?php

namespace MikeBrant\RestClientLib;

use PHPUnit\Framework\TestCase;

class CurlMultiHttpResponseTest extends TestCase
{
    protected $curlMultiHttpResponse = null;
    protected $curlExecMockResponse = 'Test Response';
    protected $curlGetinfoMockResponse = array(
        'url' => 'http://google.com/',
        'content_type' => 'text/html; charset=UTF-8',
        'http_code' => 200,
        'header_size' => 321,
        'request_size' => 49,
        'filetime' => -1,
        'ssl_verify_result' => 0,
        'redirect_count' => 0,
        'total_time' => 1.123264,
        'namelookup_time' => 1.045272,
        'connect_time' => 1.070183,
        'pretransfer_time' => 1.071139,
        'size_upload' => 0,
        'size_download' => 219,
        'speed_download' => 194,
        'speed_upload' => 0,
        'download_content_length' => 219,
        'upload_content_length' => -1,
        'starttransfer_time' => 1.122377,
        'redirect_time' => 0,
        'redirect_url' => 'http://www.google.com/',
        'primary_ip' => '216.58.194.142',
        'certinfo' => array(),
        'primary_port' => 80,
        'local_ip' => '192.168.1.74',
        'local_port' => 59733,
        'request_header' => "GET / HTTP/1.1\nHost: google.com\nAccept: */*",
    );

    protected function setUp() {
        $this->curlMultiHttpResponse = new CurlMultiHttpResponse();
    }

    public function curlHttpResponseProvider() {
        return array(
            array(
                new CurlHttpResponse($this->curlExecMockResponse, $this->curlGetinfoMockResponse)
            )
        );
    }

    /**
     * @dataProvider curlHttpResponseProvider
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::addResponse
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getCurlHttpResponses
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getAll
     */
    public function testAddResponse($curlHttpResponse) {
        $responseArray = array_fill(0, 5, $curlHttpResponse);
        for ($i = 0; $i < count($responseArray); $i++) {
            $this->curlMultiHttpResponse->addResponse($curlHttpResponse);
        }
        $this->assertEquals($responseArray, $this->curlMultiHttpResponse->getCurlHttpResponses());
        $this->assertEquals($responseArray, $this->curlMultiHttpResponse->getAll());
    }

    /**
     * @dataProvider curlHttpResponseProvider
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getResponseBodies
     */
    public function testGetRepsonseBodies($curlHttpResponse) {
        $responseArray = array_fill(0, 5, $curlHttpResponse);
        for ($i = 0; $i < count($responseArray); $i++) {
            $this->curlMultiHttpResponse->addResponse($curlHttpResponse);
        }
        $responseBodies = array_map(
            function($val) {
                return $val->getBody();
            },
            $responseArray
        );
        $this->assertEquals($responseBodies, $this->curlMultiHttpResponse->getResponseBodies());
    }

    /**
     * @dataProvider curlHttpResponseProvider
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getHttpCodes
     */
    public function testgetHttpCodes($curlHttpResponse) {
        $responseArray = array_fill(0, 5, $curlHttpResponse);
        for ($i = 0; $i < count($responseArray); $i++) {
            $this->curlMultiHttpResponse->addResponse($curlHttpResponse);
        }
        $responseCodes = array_map(
            function($val) {
                return $val->getHttpCode();
            },
            $responseArray
        );
        $this->assertEquals($responseCodes, $this->curlMultiHttpResponse->getHttpCodes());
    }

    /**
     * @dataProvider curlHttpResponseProvider
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getRequestUrls
     */
    public function testGetRequestUrls($curlHttpResponse) {
        $responseArray = array_fill(0, 5, $curlHttpResponse);
        for ($i = 0; $i < count($responseArray); $i++) {
            $this->curlMultiHttpResponse->addResponse($curlHttpResponse);
        }
        $requestUrls = array_map(
            function($val) {
                return $val->getRequestUrl();
            },
            $responseArray
        );
        $this->assertEquals($requestUrls, $this->curlMultiHttpResponse->getRequestUrls());
    }

    /**
     * @dataProvider curlHttpResponseProvider
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getRequestHeaders
     */
    public function testGetRequestHeaders($curlHttpResponse) {
        $responseArray = array_fill(0, 5, $curlHttpResponse);
        for ($i = 0; $i < count($responseArray); $i++) {
            $this->curlMultiHttpResponse->addResponse($curlHttpResponse);
        }
        $requestHeaders = array_map(
            function($val) {
                return $val->getRequestHeader();
            },
            $responseArray
        );
        $this->assertEquals($requestHeaders, $this->curlMultiHttpResponse->getRequestHeaders());
    }

    /**
     * @dataProvider curlHttpResponseProvider
     * @covers MikeBrant\RestClientLib\CurlMultiHttpResponse::getCurlGetinfoArrays
     */
    public function testGetCurlGetinfoArrays($curlHttpResponse) {
        $responseArray = array_fill(0, 5, $curlHttpResponse);
        for ($i = 0; $i < count($responseArray); $i++) {
            $this->curlMultiHttpResponse->addResponse($curlHttpResponse);
        }
        $requestInfoArrays = array_map(
            function($val) {
                return $val->getCurlGetinfo();
            },
            $responseArray
        );
        $this->assertEquals($requestInfoArrays, $this->curlMultiHttpResponse->getCurlGetinfoArrays());
    }
} | Curl-based REST Client Library (round 3) | php;rest;curl;phpunit | null
_cstheory.27364 | I am quite certain that I am not the first to entertain the idea that I am going to present. However, it would be helpful if I could find any literature related to the idea.

The idea is to construct a Turing Machine M with the property that if P=NP then M will solve 3-SAT in polynomial time. (The choice of 3-SAT is arbitrary. It could really be any problem in NP.)

Just to be clear, this is not a claim that P=NP. In fact, I believe the opposite. I merely state that if P=NP, then M will provide a polynomial-time solution. If you are looking for an efficient solution, I should warn that this is far from efficient.

M is constructed as follows: first, assume a canonical encoding for all Turing Machines, and apply a numbering to these machines. So, there is a Turing Machine number 1, a number 2, etc. The idea of a Universal Turing Machine that can read the format for a provided machine and then simulate that machine's running on separate input is pretty well known. M will employ a Universal Turing Machine to construct and simulate each Turing Machine in turn.

It first simulates the running of Turing Machine 1 for a single step.
It then looks at the output of Turing Machine 1.
It then simulates the running of Turing Machine 1 for two steps and looks at the output, then proceeds to simulate Turing Machine 2 for 2 steps.
It continues and loops in this fashion, in turn running Turing Machine 1 for k steps, then 2 for k steps, ... then eventually machine k for k steps.

After each simulation run, it examines the output of the run. If the output is an assignment of variables satisfying the 3-SAT problem instance, M halts in an accept state. If, on the other hand, the output is a proof-string in some verifiable proof-language with the proven result that the problem instance is not satisfiable, M halts in a reject state. (For a proof-language, we could, for example, use the Peano Axioms with second-order logic and the basic Hilbert-style logical axioms. I leave it as an exercise for the reader to figure out that if P=NP, a valid proof-language exists and is polynomial-time verifiable.)

I will claim here that M will solve 3-SAT in polynomial time if and only if P=NP.

Eventually, the algorithm will find some magical Turing Machine with number K, which just so happens to be an efficient solver for the 3-SAT problem and is able to provide a proof of its results for either success or failure. K will eventually be simulated running poly(strlen(input)) steps for some polynomial. The polynomial for M is roughly the square of the polynomial for K in the largest factor, but with some terrible constants in the polynomial.

To reiterate my question here: I want to know if there is a literature source that employs this idea. I am somewhat less interested in discussing the idea itself. | Looking for Literature Source for the Following Idea | reference request;turing machines;p vs np | It seems that this idea is attributed to Levin (it is called optimal search). I believe this fact is well known. A similar algorithm is described on Wikipedia, for instance, although using the subset sum problem. In this article from Scholarpedia you can find several references on the subject, including a pointer to the original algorithm and to some other optimal search algorithms.

Comment 1: Levin's optimal search guarantees that if $\varphi$ is a satisfiable instance then a solution will be found in polynomial time, assuming $P=NP$.
If $\varphi$ is not satisfiable, the algorithm may not terminate.

Comment 2: As Jaroslaw Blasiok pointed out in another answer, this algorithm does not decide SAT assuming only P=NP.
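For concreteness, here is a minimal Python sketch of the dovetailing loop the question describes. The helpers simulate, is_satisfying, and is_refutation_proof are hypothetical stand-ins for the universal machine and the proof checker; they are not part of the question or of Levin's construction as published.

def universal_search(phi, simulate, is_satisfying, is_refutation_proof):
    # In round k, run machines 1..k for k steps each, then grow k.
    k = 1
    while True:
        for machine in range(1, k + 1):
            out = simulate(machine, phi, k)   # run machine for k steps on phi
            if out is None:
                continue                      # machine has not halted yet
            if is_satisfying(phi, out):       # output is a satisfying assignment
                return ("sat", out)
            if is_refutation_proof(phi, out): # output is a checkable unsat proof
                return ("unsat", out)
        k += 1

The quadratic slowdown the question mentions falls straight out of this shape: before the magical machine K gets its poly(n) steps in round k = poly(n), the loop has spent on the order of k squared simulation steps in total.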
_unix.333878 | I am extracting a column from a file with different values; some of them are 11 to 13 characters long, but whenever the value is 11 characters I need to add a 0 in front.

awk -F, '{print $1 }' $FILE | \
awk '{printf("%04d%s\n", NR, $0)}' | \
awk '{printf("%-12s\n", $0) }'

82544990078
82544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
89000118359

It should look like this:

082544990078
082544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
089000118359 | Add 0 whenever the value is 11 characters | text processing;awk;sed;numeric data | You can use awk for this:

$ awk 'length() == 11 { $0 = "0" $0 } 1' < input
082544990078
082544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
089000118359
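If you happen to post-process in Python instead, the same rule is a one-liner with str.zfill, which left-pads with zeros only when the value is shorter than the target width. This is a sketch assuming one value per line on standard input:

import sys

for line in sys.stdin:
    # zfill(12) pads 11-character values to 12 and leaves
    # 12- and 13-character values untouched.
    print(line.strip().zfill(12))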
_unix.350177 | I am trying to build the lumify project with the following command:

mvn package -e -P web-war -pl web/war -am -DskipTests -Dsource.skip=true

I am getting the following compilation errors:

[INFO] Lumify ............................................ SUCCESS [1.818s]
[INFO] Lumify: Web ....................................... SUCCESS [0.051s]
[INFO] Lumify: Web: Client API ........................... SUCCESS [1.676s]
[INFO] Lumify: Core ...................................... SUCCESS [0.053s]
[INFO] Lumify: Core: Core ................................ SUCCESS [2.755s]
[INFO] Lumify: Core: Plugins ............................. SUCCESS [0.048s]
[INFO] Lumify: Core: Plugin: Model: BigTable ............. SUCCESS [1.888s]
[INFO] Lumify: Core: Plugin: Model: RabbitMQ ............. SUCCESS [0.883s]
[INFO] Lumify: Core: Plugin: Model: Secure Graph ......... SUCCESS [14.303s]
[INFO] Lumify: Web: Base ................................. FAILURE [12.883s]
[INFO] Lumify: Web: War .................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 38.403s
[INFO] Finished at: Wed Mar 08 22:59:18 PST 2017
[INFO] Final Memory: 49M/145M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project lumify-web: Compilation failure: Compilation failure:
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/routes/admin/AdminUploadOntology.java:[76,48] cannot find symbol
[ERROR] symbol: method getParts()
[ERROR] location: variable request of type javax.servlet.http.HttpServletRequest
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/routes/config/Plugin.java:[39,33] cannot find symbol
[ERROR] symbol: method getServletContext()
[ERROR] location: variable request of type javax.servlet.http.HttpServletRequest
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/routes/Index.java:[50,85] cannot find symbol
[ERROR] symbol: method getServletContext()
[ERROR] location: variable request of type javax.servlet.http.HttpServletRequest
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/routes/vertex/VertexImport.java:[99,33] cannot find symbol
[ERROR] symbol: method getParts()
[ERROR] location: variable request of type javax.servlet.http.HttpServletRequest
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/routes/vertex/VertexUploadImage.java:[96,60] cannot find symbol
[ERROR] symbol: method getParts()
[ERROR] location: variable request of type javax.servlet.http.HttpServletRequest
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/ApplicationBootstrap.java:[140,54] cannot find symbol
[ERROR] symbol: method addServlet(java.lang.String,io.lumify.web.Router)
[ERROR] location: variable context of type javax.servlet.ServletContext
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/ApplicationBootstrap.java:[151,54] cannot find symbol
[ERROR] symbol: method addServlet(java.lang.String,java.lang.Class<org.atmosphere.cpr.AtmosphereServlet>)
[ERROR] location: variable context of type javax.servlet.ServletContext
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/ApplicationBootstrap.java:[152,16] cannot find symbol
[ERROR] symbol: method addListener(java.lang.Class<org.atmosphere.cpr.SessionSupport>)
[ERROR] location: variable context of type javax.servlet.ServletContext
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/ApplicationBootstrap.java:[169,52] cannot find symbol
[ERROR] symbol: method addFilter(java.lang.String,java.lang.Class<io.lumify.web.RequestDebugFilter>)
[ERROR] location: variable context of type javax.servlet.ServletContext
[ERROR] /home/ziontest/lumify/lumify/web/web-base/src/main/java/io/lumify/web/ApplicationBootstrap.java:[175,52] cannot find symbol
[ERROR] symbol: method addFilter(java.lang.String,java.lang.Class<io.lumify.web.CacheServletFilter>)
[ERROR] location: variable context of type javax.servlet.ServletContext
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project lumify-web: Compilation failure
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.compiler.CompilationFailureException: Compilation failure
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:858)
at org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:129)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :lumify-web

I added the javax-servlet-api 3.1.0 dependency in the pom file. My Java version is Java 7. I set JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64. Still I am getting this error. Please help me to resolve this issue as soon as possible.

Note: I am building the project through the terminal in Ubuntu. I am not using any IDE. | Maven Compilation error while building Lumify web-base project | ubuntu;java;maven | null
_softwareengineering.46913 | We are integrating a testing process into our SCRUM process. My new role is to write acceptance tests for our web applications in order to automate them later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead, the sources offered conflicting principles that I found hard to apply:

Test cases should be short: Take the example of a CMS. Short test cases are easy to maintain and make it easy to identify the inputs and outputs. But what if I want to test a long series of operations (e.g. adding a document, sending a notification to another user, the other user replies, the document changes state, the user gets a notice)? It rather seems to me that test cases should represent complete scenarios. But I can see how this would produce overly complex test documents.

Test cases should identify inputs and outputs: What if I have a long form with many interacting fields, with different behaviors? Do I write one test for everything, or one for each?

Test cases should be independent: But how can I apply that if testing the upload operation requires that the connect operation is successful? And how does it apply to writing test cases? Should I write a test for each operation, where each test declares its dependencies, or should I rewrite the whole scenario for each test?

Test cases should be lightly documented: This principle is specific to Agile projects. So do you have any advice on how to implement this principle?

Although I thought that writing acceptance test cases was going to be simple, I found myself overwhelmed by every decision I had to make (FYI: I am a developer and not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.

Edit: To clarify my question: I am aware that acceptance testing should start from the requirements and regard the whole application as a black box.
My question relates to the practical steps for writing the testing document, identifying the test cases, and dealing with dependencies between tests... for complex web applications. | Writing Acceptance test cases | testing;documentation;acceptance testing | In my acceptance suites I have stayed away from using technology-specific controls, i.e. for web applications don't use CSS and don't use HTML elements. If you need to fill in a form, do the specifics in the steps that set up the SUT, not in the actual acceptance tests. I use Cucumber for my acceptance tests and have the following:

Given A xxx
And I am on the xxx page
And a clear email queue
And I should see Total Payable xxxx
And I supply my credit card details
When the payment has been processed
Then my purchase should be complete
And I should receive an email
When I open the email with subject xxx
Then I should see the email delivered from xx
And there should be an attachment of type application/pdf
And attachment 1 should be named xxxx
And I should be on the xxx page
And I should see my receipt

This example is backed by a web application, but I can still use the test against a desktop application, as the steps are used to set up the SUT, not the acceptance tests.

This test sits at the end of a purchase which goes:

Generate -> Confirm -> Payment -> Print Receipt

The test above is for the payment step; the other steps are set up in other tests, since the application can be set up into these states with data or HTTP actions. In this case the payment has a Given which does the confirm steps, and the confirm does the generate steps, so they are a bit brittle at the minute.
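To make the "specifics live in the steps, not the feature" point concrete, here is a hedged sketch of how such steps might be backed, written with Python's behave as a stand-in for Cucumber step definitions. The page object, its open_page() and pay_with_card() helpers, and the exact step wording are all hypothetical, not part of the answer above:

from behave import given, when, then

@given("I am on the {name} page")
def step_on_page(context, name):
    # Technology-specific setup (URLs, CSS selectors, HTTP calls)
    # belongs here, inside the step, not in the feature file.
    context.page = context.app.open_page(name)

@when("the payment has been processed")
def step_payment_processed(context):
    context.receipt = context.page.pay_with_card(context.card)

@then("my purchase should be complete")
def step_purchase_complete(context):
    assert context.receipt.status == "complete"

Because the feature file only names business-level states and actions, the same feature can in principle drive a web or desktop implementation by swapping the step bodies.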
_webmaster.24487 | I'm using Windows 7 x64 and IIS 7 to serve several websites. What I want to do is set up a mail server for every domain name (like on Linux hosting), with a web interface, for example domain1.com/webmail, domain2.com/webmail... Is that possible on Windows? Any suggestions? | Mail server for every website on IIS | email;iis7;webserver;webmail;windows 7 | null
_unix.121685 | Above the system call layer, there are library routines, utilities, and applications. Do daemons fall into any of these categories, or do they have their own category? | Daemons fall into what category? | daemon;architecture | null
_unix.364999 | aircraftdeMacBook-Pro:~ ldl$ ssh [email protected]
The authenticity of host '103.35.202.76 (103.32.202.71)' can't be established.
RSA key fingerprint is SHA256:w9u+mNFvkMg8lNydqJ/ZT6tV0lX/pwGIf1rWfYW1w0s.
Are you sure you want to continue connecting (yes/no)?

What does this mean: "RSA key fingerprint is SHA256:..."? Why does this show up?

If I choose yes, then I get the information below:

Warning: Permanently added '103.35.202.76' (RSA) to the list of known hosts.
Connection to 103.35.202.76 closed by remote host.
Connection to 103.35.202.76 closed.

I searched Unix & Linux, and I found "How does SSH display the message The authenticity of host .. can't be established?".

Testing in my terminal:

aircraftdeMacBook-Pro:~ ldl$ open(/dev/tty, O_RDWR) = 4
-bash: syntax error near unexpected token `/dev/tty,' | The authenticity of host '103.35.202.76 (103.32.202.71)' can't be established | linux;centos;ssh | As with any kind of secure connection, you not only want to know that your connection, once established, is private between the parties communicating as well as resistant to tampering, but you also want to know that you're talking to the endpoint you thought you were talking to in the first place. Cryptography is great at solving the first problem, but it doesn't solve the second one at all (although it provides tools to help solve it). You need a PKI to solve the second problem.
_softwareengineering.141215 | In an oriented-services enterprise application, isn't it an antipattern to mix Service APIs (containing interface that external users depends on) with Model objects (entities, custom exceptions objects etc...) ?According to me, Services should only depends on Model layer but never mixed with it. In fact, my colleague told me that it doesn't make sense to separate it since client need both. (model and service interfaces)But I notice that everytime a client asks for some changes, like adding a new method in some interface (means a new service), Model layer has to be also delivered...Thus, client who has not interested by this addition is constrained to be concerned by this update of Model... and in a large enterprise application, this kind of delivery is known to be very risked...What is the best practice ? Separate services(only interfaces so) and model objects or mix it ? | Should Business Interfaces be part of the Model layer? | java;design;enterprise architecture | It depends on whether client is using exactly the same domain model as the server/service. Typically it doesn't, so they should be kept separate IMO. |
_computerscience.3826 | I am running this fragment shader on every pixel on screen. I am trying to make it as efficient as possible. BTW I am running open gl es 2.0.The code segment below is only a sample from the code, there are about 56 different calls to Gaussian() spread across different functions. I am wondering wether it would be valuable to replace the calls to Gaussian() with there appropriate resulting float value.I know a lot of times stuff like this is pre-calculated on compilation of the shader, or calculated only once because the gpu realizes it is the same calculation for all fragments.So would it be worthwhile for me to manually calculate each of these and replace them with their values?uniform float scalar;varying float scalart;float deviationScale = 1.2;float finalScalar = 1.1;float Gaussian(float x, float deviation){ return (1.0 / sqrt(2.0 * 3.141592 * deviation)) * exp(-((x * x) / (2.0 * deviation)));}vec3 blurS5(){ vec3 blr = vec3(0.0); blr += texture2D(s_texture, (v_texcoord + vec2(2.0 * scalart, 0.0))).xyz * finalScalar * Gaussian(2.0, 2.0 * deviationScale) ; blr += texture2D(s_texture, (v_texcoord + vec2(1.0 * scalart, 0.0))).xyz * finalScalar * Gaussian(1.0, 2.0 * deviationScale) ; blr += texture2D(s_texture, (v_texcoord + vec2(0.0 * scalart, 0.0))).xyz * finalScalar * Gaussian(0.0, 2.0 * deviationScale) ; blr += texture2D(s_texture, (v_texcoord + vec2(-1.0 * scalart, 0.0))).xyz * finalScalar * Gaussian(-1.0, 2.0 * deviationScale) ; blr += texture2D(s_texture, (v_texcoord + vec2(-2.0 * scalart, 0.0))).xyz * finalScalar * Gaussian(-2.0, 2.0 * deviationScale) ; return blr;}void main(){ if (scalar == 2.) { gl_FragColor = vec4(blurS5(), 1.0); } else if (scalar == 3.) { gl_FragColor = vec4(blurS9(), 1.0); } //There are waaayyy more of these} | Will the gaussian kernels in this fragment shader be computed for every fragment? | shader;fragment shader;gaussian blur | null |
_codereview.24212 | The code allows you to select an area from the left column and another area from the right column followed by clicking on the Choose button which sends the chosen areas to the server:<html> <head> <style> body { overflow: hidden; } article.left { overflow: hidden; float: left; } article.left section { float: left; } section { border: 1px solid black; height: 6em; margin-right: 1em; width: 4em; } article.right section { border: 1px dashed black; } section.ice { transform:rotate(-90deg); -moz-transform:rotate(-90deg); -webkit-transform:rotate(-90deg); } article.right { float: right; } section.section-selected, section.right-selected { border-color: #EEE; } input.choose { display: none; } </style> <script src=http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js></script> <script> $(document).ready(function() { $('article.left section').click(function() { var was_selected = $(this).hasClass('section-selected'); $('article.left section').removeClass('section-selected'); if (!was_selected) { $(this).addClass('section-selected'); } }); $('article.right section').click(function() { $(this).toggleClass('right-selected'); if ($('section.right-selected')) { $(this).children('input.choose').toggle(); } }); $('input.choose').click(function() { var section = $('section.section-selected'); if (section.length) { console.log(section.attr('section-id') + ' ' + $(this).attr('location-type')); console.log($(this).parents('article').attr('article-id')); } else { console.log('none selected'); } }); }); </script> </head> <body> <article article-id=L class=left> <section section-id=A>A</section> <section section-id=B>B</section> </article> <article article-id=R class=right> <section section-id=C><input type=button class=choose location-type=vertical value=Choose /></section> <section section-id=D class=horizontal><input type=button class=choose location-type=horizontal value=Choose /></section> </article> </body></html>Here's a link to the jsfiddle http://jsfiddle.net/95WvB/. The code is working fine but I am wondering if the above is what is considered as spaghetti code that so many of those js frameworks like ember or angular are trying to solve. Is it best to use one of those framework to refactor the above code or use backbone?This code is a sample of a much larger web application which repeats more or less of the same interactions between front and backend. | Is this spaghetti javascript code? How can it be refactored with a javascript library or framework? | javascript;jquery;html | null |
_softwareengineering.114085 | I am working in a software company where we are mostly working on websites which are based on open source like dotnetnuke or nopcommerce or any other and i am totally bored with all this as there is no scope for doing something new as most of the code is already done in these open source i know the open source are good for company as they save their time and earn more money but working on open source project is bad from a programmer view ?i learn some good things from these open source also...like entity framework from nopcommerce 1.9 | working on open source project is bad from a programmer view? | open source | Being bored for a prolonged period is a good sign that it is time to move on, but I do not think that this has anything to do with working on an open source project or not.The phrase there is nothing new in the world... applies to software as easily as it does to movies or books. Open or closed source, finding something new to do is a relative question. What I mean by this is: if I am writing a website for a clothing company using dotCMS, am I doing something new? No: dotCMS (an open source CMS) is not new.No: plenty of clothing companies have websites.No: I am probably not going to implement new features in dotCMS as part of the project.But more importantly:YES: it is a new project for me and I will hopefully learn a thing or two.orNO: this is the fifteenth dotCMS project in a row and I am bored.I think the above answers will be exactly the same whether you are talking about using open or closed source software.In every job I have taken, I have been doing new things on previously established frameworks, APIs etc. They were open source, but that isn't the point. What I was doing was new to me.. so if you are bored, look around for something that is new. :) |
_unix.196063 | I would like to know how to have a process that starts with the X server and automatically restarts when it stops running. I am running Ubuntu 14.04 with a KDE 4.13.3 desktop.

I am running the program 'touchegg' to support multitouch gestures. I currently have the program set up to run automatically using the autostart section of KDE's control panel. However, the process occasionally stops working, often after hibernation, and I would like it to automatically restart if this happens. It seems like it should be fairly straightforward, but all the information I can find about automatically restarting processes is for processes that run at boot time rather than after Xorg starts. | Automatically restarting a process in the X server | linux;scripting;xorg;kubuntu | null
_opensource.4753 | I was wondering: if I copy someone else's software, make some design changes, and try to make money from it, is that legal?

Which licenses provide a facility for that? | Is it legal to monetize someone else's software? | licensing;gpl;license recommendation;mit;copyright | null
_unix.220853 | So, I have a shell script for updating a MySQL database that looks something like this:

#!/bin/sh
mysql -h localhost -u root -p******** database < update.sql
sleep 5
sh $0

It sleeps for 5 seconds and then the sh $0 reruns the script infinitely, without my intervention. However, my question is about memory: I am relatively new to shell scripts, but is the memory slowly piling up in a loop like this? Does the remote server recycle the memory, or will the script eventually reach a cut-off? (Or will it crash from a memory leak?) | Memory usage of an infinitely looping shell script | shell script;memory | This is not a loop but recursion, and the memory usage increases linearly over time, which is what you don't want.

If you want a loop with constant memory usage, you can do it this way:

#!/bin/sh
while true; do
    mysql -h localhost -u root -p******** database < update.sql
    sleep 5
done
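To see why the two shapes behave differently, here is a Python sketch of the same pattern; do_update() is a hypothetical stand-in for the mysql call. Re-invoking yourself as a child (like sh $0) leaves every parent alive and waiting, so the chain of live processes grows without bound, while a loop, or replacing the process in place with exec, stays flat:

import os
import subprocess
import sys
import time

def do_update():
    pass  # stand-in for: mysql ... database < update.sql

def recursive_style():
    # Analogous to `sh $0`: the parent blocks on its child forever,
    # so each iteration adds one more live process (and its memory).
    do_update()
    time.sleep(5)
    subprocess.call([sys.executable, sys.argv[0]])

def loop_style():
    # Analogous to `while true; do ... done`: one process, constant memory.
    while True:
        do_update()
        time.sleep(5)

def reexec_style():
    # A middle ground: exec replaces the current process image instead of
    # spawning a child, so even the restart-yourself shape stays constant.
    do_update()
    time.sleep(5)
    os.execv(sys.executable, [sys.executable] + sys.argv)

Incidentally, the shell equivalent of the last variant would be prefixing the self-invocation with exec, which avoids the growing process chain for the same reason.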
_webmaster.93087 | I'm moving an ecommerce store from a subdomain using CubeCart to the root directory using WooCommerce. All product URLs will also change.

This site has over 1000 products, all of which are indexed. From what I can tell, though, every product page only has a pagerank of 1 according to Mozilla.

So the question is: should I even bother redirecting these pages to their new locations? If so, are there any suggested methods other than doing it one at a time?

Also, any other suggestions for maintaining SEO value would be much appreciated! | Moving eCommerce site to new domain. Should I redirect all product pages to new location? | ecommerce;woocommerce | null
_softwareengineering.294857 | I recently ran into the common InvalidOperationException ("Collection was modified") in C#, and while I understand it fully, it seems to be such a common problem (Google, about 300k results!). But it also seems to be a logical and straightforward thing to modify a list while you go through it.

List<Book> myBooks = new List<Book>();

public void RemoveAllBooks(){
    foreach(Book book in myBooks){
        RemoveBook(book);
    }
}

void RemoveBook(Book book){
    if(myBooks.Contains(book)){
        myBooks.Remove(book);
        if(OnBookEvent != null)
            OnBookEvent(this, new EventArgs("Removed"));
    }
}

Some people will create another list to iterate through, but this is just dodging the issue. What's the real solution, or what is the actual design issue here? We all seem to want to do this, but is it indicative of a design flaw? | Is creating a new List to modify a collection in a for each loop a design flaw? | c# | Is creating a new List to modify a collection in a for each loop a design flaw?

The short answer: no.

Simply put, you produce undefined behaviour when you iterate through a collection and modify it at the same time. Think of deleting the next element in a sequence. What would happen if MoveNext() is called?

"An enumerator remains valid as long as the collection remains unchanged. If changes are made to the collection, such as adding, modifying, or deleting elements, the enumerator is irrecoverably invalidated and its behavior is undefined. The enumerator does not have exclusive access to the collection; therefore, enumerating through a collection is intrinsically not a thread-safe procedure."

Source: MSDN

By the way, you could shorten your RemoveAllBooks to simply return new List<Book>().

And to remove a book, I recommend returning a filtered collection: return books.Where(x => x.Author != "Bob").ToList();

A possible shelf implementation would look like:

public class Shelf
{
    List<Book> books = new List<Book> {
        new Book("Paul"),
        new Book("Peter")
    };

    public IEnumerable<Book> getAllBooks() {
        foreach (Book b in books) {
            yield return b;
        }
    }

    public void RemovePetersBooks() {
        books = books.Where(x => x.Author != "Peter").ToList();
    }

    public void EmptyShelf() {
        books = new List<Book>();
    }

    public Shelf() {
    }
}

public static void Main(string[] args)
{
    Shelf s = new Shelf();

    foreach (Book b in s.getAllBooks()) {
        Console.WriteLine(b.Author);
    }

    s.RemovePetersBooks();
    foreach (Book b in s.getAllBooks()) {
        Console.WriteLine(b.Author);
    }

    s.EmptyShelf();
    foreach (Book b in s.getAllBooks()) {
        Console.WriteLine(b.Author);
    }
}
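The same invariant exists outside C#. In Python, for comparison, mutating a dict or set mid-iteration raises a RuntimeError, while mutating a list mid-iteration silently skips elements; the two standard workarounds mirror the C# advice above (snapshot the iterable, or build the surviving collection in one pass):

books = ["a", "b", "c"]

# Workaround 1: iterate over a snapshot copy, so the mutation
# cannot invalidate the iterator driving the loop.
for book in list(books):
    books.remove(book)

# Workaround 2: build the filtered result in one pass and rebind,
# analogous to books.Where(...).ToList() in the answer above.
books = ["a", "b", "c"]
books = [book for book in books if book != "b"]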