_cs.44562
In the paper "Quick Detection of Brain Tumors and Edemas: A Bounding Box Method Using Symmetry" by Saha et al., the authors claim that the running time of the algorithm (Matlab implementation) is $O(h)$. I don't understand how they arrive at this. The algorithm works as follows: it does a vertical sweep of a whole image of size $h\times w$. For every $l$ between $0$ and $h$, a score function $s$ is computed. To compute $s$ we calculate histograms of the four parts of the whole image: $[0:l,1:w/2]$, $[l:h,1:w/2]$, $[0:l,w/2:end]$ and $[l:end,w/2:end]$. How do they arrive at $O(h)$ time complexity for the algorithm? Do they consider histogram computation to be $O(1)$? PS: This question is related to "What is the time-complexity of histogram computation?" but I hope to get the exact answer in a concrete case this time ...
Time complexity of a vertical sweep algorithm with histogram computations
time complexity;image processing
$O(h)$ sounds hard to believe, unless there's some precomputation that's not counted here, as it takes $\Theta(hw)$ time just to read every pixel of the image. Computing a histogram from scratch can't be done in $O(1)$ time. But updating a histogram can be done efficiently. If you already have a histogram for some set of pixels, and then you add or delete one pixel, you can update the histogram in $O(1)$ time. This kind of idea can be used to speed up the sort of thing you're talking about, because all of the histograms we want are closely related. In particular, when we increment $l$, we add or delete a row from a histogram. Thus you can update the histogram more efficiently than computing it from scratch.
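The incremental-update idea from the answer can be sketched in a few lines of Python (an illustration, not the authors' Matlab code): keep one histogram per quadrant and, when the sweep line $l$ moves down one row, move that row's pixels from the "bottom" histograms to the "top" ones — $O(w)$ per step instead of recomputing everything.

```python
from collections import Counter

def sweep_histograms(img):
    """img: list of rows, each a list of pixel values (h x w, w even).
    Yields, for each split l = 1..h-1, the four quadrant histograms.
    Each step moves one row between histograms: O(w) per step and
    O(h*w) total, versus O(h^2 * w) when recomputing from scratch."""
    h, w = len(img), len(img[0])
    half = w // 2
    top_l, top_r = Counter(), Counter()
    bot_l, bot_r = Counter(), Counter()
    for row in img:                 # initially everything is below the line l = 0
        bot_l.update(row[:half])
        bot_r.update(row[half:])
    for l in range(1, h):
        row = img[l - 1]            # this row crosses the sweep line: move it up
        for v in row[:half]:
            top_l[v] += 1
            bot_l[v] -= 1
        for v in row[half:]:
            top_r[v] += 1
            bot_r[v] -= 1
        yield l, top_l, top_r, bot_l, bot_r
```

Even with this trick the sweep is $O(hw)$ overall, which supports the answer's point that $O(h)$ cannot hold without uncounted precomputation.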
_unix.204570
I have a file of three columns:

1 A 0.5
2 B 0.7
3 A 10
4 C 4
5 B 4

I want to sort the file by increasing order of column 3 and group by column 2:

1 A 0.5
3 A 10
2 B 0.7
5 B 4
4 C 4

I know how to sort only based on the 3rd column: sort -k3,3 file. But can we group by the second column?
Sort file based on one column and group by another column
linux;files;awk;sort;group
null
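The grouping asked for above falls out of a compound sort key (a sketch; GNU sort assumed): sort primarily on column 2 as the group key, then numerically on column 3 within each group.

```shell
# Build the sample file, then sort by column 2 (group key) and
# numerically by column 3 within each group.
printf '%s\n' '1 A 0.5' '2 B 0.7' '3 A 10' '4 C 4' '5 B 4' > /tmp/file
sort -k2,2 -k3,3n /tmp/file
# Output:
# 1 A 0.5
# 3 A 10
# 2 B 0.7
# 5 B 4
# 4 C 4
```

The `n` flag applies only to the third key, so `0.5` sorts before `10` numerically rather than lexically.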
_softwareengineering.39845
I'm currently tasked with writing a WCF service that, for now, will only be used inside the company network. The problem is that I'm not sure how I should handle the operations it exposes. The software that will use this service will have to modify similar tables in different ways. For example, a table that has columns a, b, c and d: program X only updates columns a and b, while program Y updates b, c and d. I feel that a generic Update method that accepts the whole record is easier to write and makes the service less bloated. But it does feel less secure, and would probably make it harder to understand for new developers. How do I best handle these situations on a service level? Edit: Yes, the tasks are in a sense unique, but the problem is that it's difficult to figure out how unique the service should be. Do I make a general service to allow access to the data, and let the details of those tasks be handled client-side? Security concerns are not that high. The biggest concern is maintainability and ease of understanding. At this moment we have 20+ databases, where some of them have 100+ tables.
Best way to expose a data-oriented service
c#;wcf;services
null
_codereview.110752
When searching for information about the binary tree, the code on people's blogs is complicated, so here I'm trying to code a very simple binary tree. Am I missing anything? Is there a better way I could be implementing BinTree::Remove?

#ifndef _BINTREE_H__
#define _BINTREE_H__

#include <iostream>

using namespace std;

class BinTree
{
private:
    struct tree_node
    {
        tree_node* left;
        tree_node* right;
        int data;
    };
    tree_node* root;

public:
    BinTree()
    {
        root = NULL; // Don't forget the constructor! It's gonna kill the code
    }
    void Insert(int);
    //int nodes();
    bool IsEmpty() { return root == NULL; }
    void print_Preorder();
    void Preorder(tree_node *);
    bool Search(int);
    void Remove(int);
};

void BinTree::Insert(int d)
{
    tree_node* t = new tree_node;
    tree_node* parent = NULL;
    t->data = d;
    t->left = NULL;
    t->right = NULL;

    if (IsEmpty()) {
        root = t;
    }
    else {
        tree_node* curr;
        curr = root;
        while (curr) {
            parent = curr;
            if (t->data > curr->data) curr = curr->right;
            else curr = curr->left;
        }
        if (parent->data > t->data)
            parent->left = t;
        else
            parent->right = t;
    }
}

void BinTree::print_Preorder()
{
    Preorder(root);
}

void BinTree::Preorder(tree_node* p)
{
    if (p != NULL)
    {
        cout << " " << p->data << " ";
        if (p->left) Preorder(p->left);
        if (p->right) Preorder(p->right);
    }
    else return;
}

bool BinTree::Search(int d)
{
    bool found = false;
    if (IsEmpty())
    {
        cout << "This Tree is empty!" << endl;
        return false;
    }
    tree_node* curr;
    tree_node* parent;
    curr = root;
    parent = (tree_node*)NULL;
    while (curr != NULL)
    {
        if (curr->data == d)
        {
            found = true;
            break;
        }
        else
        {
            parent = curr;
            if (d > curr->data) curr = curr->right;
            else curr = curr->left;
        }
    }
    if (!found)
    {
        cout << "Data not found!" << endl;
    }
    else
    {
        cout << "Data found!" << endl;
    }
    return found;
}

void BinTree::Remove(int d)
{
    bool found = false;
    if (IsEmpty())
    {
        cout << "This Tree is empty!" << endl;
        return;
    }
    tree_node* curr;
    tree_node* parent;
    curr = root;
    parent = NULL;
    while (curr != NULL)
    {
        if (curr->data == d)
        {
            found = true;
            break;
        }
        else
        {
            parent = curr;
            if (d > curr->data) curr = curr->right;
            else curr = curr->left;
        }
    }
    if (!found)
    {
        cout << "Data not found!" << endl;
        return;
    }

    // Node with single child
    if ((curr->left == NULL && curr->right != NULL) || (curr->left != NULL && curr->right == NULL))
    {
        if (curr->left == NULL && curr->right != NULL)
        {
            if (parent->left == curr)
            {
                parent->left = curr->right;
                delete curr;
            }
            else
            {
                parent->right = curr->right;
                delete curr;
            }
        }
        else // left child present, no right child
        {
            if (parent->left == curr)
            {
                parent->left = curr->left;
                delete curr;
            }
            else
            {
                parent->right = curr->left;
                delete curr;
            }
        }
        return;
    }

    // We're looking at a leaf node
    if (curr->left == NULL && curr->right == NULL)
    {
        if (parent == NULL)
        {
            delete curr;
        }
        else if (parent->left == curr) parent->left = NULL;
        else parent->right = NULL;
        delete curr;
        return;
    }

    // Node with 2 children
    // replace node with smallest value in right subtree
    if (curr->left != NULL && curr->right != NULL)
    {
        tree_node* chkr;
        chkr = curr->right;
        if ((chkr->left == NULL) && (chkr->right == NULL))
        {
            curr = chkr;
            delete chkr;
            curr->right = NULL;
        }
        else // right child has children
        {
            // if the node's right child has a left child
            // Move all the way down left to locate smallest element
            if ((curr->right)->left != NULL)
            {
                tree_node* lcurr;
                tree_node* lcurrp;
                lcurrp = curr->right;
                lcurr = (curr->right)->left;
                while (lcurr->left != NULL)
                {
                    lcurrp = lcurr;
                    lcurr = lcurr->left;
                }
                curr->data = lcurr->data;
                delete lcurr;
                lcurrp->left = NULL;
            }
            else
            {
                tree_node* tmp;
                tmp = curr->right;
                curr->data = tmp->data;
                curr->right = tmp->right;
                delete tmp;
            }
        }
        return;
    }
}
#endif
Simple binary tree implementation in C++
c++;tree
null
_unix.286898
We are using a load balancer for our web servers (3 debian clones).I was wondering if it was possible to push the same config onto the three web servers and restart each instance all in one shot so as to avoid differences in configs and human errors?I guess an ssh script could do it but is there no nice gui/webadmin/ready made thingy that does that?
Push same webserver config to multiple front ends
debian;configuration;webserver
null
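Before reaching for a GUI, a small shell loop over scp/ssh gets you most of the way (a sketch; the hostnames, config path, and reload commands are placeholders for your setup, and key-based SSH auth is assumed):

```shell
# Push one config file to each web server, validate it, then reload.
# Each host is handled independently so one failure doesn't stop the rest.
push_config() {
    conf=$1; shift
    for h in "$@"; do
        scp "$conf" "root@$h:$conf" && \
        ssh "root@$h" 'apache2ctl configtest && systemctl reload apache2' \
            || echo "FAILED on $h" >&2
    done
}

# Example invocation (hosts are placeholders):
# push_config /etc/apache2/sites-available/mysite.conf web1 web2 web3
```

The ready-made versions of this script are configuration-management tools such as Ansible, Puppet or Salt, which add idempotency, inventories and reporting on top of the same push model.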
_scicomp.10557
OK, I have a FORTRAN code which numerically integrates equations of motion for large data sets of initial conditions. I run this program on my PC and it requires about 1 day of computation per data set. So, I was wondering if there is any site with super-fast PCs (or better, a grid of processors) to which I could upload the .exe and perform my calculations much faster. Any suggestions?!
Ways to speed up the computations
performance
null
_unix.375667
I need to drive an LED matrix which acts as a monitor, therefore I want to connect it to my BeagleBone, which of course runs a headless Debian. My first thought was to install some GUI like LXDE, to see if there is any output at all. But the real question is whether I could simply control the HDMI directly, meaning that I want to give the driver (or whatever is responsible for driving the HDMI output) a picture or video signal which should then appear on the connected monitor. I couldn't find anything on that topic on Google because the search term "hdmi" delivers only Home Theater type problems, which doesn't help me. Is there a possibility to achieve the above? Is it possible to control the HDMI output via the command line? Which driver or module or whatever is driving the HDMI output on a BeagleBone? Thanks in advance!
Manually control hdmi output of a Beaglebone/Raspberry
debian;raspberry pi;hdmi;beagleboneblack
null
_softwareengineering.201699
I am looking for an I/O model, in any programming language, that is generic and type safe.By genericity, I mean there should not be separate functions for performing the same operations on different devices (read_file, read_socket, read_terminal). Instead, a single read operation works on all read-able devices, a single write operation works on all write-able devices, and so on.By type safety, I mean operations that do not make sense should not even be expressible in first place. Using the read operation on a non-read-able device ought to cause a type error at compile time, similarly for using the write operation on a non-write-able device, and so on.Is there any generic and type safe I/O model?
Generic and type safe I/O model in any language
design;io
null
_unix.292702
How can I see if my backup is saved at the minute assigned? How can I test whether it's OK?
how can I see if my cron is running and doing the job?
linux;cron
null
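A common way to verify a cron job actually ran (a sketch; the paths and job name are examples): make the job leave a timestamped trace in a log, then check that the log is fresh. Cron's own activity also shows up in the system log on most distros.

```shell
# In the crontab, redirect the job's output so every run leaves a trace, e.g.:
#   * * * * * /usr/local/bin/backup.sh >> /tmp/backup.log 2>&1
log=/tmp/backup.log                            # example path
date '+%Y-%m-%d %H:%M backup ran' >> "$log"    # this line belongs inside the job
tail -n 1 "$log"                               # most recent run

# System-wide, cron's activity is usually visible with one of:
#   grep CRON /var/log/syslog    (Debian/Ubuntu)
#   journalctl -u cron           (systemd systems)
```

If the timestamp in the log stops advancing, the job is not running; if it advances but the backup is bad, the problem is in the script rather than in cron.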
_webapps.43883
I have different people who email me about the same subject, e.g, WBR and AIB both email me about HP201. However, so far as I can see Google only allows the sub-label HP201 to be attached to one parent. Aside from making HP201 a parent itself is it possible to make it a sub label of WBR and AIB?
Is it possible to add the same sublabel to different parent labels?
gmail;gmail labels
No. A label can only have one parent (or no parent).You could probably make two different HP201 labels to put under their respective parents.Really, though, you have a use-case for not having HP201 as a sub-label. It should be a label unto itself. That way you can easily find all of the HP201 conversations, WBR conversations, and AIB conversations. A quick search (label:HP201 label:WBR) will easily find the conversations with both labels.
_unix.349676
I would like to know whether it is possible to use Android Studio on FreeBSD. I tried to run it but I couldn't. I installed IntelliJ from ports, but there was no option to select the Android SDK.
Android studio on FreeBSD
freebsd;android;adb
When Android Studio was still in Beta, I tried to get it running on FreeBSD (my preferred platform) but had nothing but issues. I did manage to compile a debug APK but could not get a full release version (weird). I ran Android Studio under Linux emulation but there were still issues with the Java side of things (from memory). I even wrote a complex script for adb to help install the APKs, as they would not install from the Run option of Android Studio. Not hard, but it did speed things up a lot. In the end I gave up and tried a heap of Linux distros (Live CDs) until I found one I was comfortable with - then installed Android Studio without issues. Personally I still prefer FreeBSD for a lot of things, but I am more than happy with a stable working environment for Android development. Not the answer you were looking for, I know, just sharing my own experience. I guess things could have changed from the Beta to now (v2.3) - but I've decided that Android Studio is updated so often (too often, to be honest) that I'm not going to risk issues with FreeBSD and will just run Linux.
_cs.79859
I have a slightly modified version of the classical 01 knapsack problem. Specifically, the problem has an additional constraint which requires that, if more than one feasible selection with equal value exists, the selection with minimum total weight be selected. For example, consider a knapsack with size 3 and the following set of items to choose from; a tuple is defined as (name, weight, value):

items = ((alpha, 3, 4), (beta, 1, 3), (gamma, 1, 1))

There exist two possible selections with total weight <= 3 and value = 4:

Selection 1: [alpha] with a total value of 4 and weight 3
Selection 2: [beta, gamma] with a total value of 4 and weight 2

Due to the additional constraint, selection 2 is the correct answer. However, the classical 01 knapsack doesn't ensure this. So my question is, does there exist a variant of 01 knapsack which handles this? If not, how do I tackle this additional constraint? So far, I have the following two approaches which can possibly work:

Calculate the density of the objects and then perform knapsack using density as the value. This approach however is suited to fractional knapsack, where any arbitrary amount of an item can be taken. In this case, the item, if taken, must be taken in its entirety.

Sort the items with respect to weights before running knapsack on them.
01 Knapsack with selection of items with minimum total weight
knapsack problems
Sure. It's easy to take any algorithm to solve the ordinary knapsack problem, and apply it to your problem. In the ordinary knapsack problem, we specify an upper bound on the weight. Once we find the maximum value achievable with that weight, next we'll try to see if we can reduce its weight further. Do this by solving a new version of the knapsack problem, the same as the original, except the maximum weight has been reduced. See how far you can reduce the maximum weight while still achieving the same value. You can use binary search for that.
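A direct alternative to the binary-search idea in the answer: run the usual 0/1 DP but rank states by the pair (value, -weight), so among equal-value solutions the lighter one wins automatically. A sketch in Python (item tuples as in the question):

```python
def knapsack_min_weight(items, capacity):
    """items: (name, weight, value) tuples.
    best[w] holds (value, -weight, chosen_names) reachable within weight w;
    tuple comparison maximizes value first, then minimizes weight."""
    best = [(0, 0, [])] * (capacity + 1)
    for name, wt, val in items:
        new = best[:]                      # states before considering this item
        for w in range(wt, capacity + 1):
            v, negw, names = best[w - wt]  # take the item on top of an old state
            cand = (v + val, negw - wt, names + [name])
            if cand[:2] > new[w][:2]:      # better value, or same value & lighter
                new[w] = cand
        best = new
    value, negw, names = max(best, key=lambda t: t[:2])
    return value, -negw, names
```

On the question's example this returns value 4 with weight 2 and the selection [beta, gamma], as required. The running time is the usual O(n * capacity); the tie-break costs nothing extra.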
_webmaster.7084
I do my own hosting for a few clients on my own VPS server (Linode). Since my clients so far have been extremely low traffic, I have not had to really dig into some of the considerations that I would need for a higher-traffic site. Now I am bidding on a client whose site will be potentially higher traffic (not Facebook or Twitter, but higher than Joe's ice cream shop). Is there a list of things I need to think about that I may be missing? I am going to assume, at least at first, that I will be able to handle them on my shared Linode, but I could move to a dedicated Linode if need be. I am not thinking so far of multiple servers, but short of that there are still considerations. For example, mod_perl instead of straight CGI, better backups, etc. What else? In case it matters, the stack will be Debian Linux / Apache / Perl / MySQL / Template Toolkit.
list of things to think about for hosting a potentially high traffic website
web hosting
null
_unix.220870
I have set up a PhalconPHP project on an AWS EC2 instance. To get things working I gave the cache directory 777 permissions. I realise this is bad practice - what would be the correct permissions for a cache directory on a webserver? I presume it must be specific to the user the httpd service is running as (the output from the method given by the top-voted answer to this question implies that the user is apache). But I obviously want my ec2-user to keep write access to the directory (I don't want to have to use sudo the whole time). What would be the correct set of permissions for the directory? I have a www group that ec2-user is part of; should apache be added to that?
Correct permissions for a cache directory
permissions;apache httpd;aws
null
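A common pattern for exactly this situation (a sketch; the apache user and www group names are taken from the question and vary by distro): group-own the cache directory, make it group-writable, and set the setgid bit so new files inherit the group. The chown/usermod lines assume your users exist; the chmod effect is demonstrated on a scratch directory.

```shell
# Typical setup on the server (names as in the question):
#   sudo usermod -a -G www apache                    # apache joins the shared group
#   sudo chown ec2-user:www /path/to/app/cache       # ec2-user owns, www group-owns
# Mode 2775 = rwx for owner and group, r-x for others, plus setgid
# (the leading 2), so files created inside inherit the www group
# instead of the creating user's primary group.
mkdir -p /tmp/cache-demo
chmod 2775 /tmp/cache-demo
stat -c '%a' /tmp/cache-demo
```

With this in place neither 777 nor sudo is needed: apache and ec2-user both write through the www group, and other users get no write access at all.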
_cs.49491
Let $A$ and $B$ be two languages. If $A \le_{m} B$ (mapping reducibility) then I know that if $B$ is decidable so is $A$, and if $B$ is recognizable so is $A$. And if $A \le_{T} B$, then if $B$ is decidable so is $A$. But I can't say the same thing for recognizability, which I can see by example when $A \equiv L_{\phi}$ (the empty language) and $B \equiv A_{TM}$ (the acceptance language). I know this is because of the fact that we assume the oracle to decide $B$ while proving $A \le_{T} B$. Why is it that we assume the oracle decides, rather than recognizes, $B$ when comparing the relative difficulty of two languages? Sorry for the vague terminology of "relative difficulty"; I have not yet begun reading about complexity theory and hierarchies and for now only have a vague idea about it.
Mapping reducibility vs. Turing reducibility
computability;reductions;semi decidability
You can define a notion of reducibility which will ensure that if $A$ reduces to $B$ and $B$ is recognizable then so is $A$. The definition allows an oracle call to $B$, but the semantics are different: if the contents of the oracle tape are in $B$ then the machine continues, and otherwise it gets stuck (never halts).The problem with this definition is that it is pretty weak. While the other property (with recognizability replaced by decidability) still holds, it never helps you that $B$ is decidable. Compare this with mapping reducibility: there you are using the full power of decidability.Still, the definition could be useful. Assuming you aren't the first to come up with this idea, the only conclusion is that this definition doesn't lead to a nice enough theory. It's also possible that you came up with a nice concept which you could develop to a nice theory. A third possibility is that a theory has been developed, but I am unaware of it.
_softwareengineering.299278
This is all done in Microsoft Access 2007 and SQL Server. We are creating a way for our users to quickly make notes on a customer. These quick-notes will contain tags that will prompt the user for data based on that tag. The tags are to be limited to a few select options for the user to pick from. The tags will be coded as [TAG] in the database.

A couple of examples:

Order refund
[[OrderNumber]] - Was refunded

When the user selects the above example, he/she would be prompted with a list of the currently selected customer's orders and a field to allow the user to input a specific order. When the user chooses the order, the quick note would look like:

[123456] - Was refunded

General Note
Emailed receipt for order [[OrderNumber]] to email address: [emailAddress]

Would look like:

Emailed receipt for order [123456] to email address: [email protected]

The Problem
The main issue is that each tag invokes a different action. An order number tag calls up a function that will extract the order number from a few columns in an order table. An email address tag will show all the current customer's email addresses or allow the user to put a different email into a field.

Our Proposed Solution
We thought about putting the data into a table with the following format:

NoteID  NoteTag       columnReference  tableReference
1       emailaddress  email address    emailAddTable

Now this table may be a quick way to grab the data, but it will be awkward for function calling, and a lot will still be handled programmatically.

The Questions
What is a clean and simple way to invoke particular calls to action in a program based on the tags in a quick note? A big loop checking for tags in a string and then calling the specific functions?
SQL Table With A Call To Action?
database design;sql;sql server;microsoft access
null
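To illustrate the dispatch idea language-neutrally (a Python sketch of the pattern; in Access the same shape would be a VBA Select Case or a lookup of handler names): scan the template for [[Tag]] placeholders with a regex and map each tag name to its own handler function, instead of one big loop of if-checks. The handlers and context keys below are hypothetical stand-ins for the real prompts and queries.

```python
import re

TAG = re.compile(r"\[\[(\w+)\]\]")

# One handler per tag; supporting a new tag means adding one entry here.
# Real handlers would prompt the user or query the order/email tables.
HANDLERS = {
    "OrderNumber": lambda ctx: ctx["order_id"],
    "emailAddress": lambda ctx: ctx["email"],
}

def expand(template, ctx):
    """Replace every [[Tag]] with [handler-result], per the note format."""
    def repl(match):
        tag = match.group(1)
        if tag not in HANDLERS:
            raise KeyError("no handler for tag " + tag)
        return "[" + str(HANDLERS[tag](ctx)) + "]"
    return TAG.sub(repl, template)
```

The advantage over a big loop of string checks is that the tag grammar lives in one regex and each action lives in one small function, so unknown tags fail loudly and new tags don't touch the scanning code.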
_codereview.79256
I am taking Stanford's Introduction to Databases self-paced online course. I have gone through the videos in the SQL mini-course, and I am having trouble completing the exercises. The following is the question from the SQL Movie-Rating Query Exercises, Question 7:

For each movie that has at least one rating, find the highest number of stars that movie received. Return the movie title and number of stars. Sort by movie title.

The database can be found here. My answer to this question is as follows:

SELECT distinct Movie.title, Rate.stars
FROM Movie, Rating,
     (SELECT * FROM Rating R1
      WHERE not exists (SELECT mID FROM Rating R2
                        WHERE R1.stars < R2.stars and R1.mID = R2.mID)) as Rate
WHERE Movie.mID = Rate.mID and Rate.stars = Rating.stars
order by Movie.title;

This seems like a very tortured query, and it seems to me like I am missing some important concepts. Can someone help me refactor this query?
Finding max rating for a movie
sql;mysql
SELECT distinct Movie.title, Rate.stars

You should rarely use the DISTINCT keyword. In this particular case it's unnecessary and may do the wrong thing. What you want to do is to return the highest star rating for a given movie title:

SELECT Movie.title, MAX(Rating.stars)

There's the title, and we use the MAX keyword to make sure that it's the highest stars. More on when we can use MAX later.

FROM Movie, Rating,
     (SELECT * FROM Rating R1
      WHERE not exists (SELECT mID FROM Rating R2
                        WHERE R1.stars < R2.stars and R1.mID = R2.mID)) as Rate
WHERE Movie.mID = Rate.mID and Rate.stars = Rating.stars

We can make this simpler:

FROM Movie, Rating
WHERE Movie.mID = Rating.mID

No subselects required.

GROUP BY Movie.title
ORDER BY Movie.title;

The GROUP BY will allow us to use MAX. It says to return only one row per Movie.title value. The other columns need to be aggregated with grouping functions, like MAX. You already had the ORDER BY clause, and presumably it's correct.
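The refactored query from this answer can be checked end-to-end on an in-memory SQLite database (the schema is reduced to the columns the query needs, and the sample rows are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Movie  (mID INTEGER, title TEXT);
    CREATE TABLE Rating (rID INTEGER, mID INTEGER, stars INTEGER);
    INSERT INTO Movie  VALUES (101, 'Gone with the Wind'), (102, 'Star Wars');
    INSERT INTO Rating VALUES (201, 101, 2), (202, 101, 4), (203, 102, 5);
""")

# One row per title, highest stars for that title, sorted by title.
rows = con.execute("""
    SELECT Movie.title, MAX(Rating.stars)
    FROM Movie, Rating
    WHERE Movie.mID = Rating.mID
    GROUP BY Movie.title
    ORDER BY Movie.title
""").fetchall()

print(rows)  # [('Gone with the Wind', 4), ('Star Wars', 5)]
```

Note that a movie with no ratings drops out of the join automatically, which is exactly the "at least one rating" requirement in the exercise.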
_softwareengineering.234548
After reading about the HashLife algorithm, I found that it runs in O(log n). The Game of Life is also Turing Complete, so in theory we should be able to run any algorithm on a computer constructed in GoL.As a consequence of HashLife's time complexity, could algorithms run faster? e.g. if an algorithm takes 10 seconds to run on a pc, could it run faster in HashLife on that same pc?An example: An algorithm running takes a 1000 instructions to run. A certain computer can process 1 instruction per second. So that algorithm takes a 1000 seconds to run.Now, if we take that same algorithm and run it on the computer in GoL. It would, because of HashLife being O(log n) take 3 seconds? (assuming O(logn))I'm probably overlooking something, since this would be a very important discovery, but I still thought I'd ask about it here.
What are the consequences of Hash-Life running in O(log n)?
complexity;turing completeness
null
_codereview.162242
This library [GitHub] helps merge partial WEB API JSON payloads into existing prepopulated DTO objects. Let's say that JSON is represented by JObject from the JSON.NET package. A naive implementation will probably get us to the following code for an optional string property mapping:

var jToken = jObject["optionalScalar"];
if (jToken != null)
    dto.OptionalScalar = jToken.Value<string>();

where a required mapping would need to throw an exception:

var jToken = jObject["requiredScalar"];
if (jToken != null)
    dto.RequiredScalar = jToken.Value<string>();
else
    throw new MissingFieldException("requiredScalar");

Collection mapping is even more verbose, as we need to instantiate/find the necessary DTO for collection items and perform the same logic as above on their individual properties. The JMap library allows us to define this mapping declaratively and execute necessary requests to external data sources concurrently.

Given the following example JSON loaded into the JObject job:

{
  "id": 123,
  "title": "tester",
  "company": {
    "name": "Microsoft",
    "size": 50000,
    "industries": [ { "id": 1 }, { "id": 2 } ]
  },
  "types": [ "Full-Time", "Part-Time" ],
  "locations": [ { "city": "Vancouver", "country": "Canada" } ]
}

We could map it to the following DTO structure:

public class DtoJob
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DtoCompany Company { get; set; }
    public IList<string> Types { get; set; }
    public IList<DtoLocation> Locations { get; set; }
}

public class DtoCompany
{
    public string Name { get; set; }
    public int Size { get; set; }
    public IList<DtoIndustry> Industries { get; set; }
}

public class DtoLocation
{
    public string City { get; set; }
    public string Country { get; set; }
}

public interface IIndustryReader
{
    DtoIndustry ReadIndustry(int id);
    Task<DtoIndustry> ReadIndustryAsync(int id);
}

public class DtoIndustry
{
    ...
}

Using the following declaration:

await job.MapAsync()
    .RequiredAssert((int id) => id == dtoJob.Id)
    .Optional((string title) => dtoJob.Title)
    .Optional((JObject company) => dtoJob.Company, (company, dtoCompany) => company
        .Optional((string name) => dtoCompany.Name)
        .Optional((int size) => dtoCompany.Size)
        .Optional((JObject[] industries) => dtoCompany.Industries, industry => IndustryReader.ReadIndustryAsync(industry.Id())))
    .Optional((string[] types) => dtoJob.Types)
    .Optional((JObject[] locations) => dtoJob.Locations, (location, dtoLocation) => location
        .Required((string city) => dtoLocation.City)
        .Required((string country) => dtoLocation.Country));

Where lines like this:

.RequiredAssert((int id) => id == dtoJob.Id)

represent a necessary condition to check, while

.Optional((string title) => dtoJob.Title)

defines an automatic coercion between a string in JSON and the data type of the dtoJob.Title property, while

.Optional((JObject[] industries) => dtoCompany.Industries, industry => IndustryReader.ReadIndustryAsync(industry.Id()))

defines a conversion from a JObject array to a collection of DTO objects loaded concurrently from the database and assigned to the dtoCompany.Industries property, while

.Optional((JObject company) => dtoJob.Company, (company, dtoCompany) => company
    .Optional((string name) => dtoCompany.Name)
    .Optional((int size) => dtoCompany.Size)
    .Optional((JObject[] industries) => dtoCompany.Industries, industry => IndustryReader.ReadIndustryAsync(industry.Id())))

defines a merge of the JObject company field content into the dtoJob.Company object, creating one if missing.

As shown, there are the following permutations of mapping declarations:

Optional/Required
Coercion/Conversion/Merge

The JSON type for pattern matching could be:

a string, int, long, float, double, bool, DateTime;
their nullable counterparts;
JObject;
an array of the above.

What would you say about this way to declare mapping?
JObject Pattern Matching: Merging partial WEB API JSON payload
c#;json;fluent interface;json.net;fluent assertions
null
_reverseengineering.3067
I have used ILSpy to decompile a .NET EXE file. After the process completed, I copied the content of the XAML files from ILSpy one by one, saved them as Unicode files, and replaced the BAML files in the Visual Studio project. With the binary files (BAML) the program runs perfectly. With the newly replaced XAML files there are many compiler errors. Where is the error exactly? Note: the errors are related to XAML design; all references are correct.
Help with baml to xaml conversion?
decompilation
null
_codereview.10453
Not sure if this question will be considered off topic. If it is, I'll remove it, but: I hadn't seen this yet, so I wrote it and would like to know if this is a good approach to it. Would anyone care to offer improvements to it, or point me to an example of where someone else has already written it better?

function clwAjaxCall(path, method, data, asynch) {
    var xmlhttp;
    if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    }
    else { // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    if (asynch) {
        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                //alert(xmlhttp.responseText);
                //var newaction = xmlhttp.responseText;
                //alert('Action becomes ' + newaction);
                return xmlhttp.responseText;
            }
        }
    }
    if (method == 'GET') { path = path + "/?" + data; }
    xmlhttp.open(method, path, asynch);
    if (method == 'GET') { xmlhttp.send(); } else { xmlhttp.send(data); }
    if (!asynch) { return xmlhttp.responseText; }
}

I then called it like:

Just Testing
<script type="text/javascript" src="/mypath/js/clwAjaxCall.js"></script>
<script type="text/javascript">
    document.write("<br>More Testing");
    document.write(clwAjaxCall("http://www.mysite.com", 'GET', "var=val", false));
</script>

UPDATE

I don't know if it's any better to anyone else than it was before, but I like it. :-)

I have re-written it like this:

// I wrote this with help from StackOverflow and have tweaked it some since then.
// call it like this:
//
// var holder = {};
// holder.text = '';
// watch(holder, function() { // depends on watch.js. You might choose a different watch/observe method.
//     // do stuff with holder.text in here
// });
// var path = 'http://www.example.com/?key=value';
// AjaxCall(path, "get", null, true, holder);
function AjaxCall(path, method, data, asynch, holder) {
    // holder is expected to be an object. It should be watched with watch.js,
    // Object.observe or a similar method as they become available.
    var xmlhttp;
    if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    }
    else { // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    if (asynch) {
        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                holder.text = xmlhttp.responseText;
            }
        }
    }
    if (method == 'GET') { path = path + "/?" + data; }
    xmlhttp.open(method, path, asynch);
    if (method == 'GET') { xmlhttp.send(); } else { xmlhttp.send(data); }
    if (!asynch) { return xmlhttp.responseText; }
}
Generalized Ajax function
javascript;ajax
null
_webapps.101847
I have a 1080p HD video edited on a computer that I'd like to upload to Instagram. The issue is that on my first attempt, even though I selected landscape format instead of the square format, Instagram re-compressed the video and it looks terrible. How can I avoid compression when uploading a 1080p video to Instagram? (So far I found this post about the issue - I couldn't find any recommended video settings (dimensions/frame rate/bit rate/etc.) in the Instagram help.)
How to upload a high quality video to instagram?
instagram
You can upload within the below limits:

Size: maximum width 1080 pixels (any height)
Frame Rate: 29.96 frames per second
Codec: H.264 codec / MP4
Bit-rate: 3,500 kbps video bitrate
Audio: AAC audio codec at 44.1 kHz mono
Length: 60 seconds
Filesize: 15MB

Hope it helps
_unix.368469
Can we enable networking in single-user mode on Linux? If yes, then how?
Can we enable Networking in single user mode of Linux?
rhel;runlevel
null
_softwareengineering.115979
I watched a Google Tech Talk presentation on Unit Testing, given by Misko Hevery, and he said to avoid using the new keyword in business logic code.I wrote a program, and I did end up using the new keyword here and there, but they were mostly for instantiating objects that hold data (ie, they didn't have any functions or methods).I'm wondering, did I do something wrong when I used the new keyword for my program. And where can we break that 'rule'?
When you should and should not use the 'new' keyword?
testing;unit testing
This is more guidance than hard-and-fast rule.By using new in your production code, you are coupling your class with its collaborators. If someone wants to use some other collaborator, for example some kind of mock collaborator for unit testing, they can't because the collaborator is created in your business logic.Of course, someone needs to create these new objects, but this is often best left to one of two places: a dependency injection framework like Spring, or else in whichever class is instantiating your business logic class, injected through the constructor.Of course, you can take this too far. If you want to return a new ArrayList, then you are probably OK especially if this is going to be an immutable List.The main question you should be asking yourself is is the main responsibility of this bit of code to create objects of this type, or is this just an implementation detail I could reasonably move somewhere else?
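The guidance above can be made concrete with a minimal Java example (class and interface names are made up for illustration): the first version news up its collaborator inside the business logic and cannot be tested without it; the second receives the collaborator through its constructor, so a test can pass a stand-in.

```java
// The collaborator contract. A lambda can implement it in tests.
interface TaxCalculator { double taxFor(double net); }

class DefaultTaxCalculator implements TaxCalculator {
    public double taxFor(double net) { return net * 0.20; }
}

// Hard to test: 'new' inside the business logic couples us to one implementation.
class InvoiceServiceCoupled {
    private final TaxCalculator tax = new DefaultTaxCalculator();
    double total(double net) { return net + tax.taxFor(net); }
}

// Testable: the collaborator is injected; 'new' moves to the composition root.
class InvoiceService {
    private final TaxCalculator tax;
    InvoiceService(TaxCalculator tax) { this.tax = tax; }
    double total(double net) { return net + tax.taxFor(net); }
}

public class Demo {
    public static void main(String[] args) {
        InvoiceService prod = new InvoiceService(new DefaultTaxCalculator());
        InvoiceService test = new InvoiceService(net -> 0.0); // stand-in with no tax
        if (prod.total(100.0) != 120.0) throw new AssertionError();
        if (test.total(100.0) != 100.0) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note that `new` still appears in `main`: creating objects is fine at the composition root (or inside a DI container); the guidance is only about keeping it out of the logic you want to test.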
_unix.259685
I just bought a couple of Dell R710s on eBay for a home lab and want to install Debian as the main OS. I'm new to dealing with server administration aside from development stuff. I am wondering where to install the OS, since there is no internal drive but there is an internal USB. Is the internal USB the proper place to install the main OS? I'm not planning on doing much with these beyond some VMs for dev stuff and maybe camera storage. Any advice would be great. Thanks
Dell R710 OS install
linux;debian
The proper approach would be to install a couple of SATA drives using drive caddies, then install the OS using mirroring. VMs and camera images tend to be time-consuming to re-create, so you don't want a dead hard drive killing your entire environment. You might be able to get hardware mirroring working depending on your specific hardware, but if you can't, software mirroring on Linux is quite reliable. Info on Debian with software mirroring: https://wiki.debian.org/DebianInstaller/SoftwareRaidRoot One example of a 3.5" drive caddy: http://www.amazon.com/Drive-Caddy-Server-SASTu-Replacement/dp/B00524SIQ2 [$12] It is technically possible to install on USB, but it is not a good idea for performance and reliability reasons.
_codereview.104258
I've been given the following task:Given N rectangles with edges parallel to axis, calculate the area of the union of all rectangles. The input and output are files specified in program arguments. Input is represented by N lines with 4 numbers separated by spaces, defining 2 opposite vertices of the rectangle. Output file should contain 1 number - the resulting area of the rectangles' union.Additional constraints: \$1 \le N \le 100\$\$-10000 \le x1\$, \$y1\$, \$x2\$, \$y2 \le 10000\$Memory consumption < 16 MBProgram parameters should be validatedInput file format should be validated.Examples:Input:1 1 7 7Output:36Input:1 1 3 3 2 2 4 4Output:7My solution:public class Main { private List<Rectangle> rectangles = new ArrayList<>(); public static void main(String[] args) { if (args.length != 2) { throw new IllegalArgumentException(Invalid arguments number\nProgram usage: java Main input output); } Main computer = new Main(); long area = computer.computeAreaFromFile(args[0]); computer.writeAreaToFile(area, args[1]); } public long computeAreaFromFile(String inputFileName) { rectangles.clear(); String line; try (BufferedReader inputFileReader = new BufferedReader(new FileReader(inputFileName))) { long area = 0; while ((line = inputFileReader.readLine()) != null) { Rectangle rectangle = Rectangle.fromString(line); area += addRectangleArea(rectangle, false); } return area; } catch (FileNotFoundException e) { throw new IllegalArgumentException(Input file not found); } catch (IOException e) { throw new RuntimeException(e); } catch (NumberFormatException | IndexOutOfBoundsException e) { throw new IllegalArgumentException(Input file contains incorrect line); } } private int addRectangleArea(Rectangle newRectangle, boolean isIntersection) { int result = 0; boolean hasIntersections = false; for (Rectangle existingRectangle : rectangles) { if (!existingRectangle.contains(newRectangle)) { List<Rectangle> complements = existingRectangle.complementOf(newRectangle); if 
(complements.size() > 0) { hasIntersections = true; for (Rectangle complement : complements) { result += addRectangleArea(complement, true); } break; } } } if (!hasIntersections) { result += newRectangle.area(); } if (!isIntersection) { rectangles.add(newRectangle); } return result; } private void writeAreaToFile(long area, String outputFileName) { try (BufferedWriter writer = new BufferedWriter(new FileWriter(outputFileName))) { writer.write(String.valueOf(area)); } catch (IOException e) { throw new RuntimeException(Could not open file + outputFileName); } }}class Rectangle { public final int x1; public final int y1; public final int x2; public final int y2; public static Rectangle fromString(String input) throws NumberFormatException, IndexOutOfBoundsException { String[] splitInput = input.split( ); if (splitInput.length != 4) { throw new IndexOutOfBoundsException(); } return new Rectangle(Integer.valueOf(splitInput[0]), Integer.valueOf(splitInput[1]), Integer.valueOf(splitInput[2]), Integer.valueOf(splitInput[3])); } public Rectangle(int x1, int y1, int x2, int y2) { this.x1 = Math.min(x1, x2); this.y1 = Math.min(y1, y2); this.x2 = Math.max(x1, x2); this.y2 = Math.max(y1, y2); } /** * Finds a relative complement of the specified rectangle. * * @param rectangle rectangle to find a complement of. * @return {@link List} of the rectangles forming the resulting complement. 
*/ public List<Rectangle> complementOf(Rectangle rectangle) { List<Rectangle> intersections = new ArrayList<>(); if (rectangle.x2 > x1 && x2 > rectangle.x1 && rectangle.y2 > y1 && y2 > rectangle.y1) { if (rectangle.y1 <= y1) { intersections.add(new Rectangle(rectangle.x1, rectangle.y1, rectangle.x2, y1)); } if (y2 <= rectangle.y2) { intersections.add(new Rectangle(rectangle.x1, y2, rectangle.x2, rectangle.y2)); } if (rectangle.x1 <= x1) { intersections.add(new Rectangle(rectangle.x1, Math.max(y1, rectangle.y1), x1, Math.min(y2, rectangle.y2))); } if (x2 <= rectangle.x2) { intersections.add(new Rectangle(x2, Math.max(y1, rectangle.y1), rectangle.x2, Math.min(y2, rectangle.y2))); } } return intersections; } /** * Calculates area of this rectangle. * * @return area of this rectangle. */ public int area() { return Math.abs((x1 - x2) * (y1 - y2)); } /** * Checks if this rectangle contains the specified rectangle. * * @param rectangle rectangle to check for. * @return true if rectangle inside this, false otherwise. */ public boolean contains(Rectangle rectangle) { return x1 <= rectangle.x1 && rectangle.x2 <= x2 && y1 <= rectangle.y1 && rectangle.y2 <= y2; } @Override public boolean equals(Object o) { if (!(o instanceof Rectangle)) { return false; } Rectangle other = (Rectangle) o; return x1 == other.x1 && y1 == other.y1 && x2 == other.x2 && y2 == other.y2; } @Override public int hashCode() { int result = 17; result = 37 * result + x1; result = 37 * result + y1; result = 37 * result + x2; result = 37 * result + y2; return result; } @Override public String toString() { return String.format(Rectangle with x1: %s y1: %s x2: %s y2: %s, x1, y1, x2, y2); }}I have several questions/considerations: Is the provided solution correct?I assume the complexity of this algorithm is \$O(n^2)\$. Can it be improved?I've overridden toString(), equals() and hashCode() methods, although I've never called them (except toString() while debugging). 
Is it bad practice to do so? I feel that the Javadoc comments are lame. How can they be improved, and are they necessary at all? computeAreaFromFile() does two things: it reads the file content and performs the actual calculations. I think this method should be split, however I'm not sure how I could do this.
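For reference, the union area can also be computed by coordinate compression: split the plane along every rectangle edge and count each grid cell covered by at least one rectangle. The sketch below is not the poster's code (class and method names are mine), and it is O(n³) in the worst case, so it is a baseline rather than the asymptotic improvement asked about (sweep-line approaches do better):

```java
import java.util.TreeSet;

public class UnionArea {
    // Rectangles given as {x1, y1, x2, y2} with x1 < x2 and y1 < y2.
    static long unionArea(int[][] rects) {
        // Collect every distinct x and y edge coordinate, sorted.
        TreeSet<Integer> xsSet = new TreeSet<>(), ysSet = new TreeSet<>();
        for (int[] r : rects) {
            xsSet.add(r[0]); xsSet.add(r[2]);
            ysSet.add(r[1]); ysSet.add(r[3]);
        }
        Integer[] xs = xsSet.toArray(new Integer[0]);
        Integer[] ys = ysSet.toArray(new Integer[0]);
        long area = 0;
        // Each cell of the compressed grid is either fully inside or fully
        // outside every rectangle, so one containment test per cell suffices.
        for (int i = 0; i + 1 < xs.length; i++) {
            for (int j = 0; j + 1 < ys.length; j++) {
                for (int[] r : rects) {
                    if (r[0] <= xs[i] && xs[i + 1] <= r[2]
                            && r[1] <= ys[j] && ys[j + 1] <= r[3]) {
                        area += (long) (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j]);
                        break; // covered once is enough -- no double counting
                    }
                }
            }
        }
        return area;
    }

    public static void main(String[] args) {
        System.out.println(unionArea(new int[][]{{1, 1, 7, 7}})); // 36
        System.out.println(unionArea(new int[][]{{1, 1, 3, 3}, {2, 2, 4, 4}})); // 7
    }
}
```

Both sample cases from the task statement are reproduced in main.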
Calculate the area of rectangles union
java;performance;algorithm;computational geometry
null
_unix.267361
Lots of programming-oriented editors will colorize source code. Is there a command that will colorize source code for viewing in the terminal? I could open a file with emacs -nw (which opens in the terminal instead of popping up a new window), but I'm looking for something that works like less (or that works with less -R, which passes through color escape sequences in its input).
Syntax highlighting in the terminal
terminal;colors;syntax highlighting
With highlight on a terminal that supports the same colour escape sequences as xterm:

highlight -O xterm256 your-file | less -R

With ruby-rouge:

rougify your-file | less -R

With python-pygments:

pygmentize your-file | less -R

With GNU source-highlight:

source-highlight -f esc256 -i your-file | less -R

You can also use vim as a pager with the help of the macros/less.sh script shipped with vim (see :h less within vim for details). On my system:

sh /usr/share/vim/vim74/macros/less.sh your-file

Or you could use any of the syntax highlighters that support HTML output and use elinks or w3m as the pager (or elinks -dump -dump-color-mode 3 | less -R), like with GNU source-highlight:

source-highlight -o STDOUT -i your-file | elinks -dump -dump-color-mode 3 | less -R
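A small convenience wrapper combining the pygmentize approach with a plain fallback (the function name cless is made up; put it in your shell rc file):

```shell
# cless: page a file with syntax colors when pygmentize is available,
# falling back to plain output otherwise. -g asks pygmentize to guess
# the lexer from file contents. Pipes through `less -R` only when
# stdout is a terminal, so the function also behaves sanely in scripts.
cless() {
  { pygmentize -g "$1" 2>/dev/null || cat "$1"; } |
    if [ -t 1 ]; then less -R; else cat; fi
}
```

Usage is then simply cless your-file, whether or not a highlighter is installed.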
_softwareengineering.212316
I am new to nopCommerce and e-commerce in general, but I am involved in an e-commerce project. From my past experiences with RavenDB (which were mostly absolutely pleasant), and based on the needs of the business (fast changes with awkward business workflows), it seemed an appealing option to have RavenDB handle everything related to the database. I do not fully understand the design and architecture of nopCommerce, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away; for example, it brings the EF working model into other layers.

I have found another project, a nopCommerce fork, which used NuDB as its database. But it did not help, because NuDB still has the feel of an RDBMS and is not as different as RavenDB is.

Now, first: how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions?

Second: has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a 1 (~2) month time frame?

Thanks in advance;
How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?
.net;sql server;nosql;e commerce;ravendb
I've tried both MongoDB and RavenDB for NopCommerce's DB. However, as Matt said, it's not a good idea. Here are some reasons:

The shortest way is the repository pattern, which the RavenDB community considers an anti-pattern. It took me about 2 weeks.

The search function of Nop is optimized for SQL. Raven can do it better and more beautifully, but you must change many methods of Nop. All paging and sorting functions will be rewritten.

To change the data layer, you'll have to change half of the business layer, too!
_cs.14163
Say you have n bottles, each with a ratio of $(a_i : b_i : c_i)$ (that is, the fractions $a_i/(a_i+b_i+c_i)$, and similarly with $b_i$ and $c_i$ in the numerator).

Now you are given a target ratio of $(a : b : c)$. Use linear programming to find an efficient algorithm that decides whether the target ratio can be mixed from the bottles. If the answer is yes, output the parts of bottles that achieve it (e.g. 2 parts from one bottle, 1 part from another bottle).

It's not clear to me how to do this. One of my first thoughts was to define $z = \sum_i \alpha_i\,(a_i, b_i, c_i) - (a, b, c)$. This, I'm thinking, could be the objective function we minimize. I'm not sure if that works, or, if it does, where to go from there. Also, I'm not sure whether I can have the $\alpha_i$ in the thing I'm trying to minimize. What am I supposed to do with these $\alpha_i$?

I'm not sure what is meant by "algorithm" either. That is, even if I could set the problem up as a linear program, I'm not sure what the algorithm would be.
Linear Programming: algorithm to check if ratios can be combined with n bottles to equal a given ratio
linear programming
null
_unix.198462
I have a data file I need to edit. It is in the form:

-8.915602898150751e-05
-7.050591991128022e-05
-4.361255125222242e-05
2.309505585477205e-05
-2.223040239244275e-05
1.088544645124330e-01
1.000000000000000e-15
7.528375184423486e-06
2.558479420795495e-05
2.537280868441473e-04
-5.119189471594489e-05
6.455268837875294e-05
4.463628820267331e-01
1.000000000000000e-15

As you can see, the numbers have no leading spaces, and I would like to edit the file in a very specific manner (I will be using it as an input file for simulation work). I would like the file to look like:

 -1.0000000000000001e-001
  0.0000000000000000e+000
  0.0000000000000000e+000
  4.3052618410549812e+009
  0.0000000000000000e+000
  0.0000000000000000e+000
  2.4853118072193338e-015
  2.4106903033391415e-004
  4.3586744793222273e-005
  4.5561759893187341e-005
 -4.0315591956328645e+007
 -9.1758824977759705e+003
 -2.5181138417225957e+004
  2.4853118072193338e-015

I have developed an algorithm to do such an edit and tried it in Notepad++, but the program adds invisible characters to the file which make it invalid for my simulation. Here is the algorithm:

1. Find the string "-1." and replace it with " -1." (there is one space in front of the negative sign in the replacement).
2. Repeat step 1 for the numbers 2-9.
3. Find the string "1." and replace it with "  1." (there are two spaces in front of the 1 in the replacement).
4. Repeat step 3 for the numbers 2-9.
5. Find the string "-  1." and replace it with "-1." (there are two spaces between the negative sign and the 1 in the find string).
6. Repeat step 5 for the numbers 2-9.

I want to do this in a UNIX shell (I am using a MacBook terminal), as I believe this will not add invisible characters and corrupt my data. Any help, guys? Thanks in advance!!!!!
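The six replacement steps above boil down to "prefix negative numbers with one space and positive numbers with two, so the columns line up". Assuming one number per line, that can be done in a single sed pass (a sketch; in.dat and out.dat are placeholder file names, and -E extended regexes work with both BSD sed on macOS and GNU sed):

```shell
# First rule: a line starting with "-" gets one leading space.
# Second rule: a line starting with a digit gets two leading spaces.
# The rules cannot both fire, because the first inserts a leading space.
sed -E 's/^-/ -/; s/^([0-9])/  \1/' in.dat > out.dat
```

Unlike repeated editor find-and-replace, this touches only the start of each line, so digits inside a number are never re-matched and no cleanup pass (steps 5-6) is needed.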
Search and replacing strings in a numerical data file
awk;grep;search
null
_codereview.140339
In a very specific application I have, I needed the ability to easily convert between different data sizes. I.e. when I give an input of 1,048,576KiB, I needed it to say 1GiB, etc.So, I built a struct for it.It's pretty robust, includes operations for subtraction, addition, multiplication and division, == and !=, IsSame etc.I'd like to think it might be useful for others as well.First bit is the struct:public struct DataSize{ public ulong SizeInBytes { get; } public SizeScale Scale { get; } public double Size => GetSize(Scale); public DataSize(ulong sizeInBytes) { Scale = SizeScale.Bytes; SizeInBytes = sizeInBytes; } public DataSize(ulong sizeInBytes, SizeScale scale) { Scale = scale; SizeInBytes = sizeInBytes; } public DataSize(double size, SizeScale scale) { Scale = scale; if (scale == SizeScale.Bits) { SizeInBytes = (uint)(size / 8); return; } if (((int)scale & 0x03) == (int)SizeScale.Bytes) { SizeInBytes = (uint)(size * Math.Pow(10, 3 * (((int)scale & 0xFF00) >> 8))); return; } SizeInBytes = (uint)(size * Math.Pow(2, 10 * (((int)scale & 0xFF00) >> 8))); } public double GetSize(SizeScale scale) { if (scale == SizeScale.Bits) { return SizeInBytes * 8.0; } if (((int)scale & 0x03) == (int)SizeScale.Bytes) { return SizeInBytes / Math.Pow(10, 3 * (((int)scale & 0xFF00) >> 8)); } return SizeInBytes / Math.Pow(2, 10 * (((int)scale & 0xFF00) >> 8)); } /// <summary> /// Returns a <see cref=DataSize/> that is the highest value which will have a non-zero whole-number <see cref=Size/> component. /// </summary> /// <param name=scaleType>When set to <see cref=SizeScale.Bytes/> the result will be a <code>B</code> type, when set to <see cref=SizeScale.Bits/> the result will be a <code>iB</code> type. 
If set to <see cref=SizeScale.None/> the same base unit as the source value will be used.</param> /// <returns>A <see cref=DataSize/> object.</returns> public DataSize GetLargestWholeSize(SizeScale scaleType = SizeScale.None) { var limit = 1000ul; if (scaleType == SizeScale.None) { scaleType = (SizeScale)((int)Scale & 0x00FF); } if (scaleType == SizeScale.Bits) { limit = 1024ul; } var iterations = 0; var currSize = (double)SizeInBytes; while (currSize >= limit) { currSize /= limit; iterations++; } return new DataSize(currSize, (SizeScale)((iterations << 8) | ((int)scaleType & 0x00FF))); } /// <summary> /// Returns a <see cref=DataSize/> that is the smallest value which will have a zero whole-number <see cref=Size/> component. /// </summary> /// <param name=scaleType>When set to <see cref=SizeScale.Bytes/> the result will be a <code>B</code> type, when set to <see cref=SizeScale.Bits/> the result will be a <code>iB</code> type. If set to <see cref=SizeScale.None/> the same base unit as the source value will be used.</param> /// <returns>A <see cref=DataSize/> object.</returns> public DataSize GetSmallestPartialSize(SizeScale scaleType = SizeScale.None) { var limit = 1000ul; if (scaleType == SizeScale.None) { scaleType = (SizeScale)((int)Scale & 0x00FF); } if (scaleType == SizeScale.Bits) { limit = 1024ul; } var iterations = 0; var currSize = (double)SizeInBytes; while (currSize >= limit) { currSize /= limit; iterations++; } iterations++; return new DataSize(currSize, (SizeScale)((iterations << 8) | ((int)scaleType & 0x00FF))); } public override bool Equals(object obj) => obj is DataSize && (DataSize)obj == this; public override int GetHashCode() => Size.GetHashCode(); public override string ToString() => ${Size} {Scale.Abbreviation()}; public string ToString(string numberFormat) => ${Size.ToString(numberFormat)} {Scale.Abbreviation()}; public string ToString(SizeScale scale) => ${GetSize(scale)} {scale.Abbreviation()}; public string ToString(string numberFormat, 
SizeScale scale) => ${GetSize(scale).ToString(numberFormat)} {scale.Abbreviation()}; public bool IsSame(DataSize comparison) => SizeInBytes == comparison.SizeInBytes && Scale == comparison.Scale; public static bool IsSame(DataSize left, DataSize right) => left.SizeInBytes == right.SizeInBytes && left.Scale == right.Scale; public static bool operator ==(DataSize left, DataSize right) => left.SizeInBytes == right.SizeInBytes; public static bool operator !=(DataSize left, DataSize right) => left.SizeInBytes != right.SizeInBytes; public static DataSize operator +(DataSize left, DataSize right) => new DataSize(left.SizeInBytes + right.SizeInBytes, left.Scale); public static DataSize operator -(DataSize left, DataSize right) => new DataSize(left.SizeInBytes - right.SizeInBytes, left.Scale); public static DataSize operator *(DataSize left, ulong right) => new DataSize(left.SizeInBytes * right, left.Scale); public static DataSize operator /(DataSize left, ulong right) => new DataSize(left.SizeInBytes / right, left.Scale); public static DataSize operator *(DataSize left, double right) => new DataSize((ulong)(left.SizeInBytes * right), left.Scale); public static DataSize operator /(DataSize left, double right) => new DataSize((ulong)(left.SizeInBytes / right), left.Scale);}Next I have a SizeScale enum:public static class SizeScaleExtensions{ public static string Abbreviation(this SizeScale scale) { if (scale == SizeScale.None) { return null; } if (scale == SizeScale.Bytes) { return B; } if (scale == SizeScale.Bits) { return b; } var firstLetter = scale.ToString()[0] + ; if (((int)scale & 0x00FF) == (int)SizeScale.Bits) { return firstLetter + iB; } return firstLetter + B; }}public enum SizeScale : int{ None = 0x0000, Bytes = 0x0001, Bits = 0x0002, Kilobytes = 0x0101, Kibibytes = 0x0102, Megabytes = 0x0201, Mebibytes = 0x0202, Gigabytes = 0x0301, Gibibytes = 0x0302, Terabytes = 0x0401, Tibibytes = 0x0402, Petabytes = 0x0501, Pibibytes = 0x0502, Exabytes = 0x0601, Exbibytes = 
0x0602, Zettabyts = 0x0701, Zebibytes = 0x0702, Yottabytes = 0x0801, Yobibytes = 0x0802,}Both the extensions and that enum declaration are in the same file, which means that the extension method is easily available.Lastly, I have some tests (I know I need a lot more):[TestClass]public class DataSizeTests{ [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1_SizeScale_Bits() { var expected = 8.0; var input = 1u; var actual = new DataSize(input).GetSize(SizeScale.Bits); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1_SizeScale_Bytes() { var expected = 1.0; var input = 1u; var actual = new DataSize(input).GetSize(SizeScale.Bytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1000_SizeScale_Bytes() { var expected = 1000.0; var input = 1000u; var actual = new DataSize(input).GetSize(SizeScale.Bytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1024_SizeScale_Bytes() { var expected = 1024.0; var input = 1024u; var actual = new DataSize(input).GetSize(SizeScale.Bytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1000_SizeScale_Kilobytes() { var expected = 1.0; var input = 1000u; var actual = new DataSize(input).GetSize(SizeScale.Kilobytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1024_SizeScale_Kilobytes() { var expected = 1.024; var input = 1024u; var actual = new DataSize(input).GetSize(SizeScale.Kilobytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1000_SizeScale_Kibibytes() { var expected = 0.9765625; var input = 1000u; var actual = new DataSize(input).GetSize(SizeScale.Kibibytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1024_SizeScale_Kibibytes() { var 
expected = 1.0; var input = 1024u; var actual = new DataSize(input).GetSize(SizeScale.Kibibytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1000000000_SizeScale_Gigabytes() { var expected = 1.0; var input = 1000000000u; var actual = new DataSize(input).GetSize(SizeScale.Gigabytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1073741824_SizeScale_Gigabytes() { var expected = 1.073741824; var input = 1073741824ul; var actual = new DataSize(input).GetSize(SizeScale.Gigabytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1000000000_SizeScale_Gibibytes() { var expected = 0.931322574615478515625; var input = 1000000000u; var actual = new DataSize(input).GetSize(SizeScale.Gibibytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetSize_1073741824_SizeScale_Gibibytes() { var expected = 1.0; var input = 1073741824ul; var actual = new DataSize(input).GetSize(SizeScale.Gibibytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void Construct_8_SizeScale_Bits() { var expected = new DataSize(1u); var input = 8u; var actual = new DataSize((double)input, SizeScale.Bits); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void Construct_1_SizeScale_Bytes() { var expected = new DataSize(1u); var input = 1u; var actual = new DataSize(input, SizeScale.Bytes); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void GetLargestWholeSize_SizeScale_Bits_1024_SizeScale_Kibibytes() { var expected = new DataSize(1.0, SizeScale.Mebibytes); var input = 1024u; var actual = new DataSize(input, SizeScale.Kibibytes).GetLargestWholeSize(SizeScale.Bits); Assert.AreEqual(expected.Size, actual.Size); } [TestMethod, TestCategory(Data Size Tests)] public void 
GetLargestWholeSize_SizeScale_Bytes_1000_SizeScale_Kilobytes() { var expected = new DataSize(1.0, SizeScale.Megabytes); var input = 1000u; var actual = new DataSize(input, SizeScale.Kilobytes).GetLargestWholeSize(SizeScale.Bytes); Assert.AreEqual(expected.Size, actual.Size); } [TestMethod, TestCategory(Data Size Tests)] public void Subtract_2_SizeScale_Bytes_1_SizeScale_Bytes() { var expected = new DataSize(1u, SizeScale.Bytes); var initial = new DataSize(2u, SizeScale.Bytes); var subtract = new DataSize(1u, SizeScale.Bytes); var actual = initial - subtract; Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Data Size Tests)] public void Add_1_SizeScale_Bytes_1_SizeScale_Bytes() { var expected = new DataSize(2u, SizeScale.Bytes); var initial = new DataSize(1u, SizeScale.Bytes); var add = new DataSize(1u, SizeScale.Bytes); var actual = initial + add; Assert.AreEqual(expected, actual); }}Here are the tests for SizeScale:[TestClass]public class SizeScaleTests{ [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_None() { var input = SizeScale.None; var actual = input.Abbreviation(); Assert.IsNull(actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Bytes() { var expected = B; var input = SizeScale.Bytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Bits() { var expected = b; var input = SizeScale.Bits; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Kilobytes() { var expected = KB; var input = SizeScale.Kilobytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Kibibytes() { var expected = KiB; var input = SizeScale.Kibibytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size 
Scale Tests)] public void Abbreviation_Megabytes() { var expected = MB; var input = SizeScale.Megabytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Mebibytes() { var expected = MiB; var input = SizeScale.Mebibytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Gigabytes() { var expected = GB; var input = SizeScale.Gigabytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Gibibytes() { var expected = GiB; var input = SizeScale.Gibibytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Terabytes() { var expected = TB; var input = SizeScale.Terabytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); } [TestMethod, TestCategory(Size Scale Tests)] public void Abbreviation_Tibibytes() { var expected = TiB; var input = SizeScale.Tibibytes; var actual = input.Abbreviation(); Assert.AreEqual(expected, actual); }}All the tests pass at the moment.
Representing and handling Data Sizes
c#;.net;unit testing
I would define API a little bit different. Lets go with couple types: SizeUnit and DataSize, so they can be used as:using static SizeUnit;using static Console;class Program{ static void Main(string[] args) { double one = 1.0; DataSize size = one.In(Kilobyte); WriteLine(size); // 1 kB SizeUnit unit = Byte; DataSize size2 = size.To(unit); WriteLine(size2); // 1024 B WriteLine(one.In(Byte) + one.In(Kilobyte)); // 1025 B WriteLine(one.In(Bit) + one.In(Byte)); // 9 b }}Where library code (a little bit simplified just to demonstrate api):public class SizeUnit{ public static readonly SizeUnit Bit = new SizeUnit(b, 0.125); public static readonly SizeUnit Byte = new SizeUnit(B, 1); public static readonly SizeUnit Kilobyte = new SizeUnit(kB, 1024); // etc... string Symbol { get; } double Value { get; } SizeUnit(string symbol, double value) { Symbol = symbol; Value = value; } public override string ToString() => Symbol; internal double ToSize(double bytes) => bytes / Value; internal double ToBytes(double size) => size * Value;}And:public struct DataSize{ public static DataSize operator +(DataSize left, DataSize right) => new DataSize(left.Bytes + right.Bytes).To(left.Unit); public static DataSize operator -(DataSize left, DataSize right) => new DataSize(left.Bytes - right.Bytes).To(left.Unit); public static DataSize operator *(DataSize left, ulong right) => new DataSize(left.Bytes * right).To(left.Unit); // etc... DataSize(double bytes) : this(bytes, Byte) { } public DataSize(double bytes, SizeUnit unit) { Bytes = bytes; Unit = unit; } public override string ToString() => ${Value} {Unit}; public double Bytes { get; } public double Value => Unit.ToSize(Bytes); public SizeUnit Unit { get; } public DataSize To(SizeUnit unit) => new DataSize(Bytes, unit);}And:public static class Conversions{ public static DataSize In(this double value, SizeUnit unit) => new DataSize(unit.ToBytes(value), unit);}
_unix.136854
I am trying to understand Linux processes. I am confused about the terms pid_max, ulimit -u and thread_max. What exactly is the difference between these terms? Can someone clarify the differences?
Understanding the difference between pid_max, ulimit -u and thread_max
process;linux kernel;ulimit;pid;thread
Let us understand the difference between a process and a thread. As per this link:

The typical difference is that threads (of the same process) run in a shared memory space, while processes run in separate memory spaces.

Now, we have the pid_max parameter, which can be determined as below:

cat /proc/sys/kernel/pid_max

The above command returns 32,768, which means I can run 32,768 processes simultaneously on my system, each in a separate memory space.

Next, we have the threads-max parameter, which can be determined as below:

cat /proc/sys/kernel/threads-max

The above command returns 126406, which means I can have 126406 threads sharing memory spaces.

Now, let us take the third parameter, ulimit -u, which gives the total number of processes a single user can have at any one time. The command returns 63203, meaning that across all the processes a user has created at a point in time, the user can have 63203 processes running.

Hypothetical case

So, assuming 2 processes are being run simultaneously by 2 users and each process is consuming resources heavily, both processes will effectively use up the 63203 per-user process limit. If that is the case, the 2 users together will have effectively used up the entire 126406 threads-max allowance.

Now, to determine how many processes a user may run at any point in time, look at the file /etc/security/limits.conf. There are basically 2 kinds of settings in this file, as explained over here:

A soft limit is like a warning and a hard limit is a real max limit. For example, the following will prevent anyone in the student group from having more than 50 processes, and a warning will be given at 30 processes:

@student hard nproc 50
@student soft nproc 30

Hard limits are maintained by the kernel while the soft limits are enforced by the shell.
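On a running system you can read all three values side by side (the paths are the ones discussed above; the numbers will of course differ per machine):

```shell
# Print the three limits discussed above in one place.
printf 'pid_max:     %s\n' "$(cat /proc/sys/kernel/pid_max)"
printf 'threads-max: %s\n' "$(cat /proc/sys/kernel/threads-max)"
printf 'ulimit -u:   %s\n' "$(ulimit -u)"
```

Note that ulimit -u is a per-user/per-shell limit (and may report "unlimited"), while the two /proc values are system-wide kernel limits.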
_unix.329389
In my GnuCash data I operate with several currencies, including EUR, DKK and SGD. Now, interestingly, in all my reports the data from the DKK accounts are included and (as intended) converted into EUR. However, SGD accounts are seemingly ignored. If I change the reporting currency, EUR and DKK are ignored, but SGD is included. The only interesting warning I see is:

gnc:get-commodity-totalavg-prices: Sorry, currency exchange not yet implemented

How do I proceed towards getting a report that integrates all three currencies?
gnucash reports seem to ignore a single currency entirely, other currencies not: how to include the currency?
finance;gnucash
null
_codereview.99150
In my question to learn F#, I've decided to get one step closer to creating a programming language, and implement this simple Reverse Polish Notation interpreter of sorts.It does not allow for parentheses in input ( ), and only accepts expressions containing the following valid tokens:0 1 2 3 4 5 6 7 8 9 + - * /Here's a small list of sample inputs and outputs for reference:2 2 + -> 410 10 + 5 * -> 1002 2 * 4 + 2 * -> 16I have a few concerns here:Is this written in a proper functional way?Is there any way to shorten evaluate_expr?Are there any glaring issues that I missed?open Systemopen System.Collections.Genericopen System.Text.RegularExpressions/// <summary>/// Evaluate an expression pair, like '2 2 +'./// <summary>/// <param name=operand>The operand to use.</param>let evaluate_expr_pair (a: string) (b: string) (operand: string) = match operand with | + -> (Int64.Parse(a) + Int64.Parse(b)).ToString() | - -> (Int64.Parse(a) - Int64.Parse(b)).ToString() | * -> (Int64.Parse(a) * Int64.Parse(b)).ToString() | / -> (Int64.Parse(a) / Int64.Parse(b)).ToString() | _ -> /// <summary>/// Evaluate a tokenized expression, such as '[| 2; 2; + |]',/// and return a result, in this case, '2'./// </summary>/// <param name=expr>The expression to evaluate.</param>let evaluate_expr (expr: string[]) = let program_stack = new Stack<string>() for token in expr do program_stack.Push(token) match token with | + -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | - -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | * -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) | / -> let operand = program_stack.Pop() let b = program_stack.Pop() let a = program_stack.Pop() program_stack.Push(evaluate_expr_pair a b operand) 
| _ -> () program_stack.Pop()/// <summary>/// Tokenize an input expression, such as '2 2 + 5 *'./// </summary>/// <param name=expr>The expression to tokenize.</param>let tokenize_expr (expr: string) = let split_pattern = \s*(\+|\-|\*|\/)\s*|\s+ let split_regex = new Regex(split_pattern) let result = split_regex.Split(expr) result.[0..result.Length - 2]/// <summary>/// Check all the tokens of an expression to make sure/// that they are all legal./// </summary>/// <param name=expr>The expression to check.</param>let check_expr (expr: string) = let valid_token_pattern = [\s0-9\+\-\*\/] let valid_token_regex = new Regex(valid_token_pattern) for token in expr do if valid_token_regex.Match(token.ToString()).Success then () else Console.WriteLine(String.Format(Invalid token \{0}\., token))[<EntryPoint>]let main argv = while true do Console.Write(: ) let input_expr = Console.ReadLine() input_expr |> check_expr let tokenized_expr = input_expr |> tokenize_expr let result = tokenized_expr |> evaluate_expr Console.Write(\n ) Console.Write(result) Console.Write(\n\n) 0
Reverse Polish Notation in F#
f#;calculator;math expression eval;interpreter
evaluate_expr_pair

Your calculator performs integer division, which is a bit surprising. To fix it to do floating-point or decimal arithmetic, though, you would have to do a search-and-replace of Int64.Parse, which has been written an unreasonable number of times in evaluate_expr_pair.

evaluate_expr_pair also does a lot of Parse and ToString round trips. Not only is that inefficient, it could also cost you some loss of precision if you were doing floating-point arithmetic.

operand should really be called operator. (The operands are a and b.) Ignoring unrecognized operators is a bad idea, even if you have validated the input in check_expr. If not doing exception handling, I'd rather leave out the default case and let it crash. I'd also prefer to detect errors while attempting to evaluate the expression than pre-validate, because there are some errors that you can't reasonably detect without evaluating the expression (such as stack underflow).

evaluate_expr

Here, with your use of the program_stack, you are venturing outside the realm of functional programming, because the .Push() and .Pop() operations cause mutation. A more FP approach would be to use a List as an immutable stack.

Instead of blindly pushing tokens, some of which are operators, onto a stack of strings, you should only put numeric data in the stack.

You would be better off treating the operators as functions that manipulate the entire stack directly, instead of as functions that take two operands and return one result. Otherwise, you would have to implement an operation like swap as a special case, and distinguish between binary operations and unary operations such as negate and e^x.

tokenize_expr

The simple strategy would be to use new Regex(@"\s+") as the delimiter pattern. You're treating the operators as captured delimiters, presumably to make spaces optional in an expression like 1 2+, and discarding the last element of the resulting array. That works as long as the expression ends with an operator. If the expression is just 5, for example, you'll discard a number.

The regex could be written more succinctly using a [-+*/] character class. I wouldn't bother naming split_pattern and split_regex as variables.

main

printf is preferred over System.Console.Write.

If you're going to set up a |> pipeline, don't interrupt it by defining let tokenized_expr that just gets in the way.

Summary

RPN calculators are much simpler when the operators work directly on the stack, rather than having a controller feed the operands to them. In fact, every function except main has traces of the operator definitions.

Here's an implementation that I came up with:

    open System
    open System.Text.RegularExpressions

    exception InputError of string

    let tokenize (expr: string) =
        expr
        |> (new Regex(@"\s+|\s*([-+*/])\s*")).Split
        |> Array.toList
        |> List.filter (fun s -> s.Length > 0)

    let perform (op: string) (stack: decimal list) =
        match (op, stack) with
        | ("+", a :: b :: cs) -> (b + a) :: cs
        | ("-", a :: b :: cs) -> (b - a) :: cs
        | ("*", a :: b :: cs) -> (b * a) :: cs
        | ("/", a :: b :: cs) -> (b / a) :: cs
        | ("swap", a :: b :: cs) -> b :: a :: cs
        | ("drop", a :: cs) -> cs
        | ("roll", a :: cs) -> cs @ [a]
        | (n, cs) ->
            try
                decimal n :: cs
            with
            | :? System.FormatException -> raise (InputError(n))

    let evaluate (expr: string list) =
        let rec evaluate' (expr: string list) (stack: decimal list) =
            match expr with
            | [] -> stack
            | op :: exp -> evaluate' exp (perform op stack)
        evaluate' expr []

    [<EntryPoint>]
    let main argv =
        while true do
            printf ": "
            try
                match Console.ReadLine() |> tokenize |> evaluate with
                | num :: [] -> num |> printfn "%g"   // Single answer
                | stack -> stack |> printfn "%O"     // Junk left on stack
            with
            | InputError(str) -> printfn "Bad input: %s" str
        0
_unix.323743
I need to find a very efficient way of moving a file of matching -mtime from one directory tree to another directory, maintaining the same subdirectory path where it doesn't exist yet.eg. move /dirA/subdir1/subdir2/filename to /dirB/subdir1/subdir2/filenamewhere subdir1/subdir2 may or may not yet exist under dirB/ at time of move.And efficient meaning completing this on a tree of several million files before the next ice age (preferably sub-24 hrs).Rsync comes to mind but some say it's not all that efficient for such matching single-file calls.If this is in the same journaled filesystem, is a file move just a manipulation of filesystem catalogue metadata and not actual block re-writes, thus being that much more efficient?
find matching file and change dirname path
command line;files;find;move
null
_unix.323422
I was on Windows 10 for a while, decided to move to openSUSE, I thought I had my HDD, I installed openSUSE, everything is working fine but when I start my PC the only way to load my OS is to spam F9(choosing boot option) and then picking the openSUSE os.My default boot option is called operational system OS, I think and my wanted boot is openSUSE secure boot, when I don't spam F9->openSUSE my PC tries to get booted to Windows 10 and I jump to a bright blue screen.How can I completely remove the Windows and never see it again ?
Removing the Windows plague from boot options on openSUSE
boot;windows
null
_unix.38313
I have a small script that loops through all files of a folder and executes a (usually long-lasting) command. Basically it's

    for file in ./folder/*; do
        ./bin/myProgram $file > ./done/$file
    done

(Please ignore syntax errors; it's just pseudocode.)

I now wanted to run this script twice at the same time. Obviously, the execution is unnecessary if ./done/$file exists. So I changed the script to

    for file in ./folder/*; do
        [ -f ./done/$file ] || ./bin/myProgram $file > ./done/$file
    done

So basically the question is:

Is it possible that both scripts (or in general more than one script) are actually at the same point and check for the existence of the "done" file, which fails, so the command runs twice? It would be just perfect, but I highly doubt it. This would be too easy :D

If it can happen that they process the same file, is it possible to somehow synchronize the scripts?
Parallel execution of a program on multiple files
shell script;terminal;scripting;parallel
This is possible and does occur in reality. Use a lock file to avoid this situation. An example, from said page:

    if mkdir /var/lock/mylock; then
        echo "Locking succeeded" >&2
    else
        echo "Lock failed - exit" >&2
        exit 1
    fi

    # ... program code ...
    rmdir /var/lock/mylock
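The same atomic-mkdir trick works from other languages, too. Here is a minimal Python sketch of the idea; the lock directory path and function names are illustrative assumptions:

```python
# Minimal sketch of mkdir-based locking in Python. mkdir is atomic, so
# exactly one of several concurrent processes can create the directory.
import os
import tempfile

LOCKDIR = os.path.join(tempfile.gettempdir(), "mylock")  # assumed path

def try_lock():
    """Return True if we acquired the lock, False if another process holds it."""
    try:
        os.mkdir(LOCKDIR)
        return True
    except FileExistsError:
        return False

def unlock():
    os.rmdir(LOCKDIR)

if try_lock():
    try:
        pass  # ... process the file ...
    finally:
        unlock()
else:
    print("Lock failed - another process is working on it")
```

Like the shell version, this leaves a stale lock behind if the process is killed between mkdir and rmdir, so a real script may want to record its PID inside the lock directory and clean up on startup.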
_cstheory.17082
Assume we operate in a finite field. We are given a large fixed polynomial p(x) (of, say, degree 1000) over this field. This polynomial is known beforehand and we are allowed to do computation using a lot of resources in the initial phase. These results may be stored in reasonably small look-up tables.At the end of the initial phase, we will be given a small unknown polynomial q(x) (of, say, degree 5 or less). Is there a fast way to compute p(x) mod q(x) given that we are allowed to do some complicated calculations in the initial phase? One obvious way is to calculate p(x) mod q(x) for all possible values of q(x). Is there a better way to do this?
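For concreteness, here is a brute-force sketch of that "obvious" table approach over a small prime field. The field size GF(7), the stand-in polynomial, and all names are illustrative assumptions, not part of the actual problem (where p(x) has degree ~1000 and the field may differ):

```python
# Brute-force lookup table: precompute p(x) mod q(x) for every small q(x).
# Polynomials are coefficient lists, lowest degree first, over GF(P).
from itertools import product

P = 7  # assumed small prime field, for illustration

def poly_mod(a, q, p=P):
    """Remainder of a(x) divided by q(x) via long division over GF(p)."""
    a = list(a)
    dq = len(q) - 1
    inv_lead = pow(q[-1], p - 2, p)       # inverse of q's leading coefficient (p prime)
    while True:
        while a and a[-1] % p == 0:
            a.pop()                        # drop leading zeros so the degree is honest
        if len(a) - 1 < dq:
            break
        factor = a[-1] * inv_lead % p      # cancel a's current leading term
        shift = len(a) - 1 - dq
        for i, c in enumerate(q):
            a[shift + i] = (a[shift + i] - factor * c) % p
    return a if a else [0]

def all_polys(max_deg, p=P):
    """All polynomials of degree <= max_deg with nonzero leading coefficient."""
    for d in range(max_deg + 1):
        for coeffs in product(range(p), repeat=d):
            for lead in range(1, p):
                yield list(coeffs) + [lead]

big_p = [3, 0, 1, 5, 2]  # stand-in for the fixed degree-1000 polynomial
table = {tuple(q): tuple(poly_mod(big_p, q)) for q in all_polys(2)}
```

Even for degree <= 5 the table has on the order of |F|^6 entries, which is exactly why one hopes for something smarter than this precomputation.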
Find the remainder of a large fixed polynomial when divided by a small unknown polynomial
cc.complexity theory;ds.algorithms;algebra;algebraic complexity;polynomials
null
_softwareengineering.37532
My contract has just ended and I'm wondering what possible jobs I might want to look at next.I've worked in banking and insurance industry for all my career (including one Fortune 500 company) and in my experience banking is the slowest (and most boring) industry to work for due to their strict business practices (which is fair enough). The upside is that they pay well.My questions are:What are the best and worst industries for developers to work in? That is, in the industries you have worked in, what was good and bad from a developer perspective (money, work, culture, benefits, colleagues, etc.)? How does working as a consultant affect your opinion of an industry?Seeing that I've mentioned boredom, which industry supports fast growth?
Best industry to work for as a developer
freelancing
I'd look at a software house or specialist IT consultancy. These are organisations where you as a developer are key to what they do - you're not a cost centre or a necessary evil, they exist because of you and have no business without you. As such they tend to have cultures and processes built with developers at their core and I suspect you'll find that when it comes to your working life, that's worth a lot on a day to day basis. That's not to say you'll always have the best kit (banks usually have better because they have more money), but they are the ones who are most likely to have standards, values and ways of working that most closely mirror what works well for the average developer (rather than, say, the average banker, lawyer, accountant or whoever). Plus you will tend to have management who have some experience of technology and get it more than average. Two caveats I'd add:1) There are sectors that will pay better - particularly finance. Only you can weigh up that against the work and the environment.2) Consultancies can be pretty demanding in terms of hours and geographical flexibility. Again, you need to work out whether that's something that matters (if you're young then the travel might be appealing) or not.
_softwareengineering.146075
The challenge proposed to me was to create a widget to apply in other sites that makes a website compliant with the cookie law[1].

Can I do this without changing server code?

I mean, if there's code on the server side that writes an affiliate cookie to the response and my JavaScript widget deletes it afterwards on the window.load event: will the site still be cookie law compliant?

Then come the Google Analytics and share button cookies. How would I stop those scripts and iframes from being executed in JavaScript?

[1] The Information Commissioner's Office (ICO): New ICO Cookie Law
If I drop cookies with JavaScript will it still be compliant with the EU ICO Cookie Law?
javascript;cookies
Your solution would probably end up being treated as malwareFrom your description, it appears that you want to create a JavaScript library that a website can include on their pages that will guarantee their compliance with the cookie law.Let's side-step the technical issues surrounding the actual implementation of this law when it comes to the location of the user, the client (remote session anyone?), the server and the owner of the web application running on the server. And, let's constrain further and only consider the UK guidance offered around the EU directive.From the Information Commissioner's Office:Cookies or similar devices must not be used unless the subscriber or user of the relevant terminal equipment:(a) is provided with clear and comprehensive information about the purposes of the storage of, or access to, that information; and(b) has given his or her consent.This implies that your widget will have to act as the clearing house for this user consent. If you do not have control over the server code then your software will have to block the cookies emanating from the server until such consent is obtained. This means that your software will be interfering with third-party libraries (your Google Analytics and Facebook Likes etc). Such interference, no matter how well-meant, is very likely to degrade the user experience and be looked upon extremely unfavourably by the owners of the third-party libraries. Thus it will be treated as malware. I would think again before going down this road.
_softwareengineering.252178
Many VMs execute a language of binary form, known as 'bytecode', which is assembled down from a human-readable 'assembly' language.

For example the assembly instructions

    push 1
    push 2
    add

are translated (I think) to a series of ones and zeroes, which is then executed by the VM.

Why? Why don't VMs, and the JVM as an example, execute the assembly instructions directly? They don't have the limitation of physical computers that can only handle ones and zeroes. The JVM can very well take textual instructions such as push 1 push 2 and execute them as they are. Why the additional step of compilation?
Why do VMs not execute the assembly directly?
virtual machine
Here are a couple of reasons to think about:

Using human-readable assembly language would waste space on disk and in memory. That has an impact on caching, and therefore on performance. In your example the instruction 'push' takes up four bytes. Why not compress the program by using one-byte tokens for all instructions instead of the human-readable strings?

It wastes cycles on the processor. Your VM probably has at least two instruction mnemonics that start with 'p'. In order for your VM to figure out whether an instruction is 'push' or 'pop' it has to compare at least two bytes. It's much more efficient if each instruction can be uniquely identified by looking at a single byte.

The argument to your instructions is a string representing a number. The string has to be converted to a binary format appropriate for the underlying CPU before it can be used in arithmetic. That conversion will take dozens of instructions all by itself. Why do that every time the program is run? It's much more efficient to do it in a one-time pass when the byte code is created.
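To make the first two points concrete, here is a small illustrative sketch (not how the JVM actually encodes anything; the opcode values are made up): the same three-instruction program as readable text and as compact bytecode, with a dispatch loop that identifies each instruction from a single byte and reads its operand already in binary.

```python
# An illustrative sketch: the same stack program as text vs. compact bytecode.
import struct

text_program = "push 1\npush 2\nadd"      # must be re-scanned and re-parsed on every run

OP_PUSH, OP_ADD = 0x01, 0x02              # one-byte opcodes (made-up values)
byte_program = (bytes([OP_PUSH]) + struct.pack("<i", 1)
                + bytes([OP_PUSH]) + struct.pack("<i", 2)
                + bytes([OP_ADD]))

def run(code):
    """Execute the bytecode and return the top of the stack."""
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]                     # one byte identifies the instruction
        pc += 1
        if op == OP_PUSH:                 # operand was converted to binary once, at "assembly" time
            stack.append(struct.unpack_from("<i", code, pc)[0])
            pc += 4
        elif op == OP_ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

print(run(byte_program))                  # evaluates 1 + 2
print(len(byte_program), "bytes vs", len(text_program), "characters of text")
```

Even in this toy case the byte form is smaller than the text, and the interpreter's hot loop never touches a string.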
_unix.365704
It seems GParted is giving problems with FAT32 partitions. I am using the latest GParted live disk from their site. But whenever I try to resize my EFI System Partition, which is formatted using FAT32, I get the error:

    GNU Parted cannot resize this partition to this size. We're working on

Now, one of the suggested solutions is to install dosfstools alongside GParted, but I don't know how that would be possible in my case since I am using a live USB of GParted.

Are there any other tools on Linux that can deal with FAT32 correctly? GParted seems to be able to deal with NTFS correctly.

My current partition layout on my 1 TB HDD is as follows:

    EFI System Partition (100 MB FAT32)
    Unallocated (250 MB)
    Windows Drive (199 GB NTFS)
    Unallocated (~720 GB)

When I try to increase the size of my EFI partition by trying to merge it with the unallocated 250 MB, I get the error that it is not possible.
Gparted gives problems with Fat32
gparted;fat32
null
_webapps.91431
I want to create a document that has ten pages in it (ten pages, each page a new set of unique information). I want that document to pull from ten unique, single page documents that users can update. I want the updates in the unique documents to populate in the conglomerate document.Is this possible with Google Docs?I have seen that dynamic text can be linked between Google Sheets, or between Google Sheets and Google Docs but I want to do dynamic text from Google Doc to Google Doc.
Reference dynamic text from another document within Google Docs
google documents
null
_softwareengineering.285096
In an OOP design-phase strategy, any physical/conceptual object of a system can be modeled (considered) as a computational object in your OOP-designed program based on one of the two conditions below:

First case: the physical/conceptual object of the system has a local state, and that state changes over time.

or

Second case: the physical/conceptual object of the system may or may not have a local state, but it must influence the state of other physical/conceptual objects of the system through interactions.

The above two cases are also supported here, which says: "Viewing objects as finite state machines".

To illustrate the two cases, below are the object model diagram and class diagram of a coin flipping game. Objects of type class Coin and class Player have local state coinOption, as mentioned in the first case. An object of type class CoinGame has no local state but influences the state of other objects (of type Player and Coin), as mentioned in the second case.

As per the second case, a class CoinGame type object influences the state of other objects of type Player and Coin through the interactions below, but the CoinGame object itself does not have local state of its own.

So, class CoinGame does not maintain any local state and has a composite relation with Player and Coin, as per the Java code below.

    public class CoinGame {
        Player[] players = new Player[2];
        Coin theCoin = new Coin();

        CoinGame(String player1Name, String player2Name) {
            players[0] = new Player(player1Name);
            players[1] = new Player(player2Name);
        }
        .....
    }

Here is the complete code in Java.

The above two cases are valid when you select objects from the real world. Is my understanding correct?
When is an object of real world a (computational) object in OOP world?
java;design;object oriented;object oriented design;class design
null
_codereview.69485
I made a program, but I'm not happy with the quantity of code. The end result is good, but I believe it can be made much simpler, only I don't know how.

The functionality: if there is more than one equal item in the list, then assign all equal items a unique set number. Below I made a unit test. I'm not happy with the class CreatSet. Can somebody advise me on how this can be implemented better?

    import unittest

    class Curtain(object):
        def __init__(self, type, fabric, number):
            self.type = type
            self.fabric = fabric
            self.number = number
            self.set_number = None

        def __str__(self):
            return '%s %s %s %s' % (self.number, self.type, self.fabric, self.set_number)

        def __eq__(self, other):
            return self.type == other.type and self.fabric == other.fabric

    class CreatSet(object):
        def make_unique(self, original_list):
            checked = []
            for e in original_list:
                # If curtain: type and fabric is equal
                if e not in checked:
                    checked.append(e)
            return checked

        def create_set(self, curtains):
            # Unique items in list
            unique_list = self.make_unique(curtains)
            result = []
            for x in unique_list:
                # Create set list
                set_range = []
                for y in curtains:
                    if y == x:
                        set_range.append(y)
                # Add set range into list
                result.append(set_range)
            # Create set number
            set_result = []
            set_number = 0
            for x in result:
                if len(x) == 1:
                    set_result.append(x[0])
                else:
                    set_number += 1
                    for y in x:
                        y.set_number = set_number
                        set_result.append(y)
            # Return list ordered by number
            return sorted(set_result, key=lambda curtain: curtain.number)

    class TestCreateSet(unittest.TestCase):
        def setUp(self):
            self.curtains = []
            self.curtains.append(Curtain('pleatcurtain', 'pattern', 0))
            self.curtains.append(Curtain('pleatcurtain', 'plain', 1))
            self.curtains.append(Curtain('pleatcurtain', 'pattern', 2))
            self.curtains.append(Curtain('foldcurtain', 'pattern', 3))
            self.curtains.append(Curtain('pleatcurtain', 'plain', 4))
            self.curtains.append(Curtain('foldcurtain', 'plain', 5))
            self.curtains.append(Curtain('pleatcurtain', 'pattern', 6))
            self.curtains.append(Curtain('foldcurtain', 'pattern', 7))

        def test_auto_set(self):
            creat_set = CreatSet()
            result = creat_set.create_set(self.curtains)
            # Creating set
            self.assertEqual(result[0].set_number, 1)     # pleatcurtain, pattern
            self.assertEqual(result[1].set_number, 2)     # pleatcurtain, plain
            self.assertEqual(result[2].set_number, 1)     # pleatcurtain, pattern
            self.assertEqual(result[3].set_number, 3)     # foldcurtain, pattern
            self.assertEqual(result[4].set_number, 2)     # pleatcurtain, plain
            self.assertEqual(result[5].set_number, None)  # foldcurtain, plain
            self.assertEqual(result[6].set_number, 1)     # pleatcurtain, pattern
            self.assertEqual(result[7].set_number, 3)     # foldcurtain, pattern

    if __name__ == '__main__':
        unittest.main()
Assign a unique number to duplicate items in a list
python
As you've already implemented Curtain.__eq__, if you also implement Curtain.__hash__ you can use it as a dictionary (or collections.Counter...) key, or a set member:

    def __hash__(self):
        return hash(self.type) ^ hash(self.fabric)

Now make_unique is trivial:

    def make_unique(self, original_list):
        return set(original_list)

(Note: if you require order to be retained, this will need additional work.)

This also allows you to easily determine how many of each distinct Curtain you have:

    >>> from collections import Counter
    >>> Counter(curtains)
    Counter({<__main__.Curtain object at 0x02A15190>: 3, <__main__.Curtain object at 0x031A0CF0>: 2, <__main__.Curtain object at 0x031A0ED0>: 2, <__main__.Curtain object at 0x0329EA30>: 1})

(Note: implementing Curtain.__repr__ would make this more readable!)

As pointed out in the comments, the __hash__ implementation I suggest has an issue - if either attribute type or fabric is changed, the hash will be different. You could protect these attributes by making them read-only using properties:

    class Curtain(object):
        def __init__(self, type, ...):
            self._type = type  # note leading underscore on attribute

        @property  # defining a getter but no setter
        def type(self):
            return self._type

Alternatively, you can implement Curtain.__lt__ etc., then sort the list and use itertools.groupby to get your groups of equal Curtains.

Either way, I would not implement CreatSet as a class; there's no need (as should be clear from the fact that there is no __init__ and no class or instance attributes). Just have one class and two functions:

    class Curtain(object):
        ...

    def create_set(curtains):
        ...

    def make_unique(curtains):
        ...

Your code is generally compliant with the style guide (well done!) but you could do with some explanatory docstrings.
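As a rough sketch of that groupby alternative, using plain (type, fabric, number) tuples instead of the Curtain class so it runs standalone. One caveat: set numbers come out in key-sorted order rather than first-appearance order, so the grouping matches but the specific numbers may differ from the original tests.

```python
# Sort-and-groupby approach: group curtains by (type, fabric), give each
# group of size > 1 a shared set number, leave singletons as None.
from itertools import groupby

def assign_set_numbers(curtains):
    """Map each curtain's number to a shared set number (None if unique)."""
    key = lambda c: (c[0], c[1])           # group by (type, fabric)
    result = {}
    set_number = 0
    for _, group in groupby(sorted(curtains, key=key), key=key):
        members = list(group)
        if len(members) > 1:
            set_number += 1
            for c in members:
                result[c[2]] = set_number
        else:
            result[members[0][2]] = None   # singletons get no set number
    return result

curtains = [
    ('pleatcurtain', 'pattern', 0), ('pleatcurtain', 'plain', 1),
    ('pleatcurtain', 'pattern', 2), ('foldcurtain', 'pattern', 3),
    ('pleatcurtain', 'plain', 4), ('foldcurtain', 'plain', 5),
    ('pleatcurtain', 'pattern', 6), ('foldcurtain', 'pattern', 7),
]
print(assign_set_numbers(curtains))
```

Remember that groupby only groups adjacent equal keys, which is why the input must be sorted on the same key first.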
_webmaster.68972
I'm looking for some help with the Google Analytics Behavior Flow. I want to track all the traffic through a section of our site that ends up going to another particular section of my web site.Any advice on how to get this information.
Tracking Traffic Through a Certain Page
google analytics
Use segments to view this report for only the users that have viewed both sections of your site.

1. Click + Add Segment from the behavior flow report.
2. Click the red + New Segment button.
3. Click Advanced conditions.
4. Where it says Ad Content, change it to Page.
5. Enter an expression that matches the URLs for the first section of your site.
6. Hit the AND button to add another condition.
7. Also use Page with an expression to match the URLs for the second section of your site.
8. Name your segment where it says Segment Name.
9. Use the blue Save button to apply the segment.
10. Remove the All sessions segment if it is still applied as well.

Now your report should just have the traffic that you want to examine in more detail.
_softwareengineering.190737
I have a Java Web application backed by a database. Both are hosted in Amazon EC2. If the Internet is down, I need to allow internal users to be able to continue to work and somehow update the hosted service when the Internet is available again. Is this possible? How would I design such a solution.
How do I make a cloud based web app accessible internally in the event of an internet outage?
java;web services;web;applications
null
_unix.268690
To connect multiple tunnel endpoints to a common bridge interface, I have to create a Layer 2 tunnel over ssh. The server is Ubuntu 10.04, the client is Ubuntu 14.04. I have enabled

    PermitTunnel yes
    PermitRootLogin yes

in the server's /etc/sshd_config. When I'm connecting with sudo ssh -w any:any -o Tunnel=ethernet root@remote I get a tun device instead of the expected tap device. If I change PermitTunnel yes to PermitTunnel ethernet on the server, I get a "channel 0: open failed: administratively prohibited: open failed" error message and no tunnel device at all.

I'm at a loss, because I'm positive that this used to work at some point in the past (with different machines and probably different Linux versions).
For some reason sudo ssh -w any -o Tunnel=ethernet root@remote creates tun devices instead of tap devices
linux;ssh;ssh tunneling;tap
I have the same problem. According to my tests, it is not related to the server; instead it has something to do with the client, either with the ssh build and configuration, or with the local network configuration.

I've been able to create a tap interface between my laptop and all of my devices, but when I tried to tunnel between the devices, only tun interfaces were created.

[edit]

The workaround consists in putting the -o before the -w, like this:

    ssh -o Tunnel=ethernet -w any:any root@remote

instead of:

    ssh -w any:any -o Tunnel=ethernet root@remote

I tried it myself, it works. Here is the source: https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1316017
_softwareengineering.311870
In C#, when you override a method, it is permitted to make the override async when the original method was not. This seems like poor form.The example that brought me to this was this — I was brought in to assist with a load test problem. At around 500 concurrent users, the login process would break down in a redirect loop. IIS was logging exceptions with the message An asynchronous module or handler completed while an asynchronous operation was still pending. Some searching led me to think that someone was abusing async void, but my quick searches through the source could find nothing.Sadly, I was searching for 'async\svoid' (regex search) when I should have been looking for something more like 'async\s[^T]' (assuming Task wasn't fully qualified… you get the point).What I later found was async override void onActionExecuting(... in a base controller. Clearly that had to be the problem, and it was. Fixing that up (making it synchronous for the moment) resolved the problem.Back to the question: Why oh why can you mark an override as async when the calling code could never await it?
Why does C# allow you to make an override async?
c#;async
The async keyword allows the method to use the await syntax within its definition. I can await on any method that returns a Task type regardless of whether it's an async method. void is a legal (albeit discouraged) return type for an async method, so why wouldn't it be allowed? From the outside, async isn't enabling anything you couldn't do without it. The method you are having trouble with could have been written to behave exactly the same without being async. Its definition just would have been more verbose.

To callers, an async T method is a normal method that returns T (which is limited to void, Task, or Task<A>). That it is an async method is not part of the interface. Notice that the following code is illegal:

    interface IFoo {
        async void Bar();
    }

It (or similar code in an abstract class) produces the following error message in VS2012:

    The 'async' modifier can only be used in methods that have a statement body

If I did intend a method in an interface or parent class to be typically asynchronous, I can't use async to communicate that. If I wanted to implement it with the await syntax, I would need to be able to have async override methods (in the parent class case).
_cs.20152
My question concerns a genetic algorithm searching along bit strings.

Given:

$N$ = population size
$l$ = length of bit strings
$p_c$ = probability that a single crossover occurs (double crossovers never occur)
$p_m$ = probability, for a given bit, that a mutation occurs
$w(x)$, the fitness function, equal to the number of 1s in the string. Therefore, the fitness can take any integer value between 0 and $l$ (the length of the strings).

My question is (three ways of formulating the same question):

What is the expected total number of possibilities explored in $G$ generations?

or

What is the expected proportion of the total possibility space (which equals $2^l$) that is explored in $G$ generations?

or

What is the expected size of the subset of strings that have ever existed in the population during a simulation that lasts $G$ generations?

Secondary questions:

What does the frequency distribution of the total number of possibilities explored in $G$ generations look like? Is it a normal (Gaussian) distribution? Is it skewed? ...

I don't quite know how complex my question is. Here are two assumptions one might want to consider in order to ease the problem:

One might want to assume that the population at the start is not randomly drawn from the possibility space, but instead that the whole population is made of identical strings (only one instance). For example, the string 000000000 (whose length equals $l$).

One might want to assume that $p_c = 0$.
Genetic algorithm: What is the expected number of strings that are explored?
algorithms;optimization;average case;genetic algorithms
As I pointed out in the comments, the primary feature of interest for understanding how a genetic algorithm (or evolution, even) explores the fitness landscape is the fitness function. In this case, you specified an extremely simple fitness function with no epistasis. This was a standard assumption for analytical tractability when biology first started out (i.e. most analyses that have "Fisher" somewhere in the name probably assume no epistasis), but it is seen as less reasonable now. Hence, if your goal was to gain biological intuition, this question will not provide it, since landscapes with epistasis have qualitatively different dynamics.

Finally, you don't specify a selection function, so to give you a heuristic answer I will assume one that has strong selection. First note that, at any given time, the number of genotypes you explore is bounded above by the trivial bound of $GN$. To start building a better bound, let's look at Wilf & Ewens (2010), who analyzed a no-epistasis model under strong selection in a sexual population (i.e. with recombination). They showed that it converges to equilibrium (the all-$1$s string in your case) in $\Theta(\log l)$ generations (they actually showed a tighter characterization with a careful analysis of radix sort, but I will leave that exploration up to you); I think we have to assume that $N \in \omega(l)$. Thus, you will reach the peak very quickly.

On the way to the peak, you will explore $O(N \log l)$ genotypes. In fact, if the selection strength is very high, then we can assume that the population remains close to monomorphic in terms of the phenotype (specified as the number of ones in the genome) although polymorphic in terms of the genotype. This can give us a bound of about $O(l^2 \log l)$.

Once you reach the peak, strong selection will keep you there, and any mutants will go extinct after each generation. This means that from there on out, the population will just stay at the string $1^l$ with transient (single-generation) mutants at Hamming distance one.

This gives us a pretty ridiculous bound of $\min(O(GN), O(l^2 \log l))$. In other words, as long as you look at generations $G \in \Omega(\log l)$, you will only end up sampling an exponentially small fraction of the genome space: $O(\frac{l^2 \log l}{2^l})$.
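If you want to sanity-check this regime numerically, here is a toy simulation: one-max fitness, truncation selection standing in for "strong selection", per-bit mutation, and $p_c = 0$. All parameter values are illustrative assumptions, not derived from the analysis.

```python
# Toy one-max GA: count generations to reach the all-ones string and how
# many distinct genotypes were ever present in the population.
import random

def run_ga(l, N, p_m, max_gens, seed=0):
    """Return (generations until the all-ones string appears, #distinct genotypes seen)."""
    rng = random.Random(seed)
    pop = [tuple(0 for _ in range(l))] * N      # start monomorphic: all zeros
    explored = set(pop)
    for g in range(max_gens):
        pop.sort(key=sum, reverse=True)         # truncation selection:
        parents = pop[: N // 2]                 # only the fitter half reproduces
        pop = [tuple(b ^ (rng.random() < p_m) for b in rng.choice(parents))
               for _ in range(N)]               # per-bit mutation, no crossover
        explored.update(pop)
        if any(sum(ind) == l for ind in pop):
            return g + 1, len(explored)
    return max_gens, len(explored)

gens, distinct = run_ga(l=32, N=200, p_m=0.03, max_gens=500)
print(gens, "generations;", distinct, "distinct genotypes out of 2**32")
```

The number of distinct genotypes seen is trivially at most $N$ per generation, and for moderate $l$ the run ends after a handful of generations, so the explored fraction of the $2^l$ space is tiny, as the bound above suggests.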
_scicomp.8763
Given a one-dimensional function (let's say infinitely differentiable) and a prescribed accuracy of an L2 (or H1) norm, what is the optimal mesh and (in general arbitrary) polynomial orders on each element so that the approximation has the least degrees of freedom? (For example if I have three elements, of orders $p_1=2$, $p_2=4$ and $p_3=10$, then there are $p_1+p_2+p_3+1=2+4+10+1=17$ degrees of freedom.)The answer are the polynomial orders and coordinates of the mesh.In particular, does the most efficient representation have equal polynomial orders on all elements or not?What is the algorithm to find this optimal representation?Naive algorithmFor each total number of elements N=1, 2, 3, ..., we pick all combinations of polynomial orders $p_i$ (e.g. for N=3, we have (1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2), (2, 2, 1), ..., (100, 1, 1), ...) and we optimize the coordinates of the mesh in order to minimize the L1 norm. A candidate is such a combination of mesh+orders, that has a lower L1 norm than prescribed. We only keep the candidate with the lowest degrees of freedom. We skip combinations of N and $p_i$ which have more degrees of freedom, so the algorithm must eventually terminate.MotivationExamples of functions that I have in mind are numerical solutions of radial Schroedinger or Dirac equations on a finite interval (either as part of DFT or Hartree-Fock), as well as the various corresponding potentials. All these functions are infinitely differentiable and the numerical solver can solve those to arbitrary numerical accuracy. Then I set e.g. $10^{-6}$ in L2 norm and I want to know the most efficient representation in terms of $hp$-FEM degrees of freedom.
What is the most efficient way to represent a 1D function using $hp$-finite element basis functions
optimization;finite element;polynomials;mesh
I don't know whether it's already published, but Peter Binev of the University of South Carolina has developed algorithms that produce $hp$ meshes that are provably within a fixed factor of the optimal number of degrees of freedom needed to represent a given function with a prescribed accuracy $\varepsilon$.

In general, it is quite clear that even for $C^\infty$ or analytic functions, the optimal mesh will not use the same polynomial degree everywhere. It is easy to conceive of functions that have large derivatives somewhere and are very smooth elsewhere, and if you don't ask for very small tolerances $\varepsilon$, then it's quite clear that you should use low and high polynomial degrees in these regions, respectively.
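A toy numerical illustration of why mixed degrees can win, in the spirit of the naive search from the question. This is an invented demo, not the provably-optimal algorithm mentioned above: interpolation at Chebyshev points stands in for the best L2 fit, and the function and the two meshes are chosen purely for illustration.

```python
import math

def cheb_nodes(a, b, n):
    # n Chebyshev interpolation points on [a, b]
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * i + 1) * math.pi / (2 * n))
            for i in range(n)]

def lagrange_eval(xs, ys, x):
    # plain Lagrange interpolation through the nodes (xs, ys)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

def l2_error(f, a, b, deg):
    # L2 error of the degree-`deg` interpolant on [a, b], midpoint quadrature
    xs = cheb_nodes(a, b, deg + 1)
    ys = [f(x) for x in xs]
    n = 400
    h = (b - a) / n
    s = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        s += (f(x) - lagrange_eval(xs, ys, x)) ** 2 * h
    return math.sqrt(s)

def f(x):
    # sharp boundary layer near x = 0, smooth elsewhere
    return math.exp(-50.0 * x) + math.sin(x)

# Two candidate representations with the same 10 degrees of freedom:
uniform = l2_error(f, 0.0, 1.0, 9)                   # one element, p = 9
graded = math.sqrt(l2_error(f, 0.0, 0.1, 5) ** 2     # small p = 5 element at the layer
                   + l2_error(f, 0.1, 1.0, 3) ** 2)  # p = 3 on the smooth part
print(uniform, graded)
```

On this particular function, the graded mesh resolves the layer with a small element and spends only low order on the smooth part, so it beats the single high-order element at equal cost.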
_unix.91663
I downloaded and installed s3fs 1.73 on my Debian Wheezy system. The specific steps I took were, all as root:

    apt-get -u install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support
    ./configure --prefix=/usr/local
    make
    make install

The installation went well, and I proceeded to create a file /usr/local/etc/passwd-s3fs with my credentials copied from past notes (I'm pretty sure those are correct). That file is mode 0600, owner 0:0. Piecing together from the example on the web page and the man page, I then try a simple mount as a proof of concept to make sure everything works:

    $ sudo -i
    # s3fs mybucketname /mnt -o url=https://s3.amazonaws.com -o passwd_file=/usr/local/etc/passwd-s3fs

In short: it doesn't. The mount point exists with reasonable permissions, and I get no error output from s3fs. However, nothing gets mounted on /mnt, mount has no idea about anything of the sort, and if I try umount it says the directory is not mounted. The system logs say s3fs: ###curlCode: 51 msg: SSL peer certificate or SSH remote key was not OK, but how do I find out which SSL certificate it is talking about, or in what way it was not OK? Firefox has no complaints when I connect to that URL, but it also redirects me to https://aws.amazon.com/s3/. How do I get s3fs to actually work?
s3fs complains about SSH key or SSL cert - how to fix?
linux;debian;amazon s3;s3fs
null
_unix.379113
I am upgrading the kernel on a platform using an ARM SOC (AT91SAM9G25) from 3.2 to 4.4. This is a sysv system. The previous 3.2 works fine, but when booting the 4.4 kernel, it hangs after the exec of /sbin/init. I can specify 'init=/bin/sh' on the U-Boot bootargs and it successfully execs the shell (I get a shell prompt). From there, things look proper; I can mount /proc, verify that the rootfs is mounted, bring up a NIC interface, etc. I have successfully performed this upgrade on a different platform running a different ARM SOC (AT91SAM9G45). I compared the kernel configs between this working other platform and the one that hangs. The only differences are those related to the different SOCs. Kernel configuration differences follow:

    300,301c300,301
    < CONFIG_PLATFORM_SLK1=y
    < # CONFIG_PLATFORM_SLK2 is not set
    ---
    > # CONFIG_PLATFORM_SLK1 is not set
    > CONFIG_PLATFORM_SLK2=y
    421c421
    < CONFIG_ARM_APPENDED_DTB_FILE=arch/arm/boot/dts/slk1.dtb
    ---
    > CONFIG_ARM_APPENDED_DTB_FILE=arch/arm/boot/dts/slk2.dtb
    1061c1061
    < # CONFIG_MTD_M25P80 is not set
    ---
    > CONFIG_MTD_M25P80=y
    1311,1314c1311
    < CONFIG_NET_VENDOR_MICREL=y
    < # CONFIG_KS8842 is not set
    < # CONFIG_KS8851 is not set
    < # CONFIG_KS8851_MLL is not set
    ---
    > # CONFIG_NET_VENDOR_MICREL is not set
    2414c2411,2450
    < # CONFIG_USB_GADGET is not set
    ---
    > CONFIG_USB_GADGET=y
    > # CONFIG_USB_GADGET_DEBUG is not set
    > # CONFIG_USB_GADGET_DEBUG_FILES is not set
    > # CONFIG_USB_GADGET_DEBUG_FS is not set
    > CONFIG_USB_GADGET_VBUS_DRAW=2
    > CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=2
    >
    > #
    > # USB Peripheral Controller
    > #
    > # CONFIG_USB_AT91 is not set
    > CONFIG_USB_ATMEL_USBA=y
    > # CONFIG_USB_FUSB300 is not set
    > # CONFIG_USB_FOTG210_UDC is not set
    > # CONFIG_USB_GR_UDC is not set
    > # CONFIG_USB_R8A66597 is not set
    > # CONFIG_USB_PXA27X is not set
    > # CONFIG_USB_MV_UDC is not set
    > # CONFIG_USB_MV_U3D is not set
    > # CONFIG_USB_M66592 is not set
    > # CONFIG_USB_BDC_UDC is not set
    > # CONFIG_USB_NET2272 is not set
    > # CONFIG_USB_GADGET_XILINX is not set
    > # CONFIG_USB_DUMMY_HCD is not set
    > # CONFIG_USB_CONFIGFS is not set
    > # CONFIG_USB_ZERO is not set
    > # CONFIG_USB_AUDIO is not set
    > # CONFIG_USB_ETH is not set
    > # CONFIG_USB_G_NCM is not set
    > # CONFIG_USB_GADGETFS is not set
    > # CONFIG_USB_FUNCTIONFS is not set
    > # CONFIG_USB_MASS_STORAGE is not set
    > # CONFIG_USB_G_SERIAL is not set
    > # CONFIG_USB_MIDI_GADGET is not set
    > # CONFIG_USB_G_PRINTER is not set
    > # CONFIG_USB_CDC_COMPOSITE is not set
    > # CONFIG_USB_G_ACM_MS is not set
    > # CONFIG_USB_G_MULTI is not set
    > # CONFIG_USB_G_HID is not set
    > # CONFIG_USB_G_DBGP is not set
    2584,2585c2620,2621
    < # CONFIG_RTC_DRV_AT91RM9200 is not set
    < CONFIG_RTC_DRV_AT91SAM9=y
    ---
    > CONFIG_RTC_DRV_AT91RM9200=y
    > # CONFIG_RTC_DRV_AT91SAM9 is not set
    2599c2635
    < # CONFIG_AT_HDMAC is not set
    ---
    > CONFIG_AT_HDMAC=y
    3058c3094
    < CONFIG_DEBUG_UART_PHYS=0xffffee00
    ---
    > CONFIG_DEBUG_UART_PHYS=0xfffff200

If I boot to a shell (with init=/bin/sh), I can run '/sbin/init -i' and the system boots normally (init has a PID other than 1). However, 'exec /sbin/init' or 'exec /sbin/init -i' hang. Any ideas on how to figure out where init is hung?
/sbin/init hangs after upgrading linux kernel
linux;arm;init
null
_webmaster.33778
I have a business with two products, let's say boat engines and car engines. Directly from the home page, one can click on either car engines or boat engines. The subpages of each section are then not related; it is almost like two sites with one home page. However, there are pages which should be common to both, such as About us or Vacancies. My web designer/programmer built this website with two CMSs: one for boat engines and one for car engines. Both were done with WordPress. My question is: besides double entry of content for the common pages, what are the pros and cons of having two CMSs? I expect 40% of my company's sales to come through this website, so any information about the impact on SEO (positive or negative) would be welcome too.
One website two CMS
seo;cms
null
_unix.372749
After I press the prefix Ctrl+B in tmux, if I press multiple keys rapidly one after the other, they are all registered as tmux commands. For example, if I press Ctrl+B, Down, Down, it will go down two panes. However, this interferes with Bash history: if I press Ctrl+B, Down, and then Up again to bring up the last typed command, it is going to go back to the previous pane instead. So I need to press Ctrl+B, Down, wait for a second or two, then Up. How can I disable this behaviour? Basically, I'd like tmux to register the keypress after Ctrl+B but not the ones after that. Any idea if it can be done?
Don't allow multiple keypresses after prefix in tmux
keyboard shortcuts;tmux
The repeat-time option, at 500 milliseconds by default, controls how long to wait for the same command key, provided that key has been bound with the bind-key -r option, which is the case for things like Down:

    bind-key -r Down select-pane -D

So you can either reduce the time or redo the bindings without -r:

    set-option -g repeat-time 10

    # or
    bind-key Up select-pane -U
    bind-key Down select-pane -D
    bind-key Left select-pane -L
    bind-key Right select-pane -R
    bind-key M-Up resize-pane -U 5
    bind-key M-Down resize-pane -D 5
    bind-key M-Left resize-pane -L 5
    bind-key M-Right resize-pane -R 5
    bind-key C-Up resize-pane -U
    bind-key C-Down resize-pane -D
    bind-key C-Left resize-pane -L
    bind-key C-Right resize-pane -R
_webapps.102131
Gmail has a maximum of 250 contacts on a single page. If I wanted to view or extract all the contacts to bypass this restriction what would I have to do? A method using Firebug or the Google Dev tools would be appreciated. Also, viewing the 2016 version of the contacts page is not an option.
How to view more than 250 contacts on Gmail
google contacts
null
_unix.326126
I followed this guide from DigitalOcean to install GNOME on my CentOS 7 VPS, and I got GNOME access with my user account, which has sudo privileges. But a message keeps popping up: "Authentication is required to install software" or "Authentication is required to create a color managed device", with a prompt for an administrator password. I tried no password, my account's password, and the root password; none of them worked. I tried to do this, and added X-GNOME-Autostart-enabled=false to gnome-software-service.desktop, but it did not help. There is no such user as 'Administrator', so how do I remove the authentication check?
CentOS gnome vnc required authentication of administrator
centos;gnome;authentication;vnc;vps
null
_webmaster.90938
We are currently redirecting all website traffic to a static HTML page using JavaScript in the header of each page (the site will be like this for 4 days). I'm concerned about the potential SEO impact. The static HTML page currently has index 'no follow', and the other pages do still exist; it is just that the JavaScript, when someone reaches those other pages, sends them to the landing page via the method below. Here's the script:

    <script type="text/javascript">
    <!--
    window.location = "http://www.mydomain.co.uk/landing-page.html"
    //-->
    </script>

Can anyone advise best practice in terms of this redirect? Thank you.
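For what it's worth, a server-side temporary redirect is usually safer for SEO than a JavaScript one, since crawlers see an explicit 302 status. A hypothetical .htaccess sketch (assuming Apache with mod_rewrite enabled; the landing-page path is taken from the question):

```
RewriteEngine On
# don't redirect the landing page itself, or we loop
RewriteCond %{REQUEST_URI} !^/landing-page\.html$
RewriteRule ^ /landing-page.html [R=302,L]
```

The R=302 marks the move as temporary, which matches the 4-day scenario better than a permanent 301.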
Redirecting all traffic to an HTML landing page
seo;redirects;landing page
null
_unix.172112
I am backing up a file system with ufsdump. In order to gain space on the remote file system, I back up to stdout and then pipe the output to compress; I find it more efficient than backing up and then running gzip. So basically I have this command:

    /usr/sbin/ufsdump 0uf - / | compress -c -v > /backup/$HOST/root-$DATE-full.dump.Z

I would like to save the diagnostic information of ufsdump to a file, for monitoring and for checking whether the dump completed successfully, but this does not seem to be possible with the command above. Please note that if I back up to a file instead of standard output, I can capture the diagnostics of ufsdump without problem. Thanks.
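One sketch of an approach: ufsdump writes its diagnostics to stderr, and stderr is independent of the pipe carrying the dump stream, so it can be redirected to a log file on its own. The real command would look like the commented line below (untested here, since ufsdump is Solaris-specific); a stand-in function emulates the stdout/stderr split so the redirection itself can be demonstrated:

```shell
# Real command (sketch):
#   /usr/sbin/ufsdump 0uf - / 2> "/backup/$HOST/root-$DATE-full.log" \
#       | compress -c -v > "/backup/$HOST/root-$DATE-full.dump.Z"

# Stand-in that behaves like ufsdump: data on stdout, diagnostics on stderr.
dump_standin() {
    printf 'dump-data\n'                # dump stream  -> stdout
    printf 'DUMP: DUMP IS DONE\n' >&2   # diagnostics  -> stderr
}

dump_standin 2> /tmp/ufsdump.log | gzip -c > /tmp/root-full.dump.gz
cat /tmp/ufsdump.log
```

The 2> redirection only captures the dump program's stderr; compress -v still prints its own progress to the terminal's stderr as before.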
capture log of ufsdump when backup to stdout
logs;solaris;stdout;stderr
null
_cogsci.15536
I learned about this but forgot its name. The idea is that when people hear a word or think of a concept, they tend to come up with certain examples that fit the idea more than others. For example, if people think of birds, they tend to think of sparrows or seagulls, not penguins, even though penguins fit the definition of bird perfectly. If people are told to imagine a chair, they tend to think of one with 4 legs, probably made of wood, not something like a movie theater chair. What's the name for this?
What's the term in psychology for the way people think of concepts using examples?
terminology
null
_softwareengineering.257142
Is it possible to act as a MySQL server in a Java or Android application, such that applications like HeidiSQL can connect to it? What I want to do is to write an application for Android which behaves as a MySQL server for an arbitrary database on that device. Since there are many MySQL clients that can connect to MySQL servers, it would be ideal if I could mimic such a server. As it currently stands, I have three options:

1. Use some sort of MySQL server library (if that exists);
2. Write the server myself, including implementing the (trivial/needed parts of the) protocol (is this feasible?);
3. Drop the MySQL server approach altogether and write a specific client/server myself.

In the end, the server (Android application) should be able to process and respond to simple queries a client sends to it. Which options are feasible, and which is most suitable?
Act as MySQL server in Java/Android
java;mysql;server
Sit down and write the client/server properly yourself.

If you are not willing to implement all of the protocol, doing just part of it can leave you with significant and strange breakages when some application tries to:

- prepare a statement
- iterate over a result set
- do a transaction (and a rollback!)
- do something 'at the same time' as another thing (can you do an update foo set bar = bar + 1 without getting into an infinite loop?)
- do nested statements, for example:

    SELECT PLAYERS.PLAYERNO, NAME,
           (SELECT COUNT(*)
              FROM PENALTIES
             WHERE PLAYERS.PLAYERNO = PENALTIES.PLAYERNO) AS NUMBER_OF_PENALTIES,
           (SELECT COUNT(*)
              FROM TEAMS
             WHERE PLAYERS.PLAYERNO = TEAMS.PLAYERNO) AS NUMBER_OF_TEAMS
      FROM PLAYERS

This isn't just for trying to present "I'm a MySQL server" to the world, but even just "I'm an SQL server"... There are simpler things than a MySQL server. You could go for just "I'm a database" and use ODBC to connect to it rather than trying to mirror MySQL. Or something like GNOME-DB.

All of this, however, presupposes that the data you have behind this interface is actually relational and can be queried in a relational manner. If it isn't, you may find yourself with the object-relational impedance mismatch in reverse (and then back again?).

Thus, returning to my original suggestion: write the client and server in accordance with the data that you have modeled behind it. Trying to write something that matches the appearance of a database is going to involve nearly writing a database... and that is no little task.
_webmaster.47466
I have a web application that communicates with a web service deployed on the same server. The web app was written with TIBCO General Interface and works well only when it is running locally on the development system. When I deploy the web app to the Apache server, it fails with code 200, apparently due to cross-domain data. I use Firefox as a browser. I have tried changing Internet Explorer to allow access to cross-domain data and it works; however, IE is not an option. The web application runs on 192.168.2.205 (port 80). The web service runs on 192.168.2.205:8040. I have tried a number of things with ProxyPass inside Apache with no luck.
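For reference, a hypothetical sketch of the usual single-origin setup with mod_proxy (the /service path is invented; it must match wherever the web service actually listens, and mod_proxy plus mod_proxy_http need to be loaded):

```
# inside the VirtualHost serving 192.168.2.205:80
ProxyPass        /service http://192.168.2.205:8040/service
ProxyPassReverse /service http://192.168.2.205:8040/service
```

With this in place the browser only ever talks to port 80, so the request is same-origin and no cross-domain restriction applies.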
Setup basic proxypass in Apache
apache;configuration
null
_unix.192443
Can someone please help me with a command to search a file for a string in multiple directories? I am searching for the VIP in httpd.conf in multiple httpd instances. I am using:

    find ./ -name httpd.conf | xargs grep 10.22.0.141 cut -d: -f3- | cut -d ' ' -f4 | sort | uniq -c

But it's not working.

    charl@rom11-TEST $ ls -latr
    total 124
    ...
    drwxrwxr-x 7 root root 4096 Mar  9 13:41 bofac-Wrapper
    drwxr-xr-x 7 root root 4096 Jul 29  2014 bofac-admin
    drwxrwxr-x 7 root root 4096 Jul 29  2014 bofac-chas-test
    drwxrwxr-x 7 root root 4096 Jul 29  2014 bofac-chasdps-test
    drwxrwxr-x 7 root root 4096 Oct 10 14:09 bofac-vpn-chas-test
    ...

Basically, the httpd instances are the directories listed, but I would ideally run the command against the entire directory.
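For what it's worth, a sketch of a working version of this pipeline: the command above is missing a | before the first cut, and quoting the pattern avoids surprises. The demo tree below is invented so the sketch is self-contained:

```shell
# Build a throwaway tree with two httpd instances (hypothetical paths).
rm -rf /tmp/vipdemo
mkdir -p /tmp/vipdemo/a/conf /tmp/vipdemo/b/conf
echo 'VirtualHost 10.22.0.141:80' > /tmp/vipdemo/a/conf/httpd.conf
echo 'VirtualHost 10.22.0.141:80' > /tmp/vipdemo/b/conf/httpd.conf

cd /tmp/vipdemo
# The corrected pipeline: note the "|" before cut, and -f2- to drop
# only the filename prefix that grep adds when given multiple files.
find . -name httpd.conf | xargs grep '10.22.0.141' | cut -d: -f2- | sort | uniq -c
```

Run against the real tree, this prints each matching line once with a count of how many instances contain it.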
Search a file for a string in multiple directories
command line;text processing;files;grep;find
null
_webapps.58017
I am not super computer literate. I am trying to use equations to grade my Google quiz; the problem is that there is a quote in the answer. Is there a different symbol I need to use? This is what I have right now:

    =if(C2 = "Aubrey said, you are my favorite friend.", 10, 0)

The answer is: Aubrey said, "you are my favorite friend." I would just omit the quotes in the answer, but it is meant for an English quiz, so that may be sending the wrong message.
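For reference, the common workaround in Sheets formulas is to escape a literal double quote inside a quoted string by doubling it. A sketch with the intended quoted answer (the cell reference is taken from the question):

```
=IF(C2 = "Aubrey said, ""you are my favorite friend.""", 10, 0)
```

Each "" inside the outer quotes produces one literal quote character in the compared text.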
How do I output a string containing quotation marks?
google spreadsheets;formulas
null
_cstheory.8829
I have read in several papers that the existence of one-way functions is widely believed. Can someone shed light on why this is the case? What arguments do we have for supporting the existence of one-way functions?
Arguments for existence of one-way functions
cc.complexity theory;cr.crypto security;big picture;one way function
null
_vi.9412
I like having netrw loaded for browsing directory contents when I open '.' and for downloading webpages when I split http://host/path, but it's really getting in the way of opening files with gF or ^wf when netrw doesn't understand the protocol. I'm editing a sizable library of Salt statelists for a project, and they have a lot of Salt filesystem URLs in the configs (salt://path/path/file) that I'd like to be able to split into with gF and ^wf. There's no hope for netrw understanding these as built-ins, which is acceptable; some local config would be required to explain to vim where the files really are. I'd like a local config to teach netrw to open them, or some other method of bypassing netrw so I can open them. I tried to use:

    set includeexpr=substitute(v:fname, 'salt://', 'location/of/file_root/')

It took me quite a while to figure out that includeexpr was never being applied. What seems to happen is that netrw handles the URL, decides it's a filename, and fails to execute includeexpr. What are my options here? Disable netrw in BufEnter for files in which I'm likely to see the salt URLs? Preemptively fire includeexpr before netrw can get to it? Also, where would you set events that happen before netrw fires?
open salt://whatever as somedir/whatever
vimscript;netrw
null
_softwareengineering.197891
Is there a way to show and save all final definitions entered into a Scheme REPL into a text file? Say, if I have defined in the REPL:

    (define (increase x) (+ 1 x))
    (define (multbytwo x) (* 3 x)) ; let us say this was an error
    (define (multbytwo x) (* 2 x))

I want to save this into an .scm file with the content:

    (define (increase x) (+ 1 x))
    (define (multbytwo x) (* 2 x))

i.e. the multbytwo function that got defined erroneously shall be forgotten due to re-definition. Is this possible?
Final Scheme REPL definitions: how to save them?
scheme
null
_codereview.102246
I wrote this encrypter based on the idea of a Vigenere cipher, but instead of using only one key, it makes another key from the existing key. The length of the second key also depends on the different characters in the first key. And so it uses both keys in the shifting of letters.

    def scram(key):
        key2 = []
        #make the length of the second key varying from key to key for harder cracking
        length = (key[0]-32)+(key[int(len(key)/2)]-32)+(key[len(key)-1]-32)
        #make the max length = 256
        length = length % 256
        #if the length is less than 64, multiply by two to make it longer
        while(length < 64):
            length*=2
        #scrambles the letters around
        for x in range(length):
            #basically shifts the key according to the current character,
            #how many times it has looped the key, where it is in the key,
            #and the modulo of x and a few prime numbers to make sure that
            #an overlap/repeat doesn't happen.
            toapp = (x%(len(key)-1)) + (x/(len(key)-1)) + (key[x%(len(key)-1)]) + (x%3) +(x%5)+(x%53)+(x%7)
            toapp = int(toapp % 94)
            key2.append(toapp)
        return key2

    def cipher(mes,key,ac):
        #makes the second key
        key2 = scram(key)
        res=[]
        #do proper shifting in the keys
        for x in range(len(mes)):
            temp=mes[x]
            if(action == 2):
                temp = temp - key[x%(len(key)-1)] - key2[x%(len(key2)-1)]
            else:
                temp = temp + key[x%(len(key)-1)] + key2[x%(len(key2)-1)]
            temp = int(temp % 94)
            res.append(chr(temp+32))
        return res

    #encrypt or decrypt
    action = input("type 1 to encypt. type 2 to decrypt:")
    #input
    m = input("Text:")
    k = input("key - 4 or more char:")
    #changes the letters to ascii value
    mes = []
    for x in m:
        mes.append(ord(x)-32)
    key = []
    for x in k:
        key.append(ord(x)-32)
    #encrypts it
    result = cipher(mes,key,action)
    for x in result:
        print(x, end="")
    print()
    y = input("Press enter to continue...")

Are there more efficient ways to do it? Is this a safe way to encrypt text? Can you crack the text encrypted with this program?
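To probe the round-trip behaviour concretely, here is a hedged sketch that repackages the scram/cipher logic as pure functions. One deliberate change is flagged: the action is an int parameter here, because in the posted script action is the string returned by input(), so action == 2 is never true and the decrypt branch never runs. The message and key below are invented for the demonstration:

```python
def scram(key):
    # same second-key derivation as in the posted code
    key2 = []
    length = (key[0] - 32) + (key[len(key) // 2] - 32) + (key[-1] - 32)
    length %= 256
    while length < 64:
        length *= 2
    for x in range(length):
        toapp = ((x % (len(key) - 1)) + (x / (len(key) - 1)) + key[x % (len(key) - 1)]
                 + (x % 3) + (x % 5) + (x % 53) + (x % 7))
        key2.append(int(toapp % 94))
    return key2

def cipher(mes, key, action):
    # action == 2 decrypts, anything else encrypts (int, not the input() string)
    key2 = scram(key)
    out = []
    for x, m in enumerate(mes):
        shift = key[x % (len(key) - 1)] + key2[x % (len(key2) - 1)]
        temp = m - shift if action == 2 else m + shift
        out.append(chr(int(temp % 94) + 32))
    return "".join(out)

def to_nums(s):
    return [ord(ch) - 32 for ch in s]

plain = "Hello, World!"     # example message (hypothetical)
key = to_nums("secret")     # example key (hypothetical)
enc = cipher(to_nums(plain), key, 1)
dec = cipher(to_nums(enc), key, 2)
print(dec)  # -> Hello, World!
```

Since every step is an addition modulo 94, decryption is the exact inverse for any printable ASCII message, which the round trip confirms.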
Encrypter - Double Vigenere Cipher in Python
python;python 3.x;security;vigenere cipher
null
_softwareengineering.354973
I am currently developing my own project, where I want to use an API to connect to the database on a web server. I understand that when requesting data you should use GET, and when you want to upload data you should use POST/PUT. The problem occurs when you want to log a user in. You are requesting data, meaning it should be a GET request; however, you do not want the user credentials in the URL, since they would be stored in the user's history, and other networking tools may be able to pick them up if the connection is not secured with HTTPS. What would be the best way of requesting user data through an API? I am making the API with the Slim Framework and am requesting the data via an iOS application I am developing in Swift. Apart from using the GET method, my API uses JSON to transmit the data. The credentials I am talking about are to log the user into their account, not API credentials.
How to transmit credentials in API
api;api design
The short answer is that you don't.

When dealing with web services, a current common approach is to use JWT (it's specified with OAuth2 and other identity frameworks). Your authentication service validates the user's credentials and provides a token for you to pass to your web services. With this approach, your web service never deals with the credentials directly. If for some reason you need to pass the credentials, the same JWT package allows you to have an encrypted envelope to provide the password, but I would argue against this if you can avoid it.

The approach to pass the JWT is the same regardless of the method (GET, POST, PUT, DELETE, PATCH, etc.):

- Pass the JWT in the Authorization header
- Your token is a bearer token, so the header value is Bearer big.jwt.token

JWT tokens can be validated, parsed, and used without round trips to authentication services, so they really help with the authorization process. In general, web applications should avoid passing user credentials to the database directly. That makes it difficult to pool connections, which are expensive to create, and it has real-world cost implications if you have per-user licensing agreements. I understand that some applications need to do data concealment based on user permissions, but there are several ways to handle that process. Just about everything you need can be contained in the JWT.
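To make the header shape and the local parsing concrete, here is a small illustrative sketch. The token is hand-built and unsigned, purely to show the structure; real tokens must be created and verified with a proper JWT library:

```python
import base64
import json

def b64url_encode(obj):
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(seg):
    pad = "=" * (-len(seg) % 4)        # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg + pad))

# A JWT is three base64url segments: header.payload.signature
header = {"alg": "HS256", "typ": "JWT"}
payload = {"sub": "user42", "scope": "read"}
token = ".".join([b64url_encode(header), b64url_encode(payload), "sig"])

# What the client sends with every request, regardless of HTTP method:
request_headers = {"Authorization": "Bearer " + token}

# A service can read the claims locally, without a round trip to the
# authentication server (signature verification omitted in this sketch):
claims = b64url_decode(token.split(".")[1])
print(claims["sub"])  # -> user42
```

In production, the local step would be "verify signature, then read claims"; the point is that no call back to the authentication service is needed per request.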
_codereview.71163
Is there any way to write this C function better, so that it spends less time calculating the result? Assume that the array size is 1,000,000 and all the numbers are greater than 0. The function reads the array backwards and saves the first number as the maximum; it then checks the next number, a, and if it is greater than the maximum, it adds 1 to total. Then the maximum takes the value of a, and it proceeds to the next one until the end of the array.

    static int total = 1;

    int chck_high( int *my_array, int *endp) {
        // this function should only be called if there is at least one value in the array
        int maximum = *(--endp);
        while ( endp > my_array ) {
            int a = *(--endp);
            if ( a > maximum ) {
                total++;
                maximum = a;
            }
        }
        return maximum;
    }

I tried to use unsigned ints, but I don't know if it's worth it. Can someone please tell me if there is a better way to write this code? Previous version of the code:

    int process( int *my_array, int *endp) {
        int a, b;
        //static int total = 1;
        if ( my_array == 0 )
            return 0;
        if ( my_array == endp )
            return INT_MIN;
        else
            a = *my_array++;
        if ( (b = process( my_array, endp )) == INT_MIN )
            return a;
        if ( a > b ) {
            total++;
            return printf( "%d > %d and total now is %d\n", a, b, total ), a;
        }
        return b;
    }

The two functions are called with the following code:

    chck_high(my_array, my_array + count);

where my_array is my array of numbers and count is its size.
Counting the out-of-order elements of an array
performance;c;array;comparative review
Avoid static variables as much as possible. Instead of the static total, you could pass in a pointer to total and make the function modify the value it's pointing to.

The names are not great:

- Instead of my_array, start would be better
- Instead of endp, end would be better
- Instead of chck_high, ... I don't know what would be better, because I don't really see the general logic this function represents. It counts the number of times a new local maximum is found going backwards from the end of the range, and sets the value of total. I'm wondering if this logic is really necessary in this form, or whether the overall logic of your program could be redesigned into simpler elements.

Instead of a comment like this:

    // this function should only be called if there is at least one value in the array

it would be better to use an assertion:

    assert(start < end);

Note that this requires #include <assert.h>.

This may be a matter of taste, but I would find a for loop more natural here than a while. Using a for loop, in C99 and above, you can declare the loop variable inside the for, which has extra benefits:

- It helps you limit variables to the smallest scope necessary (inside the loop)
- It forces you to use a new local variable for looping, instead of reusing the function parameter, which is a good thing

Putting it together, the function would become:

    int chck_high(int *start, int *end, int *total) {
        assert(start < end);
        int maximum = *--end;
        for (int *pos = end; pos > start; --pos) {
            int a = *pos;
            if (a > maximum) {
                ++*total;
                maximum = a;
            }
        }
        return maximum;
    }

To enable C99 mode when compiling with gcc, use the -std=c99 flag. You can use the function like this, for example:

    int main() {
        int arr[] = {1, 2, 51, 41, 4, 5};
        int total = 1;
        int maximum = chck_high(&arr[0], &arr[0] + 6, &total);
        printf("total=%d max=%d\n", total, maximum);
    }
_unix.294351
Is there a way to wget a website and put its tabular content into a .csv? Or maybe make a cURL request for a webpage and grab the numeric tabular content out of its HTML into a .csv?
wget a website to csv
linux;wget
PHP has a class DOMDocument that you can use to retrieve and parse HTML. This code will fetch the page and extract the rows from it. There is still more work necessary to extract the specific items you want, but if you are willing to learn some PHP, this will get you started:

    <?php
    $html = file_get_contents('http://currency.poe.trade/search?league=Prophecy&online=x&want=1&have=4');
    $doc = new DOMDocument;
    $doc->loadHTML($html);
    $xpath = new DOMXpath($doc);
    $rows = $xpath->query('//div[contains(@class, "row")]'); // instance of DOMNodeList
    foreach ($rows as $row) {
        // var_dump($row);
        echo "Found {$row->nodeValue}";
    }

You can run the code above by copying and pasting it into this online PHP interpreter. When I run it, I get the following sample output (truncated):

    Found Currency market // Prophecy go to item trades Protip Arrows always point from what you pay to what you get. (You get You pay) Currency search Manage your shop Show search form League ProphecyHardcore ProphecyStandardHardcore Online only Off On What do you want? What do you have? Reset .... [more output]

Once you've extracted the info you want, it is pretty simple to delimit each item with a comma and insert a newline for each record, and then you'll have a CSV file.

Note: for debugging, you will need to dump a DOMElement in its HTML/XML markup format. You can use this:

    $xml = $domElement->ownerDocument->saveXML($domElement);

or alternatively

    $html = $domElement->ownerDocument->saveHTML($domElement);

More background at: http://php.net/manual/en/class.domelement.php
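If PHP isn't a hard requirement, the same scrape-and-flatten idea can be sketched with Python's standard library alone. The table markup below is a stand-in, since the structure of the real page varies:

```python
import csv
import io
from html.parser import HTMLParser

class TableToCSV(HTMLParser):
    # collects the text of <td>/<th> cells, one CSV row per <tr>
    def __init__(self):
        super().__init__()
        self.rows = []
        self.row = []
        self.cell = []
        self.in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.in_cell = True
            self.cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False
            self.row.append("".join(self.cell).strip())
        elif tag == "tr" and self.row:
            self.rows.append(self.row)

    def handle_data(self, data):
        if self.in_cell:
            self.cell.append(data)

# Stand-in HTML; in practice this would come from wget/curl output.
html = "<table><tr><th>item</th><th>price</th></tr><tr><td>orb</td><td>4</td></tr></table>"
parser = TableToCSV()
parser.feed(html)

buf = io.StringIO()
csv.writer(buf).writerows(parser.rows)
print(buf.getvalue())
```

The csv module takes care of quoting, so cells containing commas or quotes survive the trip into the .csv file.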
_scicomp.18908
I have two time-dependent coupled equations, one of which is several orders of magnitude more computationally demanding than the other. I am trying to use machine learning to reproduce the behavior of the more expensive equation.

Equation 1: input c(t), a(t); output a(t+dt)
Equation 2: input a(t); output c(t+dt)

So essentially I want to reconstruct the response of equation 2. Keep in mind that internally there are variables in equation 2 which retain 'memory' of the previous states, so the response depends on the history of the input. Any advice on where to start or what methods have been developed for this type of system? Or is there a more appropriate place to post this?

Edit: some more details; this is a multiscale simulation. Equation 1 is a simple finite difference equation

$a(x,t+dt) = 2a(x,t) - a(x,t-dt) + \frac{dt^2}{dx^2} \left[ a(x+dx,t)-2a(x,t) + a(x-dx,t) \right] + dt^2 c(x,t)$

For the second part, at each x I have a time-dependent set U(t) to propagate to U(t+dt). This propagation depends on an input a(x,t) and produces c(x,t+dt) to be fed back into the first equation. The details of this part are a bit convoluted/involved, but the essential point is that I want to avoid explicitly storing or propagating U (very, very expensive, e.g. 10,000+ CPU cores needed).

EDIT 2: A NARX network seems to be able to do almost what I want. However, I have a number of different 'examples' which I want the network to learn from. Maybe the only way I can do it is to stitch everything together into one big (input, output) set? http://www.mathworks.com/help/nnet/ug/design-time-series-narx-feedback-neural-networks.html
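For concreteness, the cheap leapfrog update (equation 1) can be sketched directly from the formula above. The source term c(x, t) is a hypothetical placeholder here; in the real setup it would come from the expensive model or its learned surrogate, and the grid parameters are purely illustrative:

```python
nx, nt = 50, 200
dt, dx = 0.005, 0.02
r = (dt / dx) ** 2        # the dt^2/dx^2 factor from the question; r <= 1 keeps it stable

def c_field(i, n):
    # hypothetical placeholder for the expensive model's output c(x_i, t_n)
    return 0.0

a_prev = [0.0] * nx
a_curr = [0.0] * nx
a_curr[nx // 2] = 1e-3    # small initial disturbance

for n in range(nt):
    a_next = [0.0] * nx   # fixed (zero) boundaries
    for i in range(1, nx - 1):
        a_next[i] = (2 * a_curr[i] - a_prev[i]
                     + r * (a_curr[i + 1] - 2 * a_curr[i] + a_curr[i - 1])
                     + dt ** 2 * c_field(i, n))
    a_prev, a_curr = a_curr, a_next

peak = max(abs(v) for v in a_curr)
print(peak)
```

Swapping c_field for the surrogate's prediction is then the only coupling point the machine-learned model has to serve.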
approximation of nonlinear time-dependent system with history
machine learning;time integration;approximation algorithms;support vector machines
null
_unix.363121
One of the study questions I'm doing suggests that I apply ACLs to the /root directory recursively for a regular user on the system. Obviously, this is bad security practice, but it's just for practice inside a VM, and I will remove the ACL once I'm done. I've tried this:

    setfacl -R -m u:username:rwx /root

But when I log in as username, I just get every file with rwx permissions, including text files and other non-executables. Is there a neater way that copies the permissions into the ACL from the regular ugo permissions, or would that involve a bit of bash scripting?
setfacl a whole directory containing assorted file types?
permissions;acl
I found that it wasn't necessary to use the recursive (-R) switch in this case. Just doing this:

    setfacl -m u:username:rwx /root

was enough to give me execute access to /root as a normal user. I also tried copying some executables into the directory and subdirectories; they ran just as if I were accessing my home directory. Thanks to vfbsilva for the reply, which made me try the simpler approach. I have upvoted their comment.
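As a follow-up sketch for the original recursive idea: with setfacl, a capital X (as in setfacl -R -m u:username:rwX /root) grants execute only on directories and on files that are already executable, which avoids the rwx-on-text-files problem from the question. The demo below uses chmod, whose capital X has the same semantics, so it runs without ACL support; the paths are invented:

```shell
rm -rf /tmp/xdemo
mkdir -p /tmp/xdemo/sub
touch /tmp/xdemo/plain.txt /tmp/xdemo/tool.sh
chmod 0644 /tmp/xdemo/plain.txt   # ordinary text file, no execute bit
chmod 0755 /tmp/xdemo/tool.sh     # already executable

# Capital X: +x only where something is already executable (or a directory).
chmod -R go+rwX /tmp/xdemo
ls -l /tmp/xdemo
```

After this, plain.txt ends up rw for everyone but never gains execute, while tool.sh and the directories do.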
_cs.2576
We are given a random number generator RandNum50 which generates a random integer uniformly in the range 1-50. We may use only this random number generator to generate and print all integers from 1 to 100 in a random order. Every number must come exactly once, and the probability of any number occurring at any place must be equal. What is the most efficient algorithm for this?
Most efficient algorithm to print 1-100 using a given random number generator
algorithms;integers;randomness;random number generator
I thought (so it can be wrong :-) of this $O(N^2)$ solution that uses the Fisher-Yates shuffle. In order to keep a uniform distribution with good approximation (see the EDIT section below), at every iteration you can use this trick to produce a value krand between $0$ and $k-1$:

    // return a random number in [0..k-1] with uniform distribution
    // using a uniform random generator in [1..50]
    function krand(k) {
      sum = 0
      for i = 1 to k do sum = sum + RandNum50() - 1
      krand = sum mod k
    }

The Fisher-Yates algorithm becomes:

    arr : array[0..99]
    for i = 0 to 99 do arr[i] = i+1;   // store 1..100 in the array
    for i = 99 downto 1 {
      r = krand(i+1)                   // random value in [0..i]
      exchange the values of arr[i] and arr[r]
    }
    for i = 0 to 99 do print arr[i]

EDIT: As pointed out by Erick, the krand function above doesn't return a truly uniform distribution. There are other methods that can be used to get a better (arbitrarily better) and faster approximation; but (to my knowledge) the only way to get a truly uniform distribution is to use rejection sampling: pick $m = \lceil \log_2(k) \rceil$ random bits, and if the number $r$ obtained is less than $k$, return it, otherwise generate another random number; a possible implementation:

    function trulyrand(k) {
      if (k <= 1) return 0
      while (true) {                 // ... if you're really unlucky ...
        m = ceil(log_2(k))           // calculate m such that k < 2^m
        r = 0                        // will hold the random value
        while (m >= 0) {             // ... will add m bits
          if ( rand50() > 25 ) then b = 1 else b = 0   // random bit
          r = r * 2 + b              // shift and add the random bit
          m = m - 1
        }
        if (r < k) then return r     // we have 0 <= r < 2^m ; accept it, if r < k
      }
    }
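A runnable version of the rejection-sampling variant combined with the shuffle. RandNum50 is emulated with random.randint here purely so the sketch is testable; only the given generator is used as a bit source:

```python
import random

def rand50():
    # stand-in for the given RandNum50: uniform integer in 1..50
    return random.randint(1, 50)

def rand_bit():
    # one unbiased bit from RandNum50: 1..25 -> 0, 26..50 -> 1
    return 0 if rand50() <= 25 else 1

def truly_rand(k):
    # rejection sampling: draw just enough bits for k, retry whenever r >= k
    if k <= 1:
        return 0
    m = (k - 1).bit_length()     # smallest m with k <= 2^m
    while True:
        r = 0
        for _ in range(m):
            r = 2 * r + rand_bit()
        if r < k:
            return r

def shuffled_1_to_100():
    arr = list(range(1, 101))
    for i in range(99, 0, -1):   # Fisher-Yates, uniform thanks to truly_rand
        j = truly_rand(i + 1)
        arr[i], arr[j] = arr[j], arr[i]
    return arr

print(shuffled_1_to_100())
```

Each call to truly_rand accepts with probability at least 1/2, so the expected number of retries per draw is constant.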
_computerscience.309
I'm trying to figure out what the best way is to generate an OpenGL texture using a compute shader. So far, I've read that pixel buffer objects are good for non-blocking CPU -> GPU transfers, and that compute shaders are capable of reading and writing buffers regardless of how they're bound. Ideally, I'd like to avoid as many copies as possible. In other words, I'd like to allocate a buffer on the GPU, write compressed texture data to it, and then use that buffer as a texture object in a shader. Currently, my code looks something like this:

    GLuint buffer;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);
    glBufferStorage(GL_SHADER_STORAGE_BUFFER, tex_size_in_bytes, 0, 0);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);

    // Bind buffer to resource in compute shader
    // execute compute shader

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buffer);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, tex_size_in_bytes, 0);

Is this correct? I read somewhere about guaranteeing synchronization, too. What do I need to add to make sure that my compute shader completes execution prior to copying from the buffer object?
Writing to a compressed texture using a compute shader, with no extra copies
opengl;texture;compression;compute shader
null
_webapps.74973
When I want to create an event, I usually want to do it with a secondary calendar, so I have to click on the combo-box to change the calendar as follows:

How could I set another calendar as the default one?
Create events with a secondary calendar by default
google calendar
As far as I know, there is no way to set a secondary calendar as the default selection in Google Calendar.Here are two workarounds which could perhaps make your life easier.Workaround 1: Using keyboard shortcutsIn your screenshot, you have selected the area from 00:00 to 02:30 with your mouse. Now, the cursor is in the What-field.Type the event nameTab to select the calendar drop-down menuc to select the calendar called carlos helder (first letter of the calendar)Tab to select the Create event-buttonEnter to save the eventNot the optimal solution, but at least Tab, c, Tab, Enter is most of the time faster than using the mouse.Workaround 2: Using a client calendar applicationPerhaps you can connect your Google Calendar account to some client calendar application which provides the option to select a secondary calendar as the default calendar for events.For iOS there is a paid app called Week Calendar which is pretty awesome and allows to select a default calendar or allows you to create template events.Source.For OS X the default Calendar application provides the option to select a default calendar.Source.Of course, this is no solution if you prefer to use the Google Calendar web interface.
_cseducators.3318
Given the recent publicity surrounding the sacking of James Damore, and the contentious and heavily partisan nature of the debate around the memo, how would you recommend a teacher handle a student taking James Damore's stance on gender and computing?
gender and computing - handling the Google's Ideological Echo Chamber debate
social context;gender;psychology
null
_cs.49462
This is a homework problem I've been given and I've been racking my brain for hours (so I'm satisfied with some pointers). I know already that the approximation ratio cannot be worse than $2$. I have a wheel graph, where each edge has cost $1$ and the distance between all nodes which are not connected by edges is $2$. The wheel graph $W_6$ is this one:

I have marked in blue what I believe to be the output of an MST heuristic algorithm. But I also think this is the optimal solution, since all nodes can only be visited once. So the cost of the tour would be $7$ for both optimal and MST.

I do not see how this type of graph shows that the $2$-approximation bound of the MST heuristic is tight (not necessarily this instance, but the graphs $W_n$ in general). Can someone enlighten me?
Why does this graph show the tightness of MST heuristic's 2-approximation bound?
algorithms;algorithm analysis;approximation;traveling salesman
The graph I posted is missing some edges; adjacent nodes in the cycle are supposed to be connected. (Edit: fixed the graph with the TSP tour.) The actual graph is this:

My solution goes as follows. The MST computed for the MST heuristic is obviously:

allowing many Euler tours, among others these ones:

where the cycle nodes can be visited in any order. Now any of the tours are valid, so we can choose the worst one, for instance $v_0,v_4,v_0,v_1,v_0,v_3,v_0,v_6,v_0,v_2,v_0,v_5,v_0$. Based on that tour, the MST heuristic finds this TSP solution:

Now since all edges have cost $1$ and all nodes not connected in the graph have distance $2$, the cost of the tour is $2(n-1) + 2$ where the optimal tour would have cost $n+1$. In the limit
$$\lim_{n\rightarrow\infty} \frac{MST(W_n)}{Opt(W_n)} = \lim_{n\rightarrow\infty}\frac{2n}{n+1} = 2$$
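To see the limit numerically, here is a small Python check of the two cost formulas from the argument above, where n is the number of cycle nodes of $W_n$ (so $W_6$ matches the question's optimal cost of 7):

```python
def worst_mst_tour_cost(n):
    # worst shortcut tour from the star MST: n - 1 leaf-to-leaf hops of
    # distance 2, plus the first and last spoke edges of cost 1
    return 2 * (n - 1) + 2

def optimal_tour_cost(n):
    # hub -> around the outer cycle -> hub uses n + 1 edges of cost 1
    return n + 1

for n in (6, 100, 10**6):
    # the ratio approaches 2 as n grows
    print(n, worst_mst_tour_cost(n) / optimal_tour_cost(n))
```

For $W_6$ this gives 12/7 ≈ 1.71, and the ratio creeps arbitrarily close to 2 for large n, which is what makes the bound tight.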
_unix.375900
We have a Linux VM in our environment (RHEL 5.11). It's showing average CPU consumption above 90% (%sys and %usr are consumed, but not %iowait), checked with sar and top. Now when I look at top or ps output, it shows no offending processes. I checked the machine from ESXi as well (using esxtop); there it also shows CPU consumption on the machine. Any suggestions on what to troubleshoot further?
Linux CPU consumption
linux;vmware
null
_codereview.54253
Don't pay attention to the menu being awful above 789px. The theoretical task was to support only tablets and smartphones and I didn't bother to make the menu look fine on other devices. Just review it below 789px of VW.Here's a codepen to demonstrate.The page currently 100% matches the PSD designs and operates fine on all devices needed, though I am worried about the solutions I implemented to make it match.Setting the min-height to navbar to custom size and thus control the vertically centered looks on Collapse Button and Logo Image using padding. I had to set width: 70% below 482px to .navbar-brand so as the image resizes not to overflow.Maybe there's a way of more like automatic approach to the navbar size and menus being centered? I used some LESS to also count the paddings, but it also involved using paddings.Creating this second-container-helper class for the right section named All Kinds of Birds not to have the padding-left for the query above 768px, but have it for query below 768px.What is the better way to implement the looks? 
I mean, the All Kinds of Birds content not having the padding-left for above 768px, but have it below 768px so as it matches the PSD mockup.HTML<html lang=en><head> <meta charset=UTF-8> <meta charset=utf-8> <meta http-equiv=X-UA-Compatible content=IE=edge> <meta name=viewport content=width=device-width, initial-scale=1> <title>Document</title> <!-- Styles Embedded --> <link rel=stylesheet href=css/bootstrap.min.css> <link rel=stylesheet href=css/styles.css> <!-- Embedded Font Awesome for the right-caret icon --> <link href=http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css rel=stylesheet></head><body> <nav class=navbar navbar-default role=navigation> <div class=navbar-header> <div class=container-fluid> <button type=button class=navbar-toggle data-toggle=collapse data-target=#navbar-collapsed> <span class=sr-only>Toggle navigation</span> <span class=icon-bar></span> <span class=icon-bar></span> <span class=icon-bar></span> </button> <a class=navbar-brand href=#> <img src=http://php.atservers.net/test/images/logo.png alt=> </a> </div> <!-- //.container-fluid (Brand and button wrapped together so as they don't affect navbar LIST items) --> </div><!-- //.navbar-header --> <div class=collapse navbar-collapse id=navbar-collapsed> <ul class=nav navbar-nav> <li class=active><a href=#>Home</a></li> <li><a href=#>About Us</a></li> <li><a href=#>Products</a></li> <li><a href=#>Bird Information</a></li> <li><a href=#>Contact</a></li> </ul><!-- //.navbar-nav --> </div> <!-- //.navbar-collapse --> </nav> <div class=row> <div class=col-xs-12 col-sm-12> <img src=http://php.atservers.net/test/images/bird-main.png class=custom-img alt=Hi, I'm a Bird> </div> <!-- //.column --> </div><!-- //.row --> <div class=container-fluid> <div class=row> <div class=col-xs-12 col-sm-12> <p class=section-header>Find Birds In Your Area</p> <select name= id=> <option value= selected>Select Your Region</option> <option value=>CA</option> <option value=>FL</option> <option 
value=>WA</option> </select> </div> </div> <!-- //.row --> </div> <!-- //.container-fluid --> <div class=container-fluid> <div class=row> <div class=col-xs-12 col-sm-12> <p class=section-subheader>We'r Really Into Birds</p> <p class=section-content><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Perferendis vitae, tenetur, ullam animi, expedita facere enim deleniti excepturi dolor reprehenderit cupiditate saepe quidem voluptatem blanditiis ea dolore facilis totam fugit.</p><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Perferendis vitae, tenetur, ullam animi, expedita facere enim deleniti excepturi dolor reprehenderit cupiditate saepe quidem voluptatem blanditiis ea dolore facilis totam fugit.</p></p> </div><!-- //.column --> </div> <!-- //.row--> </div> <!-- //.container-fluid --> <div class=row> <div class=col-xs-12 col-sm-6 section-column> <img src=http://php.atservers.net/test/images/tree-birds.png alt= class=custom-img> <div class=container-fluid> <div class=section> <p class=section-subheader>Even Birds in Trees</p> <p class=section-content>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Magnam excepturi voluptates harum fugit enim non, id porro repellendus soluta cupiditate consequuntur dignissimos dolorem sint corporis, illo aliquam blanditiis hic nam.</p> <p class=section-content> <a href=# class=section-link><i class=fa fa-caret-right></i>Learn more about trees (and birds)</a> </p> </div> <!-- //.section --> </div><!-- //.container-fluid --> </div> <!-- //.column--> <div class=col-xs-12 col-sm-6> <img src=http://php.atservers.net/test/images/all-birds.png alt= class=custom-img> <div class=second-container-helper> <!-- Need that not to have left paddings @iPad, but have them @iPhone --> <div class=section> <p class=section-subheader>All Kinds of Birds</p> <p class=section-content>Lorem ipsum dolor sit amet, consectetur adipisicing elit. 
Est tenetur, reprehenderit at odio sit cumque neque placeat impedit praesentium soluta dolorum architecto qui molestias facilis voluptatibus, ut unde. Tenetur, provident.</p> <p class=section-content> <a href=# class=section-link><i class=fa fa-caret-right></i>Learn more about trees (and birds)</a> </p> </div><!-- //.section --> </div> <!-- //.second-container-helper --> </div> <!-- //.column --> </div><!-- //.row --> <footer> <div class=container-fluid> <div class=row> <div class=col-xs-12 col-sm-12> <div class=footer-credits> <p class=text-muted> Registered trademark of An Amazing Company Name, an affiliate of independent Canadian birds</p> <p class=text-muted> Copyright 2014 An Amazing Company Name</p> </div> <!-- //.footer-credits --> </div><!-- //.column --> <div class=col-xs-12 col-sm-12> <ul class=footer-nav> <li><a href=#>Home</a></li> <li><a href=#>Terms of Use</a></li> </ul> </div><!-- //.column --> </div> <!-- //.row --> </div><!-- //.container-fluid --> </footer> <script src=https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js></script> <script src=http://php.atservers.net/test/js/bootstrap.min.js></script></body>CSS/* Global Styles */img{ height: auto; width: 100%; display: block;}a{ color: #ff6600;}.container-fluid{ padding-left: 40px; padding-right: 40px;}/* //Global Styles *//* Navbar Styles */.navbar{ min-height: 158px; background-color: #ffffff; margin-bottom: 0; border: none;}.navbar-default{ border: none;}.navbar-brand{ margin-top: 36px;}.navbar-default .navbar-toggle{ margin-top: 58px;}.navbar-default .navbar-collapse{ border-color: transparent;}.navbar-collapse.in{ margin-top: 65px;}.collapsing{ margin-top: 65px;}/* Navbar Colors and Fonts */.navbar-nav{ background-color: #ff6600; padding-left: 40px; padding-right: 40px; margin: 0 -15px;}.navbar-default .navbar-nav > li > a{ color: #ffffff; border-top: 1px solid #fff;}.navbar-default .navbar-nav>.active>a,.navbar-default .navbar-nav>.active>a:hover,.navbar-default 
.navbar-nav>.active>a:focus{ background-color: transparent; color: #fff; border-top: none;}.navbar-default .navbar-nav > li > a:hover, .navbar-default .navbar-nav > li > a:focus{ color: #fff;}/* //Navbar Colors and Fonts */.nav li:before{ font-family: 'Glyphicons Halflings'; content: \e080; float: right; color: #fff;$ padding-top: 10px;}/* //Navbar Styles *//* Select Styles */select{ width: 100%; background-color: #ffffff; height: 60px; font-size: 24px; color: #c7c7cc; border: 2px solid #c7c7cc; border-radius: 4px; -webkit-appearance: none; background: url('../images/dropdown-arrow.png') no-repeat right; background-position: 98%;}/* //Select Styles *//* Section Styles */.section{ margin-bottom: 30px;}.section-header{ font-weight: bold; font-size: 30px; color: #ff6600; padding-top: 30px; padding-bottom: 30px;}.section-subheader{ font-size: 28px; color: #ff6600; padding-top: 36px; padding-bottom: 16px;}.section-content{ color: #999999; font-size: 18px;}.section-link{ font-size: 14px;}.fa-caret-right{ padding-right: 10px;}.container-helper{ padding-right: 40px;}/* //Section Styles *//* Media Queries *//* Fix for iPhone select font size */@media screen and (max-width: 482px){ select{ font-size: 18px; }}/* // Fix for iPhone *//* Section Media Queries Helpers *//* container in 2nd section has left padding below 768px to match PSD design */@media screen and (max-width: 768px){ .second-container-helper{ padding-left: 40px; }}/* container in the left section doesnt have right padding after 768px, but has it below 768px */@media screen and (min-width: 768px){ .section-column .container-fluid{ padding-right: 0; }}/* //Section Media Queries Helpers *//* Logo img resizes below 482px */@media screen and (max-width: 482px){ .navbar-brand{ width: 70%; margin-top: 46px; height: auto; }}/* //Logo img resizes below 482px *//* //Media Queries *//* Footer Styles*/footer{ margin-top: 80px;}.footer-credits{ border-top: 1px solid #c7c7cc;}.footer-nav{ list-style: none; padding-left: 
0;}.footer-nav li{ float: left; padding: 5px;}.footer-nav li a{ border-right: 1px solid #c7c7cc; padding-right: 10px;}.footer-nav li:last-child a{ border-right: none;}/* Footer Styles */
HTML & CSS code for small responsive test project based on Bootstrap 3
html;css;html5;twitter bootstrap
Inappropriate use of markupYou're using paragraphs when you should be using heading tags (h1-h6) to markup your headlines (you also have a spelling error).<p class=section-subheader>We'r Really Into Birds</p>Should be:<h1 class=section-subheader>We're Really Into Birds</h1>You have some invalid markup, which can cause unexpected things to happen. You're not allowed to place paragraphs inside other paragraphs:<p class=section-content><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Perferendis vitae, tenetur, ullam animi, expedita facere enim deleniti excepturi dolor reprehenderit cupiditate saepe quidem voluptatem blanditiis ea dolore facilis totam fugit.</p><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Perferendis vitae, tenetur, ullam animi, expedita facere enim deleniti excepturi dolor reprehenderit cupiditate saepe quidem voluptatem blanditiis ea dolore facilis totam fugit.</p></p>Should be:<div class=section-content><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Perferendis vitae, tenetur, ullam animi, expedita facere enim deleniti excepturi dolor reprehenderit cupiditate saepe quidem voluptatem blanditiis ea dolore facilis totam fugit.</p><p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Perferendis vitae, tenetur, ullam animi, expedita facere enim deleniti excepturi dolor reprehenderit cupiditate saepe quidem voluptatem blanditiis ea dolore facilis totam fugit.</p></div>However, if all of your paragraphs have the same styles, the paragraph itself should be styled rather than apply classes to each and every one:p { /* styles from the section-content class */}Creating empty markup for styling purposes is dirty and probably one of the worst things that Twitter's Bootstrap encourages (classitis being the other). 
There are cleaner ways of creating this type of element:<button type=button class=navbar-toggle data-toggle=collapse data-target=#navbar-collapsed> <span class=sr-only>Toggle navigation</span> <span class=icon-bar></span> <span class=icon-bar></span> <span class=icon-bar></span></button>Should be something more like this:<button type=button class=navbar-toggle data-toggle=collapse data-target=#navbar-collapsed> <span class=sr-only>Toggle navigation</span></button>And.navbar-toggle:before { /* styles to make it look like the ubiquitous hamburger icon */}The second-container-helper classYou have a few things going on wrong here (whether its the implementation or the design, I can't tell because you haven't shown us the mock-up you're working from). Either way, you don't need that extra element.You have a page that looks like this:In my opinion, it would be more aesthetically appealing if it looked like this:or this:However, all of them can be done with markup as simple as this by fiddling with the margins:<ul> <li><p>A</p></li><!-- --><li><p>B</p></li></ul>1 http://codepen.io/cimmanon/pen/KxDbL2 http://codepen.io/cimmanon/pen/jtEGf (needs a bit of tweaking on the margins)3 http://codepen.io/cimmanon/pen/oryHJ
_codereview.129242
I've been doing some reading and tutorials. As such I've contrived a node based project to apply what I've learned. I thought this was a good example as it had some conditional async calls and error handling.Any and all criticism of not following current JS best practices welcome. /** * mkLocalCfg function, * makes a local config directory structure and empty config file. * in the form ~/.config/nobjs/nobjs_config.json * * @param {mkLocalCfgCallback} cb - The callback that handles the response. * */function mkLocalCfg(cb) { //validate root folder exists, if not make it. if (!admin.hasHomeConfigDir()) { fs.mkdir(os.homedir() + /.config, 775, function(err) { //stop here if you can't make it. if (err) {cb(err); return;} }); } fs.mkdir(admin.getConfigLocation(false), 775, function(err) { //can't make config folder end it here if (err) {cb(err); return;} //write config file, return null or err on error fs.writeFile(admin.getConfigLocation(true), JSON.stringify({blogs: {}}), function(err) { if (err) {cb(err); return;} cb(null); }); });}
Optionally creating a directory path based based on existence in NodeJS
javascript;node.js;file system
There's one main issue I currently see with your code and that is that if the admin does not have a home configuration directory (admin.hasHomeConfigDir()) then it will be created asynchronously. The issue with this is that your second call won't wait for this, leading to a race condition where you're trying to make (and write to) a path that might not exist yet.

One solution for this is to have a function inside your function that handles the second half of your creation (i.e. the config location creation and file writing).

function createConfiguration(callback) {
    // Make the configuration directory and write to the configuration.
    var writeConfiguration = function() {
        fs.mkdir(admin.getConfigLocation(false), 775, function(err) {
            if (err) { callback(err); return; }

            // Write the configuration, either forwarding an error to the
            // callback if an error occurs or nothing at all on success.
            fs.writeFile(admin.getConfigLocation(true), JSON.stringify({blogs: {}}), function(err) {
                if (err) { callback(err); return; }

                callback(null);
            });
        });
    };

    // If the admin home configuration directory exists, don't attempt to
    // create it.
    if (admin.hasHomeConfigDir()) {
        writeConfiguration();
    } else {
        // Otherwise create the directory and write the configuration.
        fs.mkdir(os.homedir() + "/.config", 775, function(err) {
            if (err) { callback(err); return; }

            writeConfiguration();
        });
    }
}

I tidied up your code a bit, mostly renaming things (e.g. renaming the function to have a simpler name, renaming arguments to be more readable) and rewording the comments.

Another issue I have noticed with your code is that if the directory that admin.getConfigLocation(false) returns already exists, an error will be thrown. You should probably check whether or not the configuration location exists before creating it.

Overall it's rather good and well written code, although I don't really have much experience in the field of JS standards so to speak.
_unix.38560
I installed CUDA toolkit on my computer and started BOINC project on GPU. In BOINC I can see that it is running on GPU, but is there a tool that can show me more details about that what is running on GPU - GPU usage and memory usage?
GPU usage monitoring (CUDA)
monitoring;gpu
For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and temperature of GPU. There also is a list of compute processes and few more options but my graphic card (GeForce 9600 GT) is not fully supported.Sun May 13 20:02:49 2012 +------------------------------------------------------+ | NVIDIA-SMI 3.295.40 Driver Version: 295.40 | |-------------------------------+----------------------+----------------------+| Nb. Name | Bus Id Disp. | Volatile ECC SB / DB || Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. ||===============================+======================+======================|| 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A || 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default ||-------------------------------+----------------------+----------------------|| Compute processes: GPU Memory || GPU PID Process name Usage ||=============================================================================|| 0. Not Supported |+-----------------------------------------------------------------------------+
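If you want to reuse these numbers from a script, one (admittedly fragile) option is to scrape the human-readable nvidia-smi output; newer driver versions also offer machine-readable query options, which are preferable when available. A sketch that pulls the memory figures out of a table row like the one above:

```python
import re

# a line taken from the nvidia-smi table shown above
sample = "| 0%   51 C  N/A   N/A /  N/A |  90%  459MB /  511MB |  N/A  Default |"

def parse_mem(line):
    # pull the "used / total" MB figures out of an nvidia-smi table row
    m = re.search(r"(\d+)MB\s*/\s*(\d+)MB", line)
    if not m:
        return None
    used, total = int(m.group(1)), int(m.group(2))
    return used, total, round(100 * used / total)

print(parse_mem(sample))  # -> (459, 511, 90)
```

Note this is tied to the exact table layout of this driver generation, so treat it as a monitoring hack rather than a stable interface.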
_cstheory.14717
Given a connected arbitrary network $G = (V,E)$, where $V$ is a set of nodes (processors) and $E$ is the set of edges between the nodes. Each node $v _i$ is assigned a non-empty set $S(v _i)$, where $\bigcup _i S(v _i) = S$. The set $S$ is a universe of size $k$. The objective is to let each node $v _i$ find a subset $S'(v _i) \subseteq S(v _i)$ such that (by simply communicating with each other).$\bigcup _i S'(v _i) = S$For each $v_i$ and $v _j$ such that $i \neq j$, $S'(v _i) \cap S'(v _j) = \emptyset$. The idea is that no two set $S'(v_i)$ and $S'(v _j)$ share any common elements, while a set $S'(v _i)$ may be empty. Yet, every element of $S$ must be in a set $S'(v _i)$. The problem is quite simple as you notice. The problem is at least as costly as election. My questions though:Are there similar problems to this problem (note that the sets are not processors !) Do you think there are efficient randomized distributed algorithms for this problem ? Any hints for some techniques that may help break the election bound. (By randomized: I mean I may relax the Req2. such that some nodes have intersecting sets, but not many of them). Note: any computational model would be accepted (I m just looking for hints). But asynchronous computational model with unique ID's is perhaps the most preferred.
Distributed algorithms on sets
reference request;randomized algorithms;dc.distributed comp
null
_webapps.911
Question is in the title. What are the specific steps involved?
How do I add reCaptcha to my WordPress.com blog?
wordpress
As far as I know, today that is not possible. WordPress.com uses Akismet to protect you from spam.
_webmaster.103298
For example, if my website's listings properly display the breadcrumbs rich snippet, but do not show the aggregate star rating rich snippet, should I assume it is something in the structure or is Google intentionally suppressing these?
Does Google pick and choose which rich snippets to display on their search results?
google search;rich snippets
null
_unix.53503
In a GUI file manager it is possible to select a few files, press Ctrl-C (which supposedly copies some info about the files to the clipboard), then navigate to another folder and press Ctrl-V, which will then copy the files into that directory.

As an experiment, after copying files in the file manager, it is possible to switch to a text editor - pressing Ctrl-V there pastes a list of absolute filenames. The reverse process (copying a list of files from a text editor and pasting them to a file manager) does not work, which is supposedly due to different target atoms.

The goal of the exercise is to be able to copy some files from the command line, for example

find ${PWD} -name *.txt | xclip <magic parameters>

then switch to a file manager and copy them all to a directory using File->Paste. So, the question is: What parameters of xclip (or another program) do I need to specify so the file manager recognizes the selection as a list of files and enables its Paste menu item?

Alternatively, is there a low-level tool which would allow me to inspect the contents of an X selection and see what data it currently contains?
Copying files from command line to clipboard
command line;x11;clipboard
Yes, basically, you'd need to offer the CLIPBOARD selection either as text/uri-list with the content being

/path/to/file1
/path/to/file2

or as application/x-kde-cutselection or x-special/gnome-copied-files with content copy\nfile://$path1\nfile://$path2\0 or cut\nfile://$path1\nfile://$path2...\0

With xclip you can achieve this with something like

find $PWD -name *.pdf | xclip -i -selection clipboard -t text/uri-list

I've also found this loliclip command that looked promising, but though I could retrieve the values, I wasn't able to store them and have them retrieved from loliclip by pcmanfm successfully.

You also should be able to implement it in a few lines of perl-tk.
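As an illustration, here's a rough Python sketch that builds the x-special/gnome-copied-files payload described above and hands it to xclip via a subprocess. The paths are made up for the example, and the clipboard call is skipped when xclip or an X display isn't available:

```python
import os
import shutil
import subprocess

def gnome_copy_payload(paths, op="copy"):
    # x-special/gnome-copied-files: the operation ("copy" or "cut"),
    # then one file:// URI per line, NUL-terminated
    body = "\n".join([op] + ["file://" + p for p in paths])
    return body.encode() + b"\0"

payload = gnome_copy_payload(["/tmp/a.txt", "/tmp/b.txt"])

# only attempt the clipboard transfer if xclip and an X display exist
if shutil.which("xclip") and os.environ.get("DISPLAY"):
    subprocess.run(
        ["xclip", "-i", "-selection", "clipboard",
         "-t", "x-special/gnome-copied-files"],
        input=payload, check=True)
```

Whether a given file manager then enables Paste still depends on which target it asks for, so you may need to offer text/uri-list as well.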
_unix.385490
I've created a systemd job using systemd-run --on-calendar .... Now I've replaced it with proper .timer and .service files, but I'm not able to remove the old one. I can stop it and disable it, but when I call systemctl list-timers it still appears with its arbitrary name run-r0d0dc22.... I also looked for its .timer file, but I can't find it. Thanks for any help.
Removing a timer created with systemd-run --on-calendar
systemd;systemd timer
The transient files end up in /run/user/ and do not seem to ever be removed until the user logs out (for systemd-run --user) or until a reboot, when /run is recreated.For example, if you create a command to run once only at a given time:systemd-run --user --on-calendar '2017-08-12 14:46' /bin/bash -c 'echo done >/tmp/done'You will get files owned by you in /run:/run/user/1000/systemd/user/run-28810.service/run/user/1000/systemd/user/run-28810.service.d/50-Description.conf/run/user/1000/systemd/user/run-28810.service.d/50-ExecStart.conf/run/user/1000/systemd/user/run-28810.timer/run/user/1000/systemd/user/run-28810.timer.d/50-Description.conf/run/user/1000/systemd/user/run-28810.timer.d/50-OnCalendar.confFor non --user the files are in /run/systemd/system/You can remove the files, do a systemctl [--user] daemon-reload and then list-timers will show only the Unit name, with their last history if they have already run. This information is probably held within systemd's internal status or journal files.
_unix.343920
I wrote this simple script and I need some help.

export http_proxy='http://proxy.test.cz:1234/'
wget -nvq --proxy-user=test --proxy-password=test google.com &>/dev/null | grep -q 'You cant user internet' || echo "Proxy isnt working." | mail -s "Proxy isnt working" -r "No-reply<[email protected]>" [email protected]

Steps taken:

1. Export the address of our proxy.
2. Download from www.google.com with wget.
3. Check the result from the proxy for 'You cant user internet'. If found, then it should end, but when it is not found it should send an email to my address.

The problem is that it sends the email even if it finds 'You cant user internet'. Any help please?
Trying to test if proxy is working
linux;bash;scripting;proxy;test
null
_cs.26317
If $B$ is a complexity class, then the class $P^B$ (for example) is defined as the set of problems that can be run in polynomial time, given an oracle to every problem in $B$. That's what they told me in my Theory of Computation class, anyways.Per this definition, it seems to me that $P^P$ captures every decidable language. Here's why I think that:Let $M$ be a TM that always halts. On input $x$, construct a new machine $M'$ that: (1) checks to see if the input is equal to $x$, (2) if its input is $x$ then it simulates $M$ on $x$, (3) if its input is not $x$ then it instantly rejects.This new machine $M'$ runs in $O(n)$ time, so in the class $P^P$, we have an oracle to it. We can now consult this oracle to find out if $M'$ accepts $x$, and we learn in only $O(n)$ time whether or not $M$ accepts $x$. So we have an algorithm that decides $M$ and runs in $TIME(O(n))$, given an oracle to some countable set of problems in $TIME(O(n))$ (e.g. every $M'$, constructed with respect to every possible finite bitstring $x$).What am I misunderstanding?Here is an attempt to clarify my confusion.This is the model that I think we are using:There are three tapes usable by the Oracle Turing Machine: a standard worktape, an oracle input tape, and an oracle machine tape. 
The OTM can enter a special state where it takes the machine description written on the oracle machine tape, performs a magical 1-step simulation of that machine on input equal to the contents of the oracle input tape, and then writes down a $1$ or a $0$ on the worktape according to the results of the simulation.Define: A language $L$ is in $P^P$ if there is an OTM that decides L and always halts in $p(n)$ steps on length $n$ input (for some polynomial $p$), and also for every Turing machine description $M$ used by the OTM, there exists a polynomial $q$ such that $M$ halts in $q(n)$ steps on input $M$.Then my argument is valid, I think: every machine that we write down runs in linear time overall (even though the constant associated with that machine will grow very quickly for increasingly-long inputs).This wouldn't be a problem if we flipped the quantifiers (i.e. there exists a polynomial $q(n)$ such that every machine description $M$ used by the OTM halts in $q(n)$ steps). But I'm not sure that still describes $P^P$...
Question about the definition of complexity class oracles
complexity theory;terminology;oracle machines
Either you're misunderstanding what they said, or you're not misunderstanding anything.If $B$ is a complexity class, then the class $P^B$ is defined as $\bigcup_{L \in B} P^{L}$. (That applies even when $B$ does not have a complete problem.)If $B$ is a complexity class and $K$ is complete for $B$ under polynomial-time Turing reductions, then $P^B = P^K$.Proof: If those hypotheses hold then for all languages $L$ in $B$, by using the reduction to simulate oracle queries to $L$, one has $P^L \subseteq P^K$.Thus, if those hypotheses hold then $P^K \subseteq \bigcup_{L \in B} P^{L} = P^B = \bigcup_{L \in B} P^{L} \subseteq \bigcup_{L \in B} P^{K} = P^K$.Therefore, if those hypotheses hold then $P^B = P^K$.
_codereview.140001
Which way should I prefer? How can I go about making the right decision when it comes to these types of things?Code based on a 'conditional sleep' where the condition needs to be determined by calling code. //Used throughout the examplespublic interface Conditional{ boolean condition();}//The abstract class...public abstract static class ConditionalSleep{ public abstract boolean condition(); public void sleep(long time, TimeUnit timeUnit){ if(!condition()) return; try { Thread.sleep(timeUnit.toMillis(time)); } catch (InterruptedException e) { e.printStackTrace(); } }}//The utility classpublic static final class ThreadUtils { public static void sleep(long time, TimeUnit timeUnit){ try { Thread.sleep(timeUnit.toMillis(time)); } catch (InterruptedException e) { e.printStackTrace(); } } public static void sleep(long time, TimeUnit t, Conditional c){ if(!c.condition()) return; ThreadUtils.sleep(time,t); }}//The sleeper class coupled to an interface..public static class Sleeper{ private Conditional conditional; public void sleep(long time,TimeUnit timeUnit){ if(conditional != null && !conditional.condition()) return; try { Thread.sleep(timeUnit.toMillis(time)); } catch (InterruptedException e) { e.printStackTrace(); } } public void sleep(long time, TimeUnit timeUnit, Conditional c){ if(c!= null && !c.condition()) return; sleep(time,timeUnit); } public void setConditional(Conditional conditional){ this.conditional = conditional; }}public static void main(String[] args) { //or even.. if(1>1){ try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } }}
Conditional sleeper
java;comparative review
null
_unix.351753
Like the title states, I have a simple apache2 server set up on my raspberry pi to run a local website at home.I'd like to be able to use PHP to read the contents of other folders on the Raspberry Pi, outside of the specified wwwroot. I actually keep my wwwroot in my samba drive on the raspberry pi, located in:/network-drive/websiteI'd like to be able to access other folders in the /network-drive directory using PHP scripts inside the website folder.Is this possible? I tried disabling open_basedir in php.ini, but that didn't change anything!
Raspberry Pi & Apache2, Accessing files outside of webroot
apache httpd;php
null
_reverseengineering.1531
I analyzed some binaries for x86/x86-64 that use a number of obfuscation tricks. One of them is called overlapping instructions. Can someone explain how this obfuscation works and how to work around it?
What is overlapping instructions obfuscation?
obfuscation;binary analysis;deobfuscation
The paper Static Analysis of x86 Executables explains overlapping instructions quite well. The following example is taken from it (page 28):

0000: B8 00 03 C1 BB    mov eax, 0xBBC10300
0005: B9 00 00 00 05    mov ecx, 0x05000000
000A: 03 C1             add eax, ecx
000C: EB F4             jmp $-10
000E: 03 C3             add eax, ebx
0010: C3                ret

By looking at the code, it is not apparent what the value of eax will be at the return instruction (or that the return instruction is ever reached, for that matter). This is due to the jump from 000C to 0002, an address which is not explicitly present in the listing (jmp $-10 denotes a relative jump from the current program counter value, which is 0xC, and 0xC - 10 = 2). This jump transfers control to the third byte of the five-byte-long move instruction at address 0000. Executing the byte sequence starting at address 0002 unfolds a completely new instruction stream:

0000: B8 00 03 C1 BB    mov eax, 0xBBC10300
0005: B9 00 00 00 05    mov ecx, 0x05000000
000A: 03 C1             add eax, ecx
000C: EB F4             jmp $-10
0002: 03 C1             add eax, ecx
0004: BB B9 00 00 00    mov ebx, 0xB9
0009: 05 03 C1 EB F4    add eax, 0xF4EBC103
000E: 03 C3             add eax, ebx
0010: C3                ret

It would be interesting to know if/how IDA Pro and especially the Hex-Rays plugin handle this. Perhaps @IgorSkochinsky can comment on this...
_unix.333674
So in a previous version, ffmpeg introduced a -safe 0 / -safe 1 option. I discovered this because I had a bash script that used ffmpeg with absolute paths, i.e. in /dev/shm/. For the last 6 months or so I have worked around this by using the -safe 0 option.

Today while playing around I found that the latest version of ffmpeg in Ubuntu 16.10 returns

Option safe not found

when trying to use the -safe flag. Did they finally decide to do away with the -safe option? Or is this a mis-build?
ffmpeg changed safe option again?
ubuntu;ffmpeg
null
_softwareengineering.321676
When you consider Scala and its pass-by-name, you could (if I am not mistaken) pack the argument into a lambda and pass it by value to the function. Internally the function would use the pass-by-name parameter as a lambda.

However, in Algol you can also change the parameter (so it is possible to write swap, for example).

My question is: how is full pass-by-name implemented?
How is full pass-by-name implemented?
parameters
There are two possible implementations. The first is textual substitution, i.e. the called function is expanded inline with references to its parameters replaced with the code you supplied. This is basically a form of macro, and suffers from many of the problems inherent with that, most notably that a function cannot be made recursive when defined like this.

The other is to use a pair of thunks, i.e. generated code that substitutes for the operations of reading and writing the parameter. You can simulate this in an OOP language by defining an interface, e.g.

public interface CallByNameArg<T>{
    void set (T value);
    T get();
}

Then a function uses those methods when it wants to access its arguments. E.g. a function in a call-by-name language that looks like this:

int sum (int index, int start, int end, int value) // call-by-name
{
    int r = 0;
    for (index = start; index < end; index ++)
        r += value;
    return r;
}

becomes

int sum (CallByNameArg<int> index, CallByNameArg<int> start, CallByNameArg<int> end, CallByNameArg<int> value)
{
    int r = 0;
    for (index.set (start.get ()); index.get () < end.get (); index.set (index.get () + 1))
        r += value.get ();
    return r;
}

and a call site:

int a[] = ....;
int i;
int s = sum(i, 0, a.length, a[i]);

becomes (assuming a language with proper closures, so not Java even though I've been using Java-like syntax):

int s = sum (
    new CallByNameArg { void set (int value) { i = value; } int get () { return i; } },
    new CallByNameArg { void set (int value) { throw NotModifiable; } int get () { return 0; } },
    new CallByNameArg { void set (int value) { throw NotModifiable; } int get () { return a.length; } },
    new CallByNameArg { void set (int value) { a[i] = value; } int get () { return a[i]; } });

If you don't have actual closures in your language, you can simulate them by moving modifiable variables (in this case i and a) into an object and defining the CallByNameArg instances as inner classes or similar.

Hope this makes it a bit clearer.
_unix.360800
I'd like to know what the minus (-) and the EOC in the command below mean. I know some languages like Perl allow you to choose any combination of characters (not bound to EOF), but is that the case here? And the minus is a complete mystery to me. Thanks in advance!

ftp -v -n $SERVER >> $LOG_FILE <<-EOC
    user $USERNAME $PWD
    binary
    cd $DIR1
    mkdir $dir_lock
    get $FILE
    bye
EOC
What does <<-EOC mean?
linux;bash;shell script;io redirection
That's a here-document.

command <<-word
	here-document contents
word

The word used to delimit the here-document is arbitrary; it's common, but not necessary, to use an upper-case word.

The - in <<-word has the effect that tabs will be stripped from the beginning of each line in the contents of the here-document.

cat <<-SERVICE_ANNOUNCEMENT
	hello
		world
SERVICE_ANNOUNCEMENT

If the above here-document was written with literal tabs at the start of each line, it would result in the output

hello
world

rather than

	hello
		world

Tabs before the end delimiter are also stripped out with <<- (but not without the -):

cat <<-SERVICE_ANNOUNCEMENT
	hello
		world
	SERVICE_ANNOUNCEMENT

(same output)
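Since tabs are invisible on screen, a quick way to convince yourself of the stripping behaviour is to build the test script with printf, so every tab is spelled out as \t. This is just an illustrative sketch; the file name /tmp/heredoc_demo.sh is arbitrary:

```shell
# Generate a script whose here-document lines (and end delimiter)
# start with literal tabs, then run it. With <<- the leading tabs
# are stripped, so the output is flush-left.
printf 'cat <<-EOF\n\thello\n\t\tworld\n\tEOF\n' > /tmp/heredoc_demo.sh
sh /tmp/heredoc_demo.sh
# prints:
# hello
# world
```

Changing `<<-` to `<<` in the generated script would keep the tabs in the output, and the tab-indented `EOF` line would no longer be recognized as the terminator.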
_cs.48378
I am currently studying materials for my uni subject. There are two examples of Kleene algebras, but I don't see what the difference between them is.

Class $2^{\Sigma^{*}}$ of all subsets of $\Sigma^{*}$ with constants $\emptyset$ and $\{\varepsilon\}$ and operations $\cup$, $\cdot$ and $*$.

Class of all regular subsets of $\Sigma^{*}$ with constants $\emptyset$ and $\{\varepsilon\}$ and operations $\cup$, $\cdot$ and $*$.

What is the difference between $2^{\Sigma^{*}}$ and all regular subsets of $\Sigma^{*}$? What is that difference I don't see? Thanks in advance.
Kleene algebra - powerset class vs class of all regular subsets
formal languages;regular languages
The difference between the two is that $2^{\Sigma^*}$ contains all languages over $\Sigma$, whereas the class of all regular subsets of $\Sigma^*$ contains only the regular languages over $\Sigma$. So if $L$ is a non-regular language over $\Sigma$, then $L \in 2^{\Sigma^*}$ while $L$ doesn't belong to the set of all regular languages over $\Sigma$. For any $\sigma \in \Sigma$, you can take $L = \{ \sigma^{n^2} : n \geq 0 \}$, for example.
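Spelled out in symbols (here $\mathrm{Reg}(\Sigma)$ is my shorthand for the class of regular subsets of $\Sigma^*$, not notation from the question):

```latex
% Both classes are closed under \cup, \cdot and {}^{*} and contain
% \emptyset and \{\varepsilon\}, so both are Kleene algebras, but the
% first strictly contains the second:
\[
  \mathrm{Reg}(\Sigma) \subsetneq 2^{\Sigma^{*}},
  \qquad
  L = \{\, \sigma^{n^2} : n \geq 0 \,\} \in 2^{\Sigma^{*}} \setminus \mathrm{Reg}(\Sigma)
  \quad (\sigma \in \Sigma).
\]
```

Non-regularity of $L$ follows from the pumping lemma, since the gaps between consecutive square lengths grow without bound.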
_cs.50088
The isomorphic induced subgraph problem is the problem of deciding whether, given two graphs $G$ and $H$, $G$ contains an induced subgraph isomorphic to $H$.

Is there a proof using Courcelle's theorem that this problem is fixed-parameter tractable when parameterized by $|H|$ and $\mathrm{tw}(G)$ (treewidth)?
Isomorphic induced subgraph problem using Courcelle's theorem
algorithms;algorithm analysis;graph isomorphism;parameterized complexity
null
_codereview.67668
Below are a few of my handlers and helper methods. This works perfectly fine, but I'm pretty sure I'm breaking about 1001 modern conventions and possibly even optimizations.

Could you provide some insights as to how I can improve the structure and design of this code?

function getSelectedId() {
    var grid = $("MainGrid").data("kendoGrid");
    var selected = grid.select();
    var data = grid.dataItem(selected);
    return data.Id;
}

function changeGrid(e) {
    var id = getSelectedId();
    if (id == 0) {
        hideSubGrid();
    } else {
        refreshSubGrid();
        showSubGrid();
        setTitle(id);
    }
}

function hideSubGrid() {
    var subGrid = $("SubGrid").data("kendoGrid");
    subGrid.addClass("hidden");
}

function showSubGrid() {
    var subGrid = $("SubGrid").data("kendoGrid");
    subGrid.removeClass("hidden");
}

function refreshSubGrid(e) {
    var subGrid = $("SubGrid").data("kendoGrid");
    subGrid.dataSource.read();
    subGrid.refresh();
}

function setTitle(id) {
    $("#Title").text(id);
}

Some quick searching on the subject suggests a style similar to what I've tried in the below code. However, I don't think this looks a whole lot better. I guess it's a step in the right direction, where I won't have to repeat myself as much, though.

var gridHandler = {
    mainGrid: $("MainGrid").data("kendoGrid"),
    subGrid: $("SubGrid").data("kendoGrid"),
    title: $("#Title"),

    getSelectedId: function () {
        var selected = this.mainGrid.select();
        var data = this.mainGrid.dataItem(selected);
        return data.Id;
    },

    changeGrid: function (e) {
        var id = this.getSelectedId();
        if (id == 0) {
            this.hideSubGrid();
        } else {
            this.refreshSubGrid();
            this.showSubGrid();
            this.setTitle(id);
        }
    },

    hideSubGrid: function () {
        this.subGrid.addClass("hidden");
    },

    showSubGrid: function () {
        this.subGrid.removeClass("hidden");
    },

    refreshSubGrid: function (e) {
        this.subGrid.dataSource.read();
        this.subGrid.refresh();
    },

    setTitle: function (id) {
        this.title.text(id);
    }
};
Event handlers & helper methods for a Kendo Grid
javascript;jquery;design patterns
The second solution has three main improvements:

Fewer lookups. Every time you call your original getSelectedId method, the lookup for $("MainGrid").data("kendoGrid"); is performed again. The 2nd version caches this in gridHandler.mainGrid -- only one lookup. The same goes for subGrid.

More compact. For example, the subGrid local variable here was not particularly useful:

function hideSubGrid() {
    var subGrid = $("SubGrid").data("kendoGrid");
    subGrid.addClass("hidden");
}

It would have been simpler and more natural this way:

function hideSubGrid() {
    $("SubGrid").data("kendoGrid").addClass("hidden");
}

Reduced namespace clutter. By putting the methods in a gridHandler, you don't have many methods lying around in the global namespace; they are all neatly inside gridHandler.
_webapps.6058
I've got a spreadsheet where each row of the first column is a question, and the next 4 columns are the 4 optional answers to that question.

I want to turn these questions into an online form (like the one offered by Google Docs).

Is there a web service that can offer something like this?
Creating an online form - by Importing the questions from a spreadsheet?
google spreadsheets;google forms
null
_unix.18840
time is a brilliant command if you want to figure out how much CPU time a given command takes.

I am looking for something similar that can measure the disk I/O of the program and any children. Preferably it should distinguish between I/O that was cached (and thus did not cause the disk to spin) and I/O that was not cached.

So I would like to do:

iomeassure my_program my_args

and get output similar to:

Cached read: 10233303 Bytes
Cached write: 33303 Bytes  # This was probably a tmp file that was erased before making it to the disk
Non-cached read: 200002020 Bytes
Non-cached write: 202020 Bytes

I have looked at vmstat, iostat, and sar, but none of these are looking at a single process. Instead they look at the whole system. I have looked at iotop, but that only gives me a view of this instant.

--- edit ---

snap's answer seems close.

'File system inputs:' is the non-cached reads in 512-byte blocks.
'File system outputs:' is the cached writes in 512-byte blocks.

You can force the cache empty with:

sync ; echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null

I tested with:

seq 10000000 > seq
/usr/bin/time -v bash -c 'perl -e "open(G,\">f\"); print G <>; close G; unlink \"f\";" seq'
Measuring disk I/O usage of a program
io;time;measure
You did not specify which operating system you use.

Linux

Instead of using time foo which is (usually) a shell built-in, you could try the external command /usr/bin/time foo. It gives some additional information such as the number of file system inputs and outputs (but no information about cache hits or byte amounts). See man time and man getrusage for further instructions. Note that this feature requires Linux kernel version 2.6.22 or newer.

FreeBSD

Use /usr/bin/time -l foo. It gives the number of inputs and outputs. See man time and man getrusage for further instructions.
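As a hedged sketch of the Linux case (assuming GNU time is installed as /usr/bin/time, which is a separate package on many distributions), you can filter the verbose report down to just the I/O counters:

```shell
# Run a command under GNU time and keep only the block-I/O lines.
# "File system inputs/outputs" are counted in 512-byte blocks and only
# reflect I/O that reached the block layer, not page-cache hits.
if [ -x /usr/bin/time ]; then
    /usr/bin/time -v sh -c 'cat /etc/hostname > /dev/null' 2>&1 \
        | grep -E 'File system (inputs|outputs)'
fi
```

Note that GNU time writes its report to stderr, hence the 2>&1 before the pipe; a second run of the same command will typically show 0 inputs because the file is then served from the page cache.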