id | question | title | tags | accepted_answer |
---|---|---|---|---|
_unix.337172
|
I have a server: CentOS Linux release 7.3.1611 (Core), kernel 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux. I think its network connection cut out at one point (it's back now). I haven't been able to find anything in /var/log/messages - maybe I just don't know what to look for? Essentially I'm looking for two things: whether there was a problem with the NIC, and whether the server lost its internet connection. The second one is obviously harder to figure out (maybe impossible?). Obviously I should have some external monitoring solution, but from an educational perspective, where would you look (locally on the host) to solve this mystery?
|
What system logs might tell me if a server lost its internet connection?
|
linux;centos;networking;rhel;syslog
|
Check the kernel ring buffer (dmesg) - you should see information for network connectivity events there.
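For example (a rough sketch; the interface name and the exact message wording vary by driver, so adjust the patterns to your hardware):

dmesg | grep -iE 'link (is )?(up|down)|eth0|enp'
# On systemd-based systems such as CentOS 7, the journal keeps
# kernel messages as well:
journalctl -k | grep -i link

A NIC driver typically logs lines such as "NIC Link is Down" / "NIC Link is Up" when the carrier drops and returns, which answers the first question; a silent upstream outage (the second question) usually leaves no local kernel trace.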
|
_webmaster.74598
|
I am curious about some information that Google Webmaster Tools gives me, because I see two different pieces of information about the Google index of the same webpage at the same time. First information (sitemap files): I submitted sitemap.xml a couple of months ago; now I can see charts about how many pages were sent for indexing and how many of them were indexed. Right now it reports that 106 of 113 total pages were indexed. Second information (left menu -> Google index -> Index status): In the Index status section there is information about the number of indexed pages, which is zero. In the lower chart I can see that it has always been zero. I would like to ask: is there any difference between these indices, and what is it?
|
Two kinds of Google index
|
google;google search console;sitemap;indexing;google index
|
Google looks at things differently than we do. Sorry. That is just the way it is. Sometimes you have to see things from Google's perspective before the data makes sense. The Google Webmaster Tools data lags behind by a couple of days, and some elements a bit more. As far as the various index counts go, this explains the difference. However, there is no direct line between the sitemap and the number of pages indexed. The reason for this is simple: Google does not rely upon the sitemap exclusively and will find pages through its spider. It is possible that the index lists pages that no longer exist, or pages that are 404 pages or soft-404 pages due to bad links. At one point, Google was reporting twice as many pages as I actually had, due to an error that created bad links. This should have resulted in hard 404 errors without a 404 page, but it took several months before Google began dropping these pages. It has not fully corrected itself after 6 months. This is because Google has not tried to hit all of these pages enough times to de-list them. As for why Google is telling you that you have 0 indexed pages in the Index Status, I have no idea. You can check the number simply by doing a site:example.com style query in Google Search. But please understand that this number can fluctuate with index refreshes, which happen several times a day. If the number based upon the search is not 0, then you have nothing to worry about. The GWT Index Status count has simply not been updated for whatever reason. Only God and Google know why.
|
_softwareengineering.285591
|
I'm thinking of the following two requests. Request 1: Load all static HTML, JavaScript, images, etc. - the website framework, so to speak. Then fire a second request to get dynamic content (say, news items or latest posts). Request 2: Send new HTML, JavaScript, images, etc. as required by the dynamic content. I'm expecting the first request to be cached for subsequent times a user visits the website, and thus to be a non-issue for returning users. But for first-time users, should I maybe build the entire content server-side and send it together in one request? In the approach above, both requests need to be responded to before the user can use the website. Can I just leave it as two requests (easier for me to program, since I'm building a single-page app that relies on Ajax requests)? Or is there an easy way to build everything server-side and avoid two requests for first-time users? ...but wouldn't that break the caching mechanism, since the same URL would return partially different content each time? Is this a known issue with a known solution?
|
Separate requests for static and dynamic page content on first page load?
|
web applications
|
There are a couple of things to consider here, at least when it comes to HTML content. Method 1: Loading a portion of the site that will remain static for the foreseeable future is good when the same users are expected to frequent the website and SEO is less of an issue*. You also have some more control over perceived performance; the 'static' content can prepare the user with a basic UI that will lazily load content. I would recommend this method for websites that are more 'app' than 'site', where content is not the main reason visitors use your site. Method 2: This is better for lots of one-time website visitors because there are fewer round trips to worry about. It is also better for SEO because crawlers will see what the server generates*. It's also easier to maintain and update. I would recommend this for general-purpose sites that target a larger audience and contain a lot of mostly static content. CSS and JS can be loaded and cached immediately. There are few reasons to load CSS or JS at any time other than when the website is first loaded. *I don't have any sources handy, but I believe there has been some effort to make web crawlers capable of reading AJAX-generated content.
|
_webapps.25538
|
Here's a screenshot of two long lists on one of the Trello boards we use at Stack Exchange:Both lists are fairly long - long enough to scroll. But one list looks longer than the other. What determines how long the gray container part of each list is? Why don't they just go down to the bottom of the window? They aren't stretching or compressing to display the same number of full cards - the left list shows 8 full cards while the right list shows 9 full cards. They also aren't stretching or compressing to round up or down to the nearest full card - you can see a portion of a card at the bottom of each list. (Both of these lists are scrolled all the way to the top, although to my eyes it does look like the list on the right is scrolled down a tiny bit. This is an optical illusion of some kind - I double checked.)So: what's the story here?
|
What determines the length of a Trello list?
|
trello
|
They should be given the same max height, i.e. they should be the same length if they would otherwise go off the board. If not, it's a bug.There's a known bug that you may be experiencing. If you are zoomed out and click the 'Add card', then click off, it can shorten the list.Most of this weirdness will go away with the new card composer, though. https://trello.com/c/pRlmLRWS
|
_webmaster.13961
|
I have read an article about sites like Google and Facebook using a redirect script to keep track of who clicks what links on the web. And since they have a lot of traffic, they are monitoring a lot of users who are clicking a lot of links. However, they aren't the source of 100% of the links in the world, and thus they cannot monitor every link that every user clicks on. However, we (the smaller websites) are also capable of monitoring what our visitors click on, though we have to do it on a much smaller scale (we don't have as many visitors as Facebook and/or Google). The only problem is, we don't collect enough data for it to be very useful to anyone but us. However, if someone were to pay owners of the small sites for their click data and combine it into one big dataset, it could be very useful. In fact, it would probably be useful enough that some people would want to pay to have access to that data. I was wondering if there are any websites that use the business model described above (get small websites to sell you their data, and sell the combined data to other corporations). If anyone knows of such a site, it could be an interesting revenue source for webmasters. And if there is no such site, then it could be an interesting idea for anyone wanting to start a business... NOTE: I'm asking this more out of curiosity than because I actually want to sell my users' click data (though I might consider it in the future).
|
Is it Possible to Earn Revenue by Selling Click Data?
|
redirects;revenue
|
I think if people want to find out this sort of thing they'd go to Alexa who track click data for people who install their toolbar.
|
_scicomp.26105
|
I have a question regarding fitting a quadric to a set of points and corresponding normals (or, equivalently, tangents). Fitting quadric surfaces to point data is well explored. Some works are as follows:

Type-Constrained Direct Fitting of Quadric Surfaces, James Andrews, Carlo H. Sequin, Computer-Aided Design & Applications, 10(a), 2013, bbb-ccc

Algebraic fitting of quadric surfaces to data, I. Al-Subaihi and G. A. Watson, University of Dundee

Fitting to projective contours is also covered by some works, such as this one. From all these works, I think Taubin's method for quadric fitting is pretty popular:

G. Taubin, "Estimation of Planar Curves, Surfaces and Nonplanar Space Curves Defined by Implicit Equations, with Applications to Edge and Range Image Segmentation", IEEE Trans. PAMI, Vol. 13, 1991, pp. 1115-1138.

Let me briefly summarize. A quadric $Q$ can be written in the algebraic form:
$$f(\mathbf{c},\mathbf{x}) = Ax^2 + By^2 + Cz^2 + 2Dxy + 2Exz + 2Fyz + 2Gx + 2Hy + 2Iz + J$$
where $\mathbf{c}$ is the coefficient vector and $\mathbf{x}$ are the 3D coordinates. Any point $\mathbf{x}$ (in homogeneous coordinates) lies on the quadric $Q$ if $\mathbf{x}^TQ\mathbf{x}=0$, where $Q$ is the symmetric matrix collecting the coefficients above:
$$Q = \begin{bmatrix} A & D & E & G \\ D & B & F & H \\ E & F & C & I \\ G & H & I & J \end{bmatrix}$$

Algebraic fit: In principle, we would like to solve for the parameters that minimize the sum of squared geometric distances between the points and the quadric surface. Unfortunately, it turns out that this is a non-convex optimization problem with no known analytical solution. Instead, a standard approach is to solve for an algebraic fit, that is, to solve for the parameters $\mathbf{c}$ that minimize:
$$\sum\limits_{i=1}^{n} f(\mathbf{c},\mathbf{x}^i)^2 = \mathbf{c}^T M \mathbf{c}$$
with
$$M = \sum\limits_{i=1}^{n} l(\mathbf{x}^i)l(\mathbf{x}^i)^T$$
where $\{\mathbf{x}^i\}$ are the points in the point cloud and
$$l = [x^2, y^2, z^2, xy, xz, yz, x, y, z, 1]^T$$
Notice that such direct minimization would yield the trivial solution $\mathbf{c} = \mathbf{0}$. This question has been studied extensively in the literature. One resolution that has been found to work well in practice is Taubin's method (cited above), introducing the constraint:
$$\| \nabla_{\mathbf{x}} f(\mathbf{c},\mathbf{x}^i) \|^2 = 1$$
This can be solved as follows. Let
$$N = \sum\limits_{i=1}^n l_x(\mathbf{x}^i)l_x(\mathbf{x}^i)^T + l_y(\mathbf{x}^i)l_y(\mathbf{x}^i)^T + l_z(\mathbf{x}^i)l_z(\mathbf{x}^i)^T$$
where subscripts denote derivatives. The solution is given by the generalized eigendecomposition $(M - \lambda N)\mathbf{c} = 0$: the best-fit parameter vector is the eigenvector corresponding to the smallest eigenvalue.

Main question: In many applications, the normals of the point cloud are available (or computed). The normals of the quadric, $\mathbf{N}(\mathbf{x})$, can also be calculated by differentiating and normalizing the implicit function:
$$\mathbf{N}(\mathbf{x}) = \frac{\nabla f(\mathbf{c},\mathbf{x})}{\|\nabla f(\mathbf{c},\mathbf{x})\|}$$
where
$$\nabla f(\mathbf{c},\mathbf{x}) = 2\begin{bmatrix} Ax + Dy + Ez + G \\ Dx + By + Fz + H \\ Ex + Fy + Cz + I \end{bmatrix}$$
However, Taubin's method utilizes only the point geometry, not the tangent space, and I am not aware of many methods which are suitable for fitting quadrics such that the tangents of the quadric also match the tangents of the underlying point cloud.

I am looking for potential extensions of the method above, or any other method covering these first-order derivatives. What I would like to achieve is maybe addressed partially in lower-dimensional spaces, with more primitive surface (curve) types. For example, fitting lines to image edges while taking the gradient information into consideration is covered here. Fitting planes (a simple type of quadric) to 3D clouds is very common (link 1), and spheres or cylinders can be fit to oriented point sets (link 2). So what I'm wondering about is something similar, but where the fitted primitive is a quadric. I would also welcome an analysis of the proposed method, such as: What is the minimum number of oriented points required? What are the degenerate cases? Can anything be said about robustness?

Update: I would like to present a direction to follow. Formally, what I desire to achieve is
$$\| \nabla f(\mathbf{c},\mathbf{x}) - \mathbf{n} \| = 0$$
at each point $\mathbf{x}$ with normal $\mathbf{n}$. Maybe it is possible to fuse this with Taubin's method to come up with an additional constraint, and minimize using Lagrange multipliers?
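One concrete way to write down the fusion proposed in the update (this is my own sketch of the idea, not an established method from the cited papers): since $\nabla f$ is linear in $\mathbf{c}$, say $\nabla f(\mathbf{c},\mathbf{x}^i) = L_i\,\mathbf{c}$ for the $3\times 10$ Jacobian $L_i$ whose rows are $l_x(\mathbf{x}^i)^T$, $l_y(\mathbf{x}^i)^T$, $l_z(\mathbf{x}^i)^T$, the normal residuals can be appended to the algebraic cost while keeping everything quadratic in $\mathbf{c}$:
$$\min_{\mathbf{c}}\; \mathbf{c}^T M \mathbf{c} + \mu \sum_{i=1}^{n} \| L_i\,\mathbf{c} - \mathbf{n}^i \|^2 \quad \text{subject to} \quad \mathbf{c}^T N \mathbf{c} = 1,$$
with a weight $\mu > 0$. Note that $\sum_i L_i^T L_i$ coincides with Taubin's $N$ (under the convention $f = \mathbf{c}^T l$), so the Lagrange stationarity condition becomes
$$\bigl(M + (\mu - \lambda) N\bigr)\,\mathbf{c} = \mu \sum_{i=1}^{n} L_i^T \mathbf{n}^i,$$
which is no longer a pure eigenproblem because of the affine right-hand side, but it is a small constrained least-squares problem in 10 unknowns. Whether it is numerically well behaved, and what its degenerate configurations are, is exactly the kind of analysis I am asking about.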
|
Fitting Implicit Surfaces to Oriented Point Sets
|
computational geometry;regression;geometry;curve fitting;quadric
| null |
_cs.57814
|
A computer can only process numbers smaller than, say, $2^{64}$ in a single operation, so even an $O(1)$ algorithm only takes constant time if $n<2^{64}$. If I somehow had an array of $2^{1000}$ elements to process, even an $O(1)$ operation such as an index lookup will start to take longer, as the index has to be calculated in multiple operations. I think it will take at least $O(\log n)$. Similarly, even if an algorithm has $O(\log n)$ complexity, $\log n$ cannot possibly grow larger than about a hundred in practice, so it could be ignored as no larger than a small constant. So, is it really meaningful to treat $O(1)$ and $O(\log n)$ as different? The same applies to any difference of a $\log n$ factor, like between $O(n)$, $O(n\log n)$ and $O(n/\log n)$.
|
Is there a meaningful difference between O(1) and O(\log n)?
|
complexity theory;algorithm analysis;time complexity;asymptotics
| null |
_cstheory.8951
|
I was reading NP-completeness theory and a question just occurred to me: given a graph, is there any path of length $k$ in it? Is there a polynomial-time algorithm for this?
|
Path of length k in graph
|
ds.algorithms;graph algorithms
| null |
_unix.77113
|
I'm having a hard time understanding what ./ does. In the Linux Essentials book, an exercise asks me to delete a file named -file. After googling, I found that I need to do rm ./-file, but I don't get why!
|
What does ./ mean?
|
files;rm
|
The . directory is the current directory. The directory .. is the upper level of that directory.

$ pwd
/home/user
$ cd docs; pwd    # change to directory 'docs'
/home/user/docs
$ cd . ; pwd      # we change to the '.' directory, therefore we stay. No change
/home/user/docs
$ cd .. ; pwd     # back up one level
/home/user

In Linux, command options are introduced by the - sign, e.g., ls -l. So if you make any reference to a file beginning with -, such as -file, the command will think you are trying to specify an option. For example, if you want to remove it:

rm -file

will complain, because rm tries to interpret file as options. In this case you need to indicate where the file is. Being in the current directory, thus the . directory, you refer to that file as ./-file, meaning: in the directory ., the file -file. In this case the command rm won't think it's an option:

rm ./-file

It can be done, also, using --. From man rm:

To remove a file whose name starts with a '-', for example '-foo', use one of these commands:
rm -- -foo
rm ./-foo
|
_unix.350297
|
I have an embedded Linux system with framebuffer only. Normally a Qt application is running. During an update the application is stopped, and I would like to use the framebuffer device to show simple text and a progress bar. The whole thing should run out of a C/C++ application (not a shell script) and should be as lightweight, and with as few dependencies on the OS, as possible. I will not need any keyboard, mouse, touchscreen or other input, just output to the framebuffer. Does anyone have a tool recommendation for me? Thanks :)
|
Minimalistic Framebuffer showing text and progress only
|
linux;embedded;console;framebuffer
| null |
_webapps.50370
|
I want to share a Dropbox folder with a bunch of people at my university. I only have their university email addresses. I suspect most of them already have a private Dropbox account associated with their personal email addresses, and they probably don't want to create a separate account. Can I share a Dropbox folder with them in such a way that they will be able to add it to their private accounts, without me having to ask for their private email addresses? If I share it with their uni addresses, will they get the option to add it to a different account (their personal accounts)? Will sharing a link allow them to use it like any other shared folder (not only view, but also add and modify)? How do I go about this?
|
Sharing Dropbox Folders with Secondary Email Addresses
|
dropbox;file sharing
|
No. You can invite your friends, but they have to make a separate account for each email they own. This also means that if your university mates are clever, they will already have upgraded their own Dropbox by inviting their own second email address, and you're out of people to invite.
|
_unix.294406
|
I recently had a problem with a C program I wrote. According to ps it was stuck in a system call, and even kill -9 didn't change anything about it. Even a restart didn't work; it was stuck forever in shutting down, until I did a hard reset. How can this happen? At which point in a system call's execution can a program be stuck like this, apparently irreversibly? Or is there another way to actually end the process that I don't know about?
|
Why are processes stuck in system calls (sometimes) unkillable?
|
process;kill;system calls
| null |
_softwareengineering.161481
|
A study shows that lines_written/time is language-independent and application-independent for most programmers. If this were true, it would imply that the more terse a language is, the more productive a programmer can be in it. Where can this study be found?
|
A study shows that lines_written/time is language-independent for most programmers. Where can it be found?
|
programming languages;productivity
|
Well, the top result of a web search for "lines written time is programming language independent" led me to an article that attributes this to The Mythical Man-Month by Brooks: "Brooks is generally credited with the assertion that annual lines-of-code programmer productivity is constant, independent of programming language. In making this assertion, Brooks cites multiple authors including [7] and [8]. Brooks states, 'Productivity seems constant in terms of elementary statements, a conclusion that is reasonable in terms of the thought a statement requires and the errors it may include.' [1] (p. 94)... [1] F. P. Brooks. The Mythical Man-Month: Essays on Software Engineering. Addison Wesley, Boston, MA, 1995. ... [7] W. M. Taliaffero. Modularity: the key to system growth potential. IEEE Software, 1(3):245-257, July 1971. [8] R. W. Wolverton. The cost of developing large-scale software. IEEE Transactions on Computers, C-23(6):615-636, June 1974." The article quoted above is "Do Programming Languages Affect Productivity? A Case Study Using Data from Open Source Projects" by D. Delorey, C. Knutson, S. Chun. For the sake of completeness, note that the article's authors are skeptical about the mentioned assumption: "This statement, as well as the works it cites... appears to be based primarily on anecdotal evidence." Quite the opposite, they claim: "We examine data collected from the CVS repositories of 9,999 open source projects hosted on SourceForge.net to test this assumption for 10 of the most popular programming languages in use in the open source community. We find that for 24 of the 45 pairwise comparisons, the programming language is a significant factor in determining the rate at which source code is written, even after accounting for variations between programmers and projects."
|
_codereview.67079
|
I am using the Android library Retrofit for networking in my app. The library calls for creating a RestAdapter for making service calls. I want to use this same instance of the RestAdapter for all of my service calls. How is my setup for this scenario?

Extending Application:

public class CustomApplication extends Application {
    private RestClient restClient = null;

    public RestClient getRestClient() {
        return restClient;
    }

    public void initRestClient() {
        if (restClient == null) {
            restClient = new RestAdapter.Builder().setEndpoint(BASE_URL).build().create(RestClient.class);
        }
    }
}

Initializing the instance and making sure I have access to the instance in all my Activities, without explicitly having to get it for each Activity:

public class BaseActivity extends Activity {
    protected RestClient restClient;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ((CustomApplication) this.getApplication()).initRestClient();
        restClient = ((CustomApplication) this.getApplication()).getRestClient();
    }
}

Then all my Activities will extend BaseActivity.
|
Android Global Variable Setup
|
java;android
|
This is a common problem, and a common solution is dependency injection. From Wikipedia: "Dependency injection is a software design pattern in which one or more dependencies (or services) are injected, or passed by reference, into a dependent object (or client) and are made part of the client's state. The pattern separates the creation of a client's dependencies from its own behavior, which allows program designs to be loosely coupled and to follow the dependency inversion and single responsibility principles." One dependency injection framework for Android is Dagger: "Dependency injection isn't just for testing. It also makes it easy to create reusable, interchangeable modules. You can share the same AuthenticationModule across all of your apps. And you can run DevLoggingModule during development and ProdLoggingModule in production to get the right behavior in each situation." Besides @janos's great point about inheritance being for is-a relationships, you can imagine BaseActivity slowly getting cluttered with methods and fields that some activities need and others don't. Eventually it all becomes a mess. With dependency injection, an activity's reliance upon a RestAdapter can be made clear:

class SomeActivity extends Activity {
    private final RestAdapter restAdapter;

    @Inject
    public SomeActivity(final RestAdapter restAdapter) {
        this.restAdapter = restAdapter;
    }
    ...
}

And now you can unit test SomeActivity with other RestAdapters. Making the RestAdapter a singleton requires only an annotation:

@Provides @Singleton RestAdapter provideRestAdapter() {
    return new RestAdapter.Builder()
            .setEndpoint(BASE_URL)
            .build();
}
|
_cs.70874
|
Given the language $L = \left\{ a^{nk+1} \mid n > 0 \right\}$, where $k$ is an integer constant, how can one show that a DFA for this language must have $k+2$ or more states, using the minimum-state lemma? By the minimum-state lemma I mean that the number of states a minimal DFA has equals the number of pairwise distinguishable strings. I have constructed a set of pairwise distinguishable strings with respect to $L$, $\{a, aa, aaa, \ldots, a^{k+2}\}$, and found that I cannot add any more strings to it. But I don't know how to prove that this set has the maximum number of pairwise distinguishable strings.
|
How to find the minimum number of states required by a DFA
|
formal languages;finite automata
|
You can prove that your set is maximal in (at least) two different ways:
1. Show that every string is equivalent to one of the strings in your set.
2. Construct a DFA for the language having the same number of states as there are strings in your set.
In your case both approaches are not too difficult. Note, however, that the question only asks you to show that every DFA for the language must contain at least $k+2$ states. For this there is no need to show that your collection is maximal. If you find $k+2$ pairwise inequivalent strings, then it follows that every DFA for the language must contain at least $k+2$ states. If the collection is not maximal, all it means is that your bound isn't tight.
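As a sanity check (my own illustration, using $\varepsilon, a, \ldots, a^{k+1}$ as one convenient choice of $k+2$ candidate strings), a brute-force search for separating suffixes confirms the lower bound for a small k:

def in_lang(m, k):
    # a^m is in L = { a^(nk+1) : n > 0 }  iff  m >= k+1 and m = 1 (mod k)
    return m >= k + 1 and m % k == 1 % k

def distinguishable(i, j, k):
    # a^i, a^j are distinguishable iff some suffix a^s puts exactly one in L.
    # 3k+2 is a generous search bound for this unary language.
    return any(in_lang(i + s, k) != in_lang(j + s, k) for s in range(3 * k + 2))

k = 5
reps = range(k + 2)  # lengths 0 .. k+1: eps, a, aa, ..., a^(k+1)
assert all(distinguishable(i, j, k) for i in reps for j in reps if i < j)
print("found", k + 2, "pairwise distinguishable strings for k =", k)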
|
_unix.116291
|
I have a CentOS 5.6 VM installation and currently have a handful of emails that I would like to get access to. When I run mail as root, I get:

[root@dev mail]# mail
Mail version 8.1 6/6/93. Type ? for help.
/var/spool/mail/root: 11 messages 11 unread
>U  1 [email protected]  Mon Feb 17 10:06  44/1625  Logwatch for dev.localdomain (Linux)

Where is that file stored? I would like to send it on to someone for review. I can't see it in /var/spool/mail/.
|
Where are my emails stored on CentOS 5.6
|
centos;email;virtual machine;sendmail
| null |
_cstheory.32107
|
An answer to the traveling salesman (and similar) problems can be easily verified in the light lambda-calculi. Also, if I understand correctly, the light lambda-calculi can compute every polynomial-time computable function. That way, if one could prove that the traveling salesman problem can't be encoded in the light lambda-calculi, that would also prove the problem can't be solved in polynomial time, which would in turn prove P != NP. Is that correct, or am I confusing some concepts?
|
Would a proof that the traveling salesman algorithm can't be encoded on LAL also prove P!=NP?
|
cc.complexity theory;lo.logic;lambda calculus
| null |
_unix.333057
|
I am having issues with a Seagate Laptop SSHD 1TB, PN: ST1000LM014-1EJ164-SSHD-8GB.

dmesg | grep ata1 says this:

[    1.197516] ata1: SATA max UDMA/133 abar m2048@0xf7d36000 port 0xf7d36100 irq 31
[    6.548436] ata1: link is slow to respond, please be patient (ready=0)
[   11.232622] ata1: COMRESET failed (errno=-16)
[   16.588832] ata1: link is slow to respond, please be patient (ready=0)
[   21.269019] ata1: COMRESET failed (errno=-16)
[   26.621223] ata1: link is slow to respond, please be patient (ready=0)
[   56.322386] ata1: COMRESET failed (errno=-16)
[   56.322449] ata1: limiting SATA link speed to 3.0 Gbps
[   61.374591] ata1: COMRESET failed (errno=-16)
[   61.374651] ata1: reset failed, giving up

Further, I don't see the drive in GParted. Does this mean this drive is dead or semi-dead?
|
Did this drive die?
|
disk;ssd hdd hybrid
|
Since the issue is with the link, rather than an actual error reported by the drive itself, technically it means that either the SATA port, or the SATA cable, or the drive is having issues. In all likelihood though the drive is dead. (But try another cable if you have one!)
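If the drive ever does come back up on another cable or port, a SMART query is the usual way to see errors reported by the drive itself (a sketch assuming smartmontools is installed and the drive appears as /dev/sda):

smartctl -H /dev/sda    # overall health self-assessment
smartctl -a /dev/sda    # full attribute and error log dump

As long as COMRESET keeps failing, though, the drive never enumerates, so there is no device node to query - which is why the cable/port swap comes first.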
|
_unix.53759
|
I'm experiencing the all-too-common root partition full situation. 100% of my hard drive is allocated. My home (ext4) partition has plenty of space to give up for my full root partition (rootfs).Is there an amazing step by step tutorial out there for this type of a scenario where the root partition (rootfs) needs to be expanded after shrinking an ext4 partition (my home partition)?
|
Is there an excellent tutorial on how to resize a rootfs partition (and shrink another) on a drive that is 100% allocated?
|
filesystems;arch linux;partition
| null |
_unix.253067
|
I use bash completion from https://bash-completion.alioth.debian.org/ and some vendor supplied scripts too (eg. https://github.com/git/git/blob/master/contrib/completion/git-completion.bash)I also use export GREP_OPTIONS='-I --color=always --exclude=*.xhprof' because setting --color=always in (almost) every pipe is a massive pain.However the completion scripts often use grep and don't specify --color=auto or --color=never because by default it's not needed, this leads to broken output where escaped terminal color codes get interleaved with the output making it hard to read. (see below)^[[01;31m^[[K c^[[m^[[Kherry d^[[m^[[Kescribe g^[[m^[[Krep m^[[m^[[Kailinfo request-pull a^[[m^[[Kdd c^[[m^[[Kherry-pick d^[[m^[[Kiff g^[[m^[[Kui m^[[m^[[Kailsplit reset a^[[m^[[Km c^[[m^[[Kitool d^[[m^[[Kiff-files h^[[m^[[Kash-object m^[[m^[[Kerge revert a^[[m^[[Knnotate c^[[m^[[Klean d^[[m^[[Kiff-index h^[[m^[[Kelp m^[[m^[[Kerge-base rm a^[[m^[[Kpply c^[[m^[[Klone d^[[m^[[Kiff-tree h^[[m^[[Kttp-backend m^[[m^[[Kerge-file send-email a^[[m^[[Krchimport c^[[m^[[Kolumn d^[[m^[[Kifftool h^[[m^[[Kttp-fetch mergetool shortlog a^[[m^[[Krchive c^[[m^[[Kommit f^[[m^[[Kast-export h^[[m^[[Kttp-push mv show b^[[m^[[Kisect c^[[m^[[Kommit-tree f^[[m^[[Kast-import history name-rev show-branch b^[[m^[[Klame c^[[m^[[Konfig f^[[m^[[Ketch i^[[m^[[Kndex-pack notes stage b^[[m^[[Kranch c^[[m^[[Kount-objects f^[[m^[[Ketch-pack i^[[m^[[Knit p4 stash b^[[m^[[Kundle c^[[m^[[Kredential f^[[m^[[Kilter-branch i^[[m^[[Knit-db pull status c^[[m^[[Kat-file c^[[m^[[Kredential-cache f^[[m^[[Kmt-merge-msg i^[[m^[[Knstaweb push submodule c^[[m^[[Kheck-attr c^[[m^[[Kredential-osxkeychain f^[[m^[[Kor-each-ref i^[[m^[[Knterpret-trailers rebase subtree c^[[m^[[Kheck-ignore c^[[m^[[Kredential-store f^[[m^[[Kormat-patch l^[[m^[[Kog reflog svn c^[[m^[[Kheck-mailmap c^[[m^[[Kvsexportcommit f^[[m^[[Ksck l^[[m^[[Ks-files relink tag c^[[m^[[Kheck-ref-format c^[[m^[[Kvsimport f^[[m^[[Ksck-objects l^[[m^[[Ks-remote remote verify-commit c^[[m^[[Kheckout c^[[m^[[Kvsserver g^[[m^[[Kc l^[[m^[[Ks-tree repack whatchanged c^[[m^[[Kheckout-index d^[[m^[[Kaemon g^[[m^[[Ket-tar-commit-id lg replace worktree If completion were a command I ran manually I could just prepend the command with GREP_OPTIONS=, but since it's some combination of readline and bash which I don't fully understand I don't know what to do.So is there a way to clear the GREP_OPTIONS during tab completion? Or some other solution that doesn't involve me typing --color=always over 100 times a day?
|
Can I clear an env var during bash completion?
|
bash;grep;autocomplete
| null |
_cs.70563
|
I'm creating a tic-tac-toe game. There will be two players: X and O. X will be a human, and O is an AI which will always choose the best move to play. My board is an 11x11 board and the winning condition is 5 in a row. How do I know if a board is in an end state (where one player has won)? For a 3x3 board, you can do it easily in just a few steps. But for a larger board (11x11), where the winning condition is smaller than the dimension, it's way more complex. The best solution I have found is to check each n x n square (n being the winning condition) for a winner, but that seems slow. So here is my question: if you know the last move of a player, can you use that position to check if that player wins, without checking every n x n square on the board?
|
The fastest way to check if a move is a winning move in Tic Tac Toe
|
algorithms;complexity theory;artificial intelligence;search algorithms
|
Yes, you can do this more efficiently. If you have a board position and the last move made to get to that position, you can check whether the board position is an end state using the following insight:The previous board position wasn't an end state. Thus, if the board is an end state, that can only happen because there's a 5-in-a-row that includes the square where the last move occurred.So, if the last move was in square $s$, check whether there's a 5-in-a-row that includes square $s$. There are only 4 directions to check (horizontal, vertical, and both diagonals), and you can check each direction efficiently.
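A minimal sketch of that check (my illustration; it assumes the board is a square list of lists holding 'X', 'O', or None, and that (row, col) is the square just played):

def is_winning_move(board, row, col, k=5):
    # The stone just placed either completes a k-in-a-row through
    # (row, col) or the position is not an end state.
    player = board[row][col]
    n = len(board)
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):  # the 4 line directions
        count = 1  # the new stone itself
        for sign in (1, -1):  # walk both ways along the line
            r, c = row + sign * dr, col + sign * dc
            while 0 <= r < n and 0 <= c < n and board[r][c] == player:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= k:
            return True
    return False

Each direction scans at most 2k-1 squares, so the whole check is O(k) regardless of the board size.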
|
_unix.98128
|
Doing a ps on my Linux box shows that systemd runs with the command line options --switched-root and --deserialize. Nothing in the man page or /usr/share/doc/systemd mentions them, and Google hasn't been much help. So, what do they do? I'm guessing that --switched-root has something to do with pivot_root, but that's just a guess.
|
What are the systemd command line options --switched-root and --deserialize?
|
linux;systemd
|
These are intentionally undocumented internal parts of systemd. Very simply, therefore: --deserialize is used to restore saved internal state that a previous invocation of systemd, exec()ing this one, has written out to a file. Its option argument is an open file descriptor for that file. --switched-root is used to tell this invocation of systemd that it has been invoked from a systemd managing an initramfs, and so should behave accordingly, including turning off some of the behaviour otherwise caused by --deserialize.
|
_unix.317366
|
I have a CSV file from which I need to remove one column. The problem is that I exported the CSV file without headers, so how can I remove the column? For example, if I have the input.csv below, I want to remove the last column (the boolean data) and save the result as output.csv.

input.csv:

1,data,100.00,TRUE
2,code,91.8,TRUE
3,analytics,100.00,TRUE

output.csv:

1,data,100.00
2,code,91.8
3,analytics,100.00
|
Remove Columns from a CSV File
|
text processing;csv
| null |
_unix.323225
|
I hate the overwrite mode in vi. I never actually want to overwrite; I just want to hit Insert to confirm I am in insert mode before typing, regardless of what state I was in previously, without worrying that I may be toggling overwrite mode instead. Is there a way to configure vi to never toggle to overwrite mode, so that the Insert key always switches to insert mode? I'm using Spacemacs, so if someone knows how to do this in Spacemacs that would be best; failing that, if I can get the vi syntax I'm sure I can figure out how to add vi configuration to my Spacemacs config file (I'm pretty new to Spacemacs right now).
|
Prevent toggling overwrite mode in Spacemacs or VI
|
vi
| null |
_unix.324370
|
I was trying to install Docker on my Linux machine and I encountered this failure. I tried searching for the key on online keyservers and did not find it. Any suggestions?

$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
[sudo] password for skumaran: 
Executing: /tmp/tmp.0Zg1ACSsNU/gpg.1.sh --keyserver
hkp://ha.pool.sks-keyservers.net:80
--recv-keys
58118E89F3A912897C070ADBF76221572C52609D
gpg: keyserver receive failed: Connection reset by peer
|
Trouble Importing Docker Keys from Keyserver
|
apt
| null |
_unix.332168
|
I'm using Elementary OS Freya (Ubuntu 14.04). I've installed dnsmasq, and when I run the command below I get this error:

$ sudo service dnsmasq start
 * Starting DNS forwarder and DHCP server dnsmasq
dnsmasq: bad command line options: try --help
   [fail]

In /var/log/syslog, I found:

Dec 22 10:34:10 Marcelo-PC dnsmasq[3176]: bad command line options: try --help
Dec 22 10:34:10 Marcelo-PC dnsmasq[3176]: FAILED to start up

Running sh -x /etc/init.d/dnsmasq I get:

marcelo@Marcelo-PC:~$ sh -x /etc/init.d/dnsmasq start
+ set +e
+ PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+ DAEMON=/usr/sbin/dnsmasq
+ NAME=dnsmasq
+ DESC=DNS forwarder and DHCP server
+ ENABLED=1
+ [ -r /etc/default/dnsmasq ]
+ . /etc/default/dnsmasq
+ ENABLED=1
+ CONFIG_DIR=/etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new
+ [ -r /etc/default/locale ]
+ . /etc/default/locale
+ LANG=en_US.UTF-8
+ export LANG
+ test -x /usr/sbin/dnsmasq
+ [ -f /lib/lsb/init-functions ]
+ . /lib/lsb/init-functions
+ run-parts --lsbsysinit --list /lib/lsb/init-functions.d
+ [ -r /lib/lsb/init-functions.d/01-upstart-lsb ]
+ . /lib/lsb/init-functions.d/01-upstart-lsb
+ unset UPSTART_SESSION
+ _RC_SCRIPT=/etc/init.d/dnsmasq
+ [ -r /etc/init//etc/init.d/dnsmasq.conf ]
+ _UPSTART_JOB=dnsmasq
+ [ -r /etc/init/dnsmasq.conf ]
+ [ -r /lib/lsb/init-functions.d/20-left-info-blocks ]
+ . /lib/lsb/init-functions.d/20-left-info-blocks
+ [ -r /lib/lsb/init-functions.d/50-ubuntu-logging ]
+ . /lib/lsb/init-functions.d/50-ubuntu-logging
+ LOG_DAEMON_MSG=
+ FANCYTTY=
+ [ -e /etc/lsb-base-logging.sh ]
+ true
+ [ ! ]
+ [ != yes ]
+ [ -x /sbin/resolvconf ]
+ RESOLV_CONF=/var/run/dnsmasq/resolv.conf
+ [ ! ]
+ DNSMASQ_USER=dnsmasq
+ test 1 != 0
+ log_daemon_msg Starting DNS forwarder and DHCP server dnsmasq
+ [ -z Starting DNS forwarder and DHCP server ]
+ log_use_fancy_output
+ TPUT=/usr/bin/tput
+ EXPR=/usr/bin/expr
+ [ -t 1 ]
+ [ xxterm != x ]
+ [ xxterm != xdumb ]
+ [ -x /usr/bin/tput ]
+ [ -x /usr/bin/expr ]
+ /usr/bin/tput hpa 60
+ /usr/bin/tput setaf 1
+ [ -z ]
+ FANCYTTY=1
+ true
+ /usr/bin/tput xenl
+ /usr/bin/tput cols
+ COLS=169
+ [ 169 ]
+ [ 169 -gt 6 ]
+ /usr/bin/expr 169 - 7
+ COL=162
+ log_use_plymouth
+ [ n = y ]
+ plymouth --ping
+ printf  * Starting DNS forwarder and DHCP server dnsmasq
 * Starting DNS forwarder and DHCP server dnsmasq
+ /usr/bin/expr 169 - 1
+ /usr/bin/tput hpa 168
+ printf
+ start
+ [ ! -d /var/run/dnsmasq ]
+ start-stop-daemon --start --quiet --pidfile /var/run/dnsmasq/dnsmasq.pid --exec /usr/sbin/dnsmasq --test
+ start-stop-daemon --start --quiet --pidfile /var/run/dnsmasq/dnsmasq.pid --exec /usr/sbin/dnsmasq -- -x /var/run/dnsmasq/dnsmasq.pid -u dnsmasq -r /var/run/dnsmasq/resolv.conf -7 /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new
dnsmasq: opções inválidas de linha de comando: tente --help
+ return 2
+ log_end_msg 1
+ [ -z 1 ]
+ [ 162 ]
+ [ -x /usr/bin/tput ]
+ log_use_plymouth
+ [ n = y ]
+ plymouth --ping
+ printf \r
+ /usr/bin/tput hpa 162
+ [ 1 -eq 0 ]
+ printf [
[+ /usr/bin/tput setaf 1
+ printf fail
fail+ /usr/bin/tput op
+ echo ]
]
+ return 1
+ exit 1

And I can't get dnsmasq to work. The only uncommented line in my dnsmasq.conf is (you can see the entire file here):

address=/nintendowifi.net/192.168.0.8

How can I see what the problem is?
|
How to get DNSMASQ to work?
|
dnsmasq
| null |
_unix.345380
|
Suppose I have a string: "remove this sentence". I want to remove this string from all files in the current directory and from all files in subdirectories of the current directory. How can I achieve this? I have tried:

sudo grep -rl 'stringtoreplace' ./ | xargs sed -i 's/stringtoreplace//g'

It gives me a permission error even though I am using sudo. The string to replace is this: ,\x3E\x74\x70\x69\x72
|
Remove a string from all files which have the string
|
linux;grep
| null |
_softwareengineering.38924
|
I am reading the book Object-Oriented Analysis and Design, written by Grady Booch and others. In Section I, Concepts, in a subsection called Bringing Order to Chaos, the authors suggest distinguishing between a method and a methodology. According to the book: "A method is a disciplined procedure for generating a set of models that describe various aspects of a software system under development, using some well-defined notation." "A methodology is a collection of methods applied across the software development lifecycle and unified by process, practices, and some general, philosophical approach." I understood that a method is used to build system models, and a methodology is a set of such methods that are applied across the software development lifecycle. To my knowledge, a software development lifecycle includes, but is not limited to, the analysis, design, implementation and testing phases. How can it be that a method that is used to build system models is also applied in the implementation or testing phase?
|
Confusion in definitions of a method and a methodology in the book OOAD with Applicatons (Booch et al)
|
object oriented;development methodologies;methodology
|
Maybe I'm misreading the definitions (I haven't read the book), but wouldn't you have different methods for system building and testing? So your methodology would include some methods that apply to analysis, some that apply to building, some that apply to testing, etc. All of those methods would be grouped by a common approach or goal -- e.g. Agile methodology, Waterfall methodology, etc.
|
_softwareengineering.336496
|
I'm developing an application that produces files that are ultimately just a list of groups of transactions: an event store. I was wondering if there was a name for the way I'm handling undo/redo, or if there's another common way to implement such functionality. For example, a file with three nodes might look something like this:

transactionGroups: [
  { isUserOwned: true, transactions: [ { op: nodeAdded, frame: 0, rotation: 132, length: 435, scale: 100 } ] },
  { isUserOwned: true, transactions: [ { op: nodeAdded, frame: 0, rotation: 144, length: 363, scale: 100 } ] },
  { isUserOwned: true, transactions: [ { op: nodeAdded, frame: 0, rotation: 163, length: 311, scale: 100 } ] }
]

In the spirit of "there is no delete", I have devised a way to implement undo/redo by adding an undo transaction that references a transaction relative to its position in the store. Basically, it tells the file builder to ignore a referenced transaction when constructing the current state of the file. To redo an operation, an undo transaction is just undone again.

transactionGroups: [
  { isUserOwned: true, transactions: [ { op: nodeAdded, frame: 0, rotation: 132, length: 435, scale: 100 } ] },
  { isUserOwned: true, transactions: [ { op: nodeAdded, frame: 0, rotation: 144, length: 363, scale: 100 } ] },
  { isUserOwned: true, transactions: [ { op: nodeAdded, frame: 0, rotation: 163, length: 311, scale: 100 } ] },

  // Undoes the last two transactions
  { isUserOwned: true, transactions: [ { op: undo, transaction: 1 } ] },
  { isUserOwned: true, transactions: [ { op: undo, transaction: 3 } ] },

  // Redoes the last two undo transactions
  { isUserOwned: true, transactions: [ { op: undo, transaction: 1 } ] },
  { isUserOwned: true, transactions: [ { op: undo, transaction: 3 } ] }
]
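To make the relative references concrete, here is a sketch of how my file builder resolves which groups are in effect (illustrative Python, assuming one op per group and at most one effective undo per target):

def effective_flags(groups):
    # groups[i]["transactions"][0] is either a nodeAdded op or an undo op
    # whose "transaction" field is an offset counted back from index i.
    n = len(groups)
    effective = [True] * n
    # Walk backwards: an undo always targets an earlier index, so each
    # group's fate is settled by the (already decided) later undos.
    for j in range(n - 1, -1, -1):
        for k in range(j + 1, n):
            t = groups[k]["transactions"][0]
            if t["op"] == "undo" and k - t["transaction"] == j and effective[k]:
                effective[j] = False
    return effective

Running this on the seven groups above leaves the three nodeAdded groups effective and marks the two undos as cancelled by the two redos, which is the behaviour described.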
|
Event Sourcing: Undo Redo
|
event sourcing
| null |
_unix.194561
|
Suppose I have an executable for which I want to log STDOUT and STDERR to separate files, like so:

python write.py > out 2> err

When I use this command with GNU timeout, my out file is always empty. Why does this happen, and how can I fix it?

timeout 5s python write.py > out 2> err

Example write.py:

#!/usr/bin/env python
import sys, time

i = 0
while True:
    print i
    print >> sys.stderr, i
    time.sleep(1)
    i += 1
|
Using GNU timeout when redirecting stdout to file
|
stdout;timeout
|
$ timeout 5s python write.py > out 2> err
$ ls -l out err
-rw-r--r-- 1 yeti yeti 10 Apr  6 08:12 err
-rw-r--r-- 1 yeti yeti  0 Apr  6 08:12 out
$ timeout 5s python -u write.py > out 2> err
$ ls -l out err
-rw-r--r-- 1 yeti yeti 10 Apr  6 08:13 err
-rw-r--r-- 1 yeti yeti 10 Apr  6 08:13 out
$ cmp err out && echo same
same
$ cat out
0
1
2
3
4

Python uses buffered writes on stdout but not on stderr. So everything written to sys.stderr is written immediately, but stuff for sys.stdout is kept in a buffer until the buffer is full, to minimise write operations. Within 5 seconds, the buffer does not fill enough to be written even once, and timeout terminates the Python interpreter. With the -u option added to Python's invocation, writes to stdout are unbuffered too. You can force the same behaviour for a Python program by setting the environment variable PYTHONUNBUFFERED. For other programs you can use unbuffer from the expect package or stdbuf from GNU coreutils (see "Turn off buffering in pipe").
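For instance, either of these variants (sketches relying only on the PYTHONUNBUFFERED variable and the stdbuf tool mentioned above) should leave out populated:

PYTHONUNBUFFERED=1 timeout 5s python write.py > out 2> err
timeout 5s stdbuf -oL python write.py > out 2> err

stdbuf -oL switches stdout to line buffering for programs that use C stdio buffering (as Python 2 does here), which is enough since write.py emits one line per second.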
|
_webmaster.69549
|
I have updated my design and uploaded it to my server, but it shows old content some of the time and new content some of the time. I randomly checked lots of computers, and the same problem is happening everywhere.
|
loading old web pages and not the updated ones
|
web hosting;web development;cache;uploading
| null |
_codereview.110412
|
After a cursory read of this, I implemented a simple dictionary and an interface to assign, replace, look up, and redefine its terms, to apply the concepts. It's simple, but I'd still like to know any way I could make this cleaner and more Pythonic.

def print_dict(dictionary):
    for key in dictionary:
        print("{} : {}".format(key, dictionary[key]))

def display_menu():
    print("\n0 = Quit" +
          "\n1 = Look up a term" +
          "\n2 = Add a term" +
          "\n3 = Redefine a term" +
          "\n4 = Delete a term" +
          "\n5 = Display Dictionary")

def is_integer(value):
    try:
        temp = int(value)
        return True
    except ValueError:
        return False

def validate(choice):
    if is_integer(choice) and 0 <= int(choice) <= 5:
        return int(choice)
    else:
        print("Input must be an integer between 0 and 5, inclusive")
        return validate(input("\nEnter Selection: "))

def lookup_term(dictionary):
    term = input("which term would you like to look up? ")
    if term in dictionary:
        print("{} : {}".format(term, dictionary.get(term)))
    else:
        print("Term does not exist, input 2 to add new term")

def redefine_term(dictionary):
    term = input("which term would you like to redefine? ")
    if term in dictionary:
        dictionary[term] = input("and its definition? ")
    else:
        print("Term does not exist, input 2 to add new term")

def add_term(dictionary):
    term = input("What term would you like to add? ")
    if term in dictionary:
        print("Already exists. To redefine input 3")
    else:
        dictionary[term] = input("and its definition? ")

def delete_term(dictionary):
    del dictionary[input('Which term would you like to delete? ')]

def process_request(choice, dictionary):
    if choice == 0:
        print("Thank you for using Stack Exchange Site Abbreviation!")
        quit()
    elif choice == 1:
        lookup_term(dictionary)
    elif choice == 2:
        add_term(dictionary)
    elif choice == 3:
        redefine_term(dictionary)
    elif choice == 4:
        delete_term(dictionary)
    else:
        print_dict(dictionary)

def main():
    site_dictionary = {
        'SO'  : 'Stack Overflow',
        'CR'  : 'Code Review',
        'LH'  : 'Lifehacks',
        '??'  : 'Puzzling',
        'SR'  : 'Software Recommendations',
        'SU'  : 'Super User',
        'M'   : 'Music: Practice & Theory',
        'RE'  : 'Reverse Engineering',
        'RPi' : 'Raspberry Pi',
        'Ro'  : 'Robotics'
    }
    print_dict(site_dictionary)
    print("\nWelcome to Stack Exchange Site Abbreviation Translator!")
    display_menu()
    while(True):
        process_request(validate(input("\nEnter Selection: ")), site_dictionary)

if __name__ == "__main__":
    main()
|
Practice with dictionaries
|
python;beginner;python 3.x;dictionary
|
Don't return validate from within validate; that's unnecessary recursion, and you might end up accidentally hitting the maximum recursion level (as much as that shouldn't happen, what if the user just holds down enter?). Instead wrap the function in a while True loop. Since you have a return statement, you already have a mechanism to break the loop.

def validate(choice):
    while True:
        if is_integer(choice) and 0 <= int(choice) <= 5:
            return int(choice)
        else:
            print("Input must be an integer between 0 and 5, inclusive")
            choice = input("\nEnter Selection: ")

(Though I agree about adding the is_integer test into here.)

Python implicitly concatenates neighbouring string literals, so in your menu print you actually don't need to use plus signs:

    print("\n0 = Quit"
          "\n1 = Look up a term"
          "\n2 = Add a term"
          "\n3 = Redefine a term"
          "\n4 = Delete a term"
          "\n5 = Display Dictionary")

In redefine_term you don't do any validation on the new text being entered. Sure, the user can enter what they want, but what if it's empty space? Do you want that? Maybe you do, but if not you could easily validate with an or:

dictionary[term] = input("and its definition? ") or dictionary[term]

If the input is an empty string it evaluates as False, meaning that Python then uses the other value in the A or B expression, defaulting back to the old value. If you wanted to prevent whitespace in general (eg tabs or spaces) then just add .strip() to the input call, to remove whitespace from the start and end of the result.

You have a bug in delete_term. If the user enters a non-existent key it will raise a KeyError. You should probably handle it with a try... except:

try:
    del dictionary[input('Which term would you like to delete? ')]
except KeyError:
    print("That key does not exist.")
|
_softwareengineering.355888
|
I have two variants of this GridBuilder class I'm designing, and I'm not quite sure which one is preferable:

Using class properties:

class GridBuilder
{
    private $grid;

    public function __construct()
    {
        $this->grid = new Grid();
    }

    public function build()
    {
        $items = ['item1', 'item2', 'item3'];
        $this->addItemsToGrid($items);
        return $this->grid;
    }

    public function addItemsToGrid($items)
    {
        // add items ...
    }
}

Passing variables to functions:

class GridBuilder
{
    public function build()
    {
        $items = ['item1', 'item2', 'item3'];
        $grid = new Grid();
        $this->addItemsToGrid($items, $grid);
        return $grid;
    }

    public function addItemsToGrid($items, $grid)
    {
        // add items ...
        return $grid;
    }
}

The first one uses class properties. The second one passes the grid to other functions. The second one feels a little cleaner and better to me, but I can't explain why. Any ideas?
|
Using class properties vs passing variables to functions
|
object oriented;php;language agnostic;class design;variables
| null |
_cs.41327
|
Could someone, in plain English, explain the distinction between the fundamental matrix and the essential matrix in multi-view computer vision? How are they different, and how can each be used in computing the 3D position of a point imaged from multiple views?
|
The Fundamental and Essential Matrix
|
algorithms;image processing;computer vision
|
Both matrices relate corresponding points in two images. The difference is that in the case of the fundamental matrix, the points are in pixel coordinates, while in the case of the essential matrix, the points are in normalized image coordinates. Normalized image coordinates have the origin at the optical center of the image, and the x and y coordinates are normalized by the focal lengths Fx and Fy respectively, so that they are dimensionless. The two matrices are related as follows: E = K'^T * F * K, where K and K' are the intrinsic matrices of the two cameras. F has 7 degrees of freedom, while E has 5 degrees of freedom, because it takes the camera parameters into account. That's why there is an 8-point algorithm for computing the fundamental matrix and a 5-point algorithm for computing the essential matrix. One way to get a 3D position from a pair of matching points from two images is to take the fundamental matrix, compute the essential matrix, and then get the rotation and translation between the cameras from the essential matrix. This, of course, assumes that you know the intrinsics of your camera. Also, this would give you an up-to-scale reconstruction, with the translation being a unit vector.
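In symbols (a brief restatement of the relation just described, using $\mathbf{x}_1, \mathbf{x}_2$ for a corresponding point pair in homogeneous coordinates):
$$\mathbf{x}_2^T F\, \mathbf{x}_1 = 0 \quad \text{(pixel coordinates)}, \qquad \hat{\mathbf{x}}_2^T E\, \hat{\mathbf{x}}_1 = 0 \quad \text{(normalized coordinates, } \hat{\mathbf{x}}_i = K_i^{-1}\mathbf{x}_i\text{)}.$$
Substituting $\hat{\mathbf{x}}_i = K_i^{-1}\mathbf{x}_i$ into the second equation gives $E = K_2^T F K_1$, matching the $E = K'^T F K$ relation above (with $K_2 = K'$ and $K_1 = K$).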
|
_unix.358630
|
How can I drop inbound, unencrypted TCP and UDP connections using packet inspection, rather than by a specific port or protocol (such as 22/SSH or 443/HTTPS), using iptables or nftables?
|
nftables or iptables only allow encrypted traffic
|
networking;security;encryption
| null |
_codereview.132957
|
I'm working on a nice crawler that start with one URL, and find the other URLs to process each page, a kind of Google crawler, to index pages.I worked hard on this crawler to respect many points I've found over many websites, including:Respect of robots.txtNot querying too much each website (I add a delay for each subsequent requests on the same domain)The main code is a worker.php that is spawned using Supervisor. Supervisor launch n instances of that file depending on the server so it's possible that multiple instances of worker.php are running in parallel.There is one issue I can't pinpoint: it seems that the more time it runs, the more time it takes to process the URL (it's getting slower and slower), and I can't target why and where (if you have any ideas, I'm interested!).I've also created a public gist based on the code presented here.<?phpif (php_sapi_name() !== 'cli') exit(1);require_once(__DIR__.'/../init.php');define('WORKER_LIMIT_INSTANCES', 200);define('CRAWLER_MAX_DEPTH', 10000);define('CRAWLER_MAX_HIGH_URLS', 100);use \Pheanstalk\Pheanstalk;use \Crawler\Models\LinkModel;$pheanstalk = new Pheanstalk('127.0.0.1');$reloadedInitialTime = filemtime(__DIR__.'/../reloaded');fwrite(STDOUT, Started new instance of script (.$reloadedInitialTime.).\n);$loopCounter = 0;while (true) { clearstatcache(); // Script to stop the service if (intval(file_get_contents(__DIR__.'/../breakworker')) === 1 ) exit(1); // We check if we need to stop this worker (code update?) $autoReloadSystem = filemtime(__DIR__.'/../reloaded'); if ($reloadedInitialTime !== $autoReloadSystem) { fwrite(STDOUT, New update - Reloading script.\n); exit(0); } usleep(500000); // Give it some slack ; 1/2 second $loopCounter++; if ($loopCounter > WORKER_LIMIT_INSTANCES) break; // We count on Supervisord to reload workers // grab the next job off the queue and reserve it $job = $pheanstalk->watch(QUEUE_NAME) ->ignore('default') ->reserve(); // remove the job from the queue $pheanstalk->delete($job); $data = json_decode($job->getData(), true); if (is_null($data)) { fwrite(STDERR, [FATAL] Invalid Job data : .$job->getData().\n); } if (!isset($data['retries'])) $data['retries'] = 0; if (!isset($data['priority'])) $data['priority'] = \Crawler\Engine\Spider::MEDIUM_PRIORITY; if ($data['priority'] == \Crawler\Engine\Spider::LOW_PRIORITY) { // Normally, only new links are in low priority $data['priority'] = \Crawler\Engine\Spider::MEDIUM_PRIORITY; } /* * The Spider goes to the website using a basic CURL request * It also pre-fetch the robots.txt the first request to ensure we respect it * With the following CURL rules : * CURLOPT_FOLLOWLOCATION => true, CURLOPT_FORBID_REUSE => true, CURLOPT_FRESH_CONNECT => true, CURLOPT_HEADER => false, CURLOPT_RETURNTRANSFER => true, CURLOPT_SSL_VERIFYPEER => false, CURLOPT_MAXREDIRS => 5, CURLOPT_TIMEOUT => 5, CURLOPT_ENCODING => '' */ $spider = new \Crawler\Engine\Spider($data['url']); $duration = $spider->exec(); // First, we ensure that we are not black-listed // So we analyze the status code // For 401, 403 and 404, we retry once // For 408, 429 and 503, we retry 3 times, with increasing wait between requests if (in_array($spider->getStatusCode(), array(401, 403, 404, 408, 429, 503))) { $data['retries']++; if ((in_array($spider->getStatusCode(), array(401, 403, 404)) && $data['retries'] <= 1) // Only one retry || (in_array($spider->getStatusCode(), array(408, 429, 503)) && $data['retries'] <= 3) // 3 retries ) { $pheanstalk->putInTube(QUEUE_NAME, json_encode($data), $data['priority'], 
$data['retries'] * 30); continue; } // We are here (and not in the if section) when the status code is in the array // but the retries are reached, that mean we stop for this url // So the next step will be to add it in the Link database and stop the data. } // We update the url in the database to indicate it has been crawled LinkModel::update($data['url'], true); if (strtolower($data['url']) !== strtolower($spider->getUrl())) { // We were redirected, so we add a new URL also marked as being crawled, with $data['url'] being the origin $jobId = LinkModel::add($spider->getUrl(), true, $data['url']); // We remove the job of the redirect url because we had it already in queue if (!is_null($jobId)) { // We catch exception in case the url has already been processed try { $job = $pheanstalk->peek($jobId); $pheanstalk->delete($job); } catch (\Exception $e) {} } } $domainName = $spider->getUrlParts(PHP_URL_HOST); $domainName = strtolower($domainName['host']); // Here's the code I do to index the webpages // I removed it because it's not interesting in our case // But in general, if you are looking for a similar work, you can implement your need here :) // This code extract all the links in the page to add them in the queue $links = \Crawler\Extractors\LinkExtractor::extract($spider); // And we add them now : $priority = $data['priority']; foreach ($links as $link) { $parsedDomain = strtolower(parse_url($link, PHP_URL_HOST)); $jobsData = array( 'url' => $link, 'retries' => 0, 'referer' => $spider->getUrl() ); $jobsData['delay'] = ceil($duration * (rand(1, 10)/10000)); // Delay between 0.1 and 1 seconds x $duration of the request if ($jobsData['delay'] > 5) $jobsData['delay'] = 5; // We increase the time to wait per number of links for this specific domain $jobsData['delay'] = $jobsData['delay'] + LinkModel::countQueued($parsedDomain); if (\Crawler\Engine\Spider::HIGH_PRIORITY) { // Allow 5 simultaneous request on high priority $jobsData['delay'] = floor($jobsData['delay'] / 10); } $iCountCrawledUrls = LinkModel::countTotal($parsedDomain); if ($iCountCrawledUrls > CRAWLER_MAX_DEPTH) break; // We stop crawling this domain if ($domainName === $parsedDomain) { if ($priority === \Crawler\Engine\Spider::HIGH_PRIORITY && $iCountCrawledUrls > CRAWLER_MAX_HIGH_URLS) { $priority = \Crawler\Engine\Spider::MEDIUM_PRIORITY; } $jobsData['priority'] = $priority; } else { $jobsData['priority'] = \Crawler\Engine\Spider::LOW_PRIORITY; } $jobId = $pheanstalk->putInTube(QUEUE_NAME, json_encode($jobsData), $jobsData['priority'], $jobsData['delay']); // The add method checks if the url is already present in the database // To avoid adding multiple time the same url (and going in loop in case two sites links to each others !) 
LinkModel::add($link, false, null, $jobId); }}The Spider:<?phpnamespace Crawler\Engine;class Spider { const MAX_DOWNLOAD_SIZE = 1024*1024*100; // in bytes, =100kb const LOW_PRIORITY = 1024; // = Default const MEDIUM_PRIORITY = 512; const HIGH_PRIORITY = 256; private $options = array( CURLOPT_FOLLOWLOCATION => true, CURLOPT_FORBID_REUSE => true, CURLOPT_FRESH_CONNECT => true, CURLOPT_HEADER => false, CURLOPT_RETURNTRANSFER => true, CURLOPT_SSL_VERIFYPEER => false, CURLOPT_MAXREDIRS => 5, CURLOPT_TIMEOUT => 5, CURLOPT_ENCODING => '' ); private $curl = null; private $url = null; private $urlParts = array(); private $statusCode = null; private $source = null; public function __construct($url, $referer) { $this->options[CURLOPT_WRITEFUNCTION] = array($this, 'curl_handler_recv'); $this->options[CURLOPT_REFERER] = $referer; $this->curl = curl_init(); curl_setopt($this->curl, CURLOPT_URL, $url); curl_setopt_array($this->curl, $this->options); $this->source = ''; } public function curl_handler_recv($curl, $data) { $this->source .= $data; if (strlen($this->source) > self::MAX_DOWNLOAD_SIZE) return 0; return strlen($data); } public function exec() { $start = round(microtime(true) * 1000); curl_exec($this->getCurl()); $this->getUrl(); $this->getStatusCode(); curl_close($this->getCurl()); return round(microtime(true) * 1000) - $start; } public function getCurl() { return $this->curl; } public function getSource() { return $this->source; } public function getUrl() { if (is_null($this->url)) { $this->url = curl_getinfo($this->getCurl(), CURLINFO_EFFECTIVE_URL); $this->urlParts = parse_url($this->url); } return $this->url; } public function getUrlParts($key = null) { if (!is_null($key) && isset($this->urlParts[$key])) { return $this->urlParts[$key]; } return $this->urlParts; } public function getStatusCode() { if (is_null($this->statusCode)) { $this->statusCode = curl_getinfo($this->getCurl(), CURLINFO_HTTP_CODE); } return $this->statusCode; }}The LinkExtractor class:<?phpnamespace Crawler\Extractors;class LinkExtractor { private static $excludes = array( '.png', '.gif', '.jpg', '.jpeg', '.svg', '.mp3', '.mp4', '.avi', '.mpeg', '.ps', '.swf', '.webm', '.ogg', '.pdf', '.3gp', '.apk', '.bmp', '.flac', '.gz', '.gzip', '.jpe', '.kml', '.kmz', '.m4a', '.mov', '.mpg', '.odp', '.oga', '.ogv', '.pps', '.pptx', '.qt', '.tar', '.tif', '.wav', '.wmv', '.zip', // Removed '.js', '.coffee', '.css', '.less', '.csv', '.xsl', '.xsd', '.xml', '.html', '.html', '.php', '.txt', '.atom', '.rss' // Implement later ? '.doc', '.docx', '.ods', '.odt', '.xls', '.xlsx', ); private static $excludedDomains = array( '.google.', '.facebook.', '.bing.' ); private static function _getBaseUrl($parsed_url) { $scheme = isset($parsed_url['scheme']) ? $parsed_url['scheme'] . '://' : '//'; $host = isset($parsed_url['host']) ? $parsed_url['host'] : ''; $port = isset($parsed_url['port']) ? ':' . 
$parsed_url['port'] : ''; return strtolower("$scheme$host$port"); } public static function extract(\Crawler\Engine\Spider $spider) { $parsed = parse_url(strtolower($spider->getUrl())); if (!isset($parsed['scheme'])) { $parsed['scheme'] = 'http'; } $base = self::_getBaseUrl($parsed); $host_length = strlen($parsed['host']); preg_match_all("/(href|src)=[\'\"]?([^\'\"\>]+)/i", $spider->getSource(), $out); $linkPattern = '/^(?:[;\/?:@&=+$,]|(?:[^\W_]|[-_.!~*\()\[\] ])|(?:%[\da-fA-F]{2}))*$/'; $urls = array(); if (is_array($out) && isset($out[2])) { foreach ($out[2] as $key=>$url) { if (substr($url, 0, 2) === '#!') { // see https://developers.google.com/webmasters/ajax-crawling/docs/getting-started $url = $base.$parsed['path'].'?_escaped_fragment_='.substr($url, 2); } else if (substr($url, 0, 2) === '//') { // generic scheme $url = $parsed['scheme'].'://'.$url; } else if (substr($url, 0, 1) === '/') { // generic scheme $url = $base.$url; } else if (substr($url, 0, 4) !== 'http') { continue; } if (strlen($url) > 250) continue; // We ignore too long urls $urll = strtolower($url); $parsed_url = parse_url($url); if ($parsed_url === false) continue; // We ignore invalid urls if (preg_match($linkPattern, $urll) !== 1) continue; $isExcluded = false; foreach (self::$excludes as $exclude) { if (substr($urll, strlen($exclude) * -1) === $exclude) { $isExcluded = true; break; } } foreach (self::$excludedDomains as $exclude) { if (strpos($urll, $exclude) !== false) { $isExcluded = true; break; } } if ($isExcluded) continue; // We ignore some extensions if (\Crawler\Models\LinkModel::isPresent($url)) continue; // We don't add a link that is already present if (\Crawler\RobotsTxtParser::disallowed($url)) continue; // We respect robots.txt $urls[$url] = true; } } return array_keys($urls); }}The LinkModel:<?phpnamespace Crawler\Models;class LinkModel { public static function __callStatic($name, $arguments) { return call_user_func_array(array(self::get(), '_'.$name), $arguments); } private static $instance = null; public static function get() { if (is_null(self::$instance)) { self::$instance = new self(); } return self::$instance; } private $presentStmt = null; private $countQueuedStmt = null; private $countTotalStmt = null; private function __construct() { $this->presentStmt = \Crawler\Database::prepare('SELECT `id` FROM `urls` WHERE `url` = :url AND `executed` > (UTC_TIMESTAMP() - INTERVAL 1 MONTH) LIMIT 1;'); $this->detailsStmt = \Crawler\Database::prepare('SELECT `job_id` AS `job` FROM `urls` WHERE `url` = :url AND `executed` > (UTC_TIMESTAMP() - INTERVAL 1 MONTH) LIMIT 1;'); $this->insertStmt = \Crawler\Database::prepare('INSERT INTO `urls` (`url`, `is_crawled`, `executed`, `source`, `job_id`) VALUES (:url, :crawled, UTC_TIMESTAMP(), :source, :job)'); $this->updateStmt = \Crawler\Database::prepare('UPDATE `urls` SET `is_crawled` = :crawled WHERE `url` = :url AND `executed` > (UTC_TIMESTAMP() - INTERVAL 1 MONTH) LIMIT 1;'); $this->countQueuedStmt = \Crawler\Database::prepare('SELECT COUNT(id) AS `total` FROM `urls` WHERE (`url` LIKE :domaina OR url LIKE :domainb) AND `source` IS NULL AND `is_crawled` = 0 AND `executed` > (UTC_TIMESTAMP() - INTERVAL 1 MONTH);'); $this->countTotalStmt = \Crawler\Database::prepare('SELECT COUNT(id) AS `total` FROM `urls` WHERE (`url` LIKE :domaina OR url LIKE :domainb) AND `source` IS NULL AND `executed` > (UTC_TIMESTAMP() - INTERVAL 1 MONTH);'); } public function _isPresent($url) { $this->presentStmt->execute(array('url' => strtolower($url))); $result =
$this->presentStmt->fetch(\PDO::FETCH_ASSOC); return is_array($result); } /** * crawled : The engine extracted this url * redirectedFrom : The url it cames from, was redirected * * In certain case, crawled != fetched. This means the $url was a redrection from an other url */ public function _add($url, $crawled = false, $redirectedFrom = null, $jobId = null) { $url = strtolower($url); if (is_null($jobId)) { $this->detailsStmt->execute(array('url' => $url)); $result = $this->detailsStmt->fetch(\PDO::FETCH_ASSOC); // We search if already exists if (is_array($result)) { $this->_update($url, $crawled); // And return the job id if present ! return (empty($result['job']) ? null : $result['job']); } } // We insert $this->insertStmt->execute(array( 'url' => $url, 'crawled' => $crawled, 'source' => $redirectedFrom, 'job' => $jobId )); return null; } public function _update($url, $crawled = false) { $url = strtolower($url); $this->updateStmt->execute(array( 'url' => $url, 'crawled' => $crawled )); } public function _countQueued($domain) { $this->countQueuedStmt->execute(array( 'domaina' => 'http://'.$domain.'%', 'domainb' => 'https://'.$domain.'%', )); $result = $this->countQueuedStmt->fetch(\PDO::FETCH_ASSOC); if (!is_array($result)) return 0; return $result['total']; } public function _countTotal($domain) { $this->countTotalStmt->execute(array( 'domaina' => 'http://'.$domain.'%', 'domainb' => 'https://'.$domain.'%', )); $result = $this->countTotalStmt->fetch(\PDO::FETCH_ASSOC); if (!is_array($result)) return 0; return $result['total']; }}
|
PHP web crawler
|
performance;php;parsing;web scraping
| null |
_softwareengineering.21256
|
Java is often found in academia. What is the reason behind that?
|
Why do we study Java at university?
|
java
| null |
_cs.43118
|
Consider the following language: $$L = \{ \langle M \rangle \ |\ M \text{ is a TM that decides the halting problem} \}$$ Determine whether or not the language is in $R$. Now, from my understanding, an $\langle M \rangle \in L$ doesn't necessarily return the right answer but rather halts for every $\langle P, x \rangle$, where $P$ is a program (= TM) and $x$ is an input for the program. I am guessing the language isn't decidable and can be shown as such by some reduction, but I couldn't think of something useful. I'll be glad for help.
|
Determine if the language is $R$
|
formal languages;turing machines;reductions;undecidability
|
Summary of the comments: The language $L$ is decidable. Hint: $L$ is in fact empty! It contains all the machines that decide the halting problem. But the halting problem is undecidable, so there are no machines that decide it.
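A minimal Python sketch of that argument, purely illustrative (the function name is made up): since $L$ is empty, a machine that rejects every input is a correct, total decider for it.

def decide_L(machine_encoding):
    # L is empty, so rejecting every candidate <M> is a correct
    # decision procedure, and it trivially halts on all inputs.
    return False

print(decide_L('<any TM encoding>'))  # always False, always halts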
|
_unix.193013
|
I am attempting to move a sub-directory from one parent directory to another for hundreds of instances, while changing the name of the sub-directory during the move. My directories are a set of numbers:1000, 1001, 1002, 1003, ..., 1998, 1999Each directory has a sub-folder called 'old' (e.g., 1000/old), which I want to move into the next incremented directory (and rename the sub-folder).For example, I want to move '1000/old' to '1001/new'.I've tried using xargs, which I'm new to, so I'm not sure I'm going in the right direction. I think what I want is something like:find 1* -name 'old' | xargs -i -t mv {} <dir+1>/newI'm just not sure how to implement the incrementing (the dir+1 bit).I've also tried to implement a modification of the accepted answer to this question, but my modification is also not working properly (I'm using ls to test the code before I actually start moving/renaming directories):#!/bin/bashfor x in 1*; do ls -d $x/old ${x}$i/new ((++i))doneThe issue with the above is that the next directory becomes 10001, 10002, etc. instead of 1001, 1002.Any suggestions are much appreciated.
|
Moving sub-directory to new parent directory where the new directory name is incremented by 1
|
bash;shell script;xargs;arithmetic;mv
|
Shells treat strings representing integers in decimal as integers. If you have a directory whose name contains only digits with no leading zeros, you have a number and you can perform arithmetic on it.for d in 1*; do mv "$d/old" "$((d+1))/new"doneYou can make the script more robust and only perform the move if the old subdirectory actually exists, and create the destination if necessary.for d in 1*; do if [ -d "$d/old" ]; then mkdir -p "$((d+1))" mv "$d/old" "$((d+1))/new" fidonefind isn't useful here since you aren't traversing subdirectories recursively.
|
_webmaster.79264
|
I recently helped a friend migrate from Blogger to Wordpress. Everything is working great, and everything seems to be propagated. Google search results are already displaying the new site information, but I've run into a small problem.When I search for her site on google from my desktop, results are fine. However, when I search on a mobile device, the search results appear to be okay but when you actually click the link it adds ?m=1 to the end of the site address, causing the link not to work.I think ?m=1 was a Blogger thing - so I'm not sure why it is still doing this when nameservers are updated, and Google obviously reindexed the site.My question is this - is there something I can do to prevent this from being added to the end of the site address for Google search results? Or do we just have to wait for Google indexing/crawling to take care of that?
|
Google search results mobile adding ?m=1 to end of site address
|
google search
| null |
_cs.18210
|
First, I apologize if I have confused the terms DFA and FSM; to me they seem to be the same thing. The question is simple: Are flowcharts (sequence, branching and jumping) equivalent to DFAs, resp. FSMs? I am a bit confused about this. There are classes where, using logic synthesis, Karnaugh maps, state encodings, flip-flops etc., one is able to construct hardware consisting of logic gates and flip-flops which realizes the desired DFA. Basically all processes that run on a computer (no matter whether written in C# or Assembler) are at the lowest level realized through logic gates, zeros and ones. So it seems that programs first need to be converted (by the compiler, I suppose) to some form as I've described. This might imply that every problem that is solvable using C# is solvable using an FSM. But this is in contradiction to the Chomsky hierarchy and all the related theory, which says that you cannot do the same magic with regular expressions (which are based on FSMs) that you can do on a Turing machine (which is equivalent to any programming language; if I am wrong, correct me please). Moreover, if flowcharts (or even C#, Java ... source code) were equivalent to FSMs, why don't we have all software formally verified by now? There is a mathematical apparatus for FSMs and related concepts, so why not formally verify everything and ensure correctness? What am I missing here?
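For concreteness, here is a minimal sketch of a DFA in Python (a made-up example, not from any class). Its entire memory is one of finitely many states; the unbounded tape is exactly what a Turing machine adds on top of this.

# DFA accepting binary strings with an even number of 1s
transitions = {('even', '0'): 'even', ('even', '1'): 'odd',
               ('odd', '0'): 'odd', ('odd', '1'): 'even'}

def dfa_accepts(word, start='even', accepting=frozenset({'even'})):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]  # no other storage exists
    return state in accepting

print(dfa_accepts('1101'))  # False: three 1s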
|
Flowcharts vs DFA resp FSM equivalency
|
formal languages;turing machines;finite automata;computer architecture
| null |
_softwareengineering.212665
|
"True or False: A string is the same thing as an array."I had an interview the other day and the above question was asked. I said false, but the interviewer said it was actually true. I explained to her that a string is a data type and an array is an object that holds multiple instances of a variable, and that they aren't even similar concepts. But, she was an HR person who was just reading tech questions that were handed to her, so she couldn't elaborate or relate to my point.A coworker later explained the "True" answer by stating that a string is actually a character array... thus, a string is an array. Now, if a string is a character array, that doesn't mean that document arrays or integer arrays qualify as strings. Thus, a string is not the same thing as an array.But even so, (considering C#) a string isn't even a character array. In order to do character manipulation on a string (without using concatenation methods), you have to convert the string to a character array. Now, if a string was synonymous with a character array, why would you need to convert it? Wouldn't you just be able to call each character in the string implicitly as you would a character array? No! Why? Because a string is not a character array!I liken it to asking, "Is a primary key the same as a unique identifier?" The answer is false. Reason being: A primary key is a unique identifier, but you can have unique identifiers in a table without setting them as keys. What do you all think? Is a string (data type) the same thing as an array (object type)? If so, why?
|
Settle an Argument: String vs. Array?
|
strings;array
| null |
_codereview.97173
|
My usual disclaimer, I'm new to Python and scripting and I'm still studying the PEP8 guide, so please forgive any huge failures with respect to syntax, formatting and style. I'm open to any suggestions in regards to pretty much anything so let me have it!That said, I've been building little games to learn so far, but decided it's time to try to create something useful that I might actually use one day and continue to build off of. Being a sysadmin I decided to try to build a cross platform (Windows and OSX) script that would run performance and system information gathering.The version I'm about to paste is about 2 days old, and is in its infancy, but I wanted to get some feedback on a few things:Are there any obvious failures in how I'm structuring it that I should fix now?What are some suggestions for out of the box ways to gather the info and performance I'm getting with psutil? It's a non-standard module and I would like to make this script as standard as I can so it can just be run with a vanilla Python install. I'm considering just doing subprocess.call a lot, but figured that there has to be some stuff I'm not seeing when digging around the wild webs. Though it does look like they were considering adding it to the standard library back in October last year...Is there anything that you see that would concern you to run on your own system?I'm not sure how far I can get with this without it requiring some degree of admin rights, but that's a high priority for me, so I'll keep going until I hit a wall. I'm making a concerted effort to only call stuff that doesn't require admin rights.As well if you think this just isn't useful at all and there are 50 different other ways to do this via python already and I'm recreating a wheel that's already much more elegant, then let me know. I've done a lot of looking around and haven't found anything that does this in particular (except psutil itself), at least at the depth and scope I would like to see anyway.I'm also totally up for suggestions for functionality and features you think should go in here!#!/usr/bin/env python3'''This script is a system information and performance information gathering script. It will pull information regarding live stats like memory and cpu as well as information like os version and serial number.It's designed to be cross platform between Windows and OSX but some data just isn't available on both.This is an informational script only. It is not designed to change any information, although with a few tweaks it could be.In the spirit of being easy to run, I've only applied functions that don't require root/admin privileges to run, so that any average user/process can use and utilize this.I may eventually branch this and remove psutil, as it's not a standard module and requires some work to install. 
I would like this script to be runnable from a default python install across platforms, so I may eventually completely isolate the OSX and windows functions and remove the cross platform section, putting in some logic to make it transparent to the user.Chris Gleason Last update - 7/12/2015 Written for Python 3.x ### NOTES: ### FUTURE WORK 1) Figure out if dependencies exist and if not exit properly with an informative message 2) If you can elegantly install the dependencies. 3) Need to run as sudo to pull network info from psutil DEPENDENCIES psutil OSX wget https://pypi.python.org/packages/source/p/psutil/psutil-3.1.0.tar.gz /tmp tar -zxvf /tmp/psutil-3.1.0.tar.gz /tmp/ pip install /tmp/psutil-3.1.0/psutil WINDOWS https://pypi.python.org/packages/3.4/p/psutil/psutil-3.1.0.win32-py3.4.exe#md5=eb4504f7da8493a512a6f038a502a25c'''__version__ = "$Revision: 1"################# IMPORTS HERE #################import subprocessimport osimport platformimport sysimport argparseimport psutilimport readlineimport time############################################ ARGUMENTS AND SCRIPT RELATED ITEMS HERE ############################################parser = argparse.ArgumentParser(description='Print system usage statistics and system information. \ Default (no args) will determine OS and gather ALL information')parser.add_argument('--ntfs' , action='store_true' , help='Gather Windows information')parser.add_argument('--osx' , action='store_true' , help='Gather OSX information')parser.add_argument('--sys' , action='store_true' , help='Gather System Information')parser.add_argument('--perf' , action='store_true' , help='Gather Performance Information')args = parser.parse_args()########################################### TRULY CROSS PLATFORM THINGS START HERE ###########################################def noargs(): ''' This function is to determine OS level if the user doesn't define it in the command line switches ''' if sys.platform == 'win32': osver="ntfs" print('Platform is NTFS/Windows!') runntfs() runperf() runsys() elif sys.platform == 'darwin': osver="osx" print('Platform is OSX!') runosx() runperf() runsys()def runperf(): ''' This function runs cross platform performance related tests ''' print('##########################') print('Performance tests running!') print('##########################') print (''' ''') print('--------') print('CPU INFO') print('--------') print('') print('CPU times at runtime are ', psutil.cpu_times()) print('') print('CPU percent per CPU at runtime is ', psutil.cpu_percent(interval=5, percpu=True)) print('') print(''' ''') print('-----------') print('MEMORY INFO') print('-----------') print('') print('Memory usage statistics are ', psutil.virtual_memory()) print('') print(''' ''') print('---------') print('DISK INFO') print('---------') print('') if sys.platform == 'darwin': print('Disk usage is:\n') print(subprocess.call(['/bin/df', '-h'])) print('') #print('Space usage from root down is:\n', subprocess.call(['/usr/bin/du', '-hs', '/*',])) print('') print('Disk IO statistics are\n ') print(subprocess.call(['/usr/sbin/iostat', '-c10'])) print('') print('Be sure to ignore the first iostat line, as per best practices') print('') if sys.platform == 'win32': print('Disk usage is ', ) print('') print(''' ''') print('------------') print('NETWORK INFO') print('------------') print('') print('Network I/O stats ', psutil.net_io_counters(pernic=False)) print('') print('') print('') print('')def runsys(): ''' This function runs cross platform system information gathering ''' 
print('#############################') print('System information gathering!') print('#############################') OS = sys.platform print (''' ''') print('Your OS is ', platform.system(), platform.release(), '-', platform.version()) print('') print('Your architecture is ', platform.architecture()) print('') print('# of logical CPU\'s are ', psutil.cpu_count()) print('') print('# of physical CPU\'s, including threaded are ', psutil.cpu_count(logical=False)) print('') print('Disk information is ', psutil.disk_partitions(all=True)) print('') if sys.platform == 'darwin': print('Users on the system are:\n') print(subprocess.call(['who', '-a'])) print('') if sys.platform == 'win32': print('Users on the system are:\n') print('')####################################### WINDOWS SPECIFIC THINGS START HERE #######################################def runntfs(): ''' This function runs the Windows specific tests that can't be put into the cross platform checks ''' print('NTFS Tests running!')############################# OSX SPECIFIC THINGS HERE #############################def runosx(): ''' This function runs the OSX specific tests that can't be put into the cross platform checks ''' print('OSX Tests running!')################## MAIN CODE RUN ##################if args.ntfs and args.osx: print ("You can't run both Windows and OSX flags on the same system!") exit(0)if args.ntfs: print('You chose NTFS!') runntfs()elif args.osx: print('You chose OSX!') runosx()else: print('No OS specified!')if args.sys and args.perf: print('You chose both System and Performance tests!') runperf() runsys()elif args.sys: print('You chose to run System Information gathering only!') runsys()elif args.perf: print('You chose to run Performance Metric tests only!') runperf()else: print("You didn't specify performance or system so both will be run!")#if len(args) == 0:# noargs()if not len(sys.argv) > 1: noargs()
|
Cross-platform performance and statistical information script
|
python;beginner;performance;python 3.x;portability
|
Duplicated logicYou have multiple repeated conditions on sys.platform.These repeated checks are not great because of the hard-coded platform strings.It would be better to encapsulate these checks in helper functions:def is_windows(): return sys.platform == 'win32'def is_osx(): return sys.platform == 'darwin'Other repeated code is header texts, like these: print('--------') print('CPU INFO') print('--------') # ... print('-----------') print('MEMORY INFO') print('-----------')It would be better to create a helper function that takes a string,uppercases it, calculates the text length and formats a header text accordingly.Argument parsingInstead of having --ntfs and --osx flags and then doing extra validation to make sure only one of them was used,another option is to use choices:parser.add_argument('os', choices=('ntfs', 'osx'))This way, ArgumentParser will take care of the validation for you.When using ArgumentParser,it's not normal to use sys.argv too.The args value returned from parser.parse_args() should be all you need.SimplifyInstead of this: print('')This is the same thing: print()
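The repeated header banners can get the same treatment; a minimal sketch of the helper described above (the name and exact banner style are just one possibility):

def print_header(title):
    line = '-' * len(title)
    print(line)
    print(title.upper())
    print(line)
    print()

print_header('cpu info')  # replaces the hard-coded three-line banners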
|
_unix.252045
|
My board continuously displays the messages below, and the terminal does not accept any input. What do the fields in the message mean (the t, g, c, q, ... values)? What is the cause of this phenomenon, and how can I fix it? INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=3936547 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=3972552 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4008557 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4044562 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4080567 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=4116572 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4152577 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=4188582 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4224587 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4260592 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4296597 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4332602 jiffies, g=367023708, c=367023707, q=1511) INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4368607 jiffies, g=367023708, c=367023707, q=1511)
|
rcu_preempt detected stalls on CPUs / tasks message appears to continue
|
kernel;linux kernel;cpu;debugging;cpu frequency
|
You probably have a realtime application that is consuming all the CPU (some bad implementation), and because of its realtime scheduling priority the system doesn't have enough resources available for other tasks. I suggest that you remove the realtime priority from your applications and check which one is consuming a lot of CPU and, after correcting the problem, put it back to realtime priority.
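To find such a process, you can list everything currently scheduled with a realtime policy; a rough Python sketch (Linux only, os.sched_getscheduler needs Python 3.3+):

import os

for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        # SCHED_FIFO and SCHED_RR are the two realtime policies
        if os.sched_getscheduler(int(pid)) in (os.SCHED_FIFO, os.SCHED_RR):
            with open('/proc/%s/comm' % pid) as f:
                print(pid, f.read().strip())
    except OSError:
        pass  # process exited or access denied

Once the offender is identified, chrt -o -p 0 <pid> moves it back to the normal SCHED_OTHER policy.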
|
_unix.154519
|
I have created an encrypted Archlinux partition on my SD card, but currently I am unable to decrypt it. I am using the key X&(4n=%YF3!BN which includes german letters. So for this I have included to /etc/mkinitcpio.conf:HOOKS=[...] keyboard keymap consolefont encrypt [...]In /etc/vconsole.conf I have added KEYMAP=de-latin1FONT=lat9w-16And in /boot/cmdline.txt I have added:vconsole.keymap=de-latin1 vconsole.font=lat9w-16But I am still unable to decrypt it and I just do not know how to solve this.I am looking forward to hear from you.
|
Archlinux ARM Rasperry Pi decryption fail
|
keyboard;login;cryptsetup;mkinitcpio
| null |
_webmaster.56267
|
I have a website, post4city.com. I sent an application for Google AdSense. It was rejected with this message: Site does not comply with Google policies: We're unable to approve your AdSense application at this time because your site does not comply with the Google AdSense program policies. Tips to improve: Don't place ads on auto-generated pages or pages with little to no original content. Your site should also provide a good user experience through clear navigation and organization. Users should be able to easily click through your pages and find the information they're seeking. Before I resubmit, I need suggestions about what changes I should be making. The site is in Joomla 2.5 - Adsmanager.
|
How to comply with the Google AdSense policy that ads should not be placed on auto-generated pages?
|
google adsense;advertising
| null |
_webmaster.53997
|
I have to change the URLs of my site, from something like:www.site.com/page.htmltowww.site.com/page.html?value=randomvalue Will this cause problems with SEO?
|
Adding query string to url already indexed, will cause problem with SEO?
|
seo;web development;query string
|
Google allows you to declare the purpose of query string parameters within Webmaster Tools. According to Google:In general, URL parameters fall into one of two categories:Parameters that don't change page content: for example, sessionid, affiliateid. Parameters like these are often used to track visits and referrers. They have no effect on the actual content of the page.Parameters that change or determine the content of a page: for example, brand, gender, country, sortorder.If Google has already crawled pages with query strings then they may already appear on the URL Parameters page in Webmaster Tools. Otherwise you can just click to Add parameter for any new ones.If the parameter doesn't affect the content displayed to the user, select No in the Does this parameter change list, and then click Save. If the parameter does affect the display of content, click Yes: Changes, reorders, or narrows page content, and then select how you want Google to crawl URLs with this parameter. For more information read the Webmaster Tools help page on URL Parameters.
|
_softwareengineering.263177
|
Could no amount of formal analysis or type/rule checking prevent its exploitation? How about a fully verified kernel such as seL4?
|
Is software inherently buggy and hence, vulnerable?
|
debugging;verification;vulnerabilities
| null |
_codereview.98440
|
I have to perform a search on some collection and return a ranking based on some logic. I wish to optimize this working code since I think it can be bettered (maybe using tasks?). I need to search on Counterpart items. I load this data from the DB and I've got multiple aliases per counterpart. I need to return the result based on these criteria: Counterpart Code is equal to the search string; Counterpart Code starts with the search string; Counterpart Code contains the search string; Counterpart Description contains the search string; Counterpart Alias is equal to the search string; Counterpart Alias starts with the search string; Counterpart Alias contains the search string. Each of those rules maps to a rank from 1 to 7 and I have to sort on that ranking ascending.public class Counterpart{ public int Id { get; set; } public string Code { get; set; } public string Description { get; set; } public IEnumerable<Alias> Aliases { get; set; } public override bool Equals(object obj) { Counterpart obj2 = obj as Counterpart; if (obj2 == null) return false; return Id == obj2.Id; }}public class Alias{ public int? Type { get; set; } public string Description { get; set; }}internal class CounterPartRanking{ public int Rank { get; set; } public Counterpart CounterPart { get; set; }}public static class CounterpartExtensions{ public static IEnumerable<Counterpart> SearchWithRank(this IEnumerable<Counterpart> source, string pattern) { var items1 = source.Where(x => x.Code == pattern); var items2 = source.Where(x => x.Code.StartsWith(pattern)); var items3 = source.Where(x => x.Code.Contains(pattern)); var items4 = source.Where(x => x.Description.Contains(pattern)); var items5 = source.Where(x => x.Aliases != null && x.Aliases.Any(y => y.Description == pattern)); var items6 = source.Where(x => x.Aliases != null && x.Aliases.Any(y => y.Description.StartsWith(pattern))); var items7 = source.Where(x => x.Aliases != null && x.Aliases.Any(y => y.Description.Contains(pattern))); Stopwatch sw = Stopwatch.StartNew(); var rankedItems = new List<CounterPartRanking>(); if (items1.Any()) rankedItems.AddRange(items1.Select(x => new CounterPartRanking { Rank = 1, CounterPart = x })); if (items2.Any()) rankedItems.AddRange(items2.Select(x => new CounterPartRanking { Rank = 2, CounterPart = x })); if (items3.Any()) rankedItems.AddRange(items3.Select(x => new CounterPartRanking { Rank = 3, CounterPart = x })); if (items4.Any()) rankedItems.AddRange(items4.Select(x => new CounterPartRanking { Rank = 4, CounterPart = x })); if (items5.Any()) rankedItems.AddRange(items5.Select(x => new CounterPartRanking { Rank = 5, CounterPart = x })); if (items6.Any()) rankedItems.AddRange(items6.Select(x => new CounterPartRanking { Rank = 6, CounterPart = x })); if (items7.Any()) rankedItems.AddRange(items7.Select(x => new CounterPartRanking { Rank = 7, CounterPart = x })); sw.Stop(); Debug.WriteLine("Time elapsed {0} for {1}", sw.Elapsed, pattern); var items = rankedItems.OrderBy(x => x.Rank).Select(x => x.CounterPart); var distinct = items.Distinct(); return distinct; }}
|
Optimize LINQ search with custom fixed ranking
|
c#;linq
|
Distinct: Take a look at the documentation for Distinct.A couple things from the remarks should stand out for you:The Distinct(IEnumerable) method returns an unordered sequence that contains no duplicate values.I know that the order of the sequence is preserved when using Distinct but that's an implementation detail - there's no guarantee.The default equality comparer, Default, is used to compare values of the types that implement the IEquatable generic interface. To compare a custom data type, you must implement this interface and provide your own GetHashCode and Equals methods for the type.Your custom type Counterpart does not do this. It should implement IEquatable<Counterpart> and additionally override both GetHashCode and Equals.public class Counterpart : IEquatable<Counterpart>{ // ... public bool Equals(Counterpart other) { // left for you. } public override bool Equals(object other) { return Equals(other as Counterpart); } public override int GetHashCode() { // http://stackoverflow.com/a/263416/1402923 }}Now you have a type that is safe for use with Distinct!AddRange: As far as I know, AddRange only throws when the collection is null. You don't need to check for items before you call it. That eliminates a whole heap of code:rankedItems.AddRange(items1.Select(x => new CounterPartRanking { Rank = 1, CounterPart = x }));rankedItems.AddRange(items2.Select(x => new CounterPartRanking { Rank = 2, CounterPart = x }));rankedItems.AddRange(items3.Select(x => new CounterPartRanking { Rank = 3, CounterPart = x }));rankedItems.AddRange(items4.Select(x => new CounterPartRanking { Rank = 4, CounterPart = x }));rankedItems.AddRange(items5.Select(x => new CounterPartRanking { Rank = 5, CounterPart = x }));rankedItems.AddRange(items6.Select(x => new CounterPartRanking { Rank = 6, CounterPart = x }));rankedItems.AddRange(items7.Select(x => new CounterPartRanking { Rank = 7, CounterPart = x }));Other comments: As I mentioned previously, this isn't guaranteed to be correct (but is in every instance I know of):var items = rankedItems.OrderBy(x => x.Rank).Select(x => x.CounterPart);var distinct = items.Distinct();
|
_codereview.82483
|
The branches I need to merge are called test and test-passed. Merging will always be fast-forward, from test to test-passed as commits to test-passed are only done automatically from test. This is currently working, just wondering if the approach is correct. The script is executed by Hudson, once all testing is complete.git statusgit reset --hardgit pull origin testgit checkout origin/testgit pull origin test-passedgit checkout origin/test-passedgit merge origin/testgit push origin HEAD:test-passedOne specific question I have, is if I need to create local branches as well (-b) or is that not required?Output from above:+ git statusHEAD detached from origin/test-passednothing to commit, working directory clean+ git reset --hardHEAD is now at 16a2d8d updated version+ git pull origin testFrom ssh://github.com/myrepo.git * branch test -> FETCH_HEADAlready up-to-date.+ git checkout origin/testHEAD is now at 16a2d8d... updated version+ git pull origin test-passedFrom ssh://github.com/myrepo.git * branch test-passed -> FETCH_HEADAlready up-to-date.+ git checkout origin/test-passedPrevious HEAD position was 16a2d8d... updated versionHEAD is now at 2aa260d... Merge branch 'dev-integration' into test+ git merge origin/testUpdating 2aa260d..16a2d8dFast-forward app/application.properties | 8 ++++---- 4 files changed, 14 insertions(+), 5 deletions(-)+ git push origin HEAD:test-passedTo ssh://[email protected]/myrepo.git 2aa260d..16a2d8d HEAD -> test-passed
|
Git merge script
|
shell;git
| null |
_reverseengineering.2215
|
I'm trying to understand a very basic stack-based buffer overflow. I'm running Debian wheezy on an x86_64 Macbook Pro.I have the following unsafe program:#include <stdlib.h>#include <stdio.h>CanNeverExecute(){ printf("I can never execute\n"); exit(0);}GetInput(){ char buffer[512]; gets(buffer); puts(buffer);}main(){ GetInput(); return 0;}I compiled with -z execstack and -fno-stack-protector for my tests.I have been able to launch the program through gdb, get the address of the CanNeverExecute function which is never called, and overflow the buffer to replace the return address with this address. I got "I can never execute" printed, which is, so far, so good.Now I'm trying to exploit this buffer overflow by introducing shellcode in the stack. I'm currently trying directly in gdb: break in the GetInput function, set the buffer value through gdb and jump to the buffer address with the jump command.But I have a problem when setting the buffer:I have a breakpoint just after the gets function, and I ran the program with 512 "a" characters as input.In gdb, I do:(gdb) p buffer$1 = 'a' <repeats 512 times>The input was read without any problem, and my buffer is 512 "a"s. I then try to modify its value. If I do this:(gdb) set var buffer=""and try to print buffer, its length is now 511! How come??(gdb) p buffer$2 = '\000' <repeats 511 times>And when I try to set it back to, for instance, 512 "a"s, I get:Too many array elementsI can set it to 511 "a"s though; it is really that last byte that doesn't work... How come, is there a simple explanation?
|
GDB Error Too many array elements
|
gdb;buffer overflow
|
GDB prevents you from overflowing your char array. (gdb) p &buffer $25 = (char (*)[512]) 0x7fffffffdfe0 To bypass this protection you can either write the memory directly: (gdb) set {int}0x7fffffffe1e0 = 0x41414141 Or cast the array as a bigger one and then set your stuff: set {char [513]}buffer="512xA"
|
_unix.146687
|
I've installed sendmail in Ubuntu as below: apt-get install sendmail Then I sent an email to test it, and I received the email from root <root@mydomain>. I checked the content of /etc/aliases, but it was empty. I've looked around but couldn't figure out how to change the default user for mail sending. What kind of record should I add to aliases? What I want to achieve is to change root to something like no-reply.
|
How should I change root@mydomain when I send from mail() php function?
|
ubuntu;root;alias;sendmail
|
You can change it when using php mail() function, by passing an additional parameter:<?phpmail('[email protected]', 'Subject', 'Message', null, '[email protected]');?>Or make it default by changing sendmail_path option in php.ini:sendmail_path = /usr/sbin/sendmail -t -i -f'[email protected]'
|
_reverseengineering.12398
|
I'm writing a parser for some XML scenario files. Among other cleartext info there is a node 'Scenario_Compressed' which I'd like to analyse. I've uploaded the content here:http://www.lunex.net/temp/compstr.txt Can anybody help me identify the type of compression? Thanks in advance, Lunex
|
Need help with compressed string of unknown format
|
windows;decompress
|
As @w4rex said, it definitely looks like base64. If you try to decode it like a regular base64 string, you end up with:37 7a bc af 27 1c 00 03 d8 a0 33 34 30 78 00 00 7z..............You recognize the '7z' magic of a 7zip file, and it's indeed a 30 KB archive containing a single file of 363 KB named 'default'. The file is password-protected though, so you could try to either brute-force it or reverse the application generating this file to find the password.
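A quick way to check this yourself, sketched in Python (the input filename is the one from the question; the output name is arbitrary):

import base64

with open('compstr.txt') as f:
    data = base64.b64decode(f.read())  # stray whitespace is discarded
print(data[:6].hex())                  # expect 377abcaf271c, the 7z magic
with open('scenario.7z', 'wb') as out:
    out.write(data)

After that, any 7zip tool can list the archive and will prompt for the password.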
|
_unix.187668
|
I am using a proxy to browse the internet, and I am trying to set up a firewall that will only let me connect to the internet via this proxy:if I forget to turn on the proxy, I should not be able to connect to the internet;if the proxy were dysfunctional (it happened to me with a VPN in the past), then the connection would be cut.I have done this so far:ufw default deny outgoingufw default deny incomingufw allow from 149.XXX.XXX.XXX # (the address of the proxy)ufw allow to 149.XXX.XXX.XXX As soon as I doufw enablewhen the proxy is turned off, it does cut all connections.When I turn on the proxy, the connections are still blocked.BUT if I disable the firewall:ufw disablethen I get prompted for my username and password for the proxy.Once I have typed these, I can enable the firewall, and all the connections work.So it works, BUT on my first connection with the proxy I need to disable the firewall in order to get prompted for credentials.Is there a way around this?I guess this indicates that my first connection to the proxy is not a connection to 149.XXX.XXX.XXX. Why? How can I identify this first connection, in order to allow it?PS: I am using Archlinux, but I don't think it makes any difference for ufw.
|
ufw firewall - how to only allow when I am going through a proxy
|
linux;firewall;proxy;http proxy;ufw
| null |
_unix.235328
|
I'm trying to set up a send-hook so that gpg encryption is enabled when I send to a specific recipient, but if it's sent to other recipients as well, then encryption is disabled. However, send-hooks seem to fire when a particular recipient is anywhere in the recipient list, regardless of who else is present.Ideally, I'd encrypt if it goes to [email protected], but not if it goes to [email protected], [email protected], [email protected]. The mutt manual saysWhen multiple matches occur, [send-hook] commands are executed in the order they are specified in the muttrc.Hence, I put the following in my muttrc. If mail is sent to [email protected], then enable autoencrypt. However, if there is a recipient that is not [email protected], then unset autoencrypt.send-hook . "unset crypt_autoencrypt"send-hook "!~l ~t ^foo@bar\\.com$" "set crypt_autoencrypt"send-hook "!~l !~t ^foo@bar\\.com$" "unset crypt_autoencrypt"However, it doesn't seem to work. It seems that send-hooks don't parse each individual recipient separately. Even if I address mail to [email protected], [email protected], mutt attempts to encrypt it.WorkaroundI can get around this with a very ugly hack.send-hook . "unset crypt_autoencrypt"send-hook "!~l ~t ^foo@bar\\.com$" "set crypt_autoencrypt"send-hook "!~l ~t [^r]\\.com$" "unset crypt_autoencrypt"If I send an email to a .com address that has a non-r character preceding, then it won't encrypt. There are obviously lots of r.com addresses that aren't [email protected], so I have to extend the third line as follows.send-hook "!~l ~t '([^r]\\.com|[^a]r\\.com)$'" "unset crypt_autoencrypt"This also excludes r.com addresses with a non-a character preceding. I just repeat this sequence a few more times.The major problem with this is that send-hooks don't seem to fire for cc: addresses, making this whole third line moot if the email is cc:ed to [email protected].
|
How can I GPG encrypt for only a sole specific recipient in mutt?
|
mutt
|
In muttrc, useset crypt_opportunistic_encrypt = yesFrom $ man 5 muttrccrypt_opportunistic_encrypt Type: boolean Default: no Setting this variable will cause Mutt to automatically enable and disable encryption, based on whether all message recipient keys can be located by mutt. When this option is enabled, mutt will determine the encryption setting each time the TO, CC, and BCC lists are edited. If $edit_headers is set, mutt will also do so each time the message is edited. While this is set, encryption settings can't be manually changed. The pgp or smime menus provide an option to disable the option for a particular message. If $crypt_autoencrypt or $crypt_replyencrypt enable encryption for a message, this option will be disabled for the message. It can be manually re-enabled in the pgp or smime menus. (Crypto only)This also inspects cc:ed addresses for validity. Unfortunately, as per the second-last paragraph, this overrides many useful settings. For example, I have set pgp_autoinline = yes, which is deprecated, but necessary for sending to older clients1, which don't support PGP/MIME.1 For example, Android's K-9 + APG. AFAIK this is the only FOSS Android email client that reads PGP-encrypted email at all, but only in a limited fashion. (EDIT: K-9 + openkeychain now supports PGP/MIME.)
|
_hardwarecs.2589
|
I am looking for a desktop motherboard which can handle:4 x NVIDIA Quadro K4200 (4 x PCIe x16, gen 3 if possible)Intel Core i7-47904 x DDR3 slots, max. memory size 32 GB (at least)ATX format (if possible)We would use them for CUDA calculations; we need full PCIe bandwidth.You can suggest another configuration which will not cost much to upgrade.Also I wanted to know how it is possible to handle 4 GPUs at x16 with one CPU. Will it be better than 2 GPUs?
|
Motherboards which can handle 4 Gpu x16 Pcie
|
motherboard
| null |
_softwareengineering.335170
|
Whenever I am asked how I would scale out an application I inevitably tend toward a queueing model: split the application responsibilities into individual services and add queues between those services. This means you can spin up more or fewer instances of a given service as required, and they simply pull from the queue to get work. Get enough of these services with different roles and almost inevitably you end up creating an orchestration layer - something that understands the necessary flow of a work item through the queues and manages that end to end. What are some of the alternative approaches for scaling out applications that don't use queues and don't end up with an orchestration layer? Update based on comments: As @tofro pointed out, I'm probably talking more about elasticity instead of scalability https://stackoverflow.com/questions/9587919/what-is-the-difference-between-scalability-and-elasticity. Here is an example. Let's say I have a service that does video encoding. A user uploads a file, selects one or more different encoding formats (QuickTime, DivX), the file is encoded into the formats, and the user can download the resulting output files. One way to make this elastic using queues would be to have different services, QuickTimeEncoder and DivXEncoder, with queues QTEQueue and DivXQueue, and put jobs on the queues as required. More instances of the encoders could be added over time as demand changes; a toy sketch of this model follows.
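In Python, with made-up names matching the example above (one worker shown; elasticity means starting or stopping more copies of it):

import queue, threading

qte_queue = queue.Queue()  # the hypothetical QTEQueue

def quicktime_encoder():
    while True:
        job = qte_queue.get()          # pull work from the queue
        print('encoding', job, 'to quicktime')
        qte_queue.task_done()

threading.Thread(target=quicktime_encoder, daemon=True).start()
qte_queue.put('upload-42.avi')
qte_queue.join()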
|
Scaling without queues?
|
scalability
|
In the absence of queues, you must spin up a new agent immediately for each new task to be executed; the new agent holds the task instance instead of a queue.The Erlang programming language is capable of doing this, because it has the capability of spinning up millions of lightweight agents.Note that Erlang also has a queue module, so it still gives you that option.
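As a rough sketch of the difference, with Python threads standing in for Erlang's much lighter processes:

import threading

def handle(task):
    # the worker itself holds the task; no queue sits in between
    print('working on', task)

for task in ('job-1', 'job-2', 'job-3'):
    threading.Thread(target=handle, args=(task,)).start()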
|
_unix.206143
|
I am using the following bash script to update an email address, [email protected], but the problem I have is that the field could be anything, not necessarily [email protected]. So I have tried to use '*' instead. How can I run the following so it works for whatever the current email is set to under the emailaddress field?#!/bin/bashupdatevar="UPDATE email_users SET emailaddress = REPLACE(emailaddress, '[email protected]', 'admin@$(hostname)');"mysql --user=root --password=PASSWORD DATABASE << eof$updatevareof
|
Update mysql database specific field with this bash script
|
bash;mysql
| null |
_codereview.1156
|
I have the following code:private ScatterViewItem FindScatterViewOfSourceFile(SourceFile find){ foreach (ScatterViewItem svi in classScatterViews) { if ((svi.Tag as SourceFile).Equals(find)) { return svi; } } return null;}Now I'm asking myself if this is valid or whether I should rather use:private ScatterViewItem FindScatterViewOfSourceFile(SourceFile find){ ScatterViewItem result = null; foreach (ScatterViewItem svi in classScatterViews) { if ((svi.Tag as SourceFile).Equals(find)) { result = svi; break; } } return result;}Is there any common practice on which one to use? And are both loops doing the same thing?
|
Is this a valid loop?
|
c#;algorithm
|
The first one is a hundred times better than the second one. I would avoid defining a variable if you don't really need it. Also, I don't believe in single-return methods; especially, I don't believe that they improve readability. LINQ would definitely improve it: return classScatterViews.FirstOrDefault(v => v.Tag.Equals(find));Also, I would not use the as operator if I do not check the result for null.
|
_unix.321176
|
(Using an Ubuntu EC2 on AWS)I've a script, /home/ubuntu/start.sh. If I run it as ubuntu, it runs well. I need it to be run at launch, so I put it in /etc/rc.local. This will then be run as root on reboot, and this fails. I'm able to reproduce the failure by:# I'm ubuntu$ whoamiubuntu$ sudo su# i'm now root$ whoamiroot$ ./start.sh./start.sh: line 9: npm: command not found$ su -c ./start.sh - ubuntu./start.sh: line 9: npm: command not foundSo it looks like:root doesn't know about npm (installed by ubuntu under /home/ubuntu/.nvm/versions/node/v4.2.6/bin/npm so that makes sense)su -c ./start.sh - ubuntu doesn't exactly run the script as ubuntuHow can I run this script exactly as if I was logged in as ubuntu?
|
Command not found when running script as other user
|
ubuntu;root;amazon ec2
| null |
_softwareengineering.189962
|
I have a terminal disease and there is a very high chance that I will no longer be in this world by the end of the year.I have developed a web application that is extensively used in my family's business (a small hairdressing shop). No member of my family has programming or system administration skills, and I have no close friends with those skills.The business makes at most 10k in net profits per year. In fact, the business profits can only afford to pay the salaries of its 3 employees (father, mother and sister), and those are quite low and decreasing each year due to the financial crisis. I am not an employee of my family's business; I work for a normal software development company. I developed the application during my free time in order to help them.So far I do not care if another business also uses my application or even if the application itself loses my ownership. I just want my family's business to be able to continue using it, which means system administration support if something goes wrong and development for new features/bugs.I would like to ask if you could give me the measures you think I could take in order to guarantee, as much as possible, the continuity of the application.The technologies of the application are:Platform: Tomcat (Java), MySQL and LinuxFrameworks: mainly JPA and ZK
|
Maintain a web application once the only developer is gone
|
project management;licensing;maintenance;hosting;knowledge transfer
| null |
_codereview.120905
|
I've taken a coding challenge as part of a job interview and the recruiting process. Sadly I didn't get through it and couldn't secure the job. I am wondering if anyone can help me out here and show me how it could've been done better.The problem described by the interviewer was:The purpose of the class is to register aliases for a value. For example 'Dave, Davey and Davy are aliases for David' I'm looking for working tests and some thought around edge cases, usability (i.e. what that api would be like to use), and a reasonably efficient implementation (does not need to be extremely high performance). Thread safety is optional.Provided skeleton:public class Aliases { public Aliases addAliases(String value, String... aliases) { throw new UnsupportedOperationException(); } public String lookup(String name) { throw new UnsupportedOperationException(); } }public class AliasesTest { private final Aliases nameAliases = new Aliases() .addAliases("david", "dave", "davey", "davie", "davy") .addAliases("thomas", "tom", "tommy") .addAliases("michael", "mike", "micky") .addAliases("elizabeth", "liz", "beth", "lizzie", "bettie", "lizbeth"); @Test public void canLookupExactMatches() { assertEquals("elizabeth", nameAliases.lookup("liz")); assertEquals("david", nameAliases.lookup("davy")); assertEquals("michael", nameAliases.lookup("michael")); } @Test public void canLookupCaseInsensitiveMatches() { assertEquals("elizabeth", nameAliases.lookup("Liz")); assertEquals("david", nameAliases.lookup("DAVIE")); assertEquals("michael", nameAliases.lookup("Mike")); } /* Add more test methods as required, feel free to remove/rename these if you prefer different terminology */ @Test public void cannotFindLookupForAlias() { // .... } @Test public void edgeCases() { // .... } @Test public void moreEdgeCases() { // .... } /* If you think the class would benefit from other convenience methods please add/test them */ /* If anything is unclear please contact Dave W. */}Here is my solution:import java.util.Collections;import java.util.Map;import java.util.Set;import java.util.concurrent.ConcurrentHashMap;public class Aliases { private Map<String, Set<String>> aliasesMap = Collections.synchronizedMap(new ConcurrentHashMap<>()); //public Aliases addAliases(String value, String... 
aliases) { //changed due to better understandability and to avoid mistakes public Aliases addAliases(String value, Set<String> aliases) { if (aliases.contains("") || aliases.contains(null)) { throw new IllegalArgumentException("Name set can't contain nulls"); } aliasesMap.put(value, aliases); return this; } public String lookup(final String name) { if (null == name) { return null; } Set<String> keySet = aliasesMap.keySet(); if (setContainsString(keySet, name)) { return name; } for (Map.Entry<String, Set<String>> entry : aliasesMap.entrySet()) { Set<String> set = entry.getValue(); if (setContainsString(set, name)) { return entry.getKey(); } } return null; } private static boolean setContainsString(Set<String> set, String str) { return set.stream().map(String::toUpperCase).anyMatch(str.toUpperCase()::equals); }}public class AliasesTest { private Aliases nameAliases = new Aliases() .addAliases("david", new HashSet<>(Arrays.asList("dave", "davey", "davie", "davy"))) .addAliases("thomas", new HashSet<>(Arrays.asList("tom", "tommy"))) .addAliases("michael", new HashSet<>(Arrays.asList("mike", "micky"))) .addAliases("elizabeth", new HashSet<>(Arrays.asList("liz", "beth", "lizzie", "bettie", "lizbeth"))); @Test public void canLookupExactMatches() { assertEquals("elizabeth", nameAliases.lookup("liz")); assertEquals("david", nameAliases.lookup("davy")); assertEquals("michael", nameAliases.lookup("michael")); } @Test public void canLookupCaseInsensitiveMatches() { assertEquals("elizabeth", nameAliases.lookup("Liz")); assertEquals("david", nameAliases.lookup("DAVIE")); assertEquals("michael", nameAliases.lookup("Mike")); } @Test public void cannotFindLookupForAlias() { assertNull(nameAliases.lookup("123")); assertNull(nameAliases.lookup("BOO")); assertNull(nameAliases.lookup("Foo")); } @Test public void edgeCases() { assertNull(nameAliases.lookup(null)); assertNull(nameAliases.lookup("")); } @Test(expected = IllegalArgumentException.class) public void addNullAliasesTest() { new Aliases().addAliases("david", new HashSet<>(Arrays.asList("dave", null))); } @Test(expected = IllegalArgumentException.class) public void addEmptyAliasesTest() { new Aliases().addAliases("david", new HashSet<>(Arrays.asList("dave", ""))); }}The feedback I got from the interviewer: Code looks up values by walking the entire data-structure looking for a match (major) Usage of concurrent map structure but not its methods, therefore adds little/nothing (major) Unclear what would happen if more aliases were registered to a value (minor) Each walk also performs the lowercasing on each item every time (why not store lowercased) (major) addAliases rejects nulls in the alias list but accepts nulls as a value (medium) change of api from varargs to set seems simply to make it easier for the given implementation, not necessarily the api (minor) failed to test giving nulls as values (major)
|
Registering and looking up aliases
|
java;interview questions;hash map
|
The first point in the feedback was "Code looks up values by walking the entire data-structure looking for a match (major)".If you turn your logic around you can fix this easily. The requirement is basically for a mapping alias => name. How about using the aliases as keys and the full name as the value? The code below also deals with another major feedback point, storing the names in lower case, not doing the conversion on every lookup.import java.util.Objects;import java.util.Map;import java.util.HashMap;public class Aliases { public Aliases addAliases(String value, String... aliases) { for (String alias : aliases) { this.aliases.put(alias.toLowerCase(), value); } this.aliases.put(value.toLowerCase(), value); return this; } public String lookup(String name) { Objects.requireNonNull(name, "Name must not be null"); return aliases.get(name.toLowerCase()); } public static void main(String[] args) { final Aliases nameAliases = new Aliases() .addAliases("David", "dave", "davey", "davie", "davy") .addAliases("Thomas", "tom", "tommy") .addAliases("Michael", "mike", "micky") .addAliases("Elizabeth", "liz", "beth", "lizzie", "bettie", "lizbeth"); String alias = "Liz"; System.out.printf("%s is an alias for %s%n", alias, nameAliases.lookup(alias)); alias = "dave"; System.out.printf("%s is an alias for %s%n", alias, nameAliases.lookup(alias)); } // PRIVATE // private Map<String, String> aliases = new HashMap<>();}
|
_webapps.39346
|
We wanted to crunch some data re: the length of time it took to get from one column to the next; all of our cards get moved from our Production to Post-Production board and we see that the information from the card gets deleted when it switches boards. Is there any way to get that data back?
|
Is there a way to pull data from a card after it has been transferred to another board?
|
export;trello boards
| null |
_scicomp.26654
|
Consider a fluid flow simulation in a pipe. At the outflow, instead of explicitly imposing a boundary condition, I linearly extrapolate information from the interior (for the velocity components). This translates to $$\frac{\partial ^2 u}{\partial x^2} =\frac{\partial ^2 v}{\partial x^2}=\frac{\partial ^2 w}{\partial x^2}= 0$$ at the boundary ($x$ being the axial direction). This kind of numerical condition has been used successfully throughout the literature (see page 262 of this paper).What I am interested in is: what does this condition physically mean?Any discussion would be greatly appreciated.
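Discretely, the linear extrapolation and the vanishing second derivative are the same statement; a small sketch with made-up numbers:

import numpy as np

u = np.array([1.0, 1.2, 1.5, 1.9])  # values at the last interior nodes
u_outlet = 2*u[-1] - u[-2]          # linear extrapolation -> 2.3
# the one-sided second difference at the outlet is then zero by construction
print(u_outlet - 2*u[-1] + u[-2])   # 0.0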
|
Outflow boundary condition - second derivative of velocity
|
fluid dynamics;boundary conditions
| null |
_unix.23515
|
I want to include the return status in my prompt. (Easy: add '$? ', right?) However, I only want the status (and trailing space) shown if it is non-zero.Example:sd ~ $ false1 sd ~ $ truesd ~ $
|
Display Non-Zero Return Status in PS1
|
bash;prompt
|
Make sure that the promptvars option is on (it is by default). Then put whatever code you like in PROMPT_COMMAND to define a variable containing exactly what you want in the prompt.PROMPT_COMMAND='prompt_status="$? "; if [[ $prompt_status == "0 " ]]; then prompt_status=; fi'PS1='$prompt_status\h \w \$ 'In zsh you could use its conditional construct in PS1 (bash has no equivalent).PS1='%(?,,%? )%m %~ %# '
|
_unix.230399
|
I've been trying for a while but have not been able to find a way to control the lights on a set of controllers from the game Buzz (wired, from Playstation 2). You can see some of my failed attempts in my questions over on Stack Overflow: Ruby libusb: Stall error, and Sending HID defined messages with usblib. So I turned to a more basic Linux method of sending messages, and failed to do it by piping data to /dev/hidraw0, too.Then I discovered a file in the Linux repository which refers to the Buzz controllers specifically (/linux/drivers/hid/hid-sony.c), and the fact that they have a light. It even has a method called buzz_set_leds (line 1512):static void buzz_set_leds(struct sony_sc *sc)So I'm 100% sure that this is the code that does what I'm trying to do.I've had a go at including this in a C file, but am unable to include hid-sony because I seem to be missing these files.#include <linux/device.h>#include <linux/hid.h>#include <linux/module.h>#include <linux/slab.h>#include <linux/leds.h>#include <linux/power_supply.h>#include <linux/spinlock.h>#include <linux/list.h>#include <linux/idr.h>#include <linux/input/mt.h>#include "hid-ids.h"In compilation, I get this error:hid-sony.c:29:26: fatal error: linux/device.h: No such file or directory #include <linux/device.h> ^compilation terminated.Sorry, I'm a Ruby programmer with no background in C.How do I get these missing 'linux/' files and refer to them from my C file - or how can I write to the controllers from the shell?
|
How can I write to the Buzz controllers HID device created by hid-sony.c to work the LEDs?
|
linux;drivers;usb;hid
|
With the sony driver loaded, the driver provides standard LED kernel interfaces:

echo 255 > /sys/class/leds/*buzz1/brightness
echo 0 > /sys/class/leds/*buzz1/brightness
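If you want to drive several of the LEDs at once, a small shell sketch along the same lines (the exact glob depends on how your controllers were enumerated, so check ls /sys/class/leds first):

for led in /sys/class/leds/*buzz*/brightness; do
    echo 255 > "$led"   # on
done
sleep 1
for led in /sys/class/leds/*buzz*/brightness; do
    echo 0 > "$led"     # off
done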
|
_softwareengineering.159634
|
We have a project where UI code will be developed by the same team but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer exists in a different repository and can follow its own release cycle. I'm trying to come up with a process that will prevent/reduce breaking changes in the services layer from the perspective of the UI layer.

I've thought to write integration tests at the UI layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos), and if there are failures then something in the services layer broke and the commit is not accepted.

Would it also be a good idea (is it a best practice?) to have the developer of the services layer create and maintain a client library for the REST service that exists in the UI layer, which they will update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, then the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer to further validate that the integration between the UI and the service(s) still works.
|
Should one generally develop a client library for REST services to help prevent API breakages?
|
rest;django
| null |
_unix.202887
|
So I just installed the latest Kali Linux on my laptop, which was based on what is currently an oldstable. I then upgraded the whole thing to the current stable, which then upgraded my Gnome Desktop to version 3...

Coincidentally, Wayland just got a working package in the Debian repository in the current stable. How do I use it?

When I boot up I get shown this screen (which it didn't do in the Gnome 2 of the latest Kali):

https://blogs.gnome.org/mclasen/files/2013/09/login-screen.png

Then I select the settings gear on the left side of the Sign In button. It shows me:

System Default
Gnome
Gnome Classic
Gnome on Wayland

The only one that's ever worked was Gnome Classic. How do I force the usage of Wayland? I already have these Wayland related packages installed:

xwayland
weston
ibus-wayland
libwayland-client0
libwayland-server
libwayland-egl1-mesa
libwayland-cursor0
libva-wayland1

Yet when I choose Gnome on Wayland, the screen gets screwed up... How do I enter Wayland?

Btw, the file ~/.config/weston.ini does not exist.
|
Using Wayland on my Debian upgraded system
|
desktop environment;wayland
| null |
_unix.238373
|
I'm using a MySQL database and Apache server with an AWS EC2 instance running AWS Linux. I start them both using the service command, but I am not familiar with any other tool that would come pre-installed or be available through the package manager that would ensure they don't get interrupted.

Now I'm just running a script manually that restarts all the services when an issue with the site occurs:

sudo service httpd restart && sudo service php-fpm-5.6 restart && sudo service mysqld restart

Sometimes the services run into an error and hang or otherwise get interrupted, and it also crashes my site. I'm looking for a way to restart the services and monitor their health, either through a command line utility, modifying config files, or any other way that works.
|
Keeping services on EC2 alive
|
apache httpd;mysql;services;amazon ec2
|
For generic checks of a service I see 2 main options:

- a process monitor like monit (AmiLinux has monit in its amzn-main repo); a minimal configuration sketch follows below
- monitor multiple aspects of your system performance with zabbix, which can also perform conditional command execution on top of metric trigger notifications. This should ideally run on another server and monitor your server through an agent, for better reliability.
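To make the first option concrete, here is a minimal monit configuration sketch. The pidfile paths and service names are assumptions based on typical Amazon Linux defaults; verify yours before using it.

check process httpd with pidfile /var/run/httpd/httpd.pid
    start program = "/sbin/service httpd start"
    stop program = "/sbin/service httpd stop"
    if failed port 80 protocol http then restart

check process mysqld with pidfile /var/run/mysqld/mysqld.pid
    start program = "/sbin/service mysqld start"
    stop program = "/sbin/service mysqld stop"
    if failed unixsocket /var/lib/mysql/mysql.sock then restart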
|
_softwareengineering.264658
|
Is it possible to call a method using infix notation?

For example, in Haskell, I could write the following function:

x `isAFactorOf` y = x % y == 0

and then use it like:

if 2 `isAFactorOf` 10 ...

Which in some cases allows for very readable code. Is anything similar to this possible in Scala? I searched for "Scala infix notation", but that term seems to mean something different in Scala.
|
Scala infix notation
|
scala
|
To expand on the answer by @eques, starting with version 2.10, Scala introduced implicit classes to handle precisely this issue.

This will perform an implicit conversion on a given type to a wrapped class, which can contain your own methods and values.

In your specific case, you'd use something like this:

implicit class RichInt(x: Int) {
  def isAFactorOf(y: Int) = x % y == 0
}

2.isAFactorOf(10)
// or, without dot-syntax
2 isAFactorOf 10

Note that when compiled, this will end up boxing our raw value into a RichInt(2). You can get around this by declaring your RichInt as a subclass of AnyVal:

implicit class RichInt(val x: Int) extends AnyVal { ... }

This won't cause boxing, but it's more restrictive than a typical implicit class. It can only contain methods, not values or state.
|
_webmaster.19046
|
On the home page of our online store we have a banner promoting a particular product. Clicking that banner takes you to the product details page (it does not add to the cart). We would like to track how many banner clicks lead to an actual sale of that product. If that's not possible, is it possible to track how many banner clicks lead to a sale (regardless of whether that product is in the cart)?

We are using Google Analytics. In our situation, we cannot modify the store sources (it is a hosted solution). Only custom javascript can be added to template files.
|
Track how many banner clicks lead to sales of the advertised product
|
google analytics
| null |
_webapps.60550
|
I need to create some data for a simulation. So I wanted to take a spreadsheet, and for each column, fill it with random data falling within a domain. A uniform distribution is good enough. For a binary variable, I am currently doing the following steps:

- create a formula like if(rand()>0.5, "black", "white") in a new column
- fill a column with it, being careful to only do it for the number of rows I want (not just selecting the whole column and copying into it)
- copy the results and do paste special -> values in the original column

But if I have a variable with 7 possible values, I can't think of anything better than 7 nested if statements. Are there better ways?
|
How can I easily fill a column in Google Spreadsheet with random values from a list?
|
google spreadsheets
|
I think this is what you want: a spreadsheet that generates a column of rand() numbers, then looks up each rand() number and returns another value.

In the following instructions be sure to keep the $s for absolute referencing.

In cell E2 enter the upper limit of the random numbers (I used 7).
In cells C2 to C8 enter the numbers 1 to 7 (since the upper value is 7).
In cells D2 to D8 enter the values you want to return. In this case I used names.
In cell A2 enter the formula: =int(RAND()*$E$2)+1 (where cell E2 holds the upper limit of the random numbers).
Copy this formula down as far as needed.
In cell B2 enter the formula: =vlookup(A2,$C$2:$D$8,2) (where cells C2 to D8 hold the substitution values). If the random number generated is 1 then this returns the name Abe.
Copy this formula down as far as needed.

Let me know how this goes. Also see my RAND() vlookup spreadsheet.
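If you would rather skip the helper column in A, the two formulas above can also be combined into a single cell (same ranges assumed):

=vlookup(int(RAND()*$E$2)+1, $C$2:$D$8, 2)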
|
_unix.103234
|
I want to use Korean characters in Emacs, so I installed hangul.el. I can activate the input method with (set-input-method "korean-hangul"). And I want to define a mode that has the `korean-hangul' input method.

I only know how to define a minor mode with a defined key map, like this:

(easy-mmode-define-minor-mode korean-mode
  "Mode for korean"
  nil
  korean-mode-map)

But I don't know how to connect an input method and a minor mode. I'll add some key-bindings to the korean-mode. How can I define a mode like this korean-mode?
|
How to define a minor mode that uses specified input-method in Emacs
|
emacs
| null |
_codereview.49416
|
Is this a good approach to designing a class, or is there some other way that I am not aware of?

Student.h (specification file)

#ifndef STUDENT_H
#define STUDENT_H
#include <string>
using namespace std;

class Student
{
    private:
        int ID;
        string name;
        double GPA;
        char gender;

    public:
        Student();
        Student(int ID, string name, double GPA, char gender);
        void setStudent(int ID, string name, double GPA, char gender);
        int getID();
        string getName();
        double getGPA();
        char getGender();
        void print();
};
#endif

Student.cpp (implementation file)

#include "Student.h"
#include <iostream>
using namespace std;

Student :: Student()
{
    ID = 0;
    name = "";
    GPA = 0;
    gender = ' ';
}

Student :: Student(int ID, string name, double GPA, char gender)
{
    this -> ID = ID;
    this -> name = name;
    this -> GPA = GPA;
    this -> gender = gender;
}

void Student :: setStudent(int ID, string name, double GPA, char gender)
{
    this -> ID = ID;
    this -> name = name;
    this -> GPA = GPA;
    this -> gender = gender;
}

int Student :: getID()
{
    return ID;
}

string Student :: getName()
{
    return name;
}

double Student :: getGPA()
{
    return GPA;
}

char Student :: getGender()
{
    return gender;
}

void Student :: print()
{
    cout << "ID : " << ID << endl;
    cout << "Name : " << name << endl;
    cout << "GPA : " << GPA << endl;
    cout << "Gender : " << gender << endl;
}

StudentDemo.cpp

#include <iostream>
#include "Student.h"
using namespace std;

int main()
{
    Student s;
    int ID;
    string name;
    double GPA;
    char gender;

    cout << "Enter ID ";
    cin >> ID;
    cout << "Enter name ";
    cin >> name;
    cout << "Enter GPA ";
    cin >> GPA;
    cout << "Enter gender ";
    cin >> gender;

    s.setStudent(ID, name, GPA, gender);
    s.print();

    return 0;
}
|
C++ Student Class
|
c++;classes
| null |
_unix.38422
|
I am running VirtualBox (using the Qiime image http://qiime.org/install/virtual_box.html). The physical hardware is a 32 core machine. The virtual machine in VirtualBox has been given 16 cores.

When booting I get:

Ubuntu 10.04.1 LTS
Linux 2.6.38-15-server

# grep . /sys/devices/system/cpu/*
/sys/devices/system/cpu/kernel_max:255
/sys/devices/system/cpu/offline:1-15
/sys/devices/system/cpu/online:0
/sys/devices/system/cpu/possible:0-15
/sys/devices/system/cpu/present:0
/sys/devices/system/cpu/sched_mc_power_savings:0

# ls /sys/kernel/debug/tracing/per_cpu/
cpu0 cpu1 cpu10 cpu11 cpu12 cpu13 cpu14 cpu15 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9

# ls /sys/devices/system/cpu/
cpu0 cpufreq cpuidle kernel_max offline online possible present probe release sched_mc_power_savings

# echo 1 > /sys/devices/system/cpu/cpu6/online
-su: /sys/devices/system/cpu/cpu6/online: No such file or directory

So it seems it detects the resources for 16 CPUs, but it only sets one online. I have tested with another image that the VirtualBox host can run a guest with 16 cores. That works. So the problem is to troubleshoot the Qiime image to figure out why this guest image only detects 1 CPU.
|
VirtualBox guest: 16 CPUs detected but only 1 online
|
kernel;virtualbox;cpu;hot plug;smp
|
QIIME came out with a new virtualbox image (version 1.5), which works.If no one finds the answer to the problem above I will close the question in a week.
|
_reverseengineering.1965
|
I have an Android application that uses a shared library which I would like to step through with a debugger. I've had success using IDA 6.3 to debug executables with the android_server debug server included with IDA, but haven't gotten it to work with shared objects yet. For a specific example, suppose I have the following Java code (this comes from the hellojni example in the Android NDK):

System.loadLibrary("hello-jni");
tv.setText( stringFromJNI() );

With the JNI C code as:

jstring
Java_com_example_hellojni_HelloJni_stringFromJNI( JNIEnv* env, jobject thiz )
{
    return (*env)->NewStringUTF(env, "Hello from JNI !");
}

If the Java code is run only when the application starts up, how can I break in the function Java_com_example_hellojni_HelloJni_stringFromJNI?
|
How to break on an Android JNI function with IDA Pro Debugger
|
ida;android
|
There are two options I can see.

1. Start the Dalvik VM manually using app_process. The command line seems to be something like (see the am script source):

app_process /system/bin com.android.commands.am.Am start -a <ACTION>

2. Put an endless loop in the beginning of your JNI method (a sketch follows below), run the app, attach to the new process and skip the loop manually in the debugger.
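A minimal sketch of the second option, based on the hellojni code from the question; the volatile flag name is illustrative, not part of the NDK sample:

#include <jni.h>
#include <unistd.h>

static volatile int wait_for_debugger = 1;

jstring
Java_com_example_hellojni_HelloJni_stringFromJNI( JNIEnv* env, jobject thiz )
{
    /* Spin until you attach: in the debugger, set wait_for_debugger to 0
       (or move the instruction pointer past the loop), then single-step. */
    while (wait_for_debugger)
        sleep(1);

    return (*env)->NewStringUTF(env, "Hello from JNI !");
}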
|
_softwareengineering.279174
|
Scenario: I have a web application that records and checks data against two temp tables (1 table being a temp source and the other being a destination for the application). These temp tables are synced up each night with their respective destination/source SQL views.

The Issue: The source/destination views have char/nvarchar data types; however, the actual content inside these views is mostly integers. How should I construct my model for the temp tables within the application? Should I convert the data to its real types (converting the types during sync time) or just keep it in the form of strings?

Important: There is no validation needed for the types; it will be impossible for the user to enter invalid data. The true data types of the content will hold constant for each column with no accidental variation.

So the question comes down to: is there anything wrong with processing integers as strings? Other than it being annoying as hell.
|
To convert to accurate data types or maintain default type of string
|
c#;sql server;data types
|
Store them in the database as the respective data types. Instead of storing "1234" as a string, store the int 1234.

This does a couple of things for you:

- It prevents data consumers from having to do the conversion and possible mistakes (not only will this prevent mistakes, but it'll maximize performance for the data consumers: convert the data before it is dumped into the table so it doesn't have to be continuously converted upon retrieval).
- It will allow bad data to fail fast and fail early, as you wouldn't be able to put bad data in the source table (i.e. on an int column, if you didn't expect bad data but attempted to throw "123a4" in there, it would fail. That's a good thing).

Store the data as it is expected. It will alleviate a lot of complications.
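As an illustration of the "convert at sync time, fail fast" idea, here is a minimal C# sketch. The method and variable names are invented for the example and are not from the question's schema.

using System;

static class SyncHelpers
{
    // Parse once during the nightly sync so every later consumer
    // reads a real INT column instead of re-parsing strings.
    public static int ParseRequiredInt(string rawValue)
    {
        if (!int.TryParse(rawValue, out int parsed))
            throw new FormatException(
                $"Expected an integer but got '{rawValue}'."); // surfaces bad rows at sync time
        return parsed;
    }
}

// Usage: SyncHelpers.ParseRequiredInt("1234") returns 1234;
// SyncHelpers.ParseRequiredInt("123a4") throws during the sync, not at query time.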
|
_cstheory.22354
|
From the common sense point of view, it is easy to believe that adding non-determinism to $\mathsf{P}$ significantly extends its power, i.e., $\mathsf{NP}$ is much larger than $\mathsf{P}$. After all, non-determinism allows exponential parallelism, which undoubtedly appears very powerful. On the other hand, if we just add non-uniformity to $\mathsf{P}$, obtaining $\mathsf{P}/poly$, then the intuition is less clear (assuming we exclude non-recursive languages that could occur in $\mathsf{P}/poly$). One could expect that merely allowing different polynomial time algorithms for different input lengths (but not leaving the recursive realm) is a less powerful extension than the exponential parallelism in non-determinism.

Interestingly, however, if we compare these classes with the very large class $\mathsf{NEXP}$, then we see the following counter-intuitive situation. We know that $\mathsf{NEXP}$ properly contains $\mathsf{NP}$, which is not surprising. (After all, $\mathsf{NEXP}$ allows doubly exponential parallelism.) On the other hand, currently we cannot rule out $\mathsf{NEXP}\subseteq \mathsf{P}/poly$. Thus, in this sense, non-uniformity, when added to polynomial time, possibly makes it extremely powerful, potentially more powerful than non-determinism. It might even go as far as to simulate doubly exponential parallelism! Even though we believe this is not the case, the fact that it currently cannot be ruled out still suggests that complexity theorists are struggling with mighty powers here.

How would you explain to an intelligent layman what is behind this unreasonable power of non-uniformity?
|
The unreasonable power of non-uniformity
|
cc.complexity theory;complexity classes;big picture
| null |
_datascience.949
|
I am facing a bizarre issue while using the Apache Pig rank utility. I am executing the following code:

email_id_ranked = rank email_id;
store email_id_ranked into '/tmp/';

So, basically I am trying to get the following result:

1,email1
2,email2
3,email3
...

The issue is that sometimes Pig dumps the above result, but sometimes it dumps only the emails without the rank. Also, when I dump the data on screen using the dump function, Pig returns both columns. I don't know where the issue is. Kindly advise.

Please let me know if you need any more information. Thanks in advance.

Pig version: Apache Pig version 0.11.0-cdh4.6.0
|
Pig Rank function not generating rank in output
|
bigdata;apache hadoop;apache pig
| null |
_softwareengineering.104582
|
I need to determine the ranking of values in an array without altering their position, so that I can print the position of each split time next to the actual value of the split time in a table like so:

<table>
  <tr>
    <th>Split 1</th>
    <th>Split 2</th>
    <th>Split 3</th>
  </tr>
  <tr>
    <td>4.66 (1)</td>
    <td>5.12 (3)</td>
    <td>4.75 (2)</td>
  </tr>
</table>

My array = [4.66, 5.12, 4.75] and I need to iterate through it and print the rank as seen in parentheses above. I can't sort, because I need to do this for several decimals in the HTML table. Any suggestions for implementing this algorithm?
|
Algorithm design for comparing split times in a race
|
algorithms;array
|
Use quicksort (or the sorting algorithm of your choice) to sort a list of array indexes according to the corresponding split times. The array of indexes is a proxy for the array of split times. We don't want to change the order of the times, but we want to know what the order of the times would be if we sorted it.

Given split time array: splits[] = {4.2, 3.9, 4.1, 3.8, 3.7}

We start with an array of indices: indexes[] = {0, 1, 2, 3, 4}

Now we sort the indexes array using the values from splits. Any sorting algorithm needs a comparison function, and ours is:

compare(i, j) :=
    if splits[i] < splits[j], i is smaller,
    else if splits[i] > splits[j], i is larger,
    else they're equal

So, for example, compare(0, 1) should return "i > j" because 4.2 > 3.9.

Following is some C code that illustrates the solution using fairly little code. It relies on the qsort_r() library function, a BSD/GNU extension of quicksort (the code below uses the BSD-style signature), but you could use any sorting algorithm. An important implementation detail is that qsort_r() lets us pass an extra parameter that's provided to the comparison function; this lets us pass the splits array to the comparison function.

#include <stdlib.h>

int compareSplits(void *thunk, const void *item1, const void *item2);

void sortSplits(float splits[], int index[], int count)
{
    // initialize the index array
    for (int i = 0; i < count; i++) {
        index[i] = i;
    }
    qsort_r(index, count, sizeof(int), splits, compareSplits);
}

int compareSplits(void *thunk, const void *item1, const void *item2)
{
    float *splits = (float*)thunk;
    int i = *(int*)item1;
    int j = *(int*)item2;

    if (splits[i] < splits[j])
        return -1;  // less than
    else if (splits[i] > splits[j])
        return 1;   // greater than
    else
        return 0;   // equal
}

To use this code, call sortSplits() passing in the array of split times, an array of ints that's at least as long as the array of times, and the number of times. On return, the index array will contain the sorted list of indexes. In other words, if the resulting array looks like:

{3, 0, 1, 2}

it means that the split time at index 3 is the smallest, followed by the time at index 0, followed by the time at index 1, followed by the time at index 2.

The indexes here really work like pointers. You could even say that they ARE pointers in a sense. Indeed, you could implement exactly the same algorithm described above using an array of pointers in place of the array of indexes. That would eliminate the need for passing the splits array as a separate parameter.
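To turn the sorted index array into the ranks the question's table shows, invert the permutation. A small usage sketch (the main function is hypothetical and reuses sortSplits() from above):

#include <stdio.h>

int main(void)
{
    float splits[] = {4.66f, 5.12f, 4.75f};
    int count = 3;
    int index[3];
    int rank[3];

    sortSplits(splits, index, count);

    // index[pos] holds which split sits at sorted position pos,
    // so the rank of splits[i] is the position where i appears.
    for (int pos = 0; pos < count; pos++)
        rank[index[pos]] = pos + 1;

    for (int i = 0; i < count; i++)
        printf("%.2f (%d)\n", splits[i], rank[i]); // 4.66 (1), 5.12 (3), 4.75 (2)

    return 0;
}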
|
_reverseengineering.2907
|
Original question asked on Stack Overflow: Can the 'r' be removed from a function stack?

I am trying to modify the processor module for the Fujitsu FR, and IDA by default inserts the return variable 'r' on each function's stack, but the Fujitsu FR processor does not put 'r' as the first item, so this stuffs up the stack.

What I can't work out is: in the processor plugin, what needs overriding to resolve this, or whether any of the example processors have solutions to copy.
|
Can the 'r' be removed from a function stack
|
ida
|
For completeness, implementing get_frame_retsize [int (*get_frame_retsize)(func_t *pfn)] in your processor_t LPH is the solution to this.

In that function, for my processor, I needed to return zero instead of the default of 4.
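In older SDKs this amounts to something like the sketch below; treat it as illustrative only, since the function name here is invented and the exact member layout of processor_t varies between SDK versions:

// hypothetical snippet for a Fujitsu FR processor module
static int idaapi fr_get_frame_retsize(func_t * /*pfn*/)
{
    // The FR family keeps the return address in a register (RP),
    // so no bytes of the stack frame are reserved for it.
    return 0;
}

// ...then point the get_frame_retsize slot of the processor_t (LPH)
// structure at fr_get_frame_retsize instead of leaving the default,
// which reserves 4 bytes.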
|
_unix.8524
|
I need a very simple script which does this:

mogrify -resize "$1x$2^" -gravity center -crop "$1x$2+0+0" "$3"

so that I can call it in this way:

cropresize.sh 110 110 *.png

The problem is that the shell expands the *.png pattern instead of passing it as-is to the script. How can I achieve this (a script, alias or any other equivalent solution is fine)?
|
How do I prevent expansion when I use a pattern as argument to a script?
|
bash;scripting
|
Since the shell performs glob expansion before the arguments are handed over to the command, there's no way I can think of to do it transparently: it's either controlled by the user (quote the parameter) or brute-force (disable globbing completely for your shell with set -o noglob).

You're looking at the problem from the wrong end. Change your script to accept multiple filename arguments:

x=$1
y=$2
shift 2
mogrify -resize "${x}x${y}^" -gravity center -crop "${x}x${y}+0+0" "$@"
|
_webmaster.1130
|
The accepted answer to this question states that AdSense functions rather poorly. What is the alternative then?

My site has about 400 daily visitors at 1.5 pages/visit, so I'm too small to manually approach any sponsor for a personal ad deal. Which AdSense alternative would work best?

By 'best' I mostly mean "make the most money", although usability is also a factor.
|
Best alternative to Adsense for a small website?
|
google adsense
|
There are many alternatives. The best way to find an advertiser is to look at a competitor's website and see who places ads with them. Find out if they use an ad network that is positioned to the same market as yours.

Also, you don't have to get a big-name advertiser to place ads with you. Do a quick search on Google and see who is advertising there. Chances are if they advertise on Google they may place ads with you too. You can really pitch something like that, as long as your content is focused on a niche. Give them a call; you have nothing to lose.

Finally, if you're posting about a product, you might be able to find an affiliate program for it. Do some research and see if you can find any good deals. Amazon is a good place to start.
|
_unix.89964
|
I've installed Skype using dpkg, and when I try to run it, this is what I get:

-bash: /usr/bin/skype: No such file or directory

Which is very strange, since ls -l | grep skype shows this:

-rwxr-xr-x 1 root root 30717480 May 7 01:43 skype

I had similar problems when I installed Firefox, but since I didn't need it all that much, I didn't care. However, I do need Skype on Linux.

Output of my $PATH variable:

/home/max/.rvm/gems/ruby-2.0.0-p247/bin:/home/max/.rvm/gems/ruby-2.0.0-p247@global/bin:/home/max/.rvm/rubies/ruby-2.0.0-p247/bin:/home/max/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

Anyone care to explain? ...Because I'm stunned.
|
Debian Wheezy 7.1 - can't launch Skype or Firefox
|
bash;path;debian
|
It seems that this is the perennial "32-bit application on a 64-bit system" issue. Does the following help?

[root@host]# dpkg --add-architecture i386
[root@host]# apt-get update               # Will take a while
[root@host]# apt-get install ia32-libs    # Will download and install ~100-200 MB of data
[user@host]$ skype &
|
_unix.211173
|
So I want to add 10 seconds to a time. The command to do that came from here. To illustrate:

STARTIME=$(date +%T)
ENDTIME="$STARTIME today + 10 seconds"
CALL=$(echo date -d "$ENDTIME" +'%H:%M:%S')

The problem that I have with this code is that if I echo the $CALL variable, it gives:

date -d 12:51:19 today + 10 seconds +%H:%M:%S

The correct version of this string would look like:

date -d "12:48:03 today + 10 seconds" +'%H:%M:%S'

But if I wrap the variable name in quotes, like so:

STARTIME=$(date +%T)
ENDTIME="$STARTIME today + 10 seconds"
CALL=$(echo date -d '$ENDTIME' +'%H:%M:%S')

...it's interpreted as a string literal, and if you echo it, it gives:

date -d $ENDTIME +%H:%M:%S

So what I need to do is call the variable such that its value is swapped into the command and wrapped with double quotes ("), while avoiding the name of the variable being read as a literal string. I'm extremely confused with this, I miss Python!
|
In bash, how to call a variable and wrap quotes around its value as part of a command without causing the variable name to become a string literal?
|
bash;shell script;quoting;date
|
Just for completeness, you don't need all those "" nor the final $(echo ...). Here's the simplified version of your assignments that produces the same effect:

STARTIME=$(date +%T)
ENDTIME="$STARTIME today + 10 seconds"
CALL="date -d '$ENDTIME' +'%H:%M:%S'"

Note how you don't need to quote when doing var=$(...), but you usually do with var="many words":

a=$(echo 'a b'); echo "$a"      # result: a b

Inside "" a ' has no special significance, and vice-versa, e.g.:

a="that's nice"; echo "$a"      # result: that's nice
a='that is nice'; echo "$a"     # result: that is nice
|
_softwareengineering.328686
|
I am trying to implement a tree data structure that callers of my code edit for me to operate on. The idea is that the caller can hold a reference to nodes from the tree and modify their data (both attributes and children), which triggers an event that prompts my code to update a visual render of the tree.

The problem that I have is that each node also needs to hold some private data (a reference to the corresponding visual element) which I don't want to have polluting the public interface of the node object. The private data needs to be accessible by the main class that is handling the tree object.

I could decide to just stick the private data onto my public interface like this:

class Node
{
    public int NodeData;
    public List<Node> ChildNodes;
    public object PrivateVisualizationData;
}

class TreeControl
{
    public Node RootNode;

    // Use PrivateVisualizationData fields on node objects
}

However, this presents two problems as I see it:

- Any consumers of Node will see the PrivateVisualizationData field, which could be confusing
- External code could modify my private data, breaking code that needs it

How could I design this structure so that each node has custom data associated with it, but the data isn't accessible externally? I would like to avoid the management cost of a separate lookup table if possible, but that may be what I end up doing.
|
Implementing a publicly-editable tree where each node must hold private implementation data
|
object oriented design;access control
|
You could try something like this (in C#, inheritance is written with a colon rather than Java's extends):

class Node
{
    public int NodeData;
    public List<Node> ChildNodes;
}

class TreeControlNode : Node
{
    public object PrivateVisualizationData;
}

class TreeControl
{
    private TreeControlNode RootNode;

    public Node GetRootNode()
    {
        return (Node)RootNode;
    }

    // Use PrivateVisualizationData fields on node objects
}

Inheriting from the Node class allows you to use all of the members and methods of Node while still adding your special data. By returning your RootNode as the Node class, the user should only use the Node members and methods.
|
_codereview.3448
|
Yes, I'm very slowly making my way through Purely Functional Data Structures. So I went through the section on Red Black Trees. What he presents is amazingly concise, except for the fact that he didn't include the delete function. Searching around didn't turn up many functional delete methods; well, only two so far. One in Haskell, the other in Racket (a version of Scheme, I think). The Haskell code seemed rather impenetrable to me, so I went with trying to grok the Racket version from Matt Might. My Scheme experience is pretty rusty, but my Haskell knowledge is nil.

The code below is what I came up with. You can see the complete implementation of RedBlackTree.fs here. I'm sure that there are still problems in this implementation since I haven't fully tested it yet. My main question for the more experienced guys is: does what Matt has laid out here really make sense? And do you think the way I've tried to implement this in F# is going to work?

If you read Matt's blog you'll see a description of how he is approaching the problem. He adds two new colors (double black and negative black) to the tree temporarily during the delete. He also has this notion of a double black leaf, and that is where my main point of confusion lies. After a delete, a double black leaf is sometimes left behind (when the element being deleted has no children). So it appears that the double black leaf isn't temporary. It's not clear to me based on his description if this is what was intended or whether I still have some problem in my logic.

Thanks for taking a look at this,
Derek

// BB = double-black
// NB = negative-black
type Color = R | B | BB | NB

type Tree<'e when 'e :> IComparable> =
    | L
    | BBL // BBL = double-black leaf
    | T of Color * Tree<'e> * 'e * Tree<'e>

module RedBlackTree =
    let empty = L

    ...

    let addBlack c =
        match c with
        | B -> BB
        | R -> B
        | NB -> R
        | BB -> failwith "BB Nodes should only be temporary"

    let subBlack c =
        match c with
        | B -> R
        | R -> NB
        | BB -> B
        | NB -> failwith "NB Nodes should only be temporary"

    let redden n =
        match n with
        | L | BBL -> n
        | T(_,l,v,r) -> T(R,l,v,r)

    let blacken node =
        match node with
        | BBL -> L
        | T(_,l,v,r) -> T(B,l,v,r)
        | _ -> node

    let rec balanceNode clr tl e tr =
        match clr, tl, e, tr with
        | BB,T(R, T(R,a,x,b),y,c), z, d
        | BB,T(R,a,x, T(R,b,y,c)), z, d
        | BB,a,x, T(R, T(R,b,y,c),z,d)
        | BB,a,x, T(R,b,y, T(R,c,z,d))
        | B,T(R, T(R,a,x,b),y,c), z, d
        | B,T(R,a,x, T(R,b,y,c)), z, d
        | B,a,x, T(R, T(R,b,y,c),z,d)
        | B,a,x, T(R,b,y, T(R,c,z,d)) ->
            T((subBlack clr), T(B,a,x,b), y, T(B,c,z,d))
        | BB,a,x,T(NB,T(B,b,y,c),z,(T(B,_,_,_) as d)) ->
            T(B,T(B,a,x,b),y, balanceNode B c z (redden d))
        | BB,T(NB,(T(B,_,_,_) as a),x,T(B,b,y,c)),z,d ->
            T(B, (balanceNode B (redden a) x b), y, T(B,c,z,d))
        | _,_,_,_ -> T(clr,tl,e,tr)

    let bubble t =
        match t with
        | T(c,(T(lc,ll,lv,lr) as lt),v, (T(rc,rl,rv,rr) as rt)) ->
            if lc = BB || rc = BB then
                balanceNode (addBlack c) (T(subBlack lc,ll,lv,lr)) v (T(subBlack rc,rl,rv,rr))
            else t
        | _ -> t

    let isLeaf node =
        match node with
        | L | BBL -> true
        | _ -> false

    let rec getMax node =
        match node with
        | L | BBL -> None
        | T(c,l,v,r) ->
            match (isLeaf l), (isLeaf r) with
            | false, true
            | true, true -> Some(v)
            | _,_ -> getMax r

    let rec remove node =
        match node with
        | L | BBL -> node
        | T(nc, lchild, nv, rchild) ->
            match (isLeaf lchild), (isLeaf rchild) with
            | true, true ->
                match nc with
                | R -> L
                | B -> BBL
                | _ -> failwith "Illegal black node"
            | true, false ->
                match nc, rchild with
                | R, T(rc,rl,rv,rr) -> rchild
                | B, T(rc,rl,rv,rr) ->
                    match rc with
                    | R -> T(B,rl,rv,rr)
                    | B -> T(addBlack rc,rl,rv,rr)
                    | _ -> failwith "Illegal black node"
                | _ -> failwith "Illegal black node"
            | false, true ->
                match nc, lchild with
                | R, T(lc,ll,lv,lr) -> lchild
                | B, T(lc,ll,lv,lr) ->
                    match lc with
                    | R -> T(B,ll,lv,lr)
                    | B -> T(addBlack lc,ll,lv,lr)
                    | _ -> failwith "Illegal black node"
                | _ -> failwith "Illegal black node"
            | false, false ->
                let max = (getMax lchild).Value
                let t = removeMax lchild
                bubble (T(nc,t,max,rchild))

    and removeMax node =
        match node with
        | T(c,l,v,r) ->
            if isLeaf r then remove node
            else bubble (T(c,l,v, removeMax r))
        | _ -> node

    let delete key node =
        let rec del (key : IComparable) node =
            match node with
            | T(c,l,v,r) ->
                match key.CompareTo v with
                | -1 -> bubble (T(c,(del key l),v,r))
                | 0 -> remove node
                | _ -> bubble (T(c,l,v,(del key r)))
            | _ -> node
        blacken (del key node)
|
Deleting from Red Black Tree in F#
|
f#;functional programming
| null |
_webapps.108935
|
I am working with this script:

/**
 * Automatically sorts the 1st column (not the header row) Ascending.
 */
function onEdit(event){
  var sheet = event.source.getActiveSheet();
  var editedCell = sheet.getActiveCell();

  var columnToSortBy = 3;
  var tableRange = "A2:T99"; // What to sort.

  if(editedCell.getColumn() == columnToSortBy){
    var range = sheet.getRange(tableRange);
    range.sort( { column : columnToSortBy, ascending: true } );
  }
}

I need it to only run on Sheet 2, Sheet 3 and Sheet 4, and not on every sheet in the workbook. I tried several variations of getSheets that didn't work.
|
Script to work on certain sheetname.... .getSheetByName('Sheet 1'); didn't work. for multiple sheets
|
google spreadsheets;google apps script
|
The name of the current sheet is obtained by sheet.getSheetName(). You want to make sure that this name is one of "Sheet 2", "Sheet 3", "Sheet 4". A concise way to express this condition is to have an array sheetList with the names that need to be sorted, and check whether the name of the active sheet is on the list. The method indexOf does this check: it returns -1 if the name is not on the list.

function onEdit(event) {
  var sheetList = ["Sheet 2", "Sheet 3", "Sheet 4"];
  var sheet = event.source.getActiveSheet();
  if (sheetList.indexOf(sheet.getSheetName()) != -1) {
    var editedCell = sheet.getActiveCell();
    var columnToSortBy = 3;
    var tableRange = "A2:T99";
    if (editedCell.getColumn() == columnToSortBy) {
      var range = sheet.getRange(tableRange);
      range.sort( { column : columnToSortBy, ascending: true } );
    }
  }
}
|
_unix.16623
|
How can I set a file to be executable only to other users, but not readable/writable? The reason for this is that I'm executing something with my username but I don't want to give out the password. I tried:

chmod 777 testfile
chmod a=x
chmod ugo+x

I still get "permission denied" when executing as another user.
|
File permission execute only
|
linux;security;permissions;executable
|
You need both read and execute permissions on a script to be able to execute it. If you can't read the contents of the script, you aren't able to execute it either.

tony@matrix:~$ ./hello.world
hello world
tony@matrix:~$ ls -l hello.world
-rwxr-xr-x 1 tony tony 17 Jul 13 22:22 hello.world
tony@matrix:~$ chmod 100 hello.world
tony@matrix:~$ ls -l hello.world
---x------ 1 tony tony 17 Jul 13 22:22 hello.world
tony@matrix:~$ ./hello.world
bash: ./hello.world: Permission denied
|
_unix.260211
|
I have been a Debian user for several years but would now like to try openSUSE as well, in a dual boot environment. Here are some thoughts:

Two root partitions. To avoid the two installations stepping on each other, I am thinking of keeping two root partitions of 50 GB each.

Two home partitions, again about 50 GB each. Since application releases from the two distros are going to be different (KDE 4 versus 5), it makes sense to have separate home directories. I don't know if all applications from the two distros are compatible with one another; I would rather be cautious than sorry.

One big user data partition. My concern is more around user data, which in a single boot setup resides in the /home directory. It seems evident that a separate partition has to be created. However, it would be convenient if my personal data that is not affected by the KDE release continued to remain accessible from /home. Say the mount point for the data partition is /mnt/data; then I would create symlinks like my-docs in both home directories. This scheme has the obvious problem of requiring a symlink every time I want to work in a new directory under /home.

Any ideas from the good folks here will be appreciated.

EDIT: The other issue clearly does not answer the question here. There are plenty of resources available on how to partition and how to dual-boot. The query here is about the partitioning scheme (number of partitions, their size, role) given some specific ideas. The quick marking as duplicate gives the impression that the question has not been properly read.
|
What is a suitable dual boot partitioning scheme (Debian and openSUSE Leap)
|
debian;partition;opensuse
| null |
_scicomp.1692
|
I've got a system of ordinary differential equations (7 equations, and ~30 parameters governing their behavior) as part of a mathematical model of disease transmission. I'd like to find the steady states for those equations. Changing "dx/dt = rest of the equation" to "0 = equation" for each of the equations makes it a straightforward algebra problem. This could be done by hand, but I'm ridiculously bad at that kind of computation.

I've tried using Mathematica, which can handle smaller versions of this problem (see here), but Mathematica is grinding to a halt on this problem. Is there a more efficient/effective way to approach this? A more efficient symbolic math system? Other suggestions?

A few updates (March 21st):

- The goal is indeed to solve them symbolically; the numerical answers are nice, but for the moment the end goal is the symbolic version.
- There is at least one equilibrium. I haven't actually sat down and proved this, but by design it should have at least one trivial equilibrium wherein no one is infected at the start. There may not be anything besides that, but that would make me as content as anything else.
- Below is the actual set of equations being talked about.

In summary, I'm looking for symbolic expressions for the solutions of a system of 7 quadratic equations in 7 variables.
|
Symbolic solution of a system of 7 nonlinear equations
|
ode;symbolic computation;epidemiology
|
It looks like the equations you're dealing with are all polynomial after clearing denominators. That's a good thing (transcendental functions are often a little harder to deal with algebraically). However, it's not a guarantee that your equations have a closed-form solution. This is an essential point that many people don't really get, even if they know it in theory, so it bears restating: there are fairly simple systems of polynomial equations for which there is no way of giving the solutions in terms of ($n$th) roots etc. A famous example (in one variable) is $x^5-x+1=0$. See also this wikipedia page.

Having said that, of course there are also systems of equations that can be solved, and it's worthwhile to check if your system is one of those. And even if your system cannot be solved, it might still be possible to find a form for your system of equations that is simpler, in some sense. For example, find one equation involving only the first variable (even if it cannot be solved algebraically), then a second equation involving only the first and second variable, etc. There are a few competing theories for how to find such normal forms of polynomial systems; the most well-known is Groebner basis theory, and a competing one is the theory of regular chains.

In the computer algebra system Maple (full disclosure: I work for them) both of them are implemented. The solve command typically calls the Groebner basis method, I believe, and that quickly grinds to a halt on my laptop. I tried running the regular chains computation and it takes longer than I have patience for, but doesn't seem to blow up as badly memory-wise. In case you're interested, the help page for the command I used is here, and here is the code I used:

restart;
sys, vars := {theta*H - rho_p*sigma_p* Cp*(Us/N) - rho_d*sigma_d*D*(Us/N)*rho_a*sigma_a* Ca*(Us/N) = 0,
  rho_p*sigma_p*Cp*(Us/N) + rho_d*sigma_d* D*(Us/N)*rho_a*sigma_a*Ca*(Us/N) + theta*H = 0,
  (1/omega)*Ua - alpha*Up - rho_p*psi_p* Up*(H/N) - Mu_p*sigma_p*Up*(Cp/N) - Mu_a*sigma_a*Up*(Ca/N) - Theta_p* Up + Nu_up*(Theta_*M + Zeta_*D) = 0,
  alpha*Up - (1/omega)*Ua - rho_a*psi_a* Ua*(H/N) - Mu_p*sigma_p*Ua*(Cp/N) - Mu_a*sigma_a*Ua*(Ca/N) - Theta_a* Ua + Nu_ua*(Theta_*M + Zeta_*D) = 0,
  (1/omega)*Ca + Gamma_*Phi_*D + rho_p*psi_p* Up*(H/N) + Mu_p*sigma_p*Up*(Cp/N) + Mu_a*sigma_a*Up*(Ca/N) - alpha*Cp - Kappa_* Cp - Theta_p*Cp + Nu_cp*(Theta_*M + Zeta_*D) = 0,
  alpha*Cp + Gamma_*(1 - Phi_)*D + rho_a*psi_a* Ua*(H/N) + Mu_p*sigma_p*Ua*(Cp/N) + Mu_a*sigma_a*Ua*(Ca/N) - (1/omega)* Ca - Kappa_*Tau_*Ca - Theta_a*Ca + Nu_ca*(Theta_*M + Zeta_*D) = 0,
  Kappa_*Cp + Kappa_*Tau_*Ca - Gamma_*Phi_* D - Gamma_*(1 - Phi_)*D - Zeta_*D + Nu_d*(Theta_*M + Zeta_*D) = 0,
  Us + H + Up + Ua + Cp + Ca + D = 0,
  Up + Ua + Cp + Ca + D = 0}, {Us, H, Up, Ua, Cp, Ca, D, N, M}:
sys := subs(D = DD, sys):
vars := subs(D = DD, vars):
params := indets(sys, name) minus vars:
ineqs := [theta > 0, rho_p > 0, sigma_p > 0, rho_d > 0, sigma_d > 0, rho_a > 0, sigma_a > 0, omega > 0, alpha > 0, psi_p > 0, Mu_p > 0, Mu_a > 0, Theta_p > 0, Nu_up > 0, Theta_ > 0, Zeta_ > 0, psi_a > 0, Theta_a > 0, Nu_ua > 0, Gamma_ > 0, Phi_ > 0, Kappa_ > 0, Nu_cp > 0, Tau_ > 0, Nu_ca > 0]:
with(RegularChains):
R := PolynomialRing([vars[], params[]]):
sys2 := map(numer, map(lhs - rhs, normal([sys[]]))):
sol := LazyRealTriangularize(sys2, [], map(rhs, ineqs), [], R);
|
_codereview.72155
|
I need to cut down the run time of this query. Currently it's taking 45 minutes. Is there something I can change in the table or the query to allow this to run faster?

SELECT SUM(B.PRICE), C.ST_FILE
FROM TAD1D.ST_EUCM AS B, TAD1D.ST_DSA_TRAN AS C, TAD1A.TIBHLD T
WHERE C.ST_TRAN_FILE=B.ST_TRAN_FILE
  AND C.ST_IND='Y'
  AND C.ST_SRC_SYS_CD='HANN'
  AND C.ST_TBL_NAME='ST_EUCM'
  AND B.PARTNUMB=T.ACCT_NUM
  AND B.FVSRVCCODE=T.SRV_CD
GROUP BY C.ST_FILE
EXCEPT
SELECT sum(A.ACCUM_C), D.ST_FILE
from TAD1A.TIBHLD_DLY_VAL A, TAD1D.ST_DSA_TRAN AS D
where A.SRC_SYS_CD = 'HANN'
  AND D.ST_SRC_SYS_CD=A.SRC_SYS_CD
  AND D.ST_TBL_NAME='ST_EUCM'
  AND DATE(A.HLD_VAL_DT)=DATE(D.ST_FILE)
GROUP BY D.ST_FILE
WITH UR

There are no indexes on either table. Please provide a tip/solution in the event I am able to change the table structure or add an index (I may not).
|
Summing the prices from Transaction files
|
sql;database;db2
|
Nitpicks

There is a slight amount of inconsistency in your capitalization of keywords: SUM() / sum() and WHERE / where. Otherwise, it's pretty consistent throughout!

Aliases

You use the following aliases: B, C, T. There is no meaning or explanation of what those mean. You should use meaningful aliases, especially where your table names don't really help, such as in your case.

Old-style JOIN

This:

FROM TAD1D.ST_EUCM AS B, TAD1D.ST_DSA_TRAN AS C, TAD1A.TIBHLD T
WHERE C.ST_TRAN_FILE=B.ST_TRAN_FILE
  AND C.ST_IND='Y'
  AND C.ST_SRC_SYS_CD='HANN'
  AND C.ST_TBL_NAME='ST_EUCM'
  AND B.PARTNUMB=T.ACCT_NUM
  AND B.FVSRVCCODE=T.SRV_CD

...is deprecated pre-ANSI-92 syntax. It should not be used; rather, you should favor explicit JOIN syntax, like such:

FROM TAD1D.ST_EUCM AS B
INNER JOIN TAD1D.ST_DSA_TRAN AS C
    ON C.ST_TRAN_FILE = B.ST_TRAN_FILE
INNER JOIN TAD1A.TIBHLD T
    ON B.PARTNUMB = T.ACCT_NUM
    AND B.FVSRVCCODE = T.SRV_CD
WHERE C.ST_IND = 'Y'
  AND C.ST_SRC_SYS_CD = 'HANN'
  AND C.ST_TBL_NAME = 'ST_EUCM'

Notice I added spaces around your = operators, as it makes the code easier to read. You would also want to use similar JOIN syntax in your subquery.

Indexes

You stated:

"There are no indexes on either table. Please provide a tip/solution in the event I am able to change the table structure or add an index (I may not)."

And to that, I say resoundingly: YES! Add indexes! (A sketch of possible starting points follows below.)

But before you do that, look carefully at your query execution plan and see where the most expensive steps are. That should give you a clue as to what is happening. I suspect there are a few nasty nested loops in there that are bogging it down, and depending on your RDBMS there are some optimizations available, and there may be ways to change the execution manually.
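As a concrete illustration only (the right column choices depend on the access plan, and the index names here are invented), indexes covering the join and filter predicates in the posted query might look like this in DB2:

CREATE INDEX TAD1D.IX_ST_DSA_TRAN_01
    ON TAD1D.ST_DSA_TRAN (ST_SRC_SYS_CD, ST_TBL_NAME, ST_IND, ST_TRAN_FILE);

CREATE INDEX TAD1D.IX_ST_EUCM_01
    ON TAD1D.ST_EUCM (ST_TRAN_FILE, PARTNUMB, FVSRVCCODE);

CREATE INDEX TAD1A.IX_TIBHLD_01
    ON TAD1A.TIBHLD (ACCT_NUM, SRV_CD);

Check the plan again afterward (e.g. with EXPLAIN) to confirm the optimizer actually uses them.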
|
_scicomp.8573
|
While reading many research papers comparing parallel implementations of algorithms on different machines/architectures, I have noticed that the performance comparison is always listed in terms of GFlop/s and not the actual wall-clock time for the run in seconds. I am curious why this convention is used. My only guess is that since every company advertises its device as having a certain peak flop count per second, such research papers investigate how much of that potential has been achieved by listing the performance of the particular application at hand in GFlop/s. Is this correct?

Also, say the performance of an $m$ x $n$ matrix -- $n$ x $1$ vector multiply has been stated as 4 GFlop/s. Is it reasonable to obtain the wall-clock time in seconds by the following formula?
$$\frac{m(2n-1)}{4 \times 10^9} \hspace{3mm} \text{seconds}$$
where $m(2n-1)$ is the number of floating point operations for the matrix-vector multiplication.
|
Why performance is given in Gflop/s rather than actual time in seconds
|
parallel computing;performance
| null |
_cogsci.5290
|
Human behavior towards other living beings can be classified into three categories:

altruistic: actively caring for the well-being of others
cruel: deliberately inflicting pain and suffering
neutral: not affecting the life and well-being of others

You cannot always care for the well-being of everyone around you. Therefore active altruism cannot be a general guiding principle of behavior. But you can always choose not to be actively cruel. Lacking a better name, I will call this type of behavior, which includes both altruistic and neutral behavior, "non-cruel", as suggested by Nick Stauner.

What I would like to know is whether there are predictors that differentiate between habitually cruel persons and those that are usually either actively altruistic or at least show a passive lack of deliberate cruelty and knowing acquiescence of suffering.

Which personality traits are responsible for habitually non-cruel behavior?

Notes: I am interested in traits in the traditional sense (patterns of behavior, thought, and emotion) as well as neural correlates of non-cruelty.
|
Which personality traits cause non-cruelty?
|
personality;altruism;aggression
| null |
_cs.28433
|
In particular POS tagging, dependency and constituent parsing. This is not really my field of study, but I would like to be able to make informed claims on what precision current top systems achieve (and what the top systems for each task are). This paper on a (high quality) system (http://arxiv.org/abs/1103.0398) that we have used in experiments cites benchmarks from 2000, 2003 and 2005. But I don't know if more recent systems have outperformed those, or whether new benchmark tasks exist.
|
What are recent, high-quality surveys on NLP topics?
|
reference request;natural language processing
| null |
_datascience.16985
|
From https://github.com/google/deepdream/blob/master/dream.ipynb

def objective_L2(dst):      # Our training objective. Google has since released a way to load
    dst.diff[:] = dst.data  # arbitrary objectives from other images. We'll go into this later.

def make_step(net, step_size=1.5, end='inception_4c/output',
              jitter=32, clip=True, objective=objective_L2):
    '''Basic gradient ascent step.'''

    src = net.blobs['data']  # input image is stored in Net's 'data' blob
    dst = net.blobs[end]

    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2)  # apply jitter shift

    net.forward(end=end)
    objective(dst)  # specify the optimization objective
    net.backward(start=end)
    g = src.diff[0]

    # apply normalized ascent step to the input image
    src.data[:] += step_size/np.abs(g).mean() * g

    src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2)  # unshift image

    if clip:
        bias = net.transformer.mean['data']
        src.data[:] = np.clip(src.data, -bias, 255-bias)

If I understand what is going on correctly, the input image in net.blobs['data'] is inserted into the NN up to the layer end. Once the forward pass is complete up to end, it calculates how "off" the blob at end is from "something".

Questions

1. What is this "something"? Is it dst.data? I stepped through a debugger and found that dst.data was just a matrix of zeros right after the assignment and then filled with values after the backward pass.

2. Anyway, assuming it finds how "off" the result of the forward pass is, why does it try to do a backwards propagation? I thought the point of deep dream wasn't to further train the model but to morph the input image into whatever the original model's layer represents.

3. What exactly does src.data[:] += step_size/np.abs(g).mean() * g do? It seems like it applies whatever calculation was done above to the original image. Is this line what actually morphs the image?

Links that I have already read through

https://stackoverflow.com/a/31028871/2750819
I would be interested in what the author of the accepted answer meant by "we take the original layer blob and enchance signals in it. What does it mean, I don't know. Maybe they just multiply the values by coefficient, maybe something else."

http://www.kpkaiser.com/machine-learning/diving-deeper-into-deep-dreams/
In this blog post the author comments next to src.data[:] += step_size/np.abs(g).mean() * g: "get closer to our target data." I'm not too clear what "target data" means here.

Note: I'm cross-posting this from https://stackoverflow.com/q/40690099/2750819 as I was recommended to in a comment.
|
Clarification wanted for make_step function of Google's deep dream script
|
machine learning;python;neural network;deep learning
|
"What is this something? Is it dst.data? I stepped through a debugger and found that dst.data was just a matrix of zeros right after the assignment and then filled with values after the backward pass."

Yes. dst.data is the working contents of the layer inside the CNN that you are trying to maximise. The idea is that you want to generate an image that has a high neuron activation in this layer by making changes to the input. If I understand this correctly though, it should be populated immediately after the forward pass here: net.forward(end=end)

"Anyways, assuming it finds how off the result of the forward pass is, why does it try to do a backwards propagation? I thought the point of deep dream wasn't to further train the model but morph the input image into whatever the original model's layer represents."

It is not training. However, as with training, we cannot directly measure how far the source that we want to change is from an ideal value, so instead we calculate how to move toward a better value by taking gradients. Back propagation is the usual method for figuring out gradients to parameters in the CNN. There are some main differences with training:

- Instead of trying to minimise a cost, we want to increase a metric which summarises how excited the target layer is. We are not trying to find any stationary point (e.g. the maximum possible value); instead, Deep Dream usually just stops arbitrarily after a fixed number of iterations.
- We back propagate further than usual, all the way to the input layer. That's not something you do normally during training.
- We don't use any of the gradients to the weights. The weights in the neural network are never changed. We are only interested in the gradients at the input, but to get them we need to calculate all the others first.

"What exactly does src.data[:] += step_size/np.abs(g).mean() * g do? It seems like applying whatever calculation was done above to the original image. Is this line what actually morphs the image?"

It takes a single step in the image data along the gradients that we have calculated will trigger more activity in the target layer (see the small illustration below). Yes, this alters the input image. It should be repeated a few times, according to how extreme you want the Deep Dream effect to be.
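To make the normalization in that line concrete, here is a tiny self-contained numpy illustration (not the DeepDream code itself): dividing by np.abs(g).mean() makes the mean absolute change to the data exactly step_size, regardless of how large or small the raw gradient is.

import numpy as np

x = np.zeros(4)                         # stand-in for src.data
g = np.array([0.1, -0.2, 0.4, 0.5])     # stand-in for the input gradient
step_size = 1.5

x += step_size / np.abs(g).mean() * g   # the update from make_step

print(np.abs(x).mean())                 # 1.5 -- step magnitude is fixed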
|
_softwareengineering.105627
|
Is it possible for the author to register a logo or the name of his/her application when it is open source because it uses a GPL library (for example)? The application uses the library but has its own features; that is, it's not a modification of the library.

So everyone can see the source code, downloading an anonymous non-branded version from SourceForge, but no one can use the logo or the name, and so the author can sell it from his/her website (for example) to non-expert users, only after payment.

Is he/she obliged to give out the source code directly, or is it enough that the non-branded version is available on SourceForge or another official repository?
|
Can I brand my open-source application?
|
open source;gpl;lgpl
| null |
_unix.367346
|
I'm having a problem with ssh keys on my server. Whenever I try to connect, it tells me this (I want to use the key to authenticate myself instead of typing password, but it still asks me for password after.) From my understanding from other articles, it asks me for my password because the private key authentication failed or something? I ran ssh -vvv user@ip and got this:$ ssh -vvv user@ipOpenSSH_7.4p1, OpenSSL 1.0.2k 26 Jan 2017debug2: resolving ip port 22debug2: ssh_connect_direct: needpriv 0debug1: Connecting to ip [ip] port 22.debug1: Connection established.debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_rsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/Aericio/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.4debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u3debug1: match: OpenSSH_6.7p1 Debian-5+deb8u3 pat OpenSSH* compat 0x04000000debug2: fd 3 setting O_NONBLOCKdebug1: Authenticating to ip:22 as 'user'debug3: hostkeys_foreach: reading file /home/Aericio/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/Aericio/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keys from ipdebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521debug3: send packet: type 20debug1: SSH2_MSG_KEXINIT sentdebug3: receive packet: type 20debug1: SSH2_MSG_KEXINIT receiveddebug2: local client KEXINIT proposaldebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-cdebug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsadebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbcdebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbcdebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: compression ctos: none,[email protected],zlibdebug2: compression 
stoc: none,[email protected],zlibdebug2: languages ctos:debug2: languages stoc:debug2: first_kex_follows 0debug2: reserved 0debug2: peer server KEXINIT proposaldebug2: KEX algorithms: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1debug2: host key algorithms: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected]: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected]: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: compression ctos: none,[email protected]: compression stoc: none,[email protected]: languages ctos:debug2: languages stoc:debug2: first_kex_follows 0debug2: reserved 0debug1: kex: algorithm: [email protected]: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: nonedebug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: nonedebug3: send packet: type 30debug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug3: receive packet: type 31debug1: Server host key: ecdsa-sha2-nistp256 SHA256:hTCFRXSL6Pn2ahO8AzocQsLS+VZP26OnZm/WvOWqq1Idebug3: hostkeys_foreach: reading file /home/Aericio/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/Aericio/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keys from ipdebug1: Host 'ip' is known and matches the ECDSA host key.debug1: Found key in /home/Aericio/.ssh/known_hosts:1debug3: send packet: type 21debug2: set_newkeys: mode 1debug1: rekey after 134217728 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug3: receive packet: type 21debug1: SSH2_MSG_NEWKEYS receiveddebug2: set_newkeys: mode 0debug1: rekey after 134217728 blocksdebug2: key: /home/Aericio/.ssh/id_rsa (0x0)debug2: key: /home/Aericio/.ssh/id_dsa (0x0)debug2: key: /home/Aericio/.ssh/id_ecdsa (0x0)debug2: key: /home/Aericio/.ssh/id_ed25519 (0x0)debug3: send packet: type 5debug3: receive packet: type 6debug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug3: send packet: type 50debug3: receive packet: type 51debug1: Authentications that can continue: publickey,passworddebug3: start over, passed a different list publickey,passworddebug3: preferred publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Trying private key: /home/Aericio/.ssh/id_rsadebug3: no such identity: /home/Aericio/.ssh/id_rsa: No such file or directorydebug1: Trying private key: /home/Aericio/.ssh/id_dsadebug3: no such identity: /home/Aericio/.ssh/id_dsa: No such file or directorydebug1: Trying private key: /home/Aericio/.ssh/id_ecdsaEnter passphrase for key '/home/Aericio/.ssh/id_ecdsa':debug3: sign_and_send_pubkey: ECDSA SHA256:/nMfW17zQ9zoH1UCzlSmLWtN4Mh/ST62SE5sB9B7D24debug3: send packet: type 50debug2: we sent a publickey packet, wait for replydebug3: receive packet: type 51debug1: Authentications that can continue: publickey,passworddebug1: 
Trying private key: /home/Aericio/.ssh/id_ed25519debug3: no such identity: /home/Aericio/.ssh/id_ed25519: No such file or directorydebug2: we did not send a packet, disable methoddebug3: authmethod_lookup passworddebug3: remaining preferred: ,passworddebug3: authmethod_is_enabled passworddebug1: Next authentication method: passworduser@ip's password:My configuration file in sshd_config is:# Package generated configuration file# See the sshd_config(5) manpage for details# What ports, IPs and protocols we listen forPort 22# Use these options to restrict which interfaces/protocols sshd will bind to#ListenAddress ::#ListenAddress 0.0.0.0Protocol 2# HostKeys for protocol version 2HostKey /etc/ssh/ssh_host_rsa_keyHostKey /etc/ssh/ssh_host_dsa_keyHostKey /etc/ssh/ssh_host_ecdsa_keyHostKey /etc/ssh/ssh_host_ed25519_key#Privilege Separation is turned on for securityUsePrivilegeSeparation yes# Lifetime and size of ephemeral version 1 server keyKeyRegenerationInterval 3600ServerKeyBits 1024# LoggingSyslogFacility AUTHLogLevel INFO# Authentication:LoginGraceTime 60PermitRootLogin noStrictModes yesRSAAuthentication yesPubkeyAuthentication yesAuthorizedKeysFile %h/.ssh/authorized_keys# Don't read the user's ~/.rhosts and ~/.shosts filesIgnoreRhosts yes# For this to work you will also need host keys in /etc/ssh_known_hostsRhostsRSAAuthentication no# similar for protocol version 2HostbasedAuthentication no# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthenticationIgnoreUserKnownHosts yes# To enable empty passwords, change to yes (NOT RECOMMENDED)PermitEmptyPasswords no# Change to yes to enable challenge-response passwords (beware issues with# some PAM modules and threads)ChallengeResponseAuthentication no# Change to no to disable tunnelled clear text passwordsPasswordAuthentication yes# Kerberos options#KerberosAuthentication no#KerberosGetAFSToken no#KerberosOrLocalPasswd yes#KerberosTicketCleanup yes# GSSAPI options#GSSAPIAuthentication no#GSSAPICleanupCredentials yesX11Forwarding noX11DisplayOffset 10PrintMotd noPrintLastLog yesTCPKeepAlive no#MaxStartups 10:30:60#Banner /etc/issue.net# Allow client to pass locale environment variablesAcceptEnv LANG LC_*Subsystem sftp /usr/lib/openssh/sftp-server# Set this to 'yes' to enable PAM authentication, account processing,# and session processing. If this is enabled, PAM authentication will# be allowed through the ChallengeResponseAuthentication and# PasswordAuthentication. Depending on your PAM configuration,# PAM authentication via ChallengeResponseAuthentication may bypass# the setting of PermitRootLogin without-password.# If you just want the PAM account and session checks to run without# PAM authentication, then enable this but set PasswordAuthentication# and ChallengeResponseAuthentication to 'no'.UsePAM yesI've been trying to do other thing. Whenever I disabled the set passwordauthentication and pam to no, it tells me:$ ssh user@ipEnter passphrase for key '/home/Aericio/.ssh/id_ecdsa':Permission denied (publickey).FYI: I am using cygwin64 to use ssh. Thanks.
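One way to cut the noise while retesting (a sketch, using the same user/host placeholders as above) is to point ssh at the one key that actually exists and ignore the others:

    ssh -vvv -i ~/.ssh/id_ecdsa -o IdentitiesOnly=yes user@ip

With IdentitiesOnly=yes the client skips the missing id_rsa/id_dsa/id_ed25519 files, so the log shows only the ecdsa attempt.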
|
SSH Key doesn't work
|
ssh
|
This is the relevant part of your log file:

    debug1: Trying private key: /home/Aericio/.ssh/id_ecdsa
    Enter passphrase for key '/home/Aericio/.ssh/id_ecdsa':
    debug3: sign_and_send_pubkey: ECDSA SHA256:/nMfW17zQ9zoH1UCzlSmLWtN4Mh/ST62SE5sB9B7D24
    debug3: send packet: type 50
    debug2: we sent a publickey packet, wait for reply
    debug3: receive packet: type 51

SSH found one key file (id_ecdsa): it asked you for the passphrase to unlock the key locally, then offered the key to the destination server. (The passphrase itself is never sent to the server; it only decrypts the private key on your machine.) The destination server didn't trust the key, so ssh moved on to the next authentication method.

What you need to do is copy the public key into the authorized_keys file on the destination server, most easily with something like the following:

    ssh-copy-id <user>@<destination server>

If you have more than one key, or one specific identity to copy over, you can add -i .ssh/id_ecdsa.pub to the command.
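If ssh-copy-id isn't available (it sometimes isn't on Cygwin installs), a manual equivalent is the following sketch — the key path matches the one in your log, so adjust as needed:

    cat ~/.ssh/id_ecdsa.pub | ssh user@ip 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'

Note that your server has StrictModes yes, so sshd will silently ignore authorized_keys if that file, ~/.ssh, or your home directory is group- or world-writable; the chmod calls above cover the common case.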
|
_cstheory.31243
|
I am supposed to write a small paper about data flow analysis (DFA) in OOP for a CS theory class, but I am required to connect DFA to axiomatic and denotational semantics. I have read a few resources about axiomatic/denotational semantics, but none of them talks specifically about their relation to data flow analysis. Can you guide me on this, or at least point me to a book, chapter, or paper that covers it?
|
What is the relation/difference between axiomatic and denotational semantics on one side, and data flow analysis (DFA) on the other side?
|
semantics;dfa;denotational semantics
| null |
_unix.382587
|
When I run p11tool --list-tokens on Ubuntu 17, the certificates from my USB-connected Sitecom smart card reader don't show up in the results. I was hoping to use p11tool to get the URL of the certificate I need to connect to my company's VPN using OpenConnect.

I am confident the card reader is detected and works. I used the PCSC-lite and CCID open source drivers from https://pcsclite.alioth.debian.org/. When I use their verification tool (PCSC-perl) it shows me "Card state: Card inserted, Shared Mode", and it responds immediately when I take the card out by telling me:

    Card state: Card removed

I installed opensc and opensc-pkcs11 (https://github.com/OpenSC/OpenSC/wiki). I have two opensc.module files on my system, because I wasn't sure where to put them:

    /etc/pkcs11/modules/opensc.module
    /usr/share/p11-kit/modules/opensc.module

Both files' contents are the same:

    module:/usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so

I verified that opensc-pkcs11.so exists.

Any idea where to go from here? My smart card isn't listed in the output from p11tool --list-tokens, so I can't get the certificate URL that I need to connect to the VPN using OpenConnect. Output from p11tool:

    Token 0:
        URL: pkcs11:model=p11-kit-trust;manufacturer=PKCS%2311%20Kit;serial=1;token=System%20Trust
        Label: System Trust
        Type: Trust module
        Manufacturer: PKCS#11 Kit
        Model: p11-kit-trust
        Serial: 1
        Module: p11-kit-trust.so
    Token 1:
        URL: pkcs11:model=1.0;manufacturer=Gnome%20Keyring;serial=1%3aSSH%3aHOME;token=SSH%20Keys
        Label: SSH Keys
        Type: Generic token
        Manufacturer: Gnome Keyring
        Model: 1.0
        Serial: 1:SSH:HOME
        Module: gnome-keyring-pkcs11.so
    Token 2:
        URL: pkcs11:model=1.0;manufacturer=Gnome%20Keyring;serial=1%3aSECRET%3aMAIN;token=Secret%20Store
        Label: Secret Store
        Type: Generic token
        Manufacturer: Gnome Keyring
        Model: 1.0
        Serial: 1:SECRET:MAIN
        Module: gnome-keyring-pkcs11.so
    Token 3:
        URL: pkcs11:model=1.0;manufacturer=Gnome%20Keyring;serial=1%3aUSER%3aDEFAULT;token=Gnome2%20Key%20Storage
        Label: Gnome2 Key Storage
        Type: Generic token
        Manufacturer: Gnome Keyring
        Model: 1.0
        Serial: 1:USER:DEFAULT
        Module: gnome-keyring-pkcs11.so
    Token 4:
        URL: pkcs11:model=1.0;manufacturer=Gnome%20Keyring;serial=1%3aXDG%3aDEFAULT;token=User%20Key%20Storage
        Label: User Key Storage
        Type: Generic token
        Manufacturer: Gnome Keyring
        Model: 1.0
        Serial: 1:XDG:DEFAULT
        Module: gnome-keyring-pkcs11.so

Below is the output of lsusb. When I remove the card reader from the USB port the Realtek Semiconductor entry disappears, so I assume the card reader is recognized by the OS. However, I am not sure why it is called Realtek since it is a Sitecom reader, but I assume Realtek is just the chip manufacturer. Also, it is not a storage device. When I use the card on Windows it will sometimes try to assign a drive letter to the card and access it, which fails. Maybe the card reader inaccurately reports that it is a storage device?

    me@me:/etc/pkcs11/modules$ lsusb
    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 004: ID 8087:0a2b Intel Corp.
    Bus 001 Device 003: ID 0bda:0169 Realtek Semiconductor Corp. Mass Storage Device
    Bus 001 Device 005: ID 1bcf:2b95 Sunplus Innovation Technology Inc.
    Bus 001 Device 002: ID 046d:c52f Logitech, Inc. Unifying Receiver
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
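A way to isolate the problem (a sketch; the module path is taken from the opensc.module contents above, so adjust if yours differs) is to load the OpenSC module directly, bypassing the p11-kit registration:

    p11tool --provider /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so --list-tokens
    pkcs11-tool --module /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so --list-slots

If the token appears this way but not via a plain p11tool --list-tokens, the problem lies in the p11-kit module registration rather than in OpenSC or the reader itself.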
|
p11tool doesn't list my smart card on Ubuntu 17
|
certificates;openconnect;smartcard
| null |