Columns (with string-length ranges):
id: string, length 5-27
question: string, length 19-69.9k
title: string, length 1-150
tags: string, length 1-118
accepted_answer: string, length 4-29.9k
_unix.183216
I have multiple files, all named seperate1, seperate2, etc. How do I rename them all to have the extension .csv?
Rename multiple files with no extensions?
rename
null
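No accepted answer is recorded for this row. As an illustration only (not part of the original post), a minimal Python sketch of the renaming the question asks for; the directory and the no-extension check are assumptions:

from pathlib import Path

# Append .csv to every file matching seperate* in the current directory.
# Assumes the files have no extension yet; adjust the glob and directory as needed.
for path in Path(".").glob("seperate*"):
    if path.is_file() and path.suffix == "":
        path.rename(path.with_suffix(".csv"))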
_cs.16672
Let $\phi$ be a 3-CNF formula over variables $x_1,x_2,\ldots,x_n$. Every variable $x_i$, $i \in [n]$, occurs equally many times as a positive literal and as a negative literal in $\phi$. Is it NP-complete to decide the satisfiability of such a formula? Assuming it is, I would be interested in knowing if it has a special name. Has it perhaps also been investigated somewhere?
3-SAT where variables occur equally many times as a positive literal and as a negative literal
complexity theory;np complete;satisfiability;decision problem
The problem has been studied as the $m$P$n$N-SAT problem by Ryo Yoshinaka in Higher-Order Matching in the Linear Lambda Calculus (in 16th International Conference, RTA 2005, Nara, Japan, April 19-21, 2005, Proceedings). $m$P$n$N-SAT is the SAT problem in which each positive literal occurs exactly $m$ times and each negative literal exactly $n$ times. It has been shown that even $2$P$1$N-SAT is NP-complete. Note that $1$P$1$N-SAT is in P, since each variable can easily be removed by a single step of resolution, which doesn't increase the number of clauses in the formula.
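As an illustration of that last remark (not part of the original answer), a minimal Python sketch of one resolution step in the 1P1N case: the unique positive and negative occurrences of a variable are merged into a single resolvent, so the clause count never grows.

# A clause is a frozenset of integer literals: k stands for x_k, -k for its negation.
def resolve_out(cnf, var):
    # In a 1P1N formula, `var` occurs in exactly one clause positively and one negatively.
    pos = next(c for c in cnf if var in c)
    neg = next(c for c in cnf if -var in c)
    resolvent = (pos - {var}) | (neg - {-var})
    rest = [c for c in cnf if c is not pos and c is not neg]
    return rest + [resolvent]   # an empty resolvent would mean the formula is unsatisfiable

cnf = [frozenset({1, 2}), frozenset({-1, 3}), frozenset({-2, -3})]
print(resolve_out(cnf, 1))      # x_1 eliminated; two clauses replaced by one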
_softwareengineering.138405
For example, in functional languages, variables are single-assignment and their values are immutable once assigned. So they have two states, unbound and bound; once bound they can't be changed. Is there some mathematical term or other computer science term that is most appropriate for such a thing? Something that semantically doesn't imply variance or mutability. If no such term exists and you were designing a language that had such constructs, what would you use for these other than the word variable? I am not really looking to poll for ideas; I am trying to figure out whether there is already an accepted industry (any industry) term for such a thing.
What is a good alternative to the name variable for a language that only has immutable references or labels?
language agnostic;naming;language design;language features
What about symbol? I saw a video on F# where the speaker said, "you do not assign a value to a variable, you bind a value to a symbol." (Still looking for the reference for this.) Whenever I encounter the word variable in places where such constructs are immutable, I silently think "bound symbol".
_unix.358200
I am using ssh favorites in order to have a comfortable way to tunnel myself onto workstations at my university from my laptop at home. My config looks like this:

host sample_workstation
    hostname sample_workstation
    port 22
    user johndoe
    ProxyCommand ssh local_server -W %h:%p

host local_server
    hostname local_server
    port 22
    user johndoe
    ProxyCommand ssh gateway_server -W %h:%p

host gateway_server
    hostname gateway_server.my.university.tld
    port 22
    user johndoe

Basically I am ssh'ing to gateway_server, which is accessible through the internet, and from there to a local intranet server local_server, which gives me another tunnel through which sample_workstation is reachable. It works perfectly with ssh and is easy to use, since I just need to run:

ssh sample_workstation

...and the config does the magic. However, I would also like to access files. rsync is one solution, but too complicated for everyday use (in my opinion). Therefore I would like to use sshfs to mount my workstation's working directory. How can I tunnel through gateway_server and local_server to sample_workstation via sshfs?
sshfs through multiple hosts?
ssh;mount;ssh tunneling;sshfs
null
_codereview.37960
I've been writing some simple code just for fun: a simple WinForms application with which you can measure your typing speed. There are three controls on it, doing the following:

rtbResults - the one where you need to be as quick as possible.
textBox1 - the one where you submit the reference text.
tbBestTime - the one storing the best time of a contestant.

Fields:

Stopwatch sw = new Stopwatch();
long bestTime = long.MaxValue;

One handler:

private void rtbResults_TextChanged(object sender, EventArgs e)
{
    if (rtbResults.Text.Length == 1)
    {
        sw.Restart();
    }
    else
    {
        if (rtbResults.Text == textBox1.Text)
        {
            sw.Stop();
            if (sw.ElapsedMilliseconds < bestTime)
            {
                bestTime = sw.ElapsedMilliseconds;
                tbBestTime.Text = (bestTime / 1000.0).ToString();
            }
            MessageBox.Show("Your time: " + sw.ElapsedMilliseconds / 1000.0 + "s.");
            sw = null;
            rtbResults.Text = "";
        }
    }
}

Another handler:

private void textBox1_TextChanged(object sender, EventArgs e)
{
    bestTime = long.MaxValue;
    tbBestTime.Text = bestTime.ToString();
}

Now I'm looking for techniques to optimize the performance to get more precise results. The point is not the memory consumption, but the length of the source code and the performance, and of course it must produce the same result as it does now (I mean the mentioned code). I would like you to consider the following issues: is the...

Stopwatch
TextChanged event
type long
if/else (instead of if/return)

... the best way to do this?
WinForms typing speed game
c#;performance;game
Your code seems to me to be as efficient as reasonable for the application you are writing... but what you want/intend to do with it is unreasonable... but first:

The method resetting the time is not very practical:

private void textBox1_TextChanged(object sender, EventArgs e)
{
    bestTime = long.MaxValue;
    tbBestTime.Text = bestTime.ToString();
}

This method is setting the text to a value that the typical user will have no frame of reference for... why is that 'big' number meaningful? Why not just set tbBestTime.Text to be 'None yet' or something? This is especially significant since all other times tbBestTime is set, you set it to a floating-point time/1000.0 value.

In your MessageBox you do a simple /1000.0 for your time display. This is not always going to be 'pretty', since not all floating-point values have a neat representation in binary. You should be using a number formatter to ensure that the presentation of the value is consistent... consider Custom Numeric Format Strings.

Finally, let's talk about accuracy and precision... precision is reporting values to a large number of significant figures. In your case, reporting the time to (an intended) millisecond precision seems like a good idea, but is that accurate? Your intention with this question is to improve the accuracy of the results by reducing the overhead of the code when compared to the typing. Unfortunately there is so much happening between your code and the keyboard that any attempt to increase the code performance will be outweighed by the simple operations happening on the system... The following are things that may/will affect the accuracy of your timing:

Key debouncing
Interrupt handling
OS event notification
OS thread scheduling
Clock granularity
TextBox listener notification

I would wager that each of these will take significantly more time than the per-cycle cost of your code. Optimizing your code will make no difference to the accuracy of your timing, and, as it is, your reported precision is probably far more than the actual accuracy. In fact, I would suggest that anything within 1/10th of a second (instead of 1/1000th) is a 'tie'.
_unix.178875
I installed two Java JREs on my new CentOS machine, since Cassandra needs Java 7u25 or later while iReport needs to work with 1.6. Now how do I launch each program from the command line, telling each program which version to use? Do I have to change the /etc/profile file? If so, how?
Use different Java versions to run two programs
centos;java;jdk
There's no point in having them both in $PATH because only one will get used. You could symlink one to a different name -- e.g. java6 -- but I've never tried this with Java and am not sure if it would work.

The best way to do this would be to install one of them (presumably 1.6) in a location like /opt/java6, leaving 1.7 as the default. Then when you want to use 6:

export PATH=/opt/java6/bin:$PATH

And start it from the command line. You could also put all that together in a script. Don't try to run Cassandra from the same shell after that unless you remove that from $PATH (an easy way to check is echo $PATH). To automate this for one specific application:

#!/bin/sh
export PATH=/opt/java6/bin:$PATH
exec /path/to/application

You can then put that somewhere in the regular $PATH (e.g., /usr/local/bin), make sure it is executable (chmod 755 whatever.sh) and start the application that way. It will then not affect $PATH in the process which launches it.
_webmaster.17460
As a company we now have Facebook, LinkedIn, Twitter and now Google+; is there a way to easily manage all these accounts without having to log into them individually? Things like posting content to each one are becoming a full-time job in themselves; is there a way to post once and have it posted in turn to all the other accounts? I used to use http://ping.fm/ a long time ago; has there been any advancement in something similar to this? With friend lists, news feeds, etc. for each one, I wish there was a way to manage them all in one place with a service/tool!
Suggestions on managing social media accounts
social media;social networks;social networking
I would take a look at something like Seesmic, which allows you to manage all your social accounts in one place. I have also read a lot about Radian6, although I have no experience with that software. TweetDeck is another example. Although I would say pushing all the same content out to each different network may not be the best approach, as if people follow you across networks it could be seen as filling up their feeds with duplicate information (although I am aware a lot of people do this).
_unix.43196
I have a dual-boot Linux/Windows system set up, and frequently switch from one to the other. I was wondering if I could add a menu item in one of the menus to reboot directly into Windows, without stopping at the GRUB prompt. I saw this question on a forum; that's exactly what I want, but it's dealing with LILO, which is not my case. I thought of a solution that would modify the default entry in the GRUB menu and then reboot, but there are some drawbacks, and I was wondering if there was a cleaner alternative. (Also, I would be interested in a solution to boot from Windows directly into Linux, but that might be harder, and does not belong here. Anyway, as long as I have it in one way, the other way could be set up as the default.

UPDATE: It seems someone asked a similar question, and if those are the suggested answers, I might as well edit /boot/grub/grubenv as grub-reboot and grub-set-default and grub-editenv do.)

Thanks in advance for any tips.

UPDATE: This is my GRUB version: (GRUB) 1.99-12ubuntu5-1linuxmint1

I tried running grubonce; the command is not found. And searching for it in the repositories gives me nothing. I'm on Linux Mint, so that might be it... Seeing man grub-reboot, it seems like it does what I want, as grubonce does. It is also available everywhere (at least it is for me; I think it is part of the grub package). I saw two related commands: grub-editenv and grub-set-default. I found out that after running sudo grub-set-default 4, when running grub-editenv list you get something similar to:

saved_entry=4

And when running grub-reboot 4, you get something like:

prev_saved_entry=0
saved_entry=4

Which means both do the same thing (one is temporary, one is not). Surprisingly, when I tried:

sudo grub-reboot 4
sudo reboot now

it did not work; as if I hadn't done anything, it just showed me the menu as usual and selected the first entry, saying it would boot this entry in 10s. I tried it again; I thought I might have written the wrong entry (it is zero-based, right?). That time, it just hung at the menu screen, and I had to hard-reset the PC to be able to boot. If anyone can try this out, just to see if it's just me, I'd appreciate it. (Mint has been giving me a hard time, and that would be a good occasion to change :P) Reading the code in /boot/grub/grub.cfg, it seems like this is the way to go, but from my observations, it's just ignoring these settings...
How can I tell GRUB I want to reboot into Windows before I reboot?
grub2;dual boot;reboot
In order for the grub-reboot command to work, several required configuration changes must be in place:

The default entry for grub must be set to saved. One possible location for this is the GRUB_DEFAULT= line in /etc/default/grub.
Use grub-set-default to set your default entry to the one you normally use.
Update your grub config (e.g. update-grub).

This should take care of the initial set-up. In the future, just do grub-reboot <entry> for a one-time boot of <entry>.
_webmaster.38208
Hi, I have a Magento installation linked with a Google Analytics account. It works very well, in that I can see conversions, I can see the products that are selling directly from Analytics, and I can get an overview of traffic sources for those sales. What I can't work out is how to track/see what keywords are being used by the customers that are completing sales. Can anybody let me know how this data can be gathered, or if it's even possible? (Is this possibly a privacy issue?)
Analysing traffic sources in Analytics for Magento sales
google analytics;analytics;ecommerce;magento;conversions
In Analytics go to Traffic Sources -> Sources -> Search -> Organic. Then in the top left of the chart you'll see Visits vs. Select a metric. Click on Select a metric, then Ecommerce, and select Revenue or Transactions. Keywords searched using Google Instant Search appear as (not provided) in the table.
_webapps.44308
I have Dropbox installed on a few computers. One of which I no longer have access to. I no longer wish for that computer to be able to have access to my Dropbox account but I see no way to stop that computer with Dropbox installed from being able to do so.I went in on the website and changed my password but Dropbox installed on the computer is still able to sync even with the password changed! I find this to be a major security issue. What if my laptop was stolen and I didn't want the thief to have access and be able to delete/ change the files on my Dropbox account? It seems there is no way to protect your Dropbox account from this sort of thing?
How to remove Dropbox access from a computer you no longer have?
security;dropbox
null
_webmaster.71882
OK, so my question is about Facebook link-share thumbnail image display. I have tried sharing video links on my Facebook page with lots of different strategies: I tried uploading an image and tried a direct link share, but nothing seems to do the required job. I am looking to get the desired result (photo attached), but what I get is a small thumbnail displayed (another image attached) on my Facebook page once the link has been shared. Am I missing something? How do I make my larger image clickable and displayed as it is? Thanks
How to make Facebook link share thumbnails larger?
facebook
null
_cstheory.36053
I'm looking for research that considers specific subclasses of the class of context-free grammars, i.e. some specifically described cases, which differ from the well-known ones:

deterministic/non-deterministic
ambiguous/unambiguous
regular/non-regular

An example of such a non-standard subclass is the visibly pushdown grammars described here. Any additional examples will be much appreciated. More specifically, I'm wondering whether there are any described subclasses which can help distinguish these two very similar cases:

$G_1$: $S\rightarrow aD$, $S\rightarrow cD$, $D\rightarrow abc$, $D\rightarrow abDc$

$G_2$: $S\rightarrow aD$, $S\rightarrow cD$, $D\rightarrow abc$, $D\rightarrow abDc$, $D\rightarrow aDb$, $D\rightarrow aDc$

It is clear that both of them are unambiguous non-deterministic CFGs, and $\mathcal{L}(G_1)\subset\mathcal{L}(G_2)$. But these are common properties; I'm looking for specific differences.
Known and described subclasses of Context-Free Grammars class
fl.formal languages;grammars;context free
Density might be an interesting concept for you. The density function is defined as
$$\delta_L(n) := |L\cap \Sigma^n|,$$
where $\Sigma^n$ denotes the set of all strings of length $n$ over $\Sigma$. Your first language seems to have density values of only 0 and 1, while the second goes up to 3. So the first is 1-slender and the second is not, following the terminology of Lucian Ilie's work.
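As an illustration only (not part of the original answer), a small Python sketch that computes the density function by brute force; the toy language used here is an arbitrary slender example, not the one generated by $G_1$ or $G_2$:

from itertools import product

# delta_L(n) = |L ∩ Σ^n|, computed by enumerating Σ^n.
# Toy language L = { a^k c b^k : k >= 0 }, chosen only to keep the sketch self-contained.
SIGMA = "abc"

def in_L(w):
    k = len(w) // 2
    return len(w) % 2 == 1 and w == "a" * k + "c" + "b" * k

def density(n):
    return sum(1 for letters in product(SIGMA, repeat=n) if in_L("".join(letters)))

print([density(n) for n in range(8)])   # at most one word per length: a 1-slender language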
_computerscience.1450
I'm making a voxel engine in OpenGL and wondering how many 3D textures I can have at once. They are fairly large (256x256x256 in GL_R32UI format). I want it to be able to run on any graphics card supporting OpenGL 3.3, if possible. I'm accessing them all from the same fragment shader, by the way. So how many can I have? Will 8 work? Thanks!
How many 3D textures does OpenGL support
opengl;3dtexture
As gllampert pointed out in the comments, the value is hardware dependent. You can retrieve it with glGet, using GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS. You can find how different hardware performs here. However, in OpenGL 3 there is a lower bound of at least 48 simultaneously used textures, no matter which type. [source]
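As an illustration only (not part of the original answer), here is how those limits could be queried from Python with PyOpenGL; this assumes a current OpenGL context has already been created by whatever windowing library you use. GL_MAX_TEXTURE_IMAGE_UNITS is included as well because the question binds all textures in a single fragment shader, and that is the per-fragment-stage limit:

from OpenGL.GL import (glGetIntegerv,
                       GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS,
                       GL_MAX_TEXTURE_IMAGE_UNITS)

# Both queries require a current OpenGL context.
print("combined texture units:", glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS))
print("fragment-stage texture units:", glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS))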
_softwareengineering.289215
Title is an abstraction of what I am actually doing, but in essence the same. The main entity I will be dealing with are the Employees themselves; Send package to employee John Doe. In order to work with a given Employee, I need to use the IOffice they belong to, and each office has a different way of delivering that package to that Employee. I am not really interested in how the package is delivered, just that it gets there.In the first way mentioned in the title, it would look something like this:public class Employee{ private IOffice _office; public Name { get; set; } public void Send(Package package) { _office.Send(package, this) }}public class SomeOffice : IOffice{ public void Send(Package package, Employee employee) { // Implementation of how this office gets the package to employee }}Doing it this way, I can simply have a list of all employees, and employees in the same office can share the same IOffice object.The second approach like so:public class Employee{ public Name { get; set; }}public class SomeOffice : IOffice{ public List<Employee> Employees { get; set; } public void Send(Package package, Employee employee) { // Implementation of how this office gets the package to employee }}This makes it harder to have instance of Employee and send a package to it. You must get the instance you want from the Employees-collection, and pass it into the office's Send-method. However, it keeps the Employee-class simpler, and makes the IOffice take care of everything.There are probably other ways to go about this also. How would you do it?Edit: For concrete example, replace IOffice with a CommunicationInterface, such as SerialPort, UdpClient, TcpClient etc. At the end of each endpoint there are different Devices (Employees). These devices all behave similarly, but how to send them a request (Package) and get a response back differs.From my program, I want to have a list of all available devices, across all CommunicationInterfaces, and send requests to them, and get responses back.
Should instance of Employee contain a reference to instance of Office, or should Office contain an array of Employee?
design;object oriented;architecture
null
_webmaster.91630
I'm setting up a LAMP stack on my local machine for testing. How do I make sure the web server is not remote accessible?
When setting up LAMP for testing at a local machine, how do I make sure it's not remote accessible?
security;lamp
I have a better approach to this: there is a Listen directive in Apache that allows you to specify which ports and IP addresses you want Apache to work with. For 100% security from the outside world, use the following setting:

Listen 127.0.0.1:80

The IP address 127.0.0.1 represents localhost and is never accessible to anyone outside. Any outsider trying to access your site via that address will either get a bunch of errors or resources from their own computer only. When using this setup, start your URLs with http://127.0.0.1 when running tests.
_codereview.56145
I started a project to build my own invoicing and management system to take the place of prohibitively expensive QuickBooks software. This will be broken down in 5 steps, with the current step in bold font, and previous steps linked. Design the DB schema and table relationships, and insert data for standards tablesCreate and test procedures and functionsDesign application behaviorDesign user interfaceDesign export methods and formatsHere is the original questionI got a chance to sit down with a friend who's a really good SQL dev. Here are the changes I made.Remove underscores from column names.Further normalize my Person data, i.e., address phone & email, to account that a person can have multiple of each. Also normalized even more by adding types for each. Changed ProjectType to Product as it makes more sense that way. Added prices and more product information in the Product table.Changed ProjectDetail to InvoiceDetail and tied it to Invoice and Project and (obviously) to Person and Address.Normalized InvoicingPayment with PaymentType. Normalized Invoice with an InvoiceStatus table. (thanks @200_success for pointing out BOOLEAN was a bad idea!)Changed the order the tables are created so that FOREIGN KEY constraints are declared in the CREATE TABLE statement. Markdown throughout for clarity.DROP DATABASE IF EXISTS PsychoProductions;CREATE DATABASE PsychoProductions;USE PsychoProductions;---- Create table with standard values -- to be referenced to by other tables-- And insert some values in those tables---- Person typesCREATE TABLE PersonType ( PersonTypeId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (PersonTypeId), PersonTypeName VARCHAR(30) );INSERT INTO PersonType (PersonTypeName)VALUES ('Staff'), ('Partner'), ('Customer'), ('Vendor'), ('Session musician');-- Billing methodsCREATE TABLE BillingMethod ( BillingMethodId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (BillingMethodId), BillingMethod VARCHAR(30) );INSERT INTO BillingMethod (BillingMethod)VALUES ('Unassigned'), ('Net 30'), ('Net 15'), ('Cash on delivery'), ('Cash with order');-- Product typesCREATE TABLE ProductType ( ProductTypeId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (ProductTypeId), ProductTypeName VARCHAR(150), ProductTypeCost DECIMAL (6,2), ProductTypeStandard BOOLEAN, -- Set to False if ad hoc project type ProductTaxable BOOLEAN DEFAULT False -- No need to tax product if not physical good );INSERT INTO ProductType (ProductTypeName, ProductTypeCost, ProductTypeStandard, ProductTaxable)VALUES ('Basic musical arrangement (3 or fewer)', 30, True, False), ('Basic musical arrangement (4 or more)', 25, True, False), ('Advanced musical arrangement (3 or fewer)', 50, True, False), ('Advanced musical arrangement (4 or more)', 40, True, False), ('Instrumental leasing (3 or fewer)', 25, True, False), ('Instrumental leasing (4 or more)', 20, True, False), ('Instrumental leasing (NAPH 3 or more)', 20, True, False), ('Graphic design (album sleeve)', 80, True, False), ('Graphic design (full CD & sleeve)', 150, True, False), ('Graphic design (full CD, sleeve & booklet)', 200, True, False), ('Graphic design (flyers)', 40, True, False), ('Graphic design (t-shirt)', 30, True, False), ('Graphic design (logo, sticker, small items)', 25, True, False), ('Rush uplift charge (Basic project)', 10, True, False), ('Rush uplift charge (Advanced project)', 20, True, False);-- Invoice Status typesCREATE TABLE InvoiceStatus ( InvoiceStatusID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (InvoiceStatusId), InvoiceStatus VARCHAR(30) );INSERT INTO InvoiceStatus 
(InvoiceStatus)VALUES ('Open'), ('Paid'), ('Partially Paid'), ('Cancelled');-- Transaction typesCREATE TABLE TransactionType ( TransactionTypeId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (TransactionTypeId), TransactionType VARCHAR(10) );INSERT INTO TransactionType (TransactionType)VALUES ('Debit'), ('Credit');-- Address typesCREATE TABLE AddressType ( AddressTypeId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (AddressTypeId), AddressType VARCHAR(20) );INSERT INTO AddressType (AddressType)VALUES ('Unique'), ('Physical'), ('Shipping'), ('Billing'), ('Mailing');-- Phone typesCREATE TABLE PhoneType ( PhoneTypeId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (PhoneTypeId), PhoneType VARCHAR(20) );INSERT INTO PhoneType (PhoneType)VALUES ('Mobile'), ('Business'), ('Home'), ('Fax'), ('Pager');-- Email typesCREATE TABLE EmailType ( EmailTypeId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (EmailTypeId), EmailType VARCHAR(20) );INSERT INTO EmailType (EmailType)VALUES ('Business'), ('Personal');-- -- Create master tables which will contain actual business data-- /* CREATE ALL CORE TABLES RELATED TO PERSONS */-- This table will contain primary person informationCREATE TABLE Person ( PersonId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (PersonId), PersonTypeId INT NOT NULL DEFAULT 3, FOREIGN KEY (PersonTypeId) REFERENCES PersonType(PersonTypeId), FirstName VARCHAR(40) NOT NULL, LastName VARCHAR(40), Organization VARCHAR (100), Website VARCHAR(100), DefaultBillingMethodId INT NOT NULL DEFAULT 1, FOREIGN KEY (DefaultBillingMethodId) REFERENCES BillingMethod(BillingMethodId), Active BOOLEAN DEFAULT True, CreationDate TIMESTAMP DEFAULT NOW() );-- Addresses hereCREATE TABLE Address ( AddressId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (AddressId), PersonId INT NOT NULL, -- One-to-many relationship FOREIGN KEY (PersonID) REFERENCES Person(PersonID), AddressTypeId INT NOT NULL DEFAULT 1, -- Unique FOREIGN KEY (AddressTypeId) REFERENCES AddressType(AddressTypeId), Address VARCHAR(100), City VARCHAR(40), State VARCHAR(2), ZipCode VARCHAR(5) );-- Phone numbers hereCREATE TABLE Phone ( PhoneId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (PhoneId), PersonId INT NOT NULL, -- One-to-many relationship FOREIGN KEY (PersonId) REFERENCES Person(PersonId), PhoneNumber VARCHAR(20) NOT NULL, PhoneTypeId INT NOT NULL DEFAULT 1, -- Mobile FOREIGN KEY (PhoneTypeId) REFERENCES PhoneType(PhoneTypeId) );-- Emails hereCREATE TABLE Email ( EmailId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (EmailId), PersonId INT NOT NULL, -- One-to-many relationship EmailAddress VARCHAR(50) NOT NULL, FOREIGN KEY (PersonId) REFERENCES Person(PersonId), EmailTypeId INT NOT NULL DEFAULT 1, -- Business FOREIGN KEY (EmailTypeId) REFERENCES EmailType(EmailTypeId) );/* CREATE ALL TABLES RELATED TO PROJECTS */-- This table will contain primary project informationCREATE TABLE Project ( ProjectId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (ProjectId), RequestPersonId INT NOT NULL, FOREIGN KEY (RequestPersonID) REFERENCES Person(PersonID), AssignPersonId INT, FOREIGN KEY (AssignPersonID) REFERENCES Person(PersonID), ProjectName VARCHAR(200) NOT NULL, Description TEXT, OrderDate DATE NOT NULL, DueDate DATE, CompleteDate DATE );-- Line number of product in project (tied to transactions)/* CREATE ALL TABLES RELATED TO MONEY */-- Invoices hereCREATE TABLE Invoice ( InvoiceId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (InvoiceId), ProjectId INT NULL, -- Not all invoices will be tied to a project FOREIGN KEY (ProjectId) REFERENCES Project (ProjectId), InvoiceByPersonId INT NOT NULL, 
FOREIGN KEY (InvoiceByPersonId) REFERENCES Person(PersonId), BillToPersonId INT NOT NULL, FOREIGN KEY (BillToPersonId) REFERENCES Person(PersonId), BillToAddressId INT NOT NULL, FOREIGN KEY (BillToAddressId) REFERENCES Address(AddressId), ShipToAddressId INT NULL, -- Most invoiced products are not physical products FOREIGN KEY (ShipToAddressId) REFERENCES Address(AddressId), InvoiceStatusId INT NOT NULL DEFAULT 1, -- Open FOREIGN KEY (InvoiceStatusId) REFERENCES InvoiceStatus (InvoiceStatusId), InvoiceDate DATE NOT NULL, InvoicePaidDate DATE NULL );CREATE TABLE InvoiceDetail ( /* InvoiceDetailId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (InvoiceDetailId), */ InvoiceId INT NOT NULL, FOREIGN KEY (InvoiceId) REFERENCES Invoice(InvoiceId), InvoiceSequenceId INT NOT NULL, ProductTypeId INT NOT NULL, FOREIGN KEY (ProductTypeId) REFERENCES ProductType(ProductTypeId), Quantity INT NOT NULL DEFAULT 1, TaxableRate DECIMAL(5,2) );-- Monetary transactions will be logged hereCREATE TABLE AccountingTransaction ( TransactionId INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (TransactionId), TransactionTypeId INT NOT NULL, FOREIGN KEY (TransactionTypeId) REFERENCES TransactionType(TransactionTypeId), ProjectId INT NULL, -- Not all transactions will be tied to a project FOREIGN KEY (ProjectId) REFERENCES Project (ProjectId), InvoiceId INT NULL, -- Ditto for invoice FOREIGN KEY (InvoiceId) REFERENCES Invoice(InvoiceId), InvoiceSequenceId INT NULL, -- Ditto PaidByPersonId INT NOT NULL, FOREIGN KEY (PaidByPersonId) REFERENCES Person(PersonId), PaidToPersonId INT NOT NULL, FOREIGN KEY (PaidToPersonId) REFERENCES Person(PersonId), TransactionDate DATE NOT NULL, TransactionNote VARCHAR(1000) );
Revision 1 - Step 1: PsychoProductions management tool project
sql;mysql
Normalization

Database schemas should typically be normalized, and it appears that this schema is reasonably well structured. I don't see any massive normalization problems, but I see some issues:

Time-sequence data - your Product table contains the product cost value. Are your products going to be the same cost forever? If the cost changes, it will screw up your invoice data...
Address has a PersonID attached to it. This is not completely unusual, but on Invoice you have both a BillToPersonId and a BillToAddressId (and the address has a person), so what if they are different? Is the PersonId on the Address redundant? I think there's a contradiction in there somewhere...

Dates

For audit/tracking purposes, you should normally put a CreatedTimestamp on every record. You have one on Person, but nothing else. The timestamp should be a DateTime value. Many of your current Date-based columns should also contain a Time component, not just a Date.

Person

You have the table Person, but what if the person is a business or something? This seems very specific to me, not being able to do transactions with non-person entities...

Column Types

I feel that your column types are all rather small and conservative.

FirstName and LastName are limited to 40 characters. That's short.
EMailAddress is limited to 50 chars?
Address is 100 chars?
City is 40 chars (poor Tweebuffelsmeteenskootmorsdoodgeskietfontein)?
Other columns should be extended too.
_cstheory.21281
I have formulas in Presburger arithmetic (with initial quantifiers, but I can apply quantifier elimination so they are quantifier-free) that are fairly complicated, yet, in many useful cases, are equivalent to very simple formulas such as conjunctions of simple arithmetic constraints (e.g. p>=-1, p<q, y>=q, y<n, n>=2 for a quantified formula taking up a whole screen page). Is there any good simplification method for such formulas? (Preferably in an off-the-shelf implementation.)
Simplification of Presburger formulas in practice
lo.logic
null
_codereview.26754
I came across the following code in our code base:

public interface ICloneable<T>
{
    T Clone();
}

public class MyObject : ICloneable<MyObject>
{
    public Stuff SomeStuff { get; set; }

    T Clone()
    {
        return new MyObject { SomeStuff = this.SomeStuff };
    }
}

The pattern of using a generic interface and then doing class A : Interface<A> looks pretty bizarre to me. I have a feeling it's either useless or that it could be changed into something less intricate. Can someone explain if this is right/wrong, and how it should be changed?
Generically typed interface
c#;design patterns;generics
You see this same pattern in the framework for IEquatable<T>. The most frequent use case for this is writing a generic method where you constrain the generic parameter type to implement the interface. This then allows you to write code in terms of a statically typed method Equals<T>. Another example is IComparable<T>, which is very handy for implementing methods relying on sorting without having to use the older style of providing an external Comparator class. For instance, the default comparison mechanism for the LINQ OrderBy method uses IComparable<T>, if it's available. In both of these cases, it's very natural to say that an instance of a type is comparable or equatable to other instances of the same type.
_codereview.166159
I'm creating a minecraft clone (for practice), in scala, using largely functional programming. When a chunk doesn't have a mesh loaded into VRAM, it create a Future for the vertex and index arrays, and gives it to a low priority ExecutionContext (so as not to freeze the main game loop). The render loop (in the OpenGL thread) checks if the Future is completed, and if it is, uploads the data to VRAM, after which the chunk can be rendered. Here's the ChunkRenderer class, I'm providing the whole thing for context, but I'm only really looking for a review of the meshData future. I would appreciate reviews as to how I can make this process faster, as well as the code in general.case class ChunkRenderer( chunk: Chunk, texturePack: TexturePack, world: World, previous: Option[ChunkRenderer] ) extends RenderableFactory { val meshData: Future[(Array[Float], Array[Short])] = Future { // first, compute the exposed surfaces type SurfaceMap = Map[Direction, List[V3I]] // compute the given surface of the given block def surface(m: SurfaceMap, v: V3I, s: Direction): SurfaceMap = (world.blockAt(v), world.blockAt(v + s)) match { // if the target is non-existent, the face is invisible case (None, _) => m // if the target is translucent, the face is invisible case (Some(t), _) if t isTranslucent => m // if the cover is opaque, the face is invisible case (_, Some(c)) if c isOpaque => m // if the cover is translucent (and the target is opaque), the face is visible case (_, Some(c)) if c isTranslucent => m.updated(s, v :: m(s)) // if the cover is non-existent (and the target is opaque), the face is visible case (_, None) => m.updated(s, v :: m(s)) // in all other cases, the face is invisible case _ => m } // compute all surfaces of a block def block(m: SurfaceMap, v: V3I): SurfaceMap = (Stream.iterate(v)(identity) zip Directions()).foldLeft(m)({ case (m, (v, s)) => surface(m, v, s) }) // do the computation val empty: SurfaceMap = Directions() zip Stream.iterate(Nil)(identity) toMap val blocks: Seq[V3I] = (Origin until V3I(chunk.size, chunk.size, chunk.size)) map (_ + (chunk.pos * chunk.size)) val exposed: SurfaceMap = blocks.foldLeft(empty)(block) //.mapValues(_.map(_ - (chunk.pos * chunk.size))) // fold the exposure sets into vertex data and indices type VertDatum = (V3F, Color, V2F) val vertSize = 6 // convert from size in bytes to size in floats val p = 1 val n = 0 val offset = chunk.pos * chunk.size def addSquareIndices(verts: List[VertDatum], indices: List[Short]): List[Short] = indices .::((verts.length + 0).toShort) .::((verts.length + 1).toShort) .::((verts.length + 2).toShort) .::((verts.length + 0).toShort) .::((verts.length + 2).toShort) .::((verts.length + 3).toShort) var data: (List[VertDatum], List[Short]) = (Nil, Nil) data = exposed(Up).foldLeft(data)( (data, b) => data match { case (verts, indices) => val r = texturePack(world.blockAt(b).get.tid) ( verts .::((b + V3F(n, p, p), Color.WHITE, V2F(r.getU, r.getV))) .::((b + V3F(p, p, p), Color.WHITE, V2F(r.getU2, r.getV))) .::((b + V3F(p, p, n), Color.WHITE, V2F(r.getU2, r.getV2))) .::((b + V3F(n, p, n), Color.WHITE, V2F(r.getU, r.getV2))), addSquareIndices(verts, indices) ) } ) data = exposed(West).foldLeft(data)( (data, b) => data match { case (verts, indices) => val r = texturePack(world.blockAt(b).get.tid) ( verts .::((b + V3F(n, p, p), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU, r.getV))) .::((b + V3F(n, p, n), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU2, r.getV))) .::((b + V3F(n, n, n), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU2, 
r.getV2))) .::((b + V3F(n, n, p), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU, r.getV2))), addSquareIndices(verts, indices) ) } ) data = exposed(East).foldLeft(data)( (data, b) => data match { case (verts, indices) => val r = texturePack(world.blockAt(b).get.tid) ( verts .::((b + V3F(p, p, n), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU, r.getV))) .::((b + V3F(p, p, p), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU2, r.getV))) .::((b + V3F(p, n, p), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU2, r.getV2))) .::((b + V3F(p, n, n), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU, r.getV2))), addSquareIndices(verts, indices) ) } ) data = exposed(South).foldLeft(data)( (data, b) => data match { case (verts, indices) => val r = texturePack(world.blockAt(b).get.tid) ( verts .::((b + V3F(n, n, n), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU, r.getV))) .::((b + V3F(n, p, n), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU2, r.getV))) .::((b + V3F(p, p, n), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU2, r.getV2))) .::((b + V3F(p, n, n), new Color(0.85f, 0.85f, 0.85f, 1f), V2F(r.getU, r.getV2))), addSquareIndices(verts, indices) ) } ) data = exposed(North).foldLeft(data)( (data, b) => data match { case (verts, indices) => val r = texturePack(world.blockAt(b).get.tid) ( verts .::((b + V3F(n, n, p), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU, r.getV))) .::((b + V3F(p, n, p), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU2, r.getV))) .::((b + V3F(p, p, p), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU2, r.getV2))) .::((b + V3F(n, p, p), new Color(0.8f, 0.8f, 0.8f, 1f), V2F(r.getU, r.getV2))), addSquareIndices(verts, indices) ) } ) data = exposed(Down).foldLeft(data)( (data, b) => data match { case (verts, indices) => val r = texturePack(world.blockAt(b).get.tid) ( verts .::((b + V3F(n, n, p), new Color(0.75f, 0.75f, 0.75f, 1f), V2F(r.getU, r.getV))) .::((b + V3F(n, n, n), new Color(0.75f, 0.75f, 0.75f, 1f), V2F(r.getU2, r.getV))) .::((b + V3F(p, n, n), new Color(0.75f, 0.75f, 0.75f, 1f), V2F(r.getU2, r.getV2))) .::((b + V3F(p, n, p), new Color(0.75f, 0.75f, 0.75f, 1f), V2F(r.getU, r.getV2))), addSquareIndices(verts, indices) ) } ) val (vertices: List[VertDatum], indices: List[Short]) = data // reverse and serialize the vertex data into floats val vertexSerial = vertices.reverse.flatMap({ case (v, color, t) => List( v.x, v.y, v.z, color toFloatBits, t.x, t.y ) }) // compile the vertices into an array (they were already reversed during serialization) val vertArr = new Array[Float](vertexSerial.size) var i = 0 for (f <- vertexSerial) { vertArr.update(i, f) i += 1 } // reverse and compile the indices into an array val indexArr = new Array[Short](indices.size) i = 0 for (s <- indices.reverseIterator) { indexArr.update(i, s) i += 1 } (vertArr, indexArr) } (PriorityExecContext(if (previous isDefined) Thread.MAX_PRIORITY else Thread.MIN_PRIORITY)) var renderable = new DisposableCache[Renderable]({ // create a mesh val mesh = new Mesh(true, 4 * 6 * chunk.blocks.length, 6 * 6 * chunk.blocks.length, new VertexAttribute(Usage.Position, 3, a_position), new VertexAttribute(Usage.ColorPacked, 4, a_color), new VertexAttribute(Usage.TextureCoordinates, 2, a_texCoord0) ) // get the arrays val (vertArr, indexArr) = Await.result(meshData, Duration.Inf) // plug the arrays into the mesh (this uploads them to VRAM) mesh.setVertices(vertArr) mesh.setIndices(indexArr) // create the material val material = new Material material.set(TextureAttribute.createDiffuse(texturePack.texture)) // create the renderable val renderable = new 
Renderable() renderable.meshPart.mesh = mesh renderable.material = material renderable.meshPart.offset = 0 renderable.meshPart.size = indexArr.length renderable.meshPart.primitiveType = GL20.GL_TRIANGLES renderable }, _.meshPart.mesh.dispose()) /** * Bring this object into an active state, generating resources, and return the renderables. */ override def apply(): Seq[Renderable] = if (meshData.isCompleted) Seq(renderable()) else previous match { case Some(renderer) => renderer() case None => Nil } /** * Return the sequence of factories that this factory depends on for being in an activate state. */ override def dependencies: Seq[RenderableFactory] = if (meshData.isCompleted) Nil else previous match { case Some(previous) => Seq(previous) case None => Nil } /** * Bring this object into an unactive state, and dispose of resources. */ override def dispose(): Unit = renderable.invalidate}Some context:V3F is a vector of 3 floatsV3I is a vector of 3 ints, and it subclasses V3FDirection is a sealed abstract class that represents V3I and represents a unit vector along an axisThere are 6 direction objects: Up=<0,1,0>, Down=<0,-1,0>, North=<0,0,1>, South=<0,0,-1>, East=<1,0,0>, West=<-1,0,0>Directions() returns a sequence of all directionsOnes=<1,1,1>TexturePack.apply(tid: TextureID) returns a TextureRegionAnd finally, since a gif is worth 1000^2 words
Purely functional minecraft-like mesh compiler
functional programming;scala;opengl;minecraft
null
_webapps.103468
I have two-step verification on my Gmail account. I can get phone calls on the phone number that I have set up for the two-step verification, but I can't receive texts on it due to my phone. How can I reset my password? Can I get a voice text? Can I get a phone call on that number? This has me locked out of my Facebook too.
Locked out of my Gmail account
facebook;google account;facebook chat
null
_unix.53822
Why does Emacs depend on Perl? I thought it was all based on C/Lisp?

Emacs version: 24.1.4.fc17 (working on a fresh, out-of-the-box Fedora 17)
Emacs depends on perl
emacs
I believe this is a dependency because perl-mode.el and cperl-mode.el have been built into Emacs for quite some time, and these modes will not work properly if Perl is not installed on the system. These files can be found in the Emacs git repository under the directory: emacs.git/plain/lisp/progmodes/
_webapps.96100
Yesterday I upvoted one comment and immediately logged off. My upvote was gone. If I log in again, it shows my upvote again. I looked at the other person's comment karma; there is no change in it from my upvote. I asked the moderators and they replied: "Upvotes: karma is not a 1:1 ratio", which I didn't quite understand.
Why is my upvote not considered properly in reddit?
reddit
null
_unix.227662
I want to rename multiple files (file1 ... filen to file1_renamed ... filen_renamed) using the find command:

find . -type f -name 'file*' -exec mv filename='{}' $(basename $filename)_renamed ';'

But I'm getting this error: mv: cannot stat filename=./file1: No such file or directory. This is not working because filename is not interpreted as a shell variable.
How to rename multiple files using find
shell;find
The following is a direct fix of your approach:

find . -type f -name 'file*' -exec sh -c 'x={}; mv $x ${x}_renamed' \;

However, this is very expensive if you have lots of matching files, because you start a fresh shell (that executes a mv) for each match. And if you have funny characters in any file name, this will explode. A more efficient and secure approach is this:

find . -type f -name 'file*' -print0 | xargs --null -I{} mv {} {}_renamed

It also has the benefit of working with strangely named files. If find supports it, this can be reduced to:

find . -type f -name 'file*' -exec mv {} {}_renamed \;

The xargs version is useful when not using {}, as in:

find .... -print0 | xargs --null rm

Here rm gets called once (or, with lots of files, several times), but not for every file. I removed the basename in your question, because it is probably wrong: you would move foo/bar/file8 to file8_renamed, not foo/bar/file8_renamed.

Edits (as suggested in comments): added the shortened find without xargs; added the security sticker.
_cs.17914
I'm looking for an efficient algorithm for the following problem or a proof of NP-hardness.Let $\Sigma$ be a set and $A\subseteq\mathcal{P}(\Sigma)$ a set of subsets of $\Sigma$. Find a sequence $w\in \Sigma^*$ of least length such that for each $L\in A$, there is a $k\in\mathbb{N}$ such that $\{ w_{k+i} \mid 0\leq i < |L| \} = L$.For example, for $A = \{\{a,b\},\{a,c\}\}$, the word $w = bac$ is a solution to the problem, since for $\{a,b\}$ there's $k=0$, for $\{a,c\}$ there's $k=1$.As for my motivation, I'm trying to represent the set of edges of a finite automaton, where each edge can be labeled by a set of letters from the input alphabet. I'd like to store a single string and then keep a pair of pointers to that string in each edge. My goal is to minimize the length of that string.
How do I find the shortest representation for a subset of a powerset?
algorithms;formal languages;encoding scheme
I believe I found a reduction from Hamiltonian path, thus proving the problem NP-hard. Call the word $w\in\Sigma^*$ a witness for $A$ if it satisfies the condition from the question (for each $L\in A$, there's $m\geq 1$ such that $\{w_{m+i}\mid 0\leq i<|L|\} = L$). Consider the decision version of the original problem, i.e. decide whether for some $A$ and $k\geq 0$, there's a witness for $A$ of length at most $k$. This problem can be solved using the original problem as an oracle in polynomial time (find the shortest witness, then compare its length to $k$).

Now for the core of the reduction. Let $G=(V,E)$ be a simple, undirected, connected graph. For each $v\in V$, let $L_v=\{v\}\cup\{e\in E\mid v\in e\}$ be the set containing the vertex $v$ and all of its adjacent edges. Set $\Sigma=V\cup E$ and $A=\{L_v\mid v\in V\}$. Then $G$ has a Hamiltonian path if and only if there is a witness for $A$ of length at most $2|E|+1$.

Proof. Let $v_1e_1v_2\ldots e_{n-1}v_n$ be a Hamiltonian path in $G$ and $H=\{e_1, e_2, \ldots, e_{n-1}\}$ the set of all edges on the path. For each vertex $v$, define the set $U_v=L_v\setminus H$. Choose an arbitrary ordering $\alpha_v$ of each $U_v$. The word $w=\alpha_{v_1}e_1\alpha_{v_2}e_2\ldots e_{n-1}\alpha_{v_n}$ is a witness for $A$, since $L_{v_1}$ is represented by the substring $\alpha_{v_1}e_1$, $L_{v_n}$ by $e_{n-1}\alpha_{v_n}$, and for each $v_i$, $i\notin\{1, n\}$, $L_{v_i}$ is represented by $e_{i-1}\alpha_{v_i}e_i$. Furthermore, each edge in $E$ occurs twice in $w$ with the exception of the $|V|-1$ edges in $H$, which occur once, and each vertex in $V$ occurs once, giving $|w|=2|E|+1$.

For the other direction, let $w$ be an arbitrary witness for $A$ of length at most $2|E|+1$. Clearly, each $e\in E$ and $v\in V$ occurs in $w$ at least once. Without loss of generality, assume that each $e\in E$ occurs in $w$ at most twice and each $v\in V$ occurs exactly once; otherwise a shorter witness can be found by removing elements from $w$. Let $H\subseteq E$ be the set of all edges occurring in $w$ exactly once. Given the assumptions above, it holds that $|w|=2|E|-|H|+|V|$.

Consider a contiguous substring of $w$ of the form $ue_1e_2\ldots e_kv$, where $u,v\in V$, $e_i\in E$. We say that $u,v$ are adjacent. Notice that if $e_i\in H$, then $e_i=\{u,v\}$, because $e_i$ occurs only once, yet it is adjacent to two vertices in $G$. Therefore, at most one of the $e_i$ can be in $H$. Similarly, no edge in $H$ can occur in $w$ before the first vertex or after the last vertex.

Now, there are $|V|$ vertices, therefore $|H|\leq |V|-1$. From there, it follows that $|w|\geq 2|E|+1$. Since we assume $|w|\leq 2|E|+1$, we get equality, and from there $|H|=|V|-1$. By the pigeonhole principle, there is an edge from $H$ between each pair of vertices adjacent in $w$. Denote by $h_1h_2\ldots h_{n-1}$ all elements from $H$ in the order they appear in $w$. It follows that $v_1h_1v_2h_2\ldots h_{n-1}v_n$ is a Hamiltonian path in $G$. $\square$

Since the problem of deciding the existence of a Hamiltonian path is NP-hard and the above reduction is polynomial, the original problem is NP-hard too.
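As an illustration only (not part of the original answer), a small Python sketch that builds the instance $A=\{L_v\}$ from a graph exactly as in the construction above; the example graph is a hypothetical three-vertex path:

# Build A = { L_v : v in V } for the reduction above.  Edges are frozensets {u, v},
# so the alphabet is V together with E.
def reduction_instance(vertices, edges):
    edges = [frozenset(e) for e in edges]
    sigma = set(vertices) | set(edges)
    A = {v: {v} | {e for e in edges if v in e} for v in vertices}
    return sigma, A

# The path graph 1-2-3 has a Hamiltonian path, so a witness of length 2|E|+1 = 5 exists,
# e.g. the word  1, {1,2}, 2, {2,3}, 3.
sigma, A = reduction_instance([1, 2, 3], [(1, 2), (2, 3)])
for v, L_v in A.items():
    print(v, L_v)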
_cs.30216
Nick's Class (NC) is the class of problems that can be decided in poly-log time using a polynomial number of processors. I want to know about the exponential analogue, which would cover problems that can be decided in polynomial time using an exponential number of processors. What I'm looking for is a name for this class and any known relations between this class and other complexity classes, or any canonical problems for the class. It seems straightforward that it would contain NP and co-NP, and I think it is contained within PSPACE, but I'm not sure about much else.
Exponential analogue of NC?
complexity theory;reference request;parallel computing;complexity classes
null
_datascience.15445
I'm trying to create a network visualization to study the flights to and from a certain airport. My data consists of 123000 rows in this format:

Origin_ID Destination_ID Frequency
1726      3504           40000
3504      4517           40
5616      7205           38
...

I'm trying to create an image similar to this: basically, I want to be able to see the strength of the relationship (based on the frequency) and identify which combinations are most common at that airport. Can it be created in R using this format? Or do you suggest any other open-source platform to create this visualization?
Network Visualization - R code - Which Package is needed?
r;visualization;social network analysis;sequential pattern mining
null
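No accepted answer is recorded for this row. In R, igraph is a common choice for this kind of weighted network plot; as a language-neutral illustration only, here is a minimal sketch in Python with networkx and matplotlib, using the three sample rows from the question (everything else is an assumption):

import matplotlib.pyplot as plt
import networkx as nx

# (Origin_ID, Destination_ID, Frequency) rows, as in the question.
rows = [(1726, 3504, 40000), (3504, 4517, 40), (5616, 7205, 38)]

G = nx.DiGraph()
for origin, dest, freq in rows:
    G.add_edge(origin, dest, weight=freq)

pos = nx.spring_layout(G, seed=42)
weights = [G[u][v]["weight"] for u, v in G.edges()]
max_w = max(weights)
nx.draw_networkx(G, pos, node_size=400, arrows=True,
                 width=[1 + 4 * w / max_w for w in weights])  # edge width ~ frequency
plt.axis("off")
plt.show()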
_webmaster.13588
Say I have a couple of trivial products in categories. For example:

Hair Products
- Shampoo
Food
- Bread
- Cookies

URLs are friendly Rails slugs like /categories/hair-products, /products/shampoo, /categories/food. While looking at my search results, I drew a horrifying conclusion. When using a search query like shampoo, a user lands on the /categories/hair-products page, not on the /products/shampoo page, for Shampoo is one of the words on the hair-products page. Sometimes, when searching for hair products, users tend to land on the Shampoo page, because the words Hair Products are on there (as well as Shampoo in an H1 tag). Trust me, I did some serious SEO work on the site already. Is there a way to have search engines favor a product instead of a category?
Search result priorities
search engines;seo
null
_softwareengineering.141175
I have seen many people complaining about verbosity in programming languages. I find that, within some bounds, the more verbose a programming language is, the easier it is to understand. I think that verbosity also encourages writing clearer APIs for that particular language. The only disadvantage I can think of is that it makes you type more, but most people use IDEs which do all that work for you. So, what are the possible downsides of a verbose programming language?
Why is verbosity bad for a programming language?
programming languages
The goal is quick comprehension

Verbose means "uses too many words". The question is what "too many" is. Good code should be easy to comprehend at a glance. This is easier if most of the characters directly serve the purpose of the code.

Signal vs Noise

If a language is verbose, more of your code is noise. Compare Java's Hello World:

class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

... with Ruby's:

print "Hello World!"

Noise wastes mental energy.

Cryptic vs Clear

On the other hand, excessive terseness in a language also costs mental energy. Compare these two examples from Common Lisp:

(car '(1 2 3))    ; 3 characters whose meaning must be memorized
; vs
(first '(1 2 3))  ; 5 characters whose meaning is obvious
_cs.68758
I found code on the internet for Dijkstra's shortest path algorithm in PHP. The problem is that it only shows one possible path. If there are several paths having the same distance, it only outputs one of them. How can I modify this algorithm to produce all shortest paths?

//set the distance array
$_distArr = array();
$_distArr[1][2] = 7;
$_distArr[1][3] = 9;
$_distArr[1][6] = 14;
$_distArr[2][1] = 7;
$_distArr[2][3] = 10;
$_distArr[2][4] = 15;
$_distArr[3][1] = 9;
$_distArr[3][2] = 10;
$_distArr[3][4] = 11;
$_distArr[3][6] = 2;
$_distArr[4][2] = 15;
$_distArr[4][3] = 11;
$_distArr[4][5] = 6;
$_distArr[5][4] = 6;
$_distArr[5][6] = 9;
$_distArr[6][1] = 14;
$_distArr[6][3] = 2;
$_distArr[6][5] = 9;

//the start and the end
$a = 1;
$b = 6;

//initialize the array for storing
$S = array(); //the nearest path with its parent and weight
$Q = array(); //the left nodes without the nearest path
foreach(array_keys($_distArr) as $val) $Q[$val] = 99999;
$Q[$a] = 0;

//start calculating
while(!empty($Q)){
    $min = array_search(min($Q), $Q); //the most min weight
    if($min == $b) break;
    foreach($_distArr[$min] as $key=>$val)
        if(!empty($Q[$key]) && $Q[$min] + $val < $Q[$key]) {
            $Q[$key] = $Q[$min] + $val;
            $S[$key] = array($min, $Q[$key]);
        }
    unset($Q[$min]);
}

//list the path
$path = array();
$pos = $b;
while($pos != $a){
    $path[] = $pos;
    $pos = $S[$pos][0];
}
$path[] = $a;
$path = array_reverse($path);

//print result
echo "From $a to $b";
echo "The length is " . $S[$b][1];
echo "Path is " . implode('->', $path);
?>
Adapting Dijkstra to list all shortest paths
algorithms;shortest path;enumeration
null
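No accepted answer is recorded for this row. As an illustration only, a minimal Python sketch of the standard modification: keep every predecessor that achieves the minimum distance, then enumerate the paths by walking the predecessor lists backwards. The graph is taken from the question's distance array:

import heapq

def all_shortest_paths(graph, src, dst):
    """graph: {u: {v: weight}}; returns (distance, list of all shortest src->dst paths)."""
    dist = {u: float("inf") for u in graph}
    preds = {u: [] for u in graph}           # every predecessor achieving dist[u]
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                         # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v], preds[v] = d + w, [u]
                heapq.heappush(heap, (d + w, v))
            elif d + w == dist[v]:
                preds[v].append(u)           # an equally short path was found

    def walk(v):
        if v == src:
            yield [src]
        for p in preds[v]:
            for path in walk(p):
                yield path + [v]

    return dist[dst], list(walk(dst))

graph = {1: {2: 7, 3: 9, 6: 14}, 2: {1: 7, 3: 10, 4: 15}, 3: {1: 9, 2: 10, 4: 11, 6: 2},
         4: {2: 15, 3: 11, 5: 6}, 5: {4: 6, 6: 9}, 6: {1: 14, 3: 2, 5: 9}}
print(all_shortest_paths(graph, 1, 6))       # distance 11, single shortest path 1->3->6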
_webapps.60751
I've got a Bitbucket account I set up using a username/password. At the moment Bitbucket doesn't allow you to have two-step authentication on your account, so I want to change my Bitbucket login to be able to log in with my Google account (which does have two-step auth). Does anyone know if this is possible?
Change Bitbucket login from username / password to login with Google
multi factor auth;bitbucket
null
_unix.365995
I have a very large file (200GB). Apparently when I transferred it over, it did not copy correctly: the sha1 hashes of the two copies are different. Is there a way I can divide the file up into blocks (like 1MB or 64MB) and output a hash for each block? Then compare/fix? I might just write a quick app to do it.
Hash a file by 64MB blocks?
hashsum
That quick app already exists, and is relatively common: rsync. Of course, rsync will do a whole lot more than that, but what you want is fairly simple:

rsync -cvP --inplace user@source:src-path-to-file dest-path-to-file    # from the destination
rsync -cvP --inplace src-path-to-file user@dest:dest-path-to-file      # from the source

That will by default use ssh (or maybe rsh, on a really old version) to make the connection and transfer the data. Other methods are possible, too. The options I passed are:

-c: skip based on checksums, not file size/mtime. By default rsync optimizes and skips transfers where the size and mtime match. -c forces it to compute the checksum (which is an expensive operation, in terms of I/O). Note this is a block-based checksum (unless you tell it to do whole files only), and it'll only transfer the corrupted blocks. The block size is automatically chosen, but can be overridden with -B (I doubt there is any reason to).
-v: verbose, will give some details (which file it's working on).
-P: turns on both partial files (so if it gets halfway through, it won't throw out the work) and a progress bar.
--inplace: update the existing file, not a temporary file (which would then replace the original file). Saves you from having a 200GB temporary file. Also implies partial files, so -P is partially redundant.

BTW: I'm not sure how you did the original transfer, but if it was sftp/scp, then something is very wrong; those fully protect against any corruption on the network. You really ought to track down the cause. Defective RAM is a relatively common one.
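As a side note (not part of the original answer), the block-hashing the question describes is also only a few lines of Python; the file name and block size here are assumptions:

import hashlib

def block_hashes(path, block_size=64 * 1024 * 1024):
    """Yield (block_index, sha1_hexdigest) for each 64 MB block of the file."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield index, hashlib.sha1(block).hexdigest()
            index += 1

# Run this on both machines and compare the two lists to find which blocks differ.
# for i, digest in block_hashes("bigfile.img"): print(i, digest)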
_opensource.5881
float hic = dht.computeHeatIndex(t, h, false);  // heat index from temperature t and humidity h; false = Celsius (assuming the Adafruit DHT library)
dtostrf(hic, 6, 2, celsiusTemp);                // format the float into the char buffer celsiusTemp: width 6, 2 decimal places
float hif = dht.computeHeatIndex(f, h);         // heat index in Fahrenheit (the library default)
dtostrf(hif, 6, 2, fahrenheitTemp);
dtostrf(h, 6, 2, humidityTemp);                 // humidity formatted as a string, same width and precision
Hello, I am having some trouble understanding some lines of my DHT11 sensor project; I am using the Arduino IDE.
source code
null
_unix.121862
Can you help me with this problem, please? How can I set up e-mail alerting after an SSH login when port 22 is NATed? I use Debian 7 (Linux). I have used these iptables rules (eth0 is the external network, eth1 the internal network):

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Thanks for your replies.
SSH e-mail login alert when the SSH port is behind NAT
debian;ssh;iptables;nat
null
_unix.298915
I want to send a pdf file (autoreply) to the sender (of mail being fetched by fetchmail) via procmail if it was sent to [email protected]. The pdf file is located at /home/peter/docs/file.pdf.
Procmail reply with pdf to sender
procmail
null
_softwareengineering.280238
Assume that a node A in the commit tree of a codebase contains a bug but some ancestor B of A is clean of that very bug. Given the topology of the commit tree [B,A] leading from B to A, can we predict c, the maximum number of steps needed to locate the commit where the bug appeared, in terms of the number of vertices, arrows, merges or cycles and other simple graph invariants? There are two easy cases: if the history from B to A is linear, then c is the binary logarithm of the number of nodes between B and A. If the history is totally parallel, i.e. a family of independent commits A M B, then c is the number of such commits. From the two examples, we learn that linear history leads to cheaper bisection processes, but can we be more precise than this? It is easy to write a program to compute c, which I still haven't done yet, but I am looking for a pretty algebraic formula. If the problem is hard, it could be interesting and easier to have the expected value of c given, for instance, the number of vertices, arrows and merges, or something similar.
What is the maximum number of steps to find a bug using bisecting?
graph;scm
null
_softwareengineering.308700
The language I am using is C#, but I am looking more for help with the algorithm more than I am concerned with which language. I have been trying to develop this algorithm for a while now and I can't seem to get it down.I have a tree like structure which has many parent/child relationships. So leaf #1 can have children, and the children can have children, and so on and so on. This forms a tree. My problem is that I want to be able to list out every possible path down the tree to the bottom.My data is in a C# list of objects and each object has 2 fields, parent and child.So my data could look something like this:Parent Child1 21 32 42 53 63 73 84 94 10My real data is more complex than this as I have thousands of rows. In the sample above here is the results I want to achieve:1 2 4 91 2 4 101 2 41 2 51 3 61 3 7I did see some other similar solutions on this website, but they were a little different than what I need. A leaf in my tree can have more than 2 children as shown above.
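For reference, here is a short sketch (not from the original post, and in Python 3 rather than the asker's C#) that builds child lists from the parent/child pairs and walks them depth-first to enumerate every root-to-leaf path of the sample data. Whether partial paths such as 1 2 4 should also be emitted is a detail the leaf check can be adjusted for.
from collections import defaultdict

def all_paths(pairs, root):
    children = defaultdict(list)
    for parent, child in pairs:
        children[parent].append(child)
    paths = []
    def walk(node, path):
        path = path + [node]
        if not children[node]:          # leaf: record the complete path
            paths.append(path)
        for child in children[node]:
            walk(child, path)
    walk(root, [])
    return paths

pairs = [(1, 2), (1, 3), (2, 4), (2, 5), (3, 6), (3, 7), (3, 8), (4, 9), (4, 10)]
for p in all_paths(pairs, 1):
    print(" ".join(map(str, p)))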
Find all paths in a tree type of structure
algorithms
null
_codereview.64896
Let's say I have this form:<form action=/user/store method=POST enctype=multipart/form-data> Name: <input type=text name=name><br> Male: <input type=radio name=gender value=male><br> Female: <input type=radio name=gender value=female><br> Photo: <input type=file name=photo><br> Hobbies: <select name=hobbies[] multiple> <option value=sport>Sport</option> <option value=movies>Movies</option> <option value=music>Music</option> <option value=games>Games</option> </select><br> Accept the terms?<input type=checkbox name=terms><br> <input type=submit></form>If I submit it to a PHP server with my data, this is the result:array (size=5) 'name' => string 'John' (length=4) 'gender' => string 'male' (length=4) 'hobbies' => array (size=2) 0 => string 'sport' (length=5) 1 => string 'music' (length=5) 'terms' => string 'on' (length=2) 'photo' => object(Symfony\Component\HttpFoundation\File\UploadedFile)[9] private 'test' => boolean false private 'originalName' => string 'test.jpg' (length=8) private 'mimeType' => string 'image/jpeg' (length=10) private 'size' => int 130677 private 'error' => int 0I want to have similar results with JavaScript, so I wrote this function://Add event listener for the form submit to the document$(document).on('submit', 'form', function(e){ //Prevent form submitting e.preventDefault(); var form = $(':input').not(':submit,:button,:image,:radio,:checkbox'); $.merge(form, $(':checked').not('option')); var data = [], input, name; form.each(function(i, obj){ input = {}; name = obj.name; if(name.indexOf('[]') < 0) { input['name'] = name; if(obj.type==='file'){ var files = obj.files; input['value'] = files[files.length - 1]; } else { input['value'] = obj.value; } } else { name = name.replace('[]', ''); if(name in data){ var tmp = input['value']; if(obj.type==='file'){ input['value'] = $.merge(tmp, obj.files); } else if(obj.type === 'select-multiple') { input['value'] = $.merge(tmp, $.map(obj.selectedOptions, function(option, i){ return option.value; })); } else { input['value'] = $.merge(tmp, obj.value); } } else { data[name] = {}; if(obj.type === 'file'){ input['value'] = obj.files; } else if(obj.type === 'select-multiple') { input['value'] = $.map(obj.selectedOptions, function(option, i){ return option.value; }); } else { input['value'] = []; input['value'].push(obj.value); } } } data.push(input); });});And this return (JSON format):[ {name:firstname,value:John}, {name:photo,value: { webkitRelativePath:, lastModifiedDate:2014-09-08T18:29:36.000Z, name:test.jpg, type:image/jpeg,size:130677 } }, {name:hobbies,value:[sport,music]}, {name:gender,value:male}, {name:terms,value:on}]Is there any better way to do that? By better code, I mean less code. I don't mind if it will become slower.
Simulate the way that PHP receives the form
javascript;php;jquery;form
null
_codereview.4712
The encryption script:import randomsplitArray = []def takeInput(): rawText = raw_input(Type something to be encrypted: \n) password = raw_input(Please type a numerical password: \n) encrypt(rawText, int(password))def encrypt(rawText, password): for c in rawText: divide(c, password)def divide(charToDivide, password): asciiValue = ord(charToDivide) a = 0 b = 0 c = 0 passa = 0 passb = 0 passc = 0 a = random.randrange(a, asciiValue) b = random.randrange(b, asciiValue - a) c = asciiValue - (a+b) passa = random.randrange(passa, password) passb = random.randrange(passb, password-passa) passc = password - (passa+passb) if isinstance(password, str): print Please enter a number takeInput() else: a += passa b += passb c += passc splitArray.append(str(a)) splitArray.append(str(b)) splitArray.append(str(c))takeInput()f = open(myEncryptorTest, 'w')arrayDelimiterString = .encryptedString = arrayDelimiterString.join(splitArray)encryptedString = . + encryptedStringf.write(encryptedString)f.closeDecryption:#XECryption decryption program#Each character is a set of three numbers delimited by dots#Decryption works by adding these three numbers together, subtracting the ASCII for a space and using that number to decypher #the rest of the array.#User is prompted to see if the message makes sensef = open('myEncryptorTest')encryptedString = f.read()f.close()#separate sets of three numbers into an arraydef sort(): sortedCharArray = [] charBuffer = for c in encryptedString: if c == '.' and charBuffer != : sortedCharArray.append(charBuffer) charBuffer = elif c != '.': charBuffer += c #if the buffer is not empty (e.g. last number), put it on the end if charBuffer != : sortedCharArray.append(charBuffer) charBuffer = crack(sortedCharArray)#add sets of three numbers together and insert into an array decryptiondef crack(charArray): charBuffer = 0 arrayCount = 1 decypheredArray = [] for c in charArray: if arrayCount % 3 == 0: arrayCount = arrayCount + 1 charBuffer = charBuffer + int(c) decypheredArray.append(charBuffer) charBuffer = 0 else: arrayCount = arrayCount + 1 charBuffer = charBuffer + int(c) decypher(decypheredArray)#subtract ASCII value of a space, use this subtracted value as a temporary passworddef decypher(decypheredArray): space = 32 subtractedValue = 0 arrayBuffer = [] try: for c in decypheredArray: subtractedValue = c - space for c in decypheredArray: asciicharacter = c - subtractedValue arrayBuffer.append(asciicharacter) answerFromCheck = check(arrayBuffer) if answerFromCheck == y: #print value of password if user states correct decryption print Password: print subtractedValue raise StopIteration() else: arrayBuffer = [] except StopIteration: pass#does the temporary password above produce an output that makes sense?def check(arrayBuffer): decypheredText = stringArray = [] try: for c in arrayBuffer: try: stringArray.append(chr(c)) except ValueError: pass print decypheredText.join(stringArray) inputAnswer = raw_input(Is this correct?) if inputAnswer == y: return inputAnswer else: stringArray = [] return inputAnswer except StopIteration: return sort()f.close()As I say, I'm looking for advice on how to improve my code and writing code in general. I'm aware that my code is probably an affront against programmers everywhere but I want to improve. These two scripts are for the hackthissite.org Realistic 6 mission. I won't be using them for encrypting anything of great importance.
XECryption encryption and decryption script
python;cryptography
null
_cstheory.1569
The Walsh-Hadamard transform (WHT) is a generalization of the Fourier transform, and is an orthogonal transformation on a vector of real or complex numbers of dimension $d = 2^m$. The transform is popular in quantum computing, but it's been studied recently as a kind of preconditioner for random projections of high-dimensional vectors for use in the proof of the Johnson-Lindenstrauss Lemma. Its main feature is that although it's a square $d\times d$ matrix, it can be applied to a vector in time $O(d \log d)$ (rather than $d^2$) by an FFT-like method. Suppose the input vector is sparse: it has only a few nonzero entries (say $r \ll d$). Is there any way to compute the WHT in time $f(r,d)$ such that $f(d,d) = O(d \log d)$ and $f(r,d) = o(d \log d)$ for $r = o(d)$ ?Note: these requirements are merely one way of formalizing the idea that I'd like something that runs faster than $d \log d$ for small $r$.
Sparse Walsh-Hadamard Transform
ds.algorithms;linear algebra
Index the WHT rows by an integer x, for $0 \le x \lt d$. So x has log d bits. Similarly, index the columns. The (x,y) position is $(-1)^{\langle x,y \rangle}$ where the exponent is the dot product of length log d. Assume r is a power of 2, rounding up if necessary. Break the dxr matrix into rxr blocks by letting the first log r bits vary and fixing the other log(d/r) bits in each of the d/r ways. This rxr block is a smaller WHT matrix of size r, except there may be some columns missing, repeated, or negated. In any case, preprocess the vector easily then do an rxr WHT in time r log r, then repeat d/r times for total time d log r.Example:d = 4.WHT H is+++++-+-++--+--+Arbitrary set of columns is 00 and 10 (leftmost and two over from that):+++++-+-Row blocks are++++and----In each block, there are repeated columns, missing columns and, in the second block, negated columns. Preprocess a vector $(a,b)^T$ into $(a+b,0)^T$ and multiply by 2x2 WHT:+++-Then preprocess $(a,b)^T$ into $(-a-b,0)^T$ and multiply by 2x2 WHT:+++-
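A small Python 3 sketch of the blocked construction above (one reading of it, with indices split into log r low bits and log(d/r) high bits; the result is checked against a plain dense transform, and integer values keep the comparison exact):
import random

def fwht(a):
    # standard in-place fast Walsh-Hadamard transform; len(a) must be a power of two
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def sparse_wht(d, nz):
    # nz: {index: value} with r << d nonzeros; returns the length-d transform.
    r = 1
    while r < max(1, len(nz)):
        r *= 2                          # round r up to a power of two, r <= d
    r = min(r, d)
    out = [0] * d
    for h in range(d // r):             # h = fixed high bits of the output index
        w = [0] * r
        for y, v in nz.items():         # merge repeated/negated columns per block
            sign = -1 if bin(h & (y // r)).count("1") % 2 else 1
            w[y % r] += sign * v
        fwht(w)                         # one r-point transform per block: O(r log r)
        out[h * r:(h + 1) * r] = w
    return out

# sanity check against the dense O(d log d) transform
d = 64
nz = {random.randrange(d): random.randint(1, 9) for _ in range(5)}
dense = [0] * d
for y, v in nz.items():
    dense[y] = v
assert sparse_wht(d, nz) == fwht(dense)
The per-block preprocessing touches each nonzero once, so the total work is about d + (d/r) * r log r = O(d log r), matching the bound claimed above.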
_softwareengineering.147886
When programmers talk about data structures, are they only talking about abstract data types like lists, trees, hashes, graphs, etc.? Or does that term include any structure that holds data, such as composite types (class objects, structs, enums, etc.) and primitive types (boolean, int, char, etc.)?I've only ever heard programmers use the term to reference complex data structures or abstract data types, however the Wikipedia article that provides a list of data structures includes both composite types and primitive types in the definition, which is not what I expected (even though it does make sense).When looking around online I see other places that refer to the term data structure in the programming sense as only referring to abstract data types, such as this lecture from Stony Brook University's Department of Computer Science which statesA data structure is an actual implementation of a particular abstract data type.or this wikibook on data structures, which uses the term in sentences like this:Because data structures are higher-level abstractions, they present to us operations on groups of data, such as adding an item to a list, or looking up the highest-priority item in a queueSo why do I only ever hear programmers referring to complex data structures or abstract data types when they use the term data structure? Do programmers have a different definition for the term than the dictionary definition?
When programmers talk about data structures, what are they referring to?
terminology;data structures;definition
The generic definition of data structure is anything that can hold your data in a structured way, so yes this would include composite types and primitive types in addition to abstract data types. For example, a string is a data structure as it can hold a sequence of characters in a structured way.However, the term also has another meaning to programmers.Since the term data structures is so broad, developers usually use a more specific term to identify what they are talking about, such as class or data object or primitive type, and the specific term used for most complex or abstract data types is data structureThis is why you hear data structure most frequently being used for abstract data types like Arrays, Lists, Trees and Hashtables, and not for things like primitive data types
_webapps.97941
Background
I had a little startup where I purchased a domain name and connected it to a Google Drive account (for work). Using Google Drive I added all the documents related to work (i.e., accounting, etc). My start-up ended and so I want to end my ownership of that domain as well as end my Google Drive account membership (which is paid).
Problem
I already synced Google Drive on my local machine, but I noticed that all the docs that are of type Google Sheets or Google Docs are simply links to the Google doc on the cloud. (I can't access them without internet, and when I do access them it verifies that I'm logged in with the appropriate google account.) This is bad news because this means that I won't be able to access those docs once I no longer have a membership with the said Google Account.
Question
How can I permanently backup all these Google docs so that I can access them offline and without having the membership with the corresponding Google account?
How to backup .gsheet (Google Sheets) permanently before closing Google account
google spreadsheets;google drive;google apps;google documents
There are three options:
1. Share your files and folders with an account out of the domain and then copy the files from that account.
2. Download your data using the feature formerly named Google Takeout.
3. Download each file.
_unix.26073
chkconfig portreserve off
So that portreserve will not run at next boot. Can any bad things happen? I mean I think it's better to use KISS so that minimal applications will listen on 0.0.0.0.
Is it a bad idea to disable portreserve?
scientific linux
null
_webapps.16549
I have a few Chrome plugins published on the Google webstore with analytics, and I noticed today that the protocols for the URL differ depending on what I happen to have copy-pasted. Will Google Analytics detect visitors to these pages on both HTTP and HTTPS?
Do Google Analytics track both HTTPS and HTTP traffic?
google analytics
Short answer: yes.Long answer: it depends on which version of the tracking code you are using. Meaning that if you are using an older version (urchin.js), you need to copy/paste a different tracking code into your secure pages. (Here is the man page: http://www.google.com/support/analytics/bin/answer.py?answer=55483)
_reverseengineering.13611
I'm trying to reverse a fighting game to understand how the physics works. It's written in ActionScript and it's deployed with Adobe AIR. In the game directory, there's a bunch of swf files that contain graphics, UI, levels, sound effects, animations, etc but none of them contain any code when I decompile them using Sothink SWF decompiler. There's an EXE file but it looks like it's just a wrapper. There's also a Game.swz and Init.swz which I suspect contain the physics code. Opening them with Sothink throws an error that it's not a valid SWF format. Renaming them doesn't work. How can I decompile a swz file?EDIT: The duplicate question is regarding swf files. This question is regarding swz files.
How to decompile flash swz file?
decompilation;actionscript;flash;swf
null
_unix.185872
I am currently working with some SSH tunnels configured. Many times when I lose my connection to the Internet or hibernate my laptop, I need to reconfigure the tunnels (i.e., do killall ssh and then set up the tunnels once again).
What is the best way to automate this?
How can I configure Linux to reopen my SSH tunnels after the connection has been restored?
ssh;ssh tunneling
Sounds like autossh (Automatically restart SSH sessions and tunnels) could be something for you:
http://www.harding.motd.ca/autossh/
It keeps tunnels alive, and lets you administrate them in general.
It should be in most distros' base repos, so just use one of the following:
apt-get install autossh # deb
pacman -S autossh # arch
yum install autossh # rhel
_unix.112301
Is there a file type or method for using files in which certain files look and behave like symbolically linked files, but contain extra meta information for changing the data when they are read? For example:If I have a file called hello-world.txt, and I build a symbolically linked file pointing to it, ln -s /path/to/hello-world.txt /path/to/symlinkI can not change the contents of the symlink without changing hello-world.txt, as well as changing all of the other files that point to hello-world.txt. I realize that symbolically/hard linked files are suppose to remain the same(because they are the same file in a sense), but it would be great, if I could write a file that references a particular file, and then also provides change details that get applied every time the file is read. I tried to create my own file type using plain text that looked like this:/path/to/file/being/referencedchange statementchange statement...I would then give these files their own file extension and mime type, and create an intermediate program to redirect to the referenced file, and basically try to mimic the behavior of a symbolic link. The problem is, the file doesn't really reference another file, and so actually reading the file(by using the cat command, or having a c++ processor pull in the file, etc), will not get the same information back that a symbolically linked file would. I'm not sure if I can provide a hook for this kind of thing, and have a program output the correct text for whatever is reading it. Maybe something with piped files? I don't know.What would be the best way to achieve this kind of behavior on the file system? Perhaps an analogy that best describes the kind of dynamic capabilities I want files to have, is how php files work on the web. The same file can be referenced using additional parameters, and that changes the contents of a file read by the user.
Something like a symbolically linked file, but with change details attached?
linux;filesystems;files;symlink;mime types
null
_unix.302444
Let's assume you have a pipeline like the following:$ a | bIf b stops processing stdin, after a while the pipe fills up and writes on stdout from a will block (until either b starts processing again or it dies).If I wanted to avoid this, I could be tempted to use a bigger pipe (or, more simply, buffer(1)) like so:$ a | buffer | bThis would simply buy me more time, but in the end a would eventually stop.What I would love to have (for a very specific scenario that I'm addressing) is to have a leaky pipe that, when full, would drop some data (ideally, line-by-line) from the buffer to let a continue processing (as you can probably imagine, the data that flows in the pipe is expendable, i.e. having the data processed by b is less important than having a able to run without blocking).To sum it up I would love to have something like a bounded, leaky buffer:$ a | leakybuffer | bI could probably implement it quite easily in any language, I was just wondering if there's something ready to use (or something like a bash one-liner) that I'm missing.Note: in the examples I'm using regular pipes, but the question equally applies to named pipesWhile I awarded the answer below, I also decided to implement the leakybuffer command because the simple solution below had some limitations: https://github.com/CAFxX/leakybuffer
Leaky pipes in linux
linux;pipe;fifo;buffer
Easiest way would be to pipe through some program which sets nonblocking output.Here is simple perl oneliner (which you can save as leakybuffer) which does so:so your a | b becomes:a | perl -MFcntl -e \ 'fcntl STDOUT,F_SETFL,O_WRONLY|O_NONBLOCK; while (<STDIN>) { print }' | bwhat is does is read the input and write to output (same as cat(1)) but the output is nonblocking - meaning that if write fails, it will return error and lose data, but the process will continue with next line of input as we conveniently ignore the error. Process is kind-of line-buffered as you wanted, but see caveat below.you can test with for example:seq 1 500000 | perl -w -MFcntl -e \ 'fcntl STDOUT,F_SETFL,O_WRONLY|O_NONBLOCK; while (<STDIN>) { print }' | \ while read a; do echo $a; done > outputyou will get output file with lost lines (exact output depends on the speed of your shell etc.) like this:127681276912770127711277212773127775610756117561275613you see where the shell lost lines after 12773, but also an anomaly - the perl didn't have enough buffer for 12774\n but did for 1277 so it wrote just that -- and so next number 75610 does not start at the beginning of the line, making it little ugly. That could be improved upon by having perl detect when the write did not succeed completely, and then later try to flush remaining of the line while ignoring new lines coming in, but that would complicate perl script much more, so is left as an exercise for the interested reader :)Update (for binary files):If you are not processing newline terminated lines (like log files or similar), you need to change command slightly, or perl will consume large amounts of memory (depending how often newline characters appear in your input):perl -w -MFcntl -e 'fcntl STDOUT,F_SETFL,O_WRONLY|O_NONBLOCK; while (read STDIN, $_, 4096) { print }' it will work correctly for binary files too (without consuming extra memory).
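For readers more at home in Python than Perl, roughly the same trick, with the same partial-write caveat, could be written as a Python 3 script along these lines:
import fcntl, os, sys

# put stdout into non-blocking mode, then drop whatever cannot be written
fd = sys.stdout.fileno()
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

for line in sys.stdin.buffer:
    try:
        os.write(fd, line)
    except BlockingIOError:
        pass            # pipe is full: this line is dropped, like in the Perl version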
_unix.356451
You can use the -6 flag in cURL or wget to use the IPv6 address of a domain, like google.com. I was unable to fetch pages by explicitly passing an IPv6 address. I tried:
wget -6 http://[fe80::a00:27ff:fe00:80b9]:8080/
That host is definitely running a server on port 8080 over IPv6 on my local network, confirmed with netstat and ifconfig. When I run the above I get Connecting to fe80:a00:27ff:fe00:80b9:8080... failed: Invalid argument. It's obvious from the error message that the IPv6 address and port are not interpreted as I was expecting. When I google, all examples use -6 and a domain name; I couldn't find an example with an explicit IPv6 address.
wget and cURL an explicit IPv6 Address
wget;curl;ipv6
null
_webapps.52082
I invited someone to join Yahoo! groups. They emailed the -subscribe address and their request appeared in the Management section under email activity. Even though I invited them first, will they still get an email message from Yahoo! asking them to confirm their membership?
Confirmation process for Yahoo Group membership
yahoo groups
null
_cs.22849
Suppose an algorithm goes through a list of n integers and for every iteration of the loop it needs to check whether the currently evaluated element of the list is even. If it is even, return the index of the integer that is evaluated as even.
How come the algorithm would have 2n+1 comparisons?
I thought linear search would have n comparisons because it is going through n elements, plus 1 comparison for the if statement. So that would make the algorithm O(n+1) comparisons, no? Where did the extra n come from?
Pseudo-code:
procedure last_even_loc(a1,a2,...,an:integers);
location = 0;
for i = 1 to n
  if (a_i = 0) (mod 2) then location = i
return location;
Why is there a 2n+1 comparison for a linear search algorithm?
algorithms;algorithm analysis;runtime analysis
procedure last_even_loc(a1,a2,...,an:integers)
1. location = 0;
2. for i = 1 to n
3.   if (a_i = 0) (mod 2) then location = i
4. return location;
Statement 1 executes only once.
Statement 2 executes a total of n+1 times.
Statement 3 executes a total of n times.
Statement 4 executes only once.
The running time of the algorithm is the sum of the running times of all the statements executed, so the running time is 1+1+n+(n+1) = O(2n+3) = O(n).
So there are a total of (n+1)+n = 2n+1 comparisons (statements 2 and 3).
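To see the count concretely, here is a small Python 3 illustration (not part of the original answer) that runs the same procedure and tallies both kinds of comparisons:
def last_even_loc(a):
    comparisons = 0
    location = 0
    i = 1
    while True:
        comparisons += 1            # loop test "i <= n": runs n+1 times
        if i > len(a):
            break
        comparisons += 1            # parity test "a_i mod 2 == 0": runs n times
        if a[i - 1] % 2 == 0:
            location = i
        i += 1
    return location, comparisons

print(last_even_loc([3, 7, 8, 5, 10, 11]))   # (5, 13): 2*6 + 1 == 13 comparisons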
_codereview.62142
I am a beginner in Python and am currently learning about making beautiful code. I've started with a small program. Can you please review my code? def inputAndRes(): try: a = raw_input(Enter no. :- ) n = int(a) except Exception: print ----------------------------------- print Please Enter No. , ( The Number you entered is not a number ) inputAndRes() else: if(n>0): print Number is Positive if(n<0): print Number is Negative if(n==0): print Number is ZERO wantRestart()def checkYN(checkResult): if (checkResult == 'Y'): inputAndRes() if (checkResult == 'N' ): print ( !!! Program Quit !!!) else: print (Invalid Option Make sure that you type (Y / N)) print (Continue..) wantRestart()def wantRestart(): print \nWant to Restart Program, if Yes then Enter 'Y' and if No enter 'N' :- , try: checkResult = raw_input( ) except Exception as e: print (Sorry Error Occured: ,e) else: checkYN(checkResult) #...Start of Program...inputAndRes()
Determining whether a number is positive, negative or zero
python;beginner
null
_webmaster.106334
I tried searching on the internet about this question, about how Firebase handles indexing, and about how the owner of content running on Firebase hosting can influence crawling by Googlebot. Personally, I am running a website built with Polymer + Firebase as backend (so basically a SPA) with around 150 visitors per week, and my website has a common navigation with 5 links:
/price-list
/contact
/projects
/about-us
/home
Also, the projects page has nested pages:
/projects/interior
/projects/architecture
Google indexed my website far more than it should, and currently results like this appear when googling:
/projects/interior/[[id]]/[[name-of-interior-project]]
So back to my question: how can I influence Googlebot to forbid crawling specific pages? Is it enough to just upload a robots.txt or sitemap file along with the other files, like I would normally do with common hosting?
Thanks!
Firebase hosting, sitemaps and robots.txt
seo;googlebot;firebase
Yes, you can block access to specific URLs with a robots.txt rule. If the page has a real, unique URL and Google is indexing it, it doesn't matter that it's a SPA and that by default links are loaded without a page refresh.
Disallow: /projects/interior/
should match /projects/interior/[id]/[name] but not /projects/interior.
You can test robots.txt rules with Google Search Console to be sure.
_softwareengineering.68426
BACKGROUND:I develop custom WordPress plugins for my clients that they then distribute via the WordPress plugin repository. I'm increasingly running into clients who want my WordPress plugins to consume SOAP web services developed by their internal development teams (and as an aside, thus far every one of these SOAP web services have been developed using ASP.NET).From my experience, especially within the realm of WordPress plugin development, interacting with RESTful web services is almost always trivial, and they just work. From my admittedly third-hand knowledge of actually consuming SOAP web services via WordPress plugins, especially ones that are widely distributed to mostly non-technical WordPress users, embedding a SOAP client is fraught with peril as there are so many things that can cause a SOAP web service call to fail; wrong local SOAP stack, missing local SOAP stack, malformed service response, etc. etc.What I am finding is that many of the business people in decision-making positions within my (prospective) clients have little-to-no knowledge of the tangible differences between RESTful web services and SOAP-based web services. To these people a web service is a web service; it's 6 of one, 1/2 dozen of the other. They tend to think What's with all the fuss?Further the ASP.NET developers at these client, developers who have been immersed in the Visual Studio toolset have been conditioned by Microsoft's excellent developer tools marketing to see SOAP as the easy way; just add Visual Studio and the SOAP web service works like magic! And it does, at least until you try to use some other stack to access the web service and/or until you are trying to get people who are not using Visual Studio or adopt the web service; then the picture is very different.When these developers hear me advocate they implement a RESTful web service instead if I get push back I am getting one of two responses; they say:Why go to all the effort of creating a RESTful web service when I've already created a SOAP web service for you to use? You are just creating more work for me and I have other things to do.There is no benefit to RESTful web services; SOAP is actually much better because I can create an object and then I can program it just like an object. Plus SOAP is used by enterprise developers and we are an enterprise development shop; REST is just not for serious use.As an aside I think one reason I get these responses is because ASP.NET developers often have little-to-no exposure to REST (isn't this article really on the fringe for most ASP.NET developers?) I think they really don't know how little work it takes to create an HTTP GET-only RESTful web service once they already have all the code implemented for a SOAP web service. And I think this happens because Microsoft's approach is to give tools to developers so they don't feel the need to learn the details. Since Visual Studio claims to take care of so many things for developers why should a developer care to learn anything that Visual Studio claims to handle? I know that's what I thought when I used to code web sites for the Microsoft platform. 
It wasn't until I moved to PHP that I realized what HTTP headers were and that I realized the difference between a 301 and a 302 HTTP status code, and most importantly that I realized these concepts were both easy to understand and vitally important to understand if one wants to create a robust and effective site on the web.MY QUESTION:What I am asking is how do I counter these responses and get my prospective clients to consider creating a RESTful web service? How can I get them to see the many benefits that using a RESTful web service can offer them? Also how can I get them to see the large potential downside of releasing a WordPress plugin that potentially incurs a large support cost?NOTE:If you disagree with my premise that calling RESTful web services are preferable to calling SOAP web services from within a WordPress plugin then please understand that I'm asking for help from people who agree with my premise and ideally I'm not looking to debate the premise. However if you feel the need to argue then please do so in a respectful manner recognizing that we each have the right to our own opinions and that you might never be able to sway me to agree with yours. Which of course, should be okay.
Convincing a Client to Offer a RESTful Web Service instead of a SOAP Service?
rest;web services;communication;client relations
So while I tend to agree with your position, I'm still going to throw out some ideas for balance. First the issue:Why go to all the effort of creating a RESTful web service when I've already created a SOAP web service for you to use? You are just creating more work for me and I have other things to do.The problem is that they've already done work. More than likely, based on your description, these web services are WCF services. Microsoft has really taken the pain out of creating SOAP based web services, so from a development/maintenance perspective it makes sense from the client's perspective just to use the WCF stack. If they had gone through the trouble of rolling the SOAP stack themselves, you wouldn't have such a hard sell.The problem, as I see it, is that Wordpress (a PHP technology) is ill equipped to handle SOAP. SOAP is a standard non-standard adaptation of the HTTP protocol. In short, instead of your typical request and HTTP header information as part of a normal client, there is an XML body as well. That XML is usually mapped to objects and has its own header information. In short, it's not something that PHP is designed for out of the box. Are you using PHP:SOAP? Hopefully it makes using SOAP easier.However, more on practical strategy later.There is no benefit to RESTful web services; SOAP is actually much better because I can create an object and then I can program it just like an object. Plus SOAP is used by enterprise developers and we are an enterprise development shop; REST is just not for serious use.This is actually much easier to deal with. In short, most web services I've seen have very simple request models. All you need to pass in is an ID and some authentication token. That's the trivial type of interaction that RESTful web services thrive on. You can return an XML or JSON bound representation of your object quite easily. In fact, the MS stack has the binding logic for both of those. Very rarely does a client need to send complex hierarchical data to the server.Practical ways to make both of you happyI know it may sound silly, but have you considered a web service wrapper? Something that translates the REST calls you need to keep Wordpress happy into the SOAP based calls that makes your client's WCF services happy? That might be the most peaceful way of dealing with it. Having done RESTful web services using ASP.NET MVC, it would be trivial to keep things in the Microsoft stack where you need it, and perform the translation to the PHP stack in a sane manner.
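To make the wrapper idea concrete, purely as an illustration (the answer itself suggests staying on the Microsoft stack with ASP.NET MVC, and the WSDL URL and operation name below are invented), a thin REST front end over an existing SOAP service could look roughly like this in Python with Flask and the zeep SOAP client:
# Hypothetical REST-to-SOAP shim: Flask exposes a simple GET endpoint and
# forwards the call to an existing SOAP service via the zeep client.
from flask import Flask, jsonify
import zeep

app = Flask(__name__)
soap = zeep.Client("https://intranet.example.com/ItemService.svc?wsdl")  # invented URL

@app.route("/items/<int:item_id>")
def get_item(item_id):
    result = soap.service.GetItem(itemId=item_id)   # invented operation name
    return jsonify({"id": item_id, "name": result.Name, "price": float(result.Price)})

if __name__ == "__main__":
    app.run()
The point is only the shape of the translation layer: the Wordpress plugin keeps talking plain HTTP GET and JSON, while the SOAP stack stays wherever it is already well supported.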
_unix.327539
I saw a person that while using their terminal, it output a joke and changed the colors and laughed at the user. It said something along the lines of leaking colors into the console since (year). I don't remember what it was, but I'd like to use it because the Kubuntu Konsole gets very boring after long hours of use, and I'd like to liven things up a bit. Any ideas on what it is/where I can get it?What I'm looking for is something that does it automatically -- without specific input from the user to run a script or command (or even a command run at startup). The thing I'm looking for changed the color themes of the shell at random intervals and joked about the color change. Perhaps it was just a different terminal program (I don't know if that is the right term) than Konsole that is built into Kubuntu.
Random colors and jokes in the shell/terminal
shell;terminal;colors;console;random
null
_unix.305079
I have a Kortek touch screen that is exposed to the system, as shown in /proc/bus/input/devices, via two drivers :hid-generic and hid-multitouchI do not want hid-multitouch driver to expose Kortek touchscreen meaning I want to disable Kortek from hid-multitouch.Is there a way I can do this ?can I use quirks ? and if yes, how ?
How to disable a device (hardware) from using hid-multitouch driver?
modprobe;hid
null
_cs.70360
Let $M$ be a square matrix and $S(M) = \sum_{i<j} m_{i,j}$ the sum of the elements in the upper triangular part of $M$. Is there an efficient algorithm to find a permutation matrix $A$ that minimizes $S(A M A^{\top})$
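For small n a brute-force reference makes the objective concrete (a sketch, not an efficient algorithm): with A having its 1s at positions (i, p(i)), the conjugated matrix satisfies (A M A^T)[i][j] = M[p(i)][p(j)], so one simply minimises the upper-triangular sum over all permutations p.
from itertools import permutations

def upper_sum(M, p):
    n = len(M)
    return sum(M[p[i]][p[j]] for i in range(n) for j in range(i + 1, n))

def best_permutation(M):
    n = len(M)
    return min(permutations(range(n)), key=lambda p: upper_sum(M, p))

M = [[0, 5, -2],
     [1, 0, 3],
     [4, -1, 0]]
p = best_permutation(M)
print(p, upper_sum(M, p))   # one optimal permutation and its upper-triangular sum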
Optimal permutation of matrix rows and columns
sorting;permutations
null
_unix.361065
Yesterday I was trying to boot into Debian, but I got stuck in a login loop. I would be able to log in, and the desktop screen would be shown for five seconds, after which it would log me out and show the login screen again. Since yesterday I have been researching the issue and have found some possible solutions, but none seem to be working. I am running Debian Jessie with Gnome. Many of the proposed solutions are about ownership of certain files in the home directory, but this is not the case here, because I checked everything. Also, problems with .xinitrc, .xsession or .Xauthority are not the issue.
So after trying all the above, I tried starting gdm3 as root in a tty and it gave me an error message. It is saying:
i915 firmware: failed to load i915:skl_dmc_ver1.bin (-2)
i915 firmware: direct firmware load for i915:skl_dmc_ver1.bin failed with error -2
Failed to load DMC firmware [https://01.org/linuxgraphics/intel-linux-graphics-firmwares], disabling runtime power management
So I guess something is going wrong with the Intel graphics driver? Or at least I think so? I am not really an expert on all this stuff. Does anyone have an idea what might cause this? Or how I could solve it? Or what steps to take next? Everything used to work great for several months and I guess I did something wrong the day before yesterday, but for the love of Moore I can't seem to remember what I was doing then, nothing special at least.
Stuck in login loop (Debian)
debian;gnome;login;gdm3
null
_unix.123706
While running Gromacs benchmarks in different setups (intra-node vs 2, 3, and 4 nodes connected with Infiniband), we have noticed severe performance degradation.In order to investigate, we have created a test program that uses MPI_Alltoall() to transfer data packages of various sizes (4 bytes to 2 MB) among all nodes.Both the internal timings of the test program and statistics gathered from IntelMPI's I_MPI_STATS facility show the same pattern: for small amounts of data, the resulting bandwidth is acceptable, whereas for larger payloads, behaviour becomes erratic: some of the transfers take extremely long (about 2.15 seconds), so average performance collapses. These very long delays seem to be occurring stochastically, so they may be absent or small sample sizes (e.g. 100 transfers per payload size). Here is some sample data taken with 4 nodes at 1000 transfers per size:# Message size Call count Min time Avr time Max time Total timeAlltoall1 2097152 1000 5649.09 13420.98 2152225.97 13420980.692 1048576 1000 2874.85 13000.87 2151684.05 13000867.133 524288 1000 1404.05 8484.15 2149509.91 8484153.994 262144 1000 719.07 5308.87 2148617.98 5308866.745 131072 1000 364.78 9223.77 2148303.99 9223767.046 65536 1000 206.95 5124.41 2147943.97 5124409.447 32768 1000 120.88 12562.09 2147678.85 12562089.688 16384 1000 36.00 57.03 93.94 57034.259 8192 1000 22.89 34.80 103.00 34803.87We are using QDR Infiniband via an unmanaged switch, and IntelMPI 4.0.3.I have tried to check with MPI out of the way by setting up a ring-like transfer (node1 -> node2 -> node3 -> node4 -> node1) with ib_send_bw, but did not observe any problematic behaviour:#bytes #iterations BW peak[MB/sec] BW average[MB/sec]16384 10000 1202.93 1202.9132768 10000 1408.94 1367.4665536 10000 1196.71 1195.85131072 10000 1195.68 1180.71262144 10000 1197.27 1167.45524288 10000 1162.94 1154.151048576 10000 1184.48 1151.312097152 10000 1163.39 1143.604194304 10000 1157.77 1141.848388608 10000 1141.23 1138.36My question: is there any way to look deeper into this to find out what the root cause of the problem is? I have already looked through the IntelMPI reference manual, but not seen anything helpful except for I_MPI_STATS.
How to debug MPI timing problems
mpi
null
_unix.166873
Example:This is {the multilinetext file }that wants{ to bechanged} anyway.Should become:This is that wants anyway.I have found some similar threads in the forum, but they don't seem to work with multi-line curly brackets.If possible, I would prefer some one-line method, like solutions based on grep, sed, awk... etc.EDIT: Solutions seem to be OK, but I have noticed that my original files include curly brackets nesting. So I am opening a new question. Thanks you everybody: How can I delete all text between nested curly brackets in a multiline text file?
How can I delete all text between curly brackets in a multiline text file?
text processing;sed;awk;grep
$ sed ':again;$!N;$!b again; s/{[^}]*}//g' fileThis is that wants anyway.Explanation::again;$!N;$!b again;This reads the whole file into the pattern space.:again is a label. N reads in the next line. $!b again branches back to the again label on the condition that this is not the last line.s/{[^}]*}//gThis removes all expressions in braces.On Mac OSX, try:sed -e ':again' -e N -e '$!b again' -e 's/{[^}]*}//g' fileNested BracesLet's take this as a test file with lots of nested braces:a{b{c}d}e1{2}3{}5Here is a modification to handle nested braces:$ sed ':again;$!N;$!b again; :b; s/{[^{}]*}//g; t b' file2ae135Explanation::again;$!N;$!b againThis is the same as before: it reads in the whole file.:bThis defines a label b.s/{[^{}]*}//gThis removes text in braces as long as the text contains no inner braces.t bIf the above substitute command resulted in a change, jump back to label b. In this way, the substitute command is repeated until all brace-groups are removed.
_unix.16133
Backstory: Recently, it was explained to me that to upgrade any package via terminal on a linux machine, I will need to use the distributions package management system to install or upgrade the package.Since I use CentOS, which is a RHEL variant, the rpm command will need to be executed in terminal to accomplish this (I believe so). Therefore if I need to upgrade or install a package I will first use the wget command to download the package and then the rpm command to install it. The process is clear till here!The actual question: However, to use the wget command to download the package, I will need a url that points to the package. How should I find this url? My personal research has shown that there are sites like rpm.pbone.net (the only one I know off) to search for these packages. A search for 'firefox' (selecting search for rpms by name) as the keyword has given results for a whole lot of distributions. CentOS 5 is listed on page 3 but the latest version seems to be 3.6.18. Given that version 5.0.2 is available for Fedora (another RHEL variant), where is the latest version of firefox for CentOS? I am unsure which package should I download to upgrade firefox.Plea: It'll be great if someone can point out how should I go about searching for packages to install on CentOS. Is there an official site for CentOS packages similar to rpm.pbone.net (which I believe is unofficial). I am currently, just for practice, searching for the mozilla firefox and vlc's latest releases.
How should I search for packages to install on CentOS 5.5?
centos;install;package management;rpm
Since I use CentOS, which is a RHEL variant, the rpm command will need to be executed in terminal to accomplish this (I believe so)While RPM is used to work with the actual packages, RHEL and friends now use yum to make it less tedious.Yum lets you install software through repositories, local or remote collections of RPM packages and index files, and handles dependency resolution and the actual fetching & install of the files for you.You can find the list of repositories configured on your machine by peeking in the /etc/yum.repos.d/ directory.However, to use the wget command to download the package, I will need a url that points to the package. How should I find this url?By finding the appropriate .rpm file and downloading it? Or perhaps I don't understand what your question is. Regardless, if you're grabbing RPM files from somewhere on the internet, they're probably going to also have a yum repo set up, in which case it would be far more prudent to actually install their repo package first.Hilariously, you do this by downloading and installing an RPM file.My personal research has shown that there are sites like rpm.pbone.net (the only one I know off) to search for these packagesWhile that site lets you search many known RPM packages, and you might find some handy bits and pieces there, I wouldn't try using it for things you care deeply about.EPEL is a handy repository.You can also take a peek at atrpms and RPMForge, though use them with caution. They are sometimes known to offer package replacements that may end up causing the worst sort of dependency hell ever experienced. It took me a few weeks to sort out a mess that someone made with clamav.If you use either of those repositories, please consider setting their enabled flag to 0 in their config files in /etc/yum.repos.d/ and using the --enablerepo=... command line switch to yum.Given that version 5.0.2 is available for Fedora (another RHEL variant), where is the latest version of firefox for CentOS?There are two bad assumptions here.First, you have the Fedora/RHEL relationship reversed. RHEL is generally based on Fedora, not the other way around. RHEL 5 is similar to Fedora 6. Any packages built for Fedora 6 have a high chance of operating on RHEL 5. However, Fedora is bleeding edge, and releases have a 12-month lifespan. Nobody is building packages for Fedora 6 any longer, it went end of life back in 2007ish.Second, if you're trying to use CentOS 5 as a desktop OS in this day and age, you're insane. It's prehistoric. In fact, for a while modern Firefox versions wouldn't even run on CentOS 5 because of an outdated library. That's now resolved. Mozilla provides official (non-RPM) builds suitable for local installation and execution that you can use instead. Just head over to http://getfirefox.com/ for the download.CentOS, being based on RHEL, inherits RHEL's packaging policy. RHEL never moves to newer non-bugfix versions of anything, as their goal is general stability. For example, CentOS 5 will be stuck with PHP 5.1, PostgreSQL 8.1, Perl 5.8 and Python 2.4 forever. RHEL sometimes provides newly named packages with newer versions, like python26 and php53 so that system administrators that expressly want new versions can kind of have access to them.I am unsure which package should I download to upgrade firefox.You almost certainly will not find such a package. 
If you want FF5 on CentOS 5, you should probably do a local installation of the official binaries from Mozilla.I am currently, just for practice, searching for the mozilla firefox and vlc's latest releases.atrpms currently seems to offer vlc. (I would not recommend simply grabbing the RPM from that page and installing it, but using yum to install it from the atrpms repo.) The official VLC RHEL download page recommends RPMForge instead, though they're shipping an older version there. Yes, that means that both of them offer vlc. Remember how I recommended setting enabled to 0? Yeah, this is why.I want to take a moment to re-emphasize that you should not try using CentOS 5 as a desktop OS right now. Red Hat's update policies indicate that RHEL 5 will stop getting non-bugfix updates at the end of the year, and stop getting anything but security and critical bug fixes at the end of next year. It'd basically be like installing XP on a new machine. RHEL 6 has been out for a while. The CentOS folks had to completely redo their build environment in order to accommodate it. Apparently the CentOS 6 images are being distributed to mirrors now, or so their QA calendar suggests. We'll see. Regardless, it would be a slightly better idea for a new installation today, if you expect the machine to have a long life in production.On the other hand, if you're seriously looking at Linux on the desktop, consider a distribution that keeps itself up to date with modern software, like Fedora itself or even something Debian-based like Ubuntu. Ubuntu has a lot of mindshare in desktop installs, and it seems like apt repositories (apt is their yum-like tool) are far, far more easily found than yum repositories.
_webapps.35339
I love Reddit's interface, I'd like to use Reddit to access the feeds that I normally use Google Reader for. How can I treat arbitrary RSS feeds, such as BBC News and ycombinator, like subreddits?
How can I use Reddit as a traditional RSS feed aggregator?
rss;reddit
null
_cs.44250
Recently I was faced with the following Graph traversal problem:Given an arrangement of buildings in form of a DAG. All the buildings have to be colored, but there is an order for that represented by edges in the DAG. If there is an edge from A->B, this would mean building B can only be colored after building 'A' has been colored. Also given is an array which provides the initial cost of coloring each building (vertex) in the graph. The actual cost of coloring a building 'v' is I(v) * C(v), where 'I(v)' is the number of buildings colored before 'v' plus '1' and C(v) is its initial cost of coloring provided in the array. Find the minimum actual cost of coloring all the buildings in the graph.If there an is an edge from A->B, building B can be colored only after building A has been colored. Of course, there can be multiple routes to B in the graph, for instance from A->B and C->B. In this case if the provided array is (2,1,3) for coloring (A, B, C), and we choose to color building in the order of (A, B, c) then total cost = 1*C(A) + 2*C(B) + 3*C(C) = 1*2 + 2*1 + 3*3 = 13. However we cannot have an ordering where B is colored first.I came up with a brute force solution to this problem. Topological sorting might help, but I couldn't really figure out how. Any suggestions?
Traversing a graph with respect to some partial order
algorithms;graphs;graph traversal
This can be formulated as an instance of integer linear programming, and then handed to an ILP solver. You could try that to see if it leads to any speedups.The formulation: introduce zero-or-one variables $x_{v,t}$, where $x_{v,t}=1$ means that vertex $v$ was colored at time $t$. (Here $v$ ranges over vertices of the graph, and $t$ ranges over $1,2,\dots,n$, where $n$ is the number of vertices.) The precedence constraints mean that for every edge $v\to w$, we introduce an inequality$$x_{w,t} \le x_{v,1}+x_{v,2}+\dots + x_{v,t-1}.$$Also we require that every vertex be colored exactly once: $x_{v,1}+\dots+x_{v,n}=1$ for each $v \in V$, and you can only color one vertex at each time instant: $\sum_{v \in V} x_{v,t}=1$, for each $t=1,\dots,n$. Now the goal is to minimize the objective function$$\Phi = \sum_{v,t} t C(v) x_{v,t},$$which is a linear function of the variables. So, this is an ILP instance and can be fed to an off-the-shelf ILP solver.ILP solvers incorporate a number of clever heuristics. If you're lucky, it's possible that one of them might help solve the problem faster than brute-force enumerating all valid topological sorts of the graph.
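Spelled out with an off-the-shelf solver front end, the formulation could look like the sketch below (PuLP with its bundled CBC solver is just one choice standing in for "an ILP solver"; the small example graph is the one from the question, with A, B, C numbered 0, 1, 2):
import pulp

def min_coloring_cost(n, edges, cost):
    # edges: (a, b) pairs meaning vertex a must be colored before vertex b.
    V, T = range(n), range(1, n + 1)
    x = {(v, t): pulp.LpVariable(f"x_{v}_{t}", cat="Binary") for v in V for t in T}
    prob = pulp.LpProblem("dag_coloring", pulp.LpMinimize)
    prob += pulp.lpSum(t * cost[v] * x[v, t] for v in V for t in T)   # objective
    for v in V:                                  # each vertex colored exactly once
        prob += pulp.lpSum(x[v, t] for t in T) == 1
    for t in T:                                  # one vertex per time slot
        prob += pulp.lpSum(x[v, t] for v in V) == 1
    for a, b in edges:                           # precedence constraints
        for t in T:
            prob += x[b, t] <= pulp.lpSum(x[a, s] for s in T if s < t)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    order = sorted(V, key=lambda v: sum(t for t in T if x[v, t].value() == 1))
    return pulp.value(prob.objective), order

# A->B and C->B with initial costs (2, 1, 3).
print(min_coloring_cost(3, [(0, 1), (2, 1)], [2, 1, 3]))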
_unix.61765
I'm working with small network and I want to start network explorer from a terminal. When I tried to type xdg-open network:///server it opened google chrome and did nothing. I also tried to type smb://server but it hasn't helped me. I really need to run it from terminal. Does anybody know how can I do it?
Using xdg-open for accessing network with normal explorer
networking;terminal;open files;protocols
null
_unix.67888
Here's my scenario:
Setup
There are 3 machines:
A: on the internet : has ip (a.a.a.a), has port pa open
B: my server / gateway : has ip (b.b.b.b), has port pb open
C: on the internet : has ip (c.c.c.c), has port pc open
Constraints
The owner of machine A offers a service via port pa that must be accessed on machine C via port pc. The problem is, the owner of A can only allow a direct connection with my server, machine B, on port pb.
Note that A and C are on the internet, so in effect I have to act as a gateway between two machines on the internet (the literature I've found in most firewall docs concerns acting as a gateway between the internet and your local network).
Requirements
My task is to make sure I give machine C the service offered by A via my server B, in such a way that traffic from A:pa ends up on C:pc and traffic from C:pc ends up on A:pa.
So, how can I achieve this, say using iptables or another Linux / Unix utility? Is it even possible?
Hypothetical Solution:
Here's an idea I have in mind, but I am not sure it's legit or makes sense:
iptables -t nat -A PREROUTING -p tcp --source a.a.a.a --source-port pa \
  --destination b.b.b.b --destination-port pb -j DNAT --to-destination c.c.c.c:pc
and
iptables -t nat -A PREROUTING -p tcp --source c.c.c.c --source-port pc \
  --destination b.b.b.b --destination-port pb -j DNAT --to-destination a.a.a.a:pa
null
_unix.310298
Where is the GNU checksum file format defined?I don't see any mention of the checksum file format at the GNU documentation website
Where is the GNU checksum file format defined?
gnu;hashsum;checksum
The documentation for the sha2 utilities points to the documentation for md5sum which saysFor each file, md5sum outputs by default, the MD5 checksum, a space, a flag indicating binary or text input mode, and the file name. Binary mode is indicated with *, text mode with (space). Binary mode is the default on systems where its significant, otherwise text mode is the default. If file contains a backslash or newline, the line is started with a backslash, and each problematic character in the file name is escaped with a backslash, making the output unambiguous even in the presence of arbitrary file names.The checksum files are simply the output of the corresponding utilities, so the above documents their format too.
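Based purely on that description, a minimal Python 3 checker for such a file could look like this (it handles only the common unescaped case; the backslash escaping of awkward file names described above is ignored, and the listing name is made up):
import hashlib

def verify_sha256sums(listing_path):
    # Each line: <hex digest><space><'*' for binary or ' ' for text><file name>
    with open(listing_path, "r", encoding="utf-8") as listing:
        for line in listing:
            line = line.rstrip("\n")
            if not line:
                continue
            digest, rest = line.split(" ", 1)
            mode, name = rest[0], rest[1:]        # mode is '*' or ' '
            h = hashlib.sha256()
            with open(name, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            status = "OK" if h.hexdigest() == digest else "FAILED"
            print(f"{name}: {status}")

verify_sha256sums("SHA256SUMS")   # hypothetical listing file name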
_softwareengineering.104361
I'm in the early stages of the design of a system that will essentially be split into two parts. One part is a service and the other is an interface, with the service providing data through something like OData or XML. The application will be based on the MVC architectural pattern. For the views, we are considering using either XSLT or Razor under ASP.NET.
XSLT or Razor would help to provide a separation of concerns where the original XML or response represents your model and the XSLT or 'Razor view' represents your view. I'll leave the controller out for this example. The initial design proposal recommends XSLT; however, I suggested the use of Razor instead as a more friendly view engine.
These are the reasons I suggested for Razor (C#):
- Easier to work with and build more complicated pages.
- Can easily produce non-*ML output, e.g. csv, txt, fdf
- Less verbose templates
- The view model is strongly typed, where XSLT would need to rely on convention, e.g. boolean or date values
- Markup is more approachable, e.g. nbsp, newline normalization, attribute value normalization, whitespace rules
- Built-in HTML helper can generate JS validation code based on DTO attributes
- Built-in HTML helper can generate links to actions
And the arguments for XSLT over Razor were:
- XSLT is a standard and will still exist many years into the future.
- It is hard to accidentally move logic into the view
- Easier for non-programmers (which I don't agree with).
- It's been successful in some of our past projects.
- Data values are HTML-encoded by default
- Always well formed
So I'm looking for arguments on either side, recommendations, or any experience making a similar choice?
Is Razor or XSLT better for my project?
c#;asp.net mvc;xslt;razor
I HAVE successfully used XSLT as a web presentation tier... in 1999. In the last 12 years, much better options have come along. Do yourself a big favor, and use Razor. It's a pleasure.
_cogsci.7604
I'm a computer science major. Currently I'm working on a project for which I want to expose users to different levels of information depending on their distance from displays.Anyone who ever saw a control room of a power plant or the management software of brokers knows that the persons are bombarded with information (directly like numbers, and aggregated like graphs). I think that it might be a good idea to reduce the shown information depending on the distance and the direction one is looking.I'm pretty sure that psychologists have already done research in that regard, but I'm totally unable to find any paper on that subject.Is there a specific scientific term for this kind of information flood?
What is the scientific term for information overload?
cognitive psychology;terminology;attention;human factors
If what you are seeking is how to present material so that cognitive overload does not occur, you are in the realm of learning theory.[1]
Cognitive load theory and schema (learning) theory go hand in hand. Schemas are frameworks of information (like a steel-framed skyscraper in your mind); they start as very basic (This is a cell) and become more complex and facile (NADH-Q oxidoreductase, Q-cytochrome c oxidoreductase, and cytochrome c oxidase are mitochondrial transmembranous enzyme complexes responsible for oxidative phosphorylation, etc.) They allow (and form) Long Term Memory (LTM). We need a framework (cell) into which we can stick a fact before we can remember it for more than a very few minutes. The more we know about something (the better our schemas are), the more easily we learn. Working Memory (WM) allows us to process what we are exposed to and place it into a schema so that we can remember it. Like a computer, we have limited WM (processing ability) available to us at any given time. Efficient processing results in placing material into a schema, which then facilitates Long Term Memory (LTM). Inefficient processing results in an inability to understand what one was just exposed to. Failed schema identification leads to an inability to use the information.
Where does cognitive load come in? Cognitive load takes up processing speed (reducing WM). If cognitive load is great enough, all WM is used up, and we will be unable to identify/form a schema. There are several types of cognitive load: intrinsic (how complex the information is), extrinsic/ineffective (a bunch of things including distractions, emotionally demanding states [e.g. stress], and especially the way in which material is presented, e.g. inducing splitting of attention, etc.) and germane (what's left over to actually form schemas). They are (kind of) additive. Good schemas reduce cognitive load (increasing WM).
The linked site presents different models of presenting information that promote schema formation, identification and processing in different situations, and links to further work.
[1] Schema Theory and Cognitive Load Theory
_unix.159471
I'm using vim and have installed Pathogen and vim-fugative, but when editing a file under source control on github.com I use the command :Gbrowse and nothing happens.I've tried most of the advice I can find on the web about setting up particular browsers on the web but none of it worked. So I tried looking at the script that was trying to start up the browser (so I have removed references about the browser from .gitconfig).So I found the shell script: git-web--browse and modified it to generate logging statements to a log file (to see if I could work out what was going on).But when I use :Gbrowse I get no logging statements (so it looks like vim is not even calling this script).Any ideas what I'm doing wrong?InfoSystem Mac OS X (10.9.5)$ uname -aDarwin Martins-MacBook-Pro.local 13.4.0 Darwin Kernel Version 13.4.0: Sun Aug 17 19:50:11 PDT 2014; root:xnu-2422.115.4~1/RELEASE_X86_64 x86_64$ which vim/usr/bin/vim$ vim --versionVIM - Vi IMproved 7.3 (2010 Aug 15, compiled Aug 24 2013 18:58:47)Compiled by [email protected] version without GUI. Features included (+) or not (-):-arabic +autocmd -balloon_eval -browse +builtin_terms +byte_offset +cindent-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments-conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs-dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi +file_in_path+find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv+insert_expand +jumplist -keymap -langmap +libcall +linebreak +lispindent+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape-mouse_dec -mouse_gpm -mouse_jsbterm -mouse_netterm -mouse_sysmouse+mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg -osfiletype+path_extra -perl +persistent_undo +postscript +printer -profile +python/dyn-python3 +quickfix +reltime -rightleft +ruby/dyn +scrollbind +signs+smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary+tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo+vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: $VIM/vimrc user vimrc file: $HOME/.vimrc user exrc file: $HOME/.exrc fall-back for $VIM: /usr/share/vimCompilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipeLinking: gcc -arch i386 -arch x86_64 -o vim -lncursesI have installed Pathogen and, in ~/.vimrc: 19 20 21 execute pathogen#infect()In ~/.vim/bundle:$ ls -l ~/.vim/bundle/total 0drwxr-xr-x 13 Loki staff 442 Oct 4 15:18 gundo.vimdrwxr-xr-x 9 Loki staff 306 Oct 4 14:08 tabulardrwxr-xr-x 8 Loki staff 272 Oct 26 2012 vim-colors-solarizeddrwxr-xr-x 8 Loki staff 272 Oct 4 17:59 vim-fugitivedrwxr-xr-x 7 Loki staff 238 Oct 4 13:43 vim-unimpaired
Gbrowse not working correctly on MAC
vim;git;github
null
_softwareengineering.166618
Our T-SQL developer just gave his two weeks' notice. We have been asked if our team of four developers would like an additional developer. We have been offered the choice of either doing our own T-SQL / Entity Framework development, or getting another dedicated T-SQL developer. What are the pros and cons of having a dedicated T-SQL developer on the team? Which would you prefer, and why?
What are the pros and cons for having a dedicated T-SQL developer on your team?
team;tsql
If you have at least one person on your team already who can:

- Use a database profiler to get accurate useful metrics
- Analyze an execution plan and know how to affect said execution plan
- Correctly name the tradeoffs between having indexes/not and the effects of adding/removing columns from said index.
- Define a clustered index
- Dictate from memory what normalized and denormalized structures are and what the levels of normalization are for.

You might already have another t-sql developer who can keep the team at least in line. My list isn't comprehensive but if those things are there he can probably keep you guys out of trouble.

If you don't have someone on your team who can do those things and your team does have their hands in the database, you need someone who can do those things. Unless you're completely unconcerned with the quality of the database (a valid stance if the system is tiny enough, read: under 15 tables perhaps.)
_unix.234182
I am trying to print something into a file through a script; the file will then be sent as an attachment to a mail. What I want is to preserve proper formatting within the file. Can I print while maintaining a tabular format, similar to what we do with HTML tables? Suppose, in a loop, the code is appending something like:

    print |App Name:$1\tRegion:$2\tEnvironment:$3| >> file_attch.dat

Now the argument size can vary: say $1 can be 7-10 characters, $2 is 5-15 characters and $3 is 10-20 characters. Printing successive lines of varying length makes the attachment look odd. Can I do something to reserve the first 10 characters for $1, 15 characters for $2 and, say, 20 characters for $3? Some kind of formatting within print which would serve the purpose.

UPDATE: Maybe I didn't frame the question properly. Here is an example:

    App:PROPIA Region:silo2 Env:INT
    App:SRO Region:silo3 Env:SYS

In the above case, as PROPIA and SRO vary in length, the tabular format is not preserved. I wanted something like below:

    App:PROPIA   Region:silo2   Env:INT
    App:SRO      Region:silo3   Env:SYS

Something like reserving 10 characters for the 1st argument and the next 10 for the 2nd argument, irrespective of the length of the actual values.
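A minimal sketch of the fixed-width idea being asked about, shown in Python purely for illustration (the column widths, sample values and output file name are made up to match the question, not taken from any real system): printf-style width specifiers such as %-10s left-justify a value and pad it to a fixed number of characters, which is what keeps the columns aligned regardless of how long each argument is. The shell's own printf (and awk's printf) accept the same specifiers, so the same format string carries over.

    # Illustration only: pad each field to a fixed width so the columns line up.
    rows = [
        ("PROPIA", "silo2", "INT"),   # sample values from the question
        ("SRO", "silo3", "SYS"),
    ]
    with open("file_attch.dat", "w") as out:
        for app, region, env in rows:
            # %-10s, %-15s and %-20s reserve 10, 15 and 20 characters per field,
            # left-justified; shorter values are padded with spaces.
            out.write("App:%-10sRegion:%-15sEnv:%-20s\n" % (app, region, env))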
Logging to a file in tabular format
shell script;scripting;logs;printf
null
_unix.311268
I was reading through the original Bourne shell signal handling implementation and noticed an expression inside a comment was quoted this way:

    /* `stakbot' is preserved by this routine */
       ^^^^^^^^

Also, the zsh user guide has

    The command `bye' is identical to `exit'

and IIRC the zsh man pages use this same notation. Why? Is it to prevent the code from accidentally being interpreted as a backtick expression and executed, or is it simply a convention?
What is the reason to quote executable code with `.........'?
shell;c
That's just using the backtick as an opening quote; it's the equivalent of

    ‘stakbot’ is preserved by this routine

and

    The command ‘bye’ is identical to ‘exit’

using only ASCII characters.
_cs.19755
I'm trying to write an algorithm that detects the most common subset of at least size $k$ from a collection of sets. If there are ties for the most common subset, I want the one whose size is as large as possible. For example, if I have:

    s1 = {A, B, C}
    s2 = {A, B, C, D}
    s3 = {B, C, D}

then the most common subset of size $\ge k=2$ is {B, C}. As another example, if I have:

    s1 = {A, B, C, D}
    s2 = {A, B, C, D}
    s3 = {B, C, D}

then the most common subset of size $\ge k=2$ is {B, C, D}. It's important that in this instance the algorithm would give me {B, C, D} and not {B, C}, {B, D} etc. Note that I'm not interested in the longest common subset (a different problem); I'm interested in the longest most common subset, if you will. I also don't care about enumerating all the different subsets, I just want to find the most common. Is there an efficient algorithm for this problem?

I have an algorithm for this problem, but I don't think it's very efficient. For $k=2$ I enumerate all subsets of size 2 and count how many times each one appears in the collection. If the most frequently occurring pair occurs more often than any other pair, then that must be the most common subset. If there is more than one with the same (maximum) frequency, then I look at the sets they are contained in. If these overlap exactly, then I take the union of the pairs and that gives me the most common subset (with size > 2). I think this could be related to the maximum clique problem but I'm not certain.

Note that just taking the intersection does not give the correct answer. For instance, if I have

    s1 = {A, B}
    s2 = {C, D}
    s3 = {A, C, D}

then the intersection is the empty set, but the most common subset is {C, D}.
Most common subset of size $k$
algorithms;graphs;sets;data mining
This problem is known as frequent itemset mining, or more precisely, maximal itemset mining. You can take a look at the Apriori algorithm and the FP-Growth algorithm.
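As a concrete reference point for what "most common subset, ties broken by size" means computationally, here is a plain brute-force sketch in Python. It is only an illustration of the counting idea from the question, not an implementation of Apriori or FP-Growth, and it enumerates every candidate subset, so it is exponential in the set sizes and only usable on small examples.

    from itertools import combinations

    def most_common_subset(sets, k):
        # Candidate subsets: every subset of size >= k of some input set.
        candidates = set()
        for s in sets:
            for size in range(k, len(s) + 1):
                candidates.update(combinations(sorted(s), size))
        # Support of a candidate = number of input sets containing it.
        def support(c):
            return sum(1 for s in sets if set(c) <= s)
        # Highest support wins; among ties, prefer the larger subset.
        return max(candidates, key=lambda c: (support(c), len(c)))

    sets = [{"A", "B", "C", "D"}, {"A", "B", "C", "D"}, {"B", "C", "D"}]
    print(most_common_subset(sets, 2))   # ('B', 'C', 'D')

Apriori and FP-Growth get their efficiency from not enumerating all of these candidates; Apriori, for example, only generates a larger candidate itemset if all of its smaller sub-itemsets are already known to be frequent.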
_webapps.9971
I would like to watch BBC programmes from abroad. Does anybody have a step-by-step process for how to accomplish just that? I heard it will be possible to do this, for a fee, some time in the second half of 2011.

Additional edit: I suppose if I set a custom proxy for my browser connection I could watch it, right? Any reliable/safe/free proxy I could use?
How to watch BBC iPlayer outside UK
bbc;iplayer
From my understanding, the restriction on video sites such as the BBC is based on the IP address you are connecting from. The only way around this is to connect from a UK IP address using a VPN or similar. If you are leaving a machine at home, you can install OpenVPN or similar. It may cause an increase in latency and a decrease in bandwidth, but you might still be able to watch the video.
_codereview.138032
This is my first implementation of the binary search algorithm. I have tested it and so far it is okay. But I need an experienced eye to review the code and recommend best practices.

    bool search(int value, int values[], int n) {
        bool x = false;
        int *max;
        int *min;
        int *mid;
        min = values;
        max = min + n - 1;
        mid = min + n / 2;
        while (true) {
            if ((value == *mid || value == *min || value == *max)) {
                x = true;
                break;
            } else if (max - min <= 1) {
                x = false;
                break;
            } else if (value < *mid) {
                n = n / 2;
                mid = min + n / 2;
                max = min + n - 1;
            } else if (value > *mid) {
                n = n / 2;
                mid = max - n / 2;
                min = max - n - 1;
            }
        }
        return x;
    }
Implementation of the binary search algorithm
algorithm;c;pointers;binary search
Variable names

The name x is a very bad name in this case. You should change it to found or something. Even better, instead of

    x = true;
    break;

just do:

    return true;

and change the last return x; to return false;. It is always good to eliminate variables if you can.

Don't modify parameters/arguments

It is confusing when you change the value of the input parameters, for example with n = n/2;. This makes the code hard to follow. Instead choose mid as mid = min + (max - min + 1)/2. This makes your intent much clearer.

Avoid while(true) when possible

Sometimes it is impossible to avoid an infinite loop such as while(true), but this is not the case here. You should avoid formulating infinite loops like this because it makes it harder to analyze the termination condition of the loop and verify that it will terminate.

    while(true)

should be:

    while(max - min > 0)

Simplify branches by realising invariants

Realize that mid is always defined by the min and max variables, hence we can define it inside of the loop as a temporary. By doing so, and keeping in mind that the array is sorted, we can simplify the code as follows:

    bool search(int value, int values[], int n) {
        int *min = values;
        int *max = min + n; // Max is exclusive
        while (max - min > 1) {
            int* mid = min + (max - min)/2;
            if (value < *mid) {
                max = mid; // Max still exclusive as value < *mid
            } else if (value > *mid) {
                min = mid;
            } else {
                // Its not smaller and not larger. Must be equal.
                return true;
            }
        }
        return *min == value;
    }
_unix.139996
Is it possible to provide a customized 'complete' function to the readline library that goes inside socat? I mean something quicker than recompiling readline, such as a hook or a text file configuration?

    socat readline EXEC:application

In the example above, I want to be able to do tab completion of a set of predefined commands.
How to provide a customized function 'complete' to readline of socat
readline;socat
The only hack I can think of would be to make a directory with fake binaries using the same names as the list you provide, symlinked to some inert executable script like:

    #!/bin/sh
    echo This is a fake binary, and should never execute. Please check your path.
    exit 0

Then make sure your PATH is pointing to this directory as the last one. Now hitting TAB should make your shell interpreter think you're looking for another file or binary.
_unix.232305
The nice command allows you to adjust the scheduling priority (niceness) of a program. On all Unix-like systems I've used, niceness is specified by a range of integers, where -20 is the most favourable scheduling priority, 0 is the default, and 19 is the least favourable.Having 0 as the default niceness is intuitive enough, but why were -20 and 19 selected as endpoints of the range? Why not -128 and 127, which would exactly fit in a signed 8-bit byte? Or why not -100 to 100, which is more intuitive to decimal-minded humans, or similarly but slightly more ergonomically, -99 to 99? Was the -20 to 19 range selected arbitrarily, or does it have some relationship to the internals of the scheduler that nice originally interfaced with? (I understand that there is no such relationship today, at least for Linux, whose scheduler uses priorities in the range 0 to 139. However, I'm interested in the historical reasons for the -20 to 19 range.)
Why does niceness range from -20 to 19?
history;scheduling;nice;priority
null
_unix.266957
I have created a replacement for xterm cursor that will allow me to see the cursor (Windows, incidentally, calls the xterm cursor the i-beam, which is rather more descriptive) much better. Problem is, I've yet to be able to correctly convert the PNG to the xcursor format. Anyone know what I can do here?OS: Linux Mint 17.3 (Ubuntu-based; understands Debian packages)
Need to replace one cursor of a set; have already created the PNG for the cursor
png;xcursor
null
_unix.259699
When I log in through the login screen, I'm presented with the following message:

    This is the failsafe xterm session. To get out of this mode type 'exit' in the window in the upper left corner

along with an OK button that brings me to a small terminal in the top left of the screen. Once I exit, the system brings me back to the login screen. I am able to launch applications from the terminal, such as Chrome, though the windows take up less than half of the screen with no way to resize them.

This happened after I rebooted into Windows. I have checked .xsession-errors and it only has one line that says beginning X session or something like that. I have also run apt-get and upgraded all of the packages, however the issue still persists. On the login screen there is an option to choose Default or Cinnamon, neither of which works. I'd like to know how I can get my normal desktop back.
Stuck in failsafe xterm session
linux mint;x11;desktop environment
null
_webapps.76168
I need help with the question shown in the picture. Can anyone make a formula for me? Thanks.
Show number in a cell - based on a value from another cell
google spreadsheets
null
_webmaster.82075
The question is regarding implementation of cross-domain canonical tags over a desktop and an m. site. The desktop and m. sites have identical pages, hence the option is to add the following tags:

Desktop pages:

    <link rel=alternate media=only screen and (max-width: 640px) href=http://m.example.com/page-1>

Mobile pages:

    <link rel=canonical href=http://www.example.com/page-1>

The question is: what if the desktop site contains canonical tags pointing to the respective pages before the cross-domain work is carried out, i.e.

Desktop pages:

    <link rel=alternate media=only screen and (max-width: 640px) href=http://m.example.com/page-1>

plus a canonical tag on the desktop pages.

I would like to know whether it is best practice to have both a rel alternate and a rel canonical tag on the same page. I feel the answer is a NO and never, since the m. site is showing the rel canonical.
Cross-Domain Canonical - Desktop & Mobile
seo
Although a single URL for all devices is highly preferred, yes, you can use them both on the same page. It works as long as the desktop version is always the canonical, and the mobile is always the alternate. According to Google:

To help our algorithms understand separate mobile URLs, we recommend using the following annotations:

- On the desktop page, add a special link rel=alternate tag pointing to the corresponding mobile URL. This helps Googlebot discover the location of your site's mobile pages.
- On the mobile page, add a link rel=canonical tag pointing to the corresponding desktop URL.

In the article that contains this quote, they actually use the exact same examples as you do. See the article here: https://developers.google.com/webmasters/mobile-sites/mobile-seo/configurations/separate-urls?hl=en
_cs.78166
I'm studying for my theory of computation exam and came across the following question:

Construct an appropriate Turing machine for the following language and prove or disprove its semi-decidability: $A := \{w \in \{0, 1\}^*\ |\ \exists x \in \{0, 1\}^* .f_w(x) = |x| \}$

The way I interpret this is that $A$ consists of all TM encodings $w$ which describe a TM $M'$ that, for at least one input $x\in\{0, 1\}^*$, outputs the length of $x$. So now I try to come up with a TM $M$ for $A$. This TM should simulate the TM $M'$ encoded by $w\in\{0, 1\}^*$ and check whether it computes $f(x) = |x|$ for at least one $x$. In order to do this, $M$ would have to try computing all possible inputs for $M'$ and check if $f(x) = |x|$ is true for at least one input. If such an input $x$ exists then $M$ will terminate and accept the encoding of $M'$, which is $w$. If such an input $x$ doesn't exist then $M$ will keep simulating $M'$ forever and therefore won't halt. Hence I conclude that $A$ is semi-decidable, since $M$ halts if a solution exists and otherwise doesn't.

Does this make any sense? Also, since I need to prove/disprove $A$'s semi-decidability: I conclude that $A$ is not decidable (Rice's theorem); is that correct?
Semi decidability proof
turing machines;semi decidability
What you have to do is a systematic search for a string $x$ such that $f_w(x) = |x|$, or in other words $M_w(x) = |x|$. Given $w$, your TM $M$ should try ALL input strings $x$, not just simulate $M_w$ on a single $x$. If you wait for $M_w(x)$ until it halts then you ($M$) may get stuck in case $M_w$ does not halt on $x$. So that option is not possible.

One possible way is to systematically try all pairs $(x,i)$, which means simulating $M_w(x)$ for $i$ steps. Your strings may be encoded and put in one-to-one correspondence with integers. So a pair $(i,j)$ means: simulate the $i$th string $s_i$ for $j$ steps on $M_w$. Each time you simulate $(i,j)$ you check whether $M_w(s_i)$ halts after $j$ steps, and if yes then compare its output with $|s_i|$. If the output is equal to $|s_i|$ then ACCEPT and halt, otherwise you go to the next pair $(i,j)$. And so on, until you ($M$) find a pair $(i,j)$ such that $M_w(s_i)$ halts after $j$ steps and its output is equal to $|s_i|$. In other words, you simply give your machine $M$ a chance to run $M_w$ on every $x$ for a certain number of steps, and after running for that number of steps you check if it halts; if it halts you compare its output with the input length.

Now, to prove that the set is semi-decidable, notice that if indeed there is an $x$ such that $f_w(x) = |x|$ then there are integers $m$ and $n$ such that $s_m = x$ and $M_w(s_m)$ halts after $n$ steps. You will eventually find that pair $(m, n)$ after a finite number of steps, since you systematically try all pairs $(i,j)$. If such a pair does not exist at all then your machine never halts and will run forever. Thus the set is semi-decidable. Similar post
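To make the dovetailing over pairs $(i,j)$ concrete, here is a small Python sketch. It is only an illustration: the "machine" is an ordinary Python generator that we advance a bounded number of steps, standing in for simulating $M_w$ on the $i$-th string for $j$ steps, and the toy machine, the finite input list and the function names are all invented for the example.

    from itertools import count

    def run_for(machine, x, steps):
        """Advance machine(x) at most `steps` steps; return its output if it
        halted within the budget, otherwise None (still running)."""
        computation = machine(x)
        try:
            for _ in range(steps):
                next(computation)
        except StopIteration as halt:
            return halt.value
        return None

    def dovetail(machine, inputs, accept):
        """Try pairs (input index i, step budget) diagonally. Halts and returns
        an input as soon as some halting run is accepted; runs forever
        otherwise -- a semi-decision procedure, not a decision procedure."""
        for n in count(1):
            for i in range(min(n, len(inputs))):
                out = run_for(machine, inputs[i], n - i)
                if out is not None and accept(inputs[i], out):
                    return inputs[i]

    # Toy stand-in for M_w: loops forever on odd-length strings,
    # halts on even-length strings and outputs the input's length.
    def toy_machine(x):
        while len(x) % 2 == 1:
            yield
        return len(x)

    inputs = ["0", "011", "01", "0110"]
    print(dovetail(toy_machine, inputs, lambda x, out: out == len(x)))  # 01

In the real construction the inputs are all of $\{0,1\}^*$, enumerated on the fly rather than stored in a list, but the control flow is the same: no single non-halting simulation can block the search.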
_codereview.64662
I am trying to realize a java.sql.ResultSet into a map, in Scala.

    import java.sql.{ResultSet, ResultSetMetaData}

    class DbRow extends java.util.HashMap[java.lang.String, Object] {}

    object freeFunctions {
      def realize(queryResult: ResultSet): Vector[DbRow] = {
        val md = queryResult.getMetaData
        val colNames = for (i <- 1 to md.getColumnCount) yield md.getColumnName(i)
        var rows: Vector[DbRow] = Vector.empty
        while (queryResult.next()) {
          val row = new DbRow
          for (n <- colNames) {
            row.put(n, queryResult.getObject(n))
          }
          rows = rows :+ row
        }
        rows
      }
    }

I feel this could (should) be less verbose, if only while/yield comprehensions existed, or there were some comprehension for creating a map. The client code needs a Java map for now, but if an elegant enough solution can produce a Scala map, I could convert it.
Realizing a SQL ResultSet into a Map in Scala
scala;jdbc
null
_cs.72385
Is there any closed-form expression for the number of input pins of a binary tree in terms of the height of the tree and its number of nodes? In the case that the binary tree is a full tree, it is easy to find the number of input pins: there are $2^{H-1}$ nodes at the last level of the tree ($H$ is the height of the tree), so the number of input pins is $2^H$, because each node has 2 input pins. How can we generalize this to a binary tree which is not a full tree?
A closed form expression for # of inputs of a binary tree and its number of nodes
trees;dynamic programming;binary trees
There is a known expression linking the number of input pins and the number of nodes in a binary tree. It does not use the height of the tree.

For any binary tree, the number of leaves equals the number of internal nodes plus one.

We can prove this by induction. Basis: a tree with one node has two leaves. Induction step: whenever I replace a leaf by a new node, both the number of nodes and the number of leaves increase by one (we lose a leaf, and get two new leaves in return).
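One way to combine this with the question's convention is the following (a hedged reading, not something stated in the answer): if every node is a 2-input gate whose inputs are either free pins or the outputs of its two children, then only leaf nodes contribute free input pins, two each, and the relation above gives a closed form directly:

$$\#\text{pins} = 2L = L + (I + 1) = (L + I) + 1 = n + 1,$$

where $L$ is the number of leaves, $I$ the number of internal nodes, and $n = L + I$ the total number of nodes. This agrees with the full-tree case in the question: a full tree of height $H$ has $n = 2^H - 1$ nodes and $2^H = n + 1$ input pins.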
_cseducators.917
My university has decided that it should give a core (that is compulsory) statistics class to its computer science undergraduates. This opens the interesting question of what should be in such a class that everyone has to take.As far as I know most CS degrees don't currently contain statistics classes so it's hard to compare to what already exists.Does anyone have any experience of this and what would people recommend for a new statistics class for computer science students?
What statistics should be in a computer science degree?
curriculum design;undergraduate
My first thought to provide a bit of something for you to go on was CS2013: The ACM/IEEE Joint Curriculum Guidelines for Undergraduate Degree Programs in Computer Science. This is available at https://www.acm.org/education/CS2013-final-report.pdf. I've copied out two relevant passages from that here:Computer science curricula should be designed to provide students with the flexibility to work across many disciplines. Computing is a broad field that connects to and draws from many disciplines, including mathematics, electrical engineering, psychology, statistics, fine arts, linguistics, and physical and life sciences. Computer Science students should develop the flexibility to work across disciplines. (p.20)Similarly, while we do note a growing trend in the use of probability and statistics in computing (reflected by the increased number of core hours on these topics in the Body of Knowledge) and believe that this trend is likely to continue in the future, we still believe it is not necessary for all CS programs to require a full course in probability theory for all majors. (p.50). It is worth searching that document for 'statistics'. It is mentioned in many other areas, but mostly as something learned in other areas such as Networks, HCI, Cryptography, etc.The December 2013 issue of ACM Inroads had a good article (http://dl.acm.org/citation.cfm?id=2537777) on the role of mathematics in CS. This included a section on The Current State of Mathematics in Computer Science Curricula. The article states that The most-connected mathematical topic [to CS] by far is probability and statistics and has some references to such. They also present the mathematics requirements of 25 'high quality' CS programs. Stats and Prob was required for 15, Calculus for 21 and Discrete Mathematics for 22.http://dl.acm.org/citation.cfm?id=1240202 discusses statistics in liberal arts CS curricula. Searching the CITIDEL syllabus collection (citidel.villanova.edu/ then look for syllabus collection) for 'statistics' might be worth a try, although the website has been varying between unresponsive to slow to responsive lately. Also, some of the material in there is a bit dated, but then again, statistics dates fairly well! Given that in 2013 60% of sampled 'high quality' CS programmes had statistics, maybe it is more prevalent than you think. Perhaps the best way to obtain the most up to date information is to check the websites of several good CS programmes and hope that if they include statistics, the syllabi are available online.
_computergraphics.4748
We know that in PNG,BMP,etc... the pixel value stored is not in the linear RGB space. But I found no document saying anything about the alpha channel. Is the alpha channel stored in image files in linear space or not?
Should the alpha channel be gamma corrected
gamma
We know that in PNG, BMP, etc. the pixel value stored is not in the linear RGB space.

This is not necessarily true. You can store whatever color space you want into an image; it doesn't even need to be colors (such as normal maps).

The alpha channel is generally linear. The alpha channel doesn't get displayed, but it is generally a non-color term used for transparency (or whatever else). Because alpha values don't need to be displayed on a monitor, there's no reason to store them in gamma space. If you did, you would unnecessarily lose precision at the lower end of the alpha values. Normal maps follow a similar line of reasoning, as explained very well by Julien Guertault.
_unix.318607
I have a file with 500 columns. I need to remove some columns, whose names are listed in another file. For example:

fileA:

    id1 id22 id43 id4 id5 id6 id7 id68 id9 id10 id11
    TT  AA   AG   TC  TT  AA  AG  TC   DD  AA   CC
    TT  AC   GG   TC  TT  AG  AG  TC   AD  AA   DC

fileB:

    id1 id5 id10 id68

Desired output:

    id22 id43 id4 id6 id7 id9 id11
    AA   AG   TC  AA  AG  DD  CC
    AC   GG   TC  AG  AG  AD  DC
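The question is tagged awk, so the eventual answer probably belongs there; purely as an illustration of the approach (read the names to drop, locate their positions in the header, print the remaining fields), here is a Python sketch using the file names from the question and a made-up output file name:

    # Illustrative sketch: drop the columns of fileA whose names appear in fileB.
    with open("fileB") as f:
        drop = set(f.read().split())        # e.g. {"id1", "id5", "id10", "id68"}

    with open("fileA") as f:
        rows = [line.split() for line in f if line.strip()]

    header = rows[0]
    keep = [i for i, name in enumerate(header) if name not in drop]

    with open("fileA.filtered", "w") as out:
        for row in rows:
            out.write(" ".join(row[i] for i in keep) + "\n")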
Remove entire columns in file A whose names are listed as fields in file B
awk
null
_unix.215644
I thought it would be easier for me to mount flash drives automatically if I added the following to fstab:

    /dev/sd1i /mnt/usb

(sd1i is found from sysctl hw.disknames.) I rebooted the box with the USB 3.0 flash drive still inserted in the USB 3.0 port. During the boot process, the following errors were detected:

    /dev/rsd1i: BAD SUPER BLOCK: MAGIC NUMBER WRONG
    /dev/rsd1i: Unexpected inconsistency: Run fsck_ffs manually
    The following file system had an unexpected inconsistency:
    ffs: /dev/rsd1i (/mnt/usb)
    Automatic file system check failed; help!
    Enter pathname of shell or RETURN for sh:

I checked out the article How to use ed to edit /etc/fstab in single user mode (http://www.openbsdsupport.org/ed_and_fstab.html), which discussed how to use ed to modify lines but not how to delete them. Some help would be much appreciated.
Need to remove a line in fstab on OpenBSD
fstab;openbsd
You don't need to use ed unless you really want to. Once you're at a single-user prompt (just hit Enter at the Enter pathname of shell or RETURN for sh: prompt), do the following:

Mount the root filesystem as read-write, then mount the /var and /usr filesystems (this will allow you to run vi or any other editor of your choice):

    # mount -uw /
    # mount /var
    # mount /usr

Once those are mounted, edit /etc/fstab and remove the offending line. Then reboot:

    # reboot

Your system should then restart correctly in multi-user mode.
_cstheory.10002
Search engines are increasingly being relied on as information gatekeepers, yet the criteria used by search engines to rank results is opaque to users. How can users be sure their results aren't biased or tampered with in some way to benefit some interest at the expense of search result quality?Governments routinely demand that search providers remove or lower the ranking of websites deemed politically undesirable. Businesses may pay providers to boost certain results over others to increase their revenues. Firewalls may meddle with results before they're transmitted back to users.Even seemingly innocuous changes to ranking algorithms that might not on the surface appear to be biased, could actually be deviously designed to harm websites that share some common attribute (unrelated to actual quality).Is it possible to detect search engine bias, by say monitoring results over a period of time and evaluating whether some hidden variable (perhaps a political affiliation) is a driving factor in the change in website rankings?A sneaky provider may gradually over time lower the ranking of targeted websites (and perhaps random websites as well to distract users). What are the limits on how much bias a provider can introduce without detection? Or is it possible to always conceal such interference by deviously selecting weighted ranking criteria that incidentally produce the intended result (by way of data snooping).Does any of this change if the ranking criteria is made public? Do we need to open-source the criteria search engines use?This reminds me of the result that detecting whether or not a complex financial instrument such as a CDO has been tampered with by the seller is equivalent to solving the densest-subgraph problem:http://www.cs.princeton.edu/~rongge/derivative.pdfThanks!
Is there a way to detect search engine bias?
ds.algorithms;data mining
null
_unix.180670
I was wondering if there's a way to run shell commands that affect only a certain directory and its subdirectories.I'm using PHP and I want to make an app that allows the user to execute shell commands from a web page, but I want these commands to be restricted to a directory only
Terminal sandbox commands
shell;terminal;php;sandbox
null
_softwareengineering.347120
A simple DB containing two tables could be:

    CREATE TABLE Stock (
        StockID INT,
        StockDesc VARCHAR,
        SupplierID INT,
        PRIMARY KEY (StockID),
        FOREIGN KEY (SupplierID) REFERENCES Supplier(SupplierID)
    )

    CREATE TABLE Supplier (
        SupplierID INT,
        SupplierName VARCHAR,
        PRIMARY KEY (SupplierID)
    )

(Assume each StockID has only one supplier.) A typical SQL query would be:

    SELECT StockID, StockDesc
    FROM Stock, Supplier
    WHERE Stock.SupplierID = Supplier.SupplierID
    AND SupplierName = 'Smiths';

My fundamental understanding of database design alerts me to the fact that I've already established a relationship between the STOCK and SUPPLIER tables via the DDL. Why does SQL need to restipulate that fact in the first WHERE clause?
Why create Foreign Key References in DDL if using Where clause in SQL?
database design
Note: Assuming that this question is about foreign key constraints rather than the WHERE clause, as WHERE seems to be irrelevant to the question. I'm assuming you intended JOIN instead.Foreign key references are constraints on modifying operations such as INSERT, UPDATE, and DELETE. When a constraint exists, it prevents those operations from making changes which might result in a data structure which violates that constraint; constraints serve the purpose of helping to enforce the integrity/validity of the data in a database by preventing invalid modifications. Constraints are not so relevant to SELECT operations because SELECT is non-modifying, so there's nothing really to protect against. The purpose of a SELECT statement is to retrieve rows from one or more tables. SELECT statements only return data from tables which you explicitly request to be queried. If you don't specify that you wish to retrieve data from a referenced table using a JOIN, then any entries referenced through a foreign key constraint will be excluded by default from your results.
_codereview.51787
I'm new to CSS and HTML and while I've achieved a centering of the logo and navigation links, it feels wrong. I'm developing a site for a small non-profit for a side project. I have a logo next to a few navigation links. I put the logo and navigation links in a list.Could someone take a look at this and tell me if my CSS is correct? I just think that there's something off. Or that I'm duplicating CSS code.Here's my JSFiddle to show what I've done.HTML:<body><nav id=top_nav> <ul id=nav_links> <li> <a href=#> <img id=nav_logo src=logo.png/> </a> </li> <li><a id=nav_whoweare class=nav_links_border href=#>WHO WE ARE</a></li> <li ><a id=nav_whatwedo class=nav_links_border href=#>WHAT WE DO</a></li> <li ><a id=nav_explore class=nav_links_border href=#>EXPLORE</a></li> <li><a id=nav_donate href=#>DONATE</a></li> </ul></nav><nav id=bottom_nav></nav>CSS:#top_nav { text-align: center; width: 100%;}#nav_logo { /*display: inline-block;*/ width: 142px; height: 159px; padding-right: 20px; text-decoration: none;}#nav_links li a { padding: 15px 20px; text-decoration: none; font-family: Arial; font-size: 24px; color: #000;}/* When window is resized, doesn't wrap menu*/#nav_links { display: inline-block; padding: 0 0 30px 0; list-style:none; text-align:left; white-space: nowrap;}#nav_links li { vertical-align: text-top; display: inline-block; float: none; margin: 0 -3px 0 0;}/* ------------------------------------*/a.nav_links_border { border-right: 1px solid #f37f43;}#nav_links li a:hover { text-decoration: underline;}#nav_links > img:hover { text-decoration: none;}#bottom_nav { clear: both;}
Correct way to center logo and navigation
beginner;html;css;html5
There is no real correct way to go about it; it all depends on how you want it to look. I can't quite speak to that, only the code you posted, and how it fares.Anyway: Review.General cleanlinessYou should clean up your code (preferably before posting it, since many the following things should be fairly obvious). You've got a bunch of CSS that doesn't do anything:There's a line that's just commented out: Remove it.The #nav_links > img:hover doesn't affect anything. There's no img element immediately descendent from #nav_linksThe #bottom_nav style (and indeed the entire element!) is pointless. It seems only to exist to clear any floats. But there are no floats. Similarly, the #nav_links li style has a float: none declaration which does nothing.You can skip every single link ID (e.g. nav_whoweare); you're not using them. If you find yourself needing them, then add them. But not before.You don't need width: 100% on the nav element. It's a block element, so it'll do that automatically.You don't need display: inline-block on the ul element.I'd also advice you to use a more logical order for you CSS rules. Right now it's haphazard:#nav_links li a#nav_links << high-level#nav_links li#nav_links li a:hover << low-levelbut it would make more sense to mirror the structure of the markup (and just the complexity of the CSS selectors):#nav_links << high-level#nav_links li#nav_links li a#nav_links li a:hover << low-levelClass names and IDsIt should be as simple as possible, but no simpler. In other words, only add stuff when there would otherwise be ambiguity.For instance, there no reason to give the #nav_links list an ID. It's the only <ul> element inside #top_nav, so you can unambiguously reference it as simply #top_nav ul. Or, if you get rid of the bottom_nav element since it's not doing anything, you're left with simply nav ul. (You might want a bottom nav later, but cross that bridge when you come to it; not now).Furthermore, don't use classes like nav_links_border. It's given that it's a navigation link, because it's inside the <nav> element, and it's given that it's a link because it's an <a> element. So you CSS rule should target nav a rather than an overly specific class like a.nav_links_border.What you're left with is just border, but that's not terribly descriptive. Looking at it overall, though, it's also obvious that all the links - with 1 exception - have a border. So, really, you'll want a way to treat the exception differently, rather than add a class to everything else.I'd suggest giving the <li> containing the logo an ID - simply call it logo. 
This lets you do two things:You can style all the list items the same basic way, and only do something different for #logoYou can get rid of the ID on the <img> element, since you can instead reference it as #logo imgI get this:<nav> <ul> <li id=logo> <a href=#><img src=logo.png/></a> </li> <li><a href=#>WHO WE ARE</a></li> <li><a href=#>WHAT WE DO</a></li> <li><a href=#>EXPLORE</a></li> <li><a href=#>DONATE</a></li> </ul></nav>And this:nav { text-align: center;}nav ul { padding: 0 0 30px 0; list-style:none; text-align:left; white-space: nowrap;}nav li { vertical-align: text-top; display: inline-block; margin: 0 -3px 0 0;}nav a { padding: 15px 20px; text-decoration: none; font-family: Arial; font-size: 24px; color: #000; border-right: 1px solid #f37f43;}nav a:hover { text-decoration: underline;}#logo a { border: none;}#logo a:hover { text-decoration: none;}#logo img { width: 142px; height: 159px; padding-right: 20px;}Here's a jsfiddle - should look the same (apart from less margin, but that's because I remove the <body> tag from your code; jsfiddle add that itself; you shouldn't do it yourself)
_webmaster.53699
When running multiple sites on the same hosting account, one might find that an attacker has attempted to breach one site. A webmaster may want to block the IP address from all sites hosted in the same account.Is there a way to keep the list of banned IP addresses/ranges in a single location that can be referenced by all the websites in the same account?(Example: a file that contains a list of IP addresses/ranges located at the root of the file system and simply referenced in the .htaccess file of each website.)
How to set up a single IP blocklist for multiple sites hosted on the same account
htaccess;apache;security;apache2;ip address
The more efficient way to blacklist IP addresses is to use your operating system's firewall, or a module coded specifically for this purpose, like ModSecurity. If you don't have access to your OS and cannot add modules, then you could edit your Apache configuration file with the following (to block the example IP address 111.222.33.444):

    <Location />
    <Limit GET POST PUT>
    order allow,deny
    allow from all
    deny from 111.222.33.444
    </Limit>
    </Location>

Then restart Apache. As covered here, that should work for all your virtual hosts.

Alternatively, you can try to use the include directive for each virtual host config as covered here (use deny from for each IP to block instead of allow from).

Lastly, if you do not have access to either your OS's firewall, server modules, or server configuration files (as might be the case with a shared web hosting account), you could use a server-side script like Perl to copy the IPs from a central file into each .htaccess file, and schedule this as a cron job so that each virtual host will share the same list of IPs to blacklist/block.
_softwareengineering.144556
Possible Duplicate: How many developers before continuous integration becomes effective for us?

I'm new to continuous integration, though I have used it without knowing the term. I'm interested in how the mechanism can be implemented, and my question is: can a continuous integration server be useful for a team of two developers (who write a lot of code)?
Is continuous integration useful for a team of two developers who write a lot of code?
continuous integration
null
_unix.3606
I was running rsnapshot as root and I got the following error. Why would this happen? What is .gvfs?

    rsnapshot weekly slave-iv
    rsync: readlink_stat(/home/griff/.gvfs) failed: Permission denied (13)
    IO error encountered -- skipping file deletion
    rsync: readlink_stat(/home/xenoterracide/.gvfs) failed: Permission denied (13)
    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1042) [sender=3.0.7]
root user denied access to .gvfs in rsnapshot?
filesystems;backup;io;rsync;rsnapshot
.gvfs directories are mount points (sometimes). You may want to use the one_fs option in your rsnapshot configuration (so that it passes --one-file-system to rsync).Gvfs is a library-level filesystem implementation, implemented in libraries written by the Gnome project (in particular libgvfscommon). Applications linked with this library can use a filesystem API to access ftp, sftp, webdav, samba, etc.Gvfs is like FUSE in that it allows filesystems to be implemented in userland code. FUSE requires the one-time cooperation of the kernel (so it's only available on supported versions of supported OSes), but then can be used by any application since it plugs into the normal filesystem API. Gvfs can only be used through Gnome libraries, but doesn't need any special collaboration from the kernel so works on more operating systems.A quick experiment on Ubuntu 10.04 shows that while an application is accessing a Gvfs filesystem, ~/.gvfs is a mount point for a gvfs-fuse-daemon filesystem. This filesystem allows any application to access Gvfs filesystems, without needing to link to Gnome libraries. It is a FUSE filesystem whose implementation redirects the ordinary filesystem calls to Gvfs calls.The gvfs-fuse-daemon filesystem does not allow any access to the root user, only to the user running the application (it's up to each individual filesystem to manage the root user's permissions; a classic case where root doesn't have every power is NFS, where accesses from root are typically mapped to nobody).
_scicomp.26269
I've asked this question on mathoverflow too.Let$T>0$$I:=(0,T]$$d\in\mathbb N$$\Lambda\subseteq\mathbb R^d$ be nonempty and open, $$\mathcal V:=\left\{\phi\in C_c^\infty(\Lambda,\mathbb R^d):\nabla\cdot\phi=0\right\}$$ and $$V:=\overline{\mathcal V}^{\left\|\;\cdot\;\right\|_{H^1(\Lambda,\:\mathbb R^d)}}\;,\;\;\;H:=\overline{\mathcal V}^{\left\|\;\cdot\;\right\|_{L^2(\Lambda,\:\mathbb R^d)}}$$$\operatorname P_H$ denote the orthogonal projection from$L^2(\Lambda,\mathbb R^d)$ onto $H$$A_0u:=-\Delta u$ for $u\in\mathcal D(A_0):=H_0^1(\Lambda,\mathbb R^d)\cap H^2(\Lambda,\mathbb R^d)$, $$Au:=\operatorname P_HA_0u\;\;\;\text{for }u\in\mathcal D(A):=\mathcal D(A_0)\cap V$$ and $$B(u,v):=(u\cdot\nabla)v\;\;\;\text{for }u\in L^2(\Lambda,\mathbb R^d)\text{ and }v\in H^1(\Lambda,\mathbb R^d)$$$f:I\to H$$u\in L^2(I,\mathcal D(A))$ with $u'\in L^2(I,H)$ and $$u'(t)+A_0u(t)+B(u(t),u(t))+\nabla p(t)=f(t)\;\;\;\text{for all }t\in I\tag1$$ for some $p:I\to H^1(\Lambda)$Assuming that $\Lambda$ is sufficiently regular such that $(1)$ is well-defined, it can be shown that $(1)$ is equivalent to $$u'(t)+Au(t)+\operatorname P_HB(u(t),u(t))=f(t)\;\;\;\text{for all }t\in I\;.\tag2$$ I want to solve $(2)$ numerically and I'm only interested in $u$ (and not in $p$).I know that there are many references for the numerical study of $(1)$. However, it seems to me that all the considered schemes don't use $(2)$. They only use $(2)$ for theoretical results like existence and uniqueness of solutions. Maybe I'm wrong and I just don't see that these schemes use $(2)$.In any case, my question is: Are we able to provide a numerical scheme which solves $(2)$ directly?Or is there something which prevents us from doing that? My idea is to apply, for example, a semi-implicit Oseen discretization in time, i.e. consider $$\frac{u(t_n)-u(t_{n-1})}h+Au(t_n)+\operatorname P_HB(u(t_{n-1}),u(t_n))=f(t_n)\;\;\;\text{for all }n\in\left\{1,\ldots,N\right\}\tag3$$ with $$t_n:=nh\;\;\;\text{for }n\in\left\{0,\ldots,N\right\}$$ and $h:=T/N$ for some $N\in\mathbb N$. After that for each $n\in\left\{1,\ldots,N\right\}$ $(3)$ should be solvable by a finite element method (or is there some problem that I don't see?).
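Purely as a sketch of how $(3)$ is usually turned into something computable (this is the standard conforming Galerkin route under the assumption of a finite-dimensional divergence-free subspace $V_h\subseteq V$; in practice one often uses a mixed velocity-pressure formulation instead, and nothing here is asserted by the question itself): testing $(3)$ with $v\in V$ removes the projection, because $(\operatorname P_Hg,v)_{L^2}=(g,v)_{L^2}$ for every $v\in H$, and integration by parts turns $A$ into the Dirichlet form. The fully discrete problem at step $n$ then reads $$\left(\frac{u_h^n-u_h^{n-1}}h,v_h\right)_{L^2}+\left(\nabla u_h^n,\nabla v_h\right)_{L^2}+\left(\left(u_h^{n-1}\cdot\nabla\right)u_h^n,v_h\right)_{L^2}=\left(f(t_n),v_h\right)_{L^2}\;\;\;\text{for all }v_h\in V_h,$$ which, since the convection is linearized at $u_h^{n-1}$, is a single linear (Oseen-type) system for $u_h^n$ at each time step.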
Time discretization of the variational formulation of the Navier-Stokes equation
finite element;pde;numerical analysis;fluid dynamics;navier stokes
null
_unix.239533
Is OpenSSH an implementation of an SSH server? Is it also an implementation of an SSH client?

Is AutoSSH not an implementation of an SSH server? Is it an implementation of an SSH client?
Differences between OpenSSH and AutoSSH
ssh
null
_unix.219665
I'm quite new to Linux, and I've found quite a bit of useful information on how to do character counts in a file, but is there a way in Linux/terminal to sort a text file by the number of times a specific character occurs per line? E.g. given:

    baseball
    aardvark
    a man a plan a canal panama
    cat
    bat
    bill

sort by the number of occurrences of the letter a, yielding:

    a man a plan a canal panama
    aardvark
    baseball
    cat
    bat
    bill

Regarding cat and bat, at one occurrence of a each, I don't care if the order of lines with equal counts gets reversed; I'm just interested in a general sort of lines by character frequency.
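The question asks for a terminal solution (awk, sort and friends), so the following Python sketch is offered only to pin down the idea, with a placeholder file name: count the chosen character on each line, then sort the lines by that count in descending order. Python's sort is stable, so lines with equal counts keep their original relative order.

    # Illustration: sort lines by how often a given character occurs in each.
    char = "a"
    with open("words.txt") as f:            # placeholder file name
        lines = [line.rstrip("\n") for line in f]

    for line in sorted(lines, key=lambda s: s.count(char), reverse=True):
        print(line)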
How to sort file by character occurrences per line?
text processing;sort
null
_unix.256162
My headphones are behaving rather strangely. The sound is very low even when set to maximum, and speech is almost inaudible. When listening to music I can't hear the vocals, and the same happens with speech when watching movies. There is also some kind of strange noise. But when I unplug my headphones halfway, everything works perfectly. The problem is definitely not in the headphones, because I tested them on Windows. I use Debian 8 with Cinnamon 2.6.13 on it.

UPD: The same thing happens on Linux Mint (on the same machine). It looks like some kind of driver problem.
Headphones only work correctly when plugged halfway
linux;debian;audio;cinnamon
null
_hardwarecs.1627
Ravensburger sells tiptoi books which have almost invisible codes that can be read by a digital pen; the pen then plays sounds if you have uploaded the appropriate audio to it beforehand. The format of the codes as well as the file format for the pen have been reverse engineered, and it's now possible to create your own games etc. Unfortunately, the quality of my laser printer is not good enough for the pen to recognize the codes. So I am now looking for a printer which can be used to print tiptoi-compatible pages. According to the last Make magazine, the requirement is just 1200 dpi, but I don't want to just buy any 1200 dpi printer. If someone could suggest a printer and has successfully printed tiptoi codes, that would be great.

The printer should:

- print DIN A4
- be black and white only (no color needed)
- print in 1200 dpi or higher
- cost less than 200 / 220 USD
- work with Windows (Linux support welcome)
- have an Ethernet connection
Laser printer to print codes recognizable by tiptoi
printer
null
_unix.220436
It seems that this is a common problem, but there haven't been any satisfactory solutions posted yet, despite it existing for years. Briefly, the problem is that when I mount my iPad in Linux Mint, I cannot find the Documents on My iPad device that everyone claims appears automatically. The other one, just My iPad, appears fine with the generic folders inside for books and pictures. I need to be able to find that device in order to move some movies over, but it seems like no one is sure how to do it. I've tried restarting everything, reinstalling libimobiledevice, and anything else that's been suggested before. Here are the other questions which are essentially the same, but never properly resolved:

https://askubuntu.com/questions/575197/can-not-see-documents-on-nicks-ipad-device-when-plugging-my-ipad-into-my-ubun?rq=1
https://askubuntu.com/questions/442511/mounting-ipad-in-ubuntu-13-10-cant-find-documents-on-ipad-device?rq=1
https://askubuntu.com/questions/490227/cant-find-documents-on-usernames-ipad-in-12-04-with-an-ipad-mini?rq=1

I'm running Mint 17.2 on a fresh install (literally just the other day), 64-bit, and it's an iPad 4 with iOS 8. If there's any other information you need, let me know. If anyone else is in the same situation, could you verify that you either can or cannot see the device in question?
Unable to find Documents on iPad device when mounted
linux mint;mount;devices;ios
null