_unix.226713
I'm currently using Ubuntu and have printing set up with CUPS. Is it possible to share a CUPS printer with a VirtualBox Windows guest, specifically Windows 2000?
How do I share a CUPS printer with a Windows VirtualBox guest?
windows;virtualbox;cups
Yes. Sharing a printer from a Linux/CUPS host with a Windows virtual machine is easy. I'll show you how to do this on Windows 2000 because it's probably the oldest, most difficult (and somewhat practical) configuration to get working.

1. Add a Host Only Network: File → Preferences → Network → Host Only Networks → Add Host Only Network. This creates a network between the host and the guest. The host will have the IP address 192.168.56.1.

2. Add a Network Adapter in the Machine Settings of VirtualBox. Right click the Virtual Machine → click Settings → click Network → set "Attached to" on Adapter 1 (or any adapter) to Host Only Adapter.

3. Go to http://localhost:631/. On the top tab click Administration → check both "Share printers connected to this system" and "Allow printing from the Internet" → click the Change Settings button.

4. On the top tab click Printers → click the printer you want to share, and copy the URL. For me, mine is http://localhost:631/printers/Samsung-M262x-282x. Replace localhost with 192.168.56.1, and copy that.

Now we have two options: track down the original driver, or use the PostScript Printer Definition (.ppd) file that Linux is using and give Windows the ability to use it. If the printer is a PostScript printer, that's a ton easier, so we're going to assume it is. A sane printer daemon should be able to read .ppd files, but Windows can't; to gain that ability we need to install some third-party software.

5. Share /etc/cups/ppd/: right click the Virtual Machine → click Settings → click Shared Folders → click Add a Shared Folder (icon on the right). In the Folder Path put /etc/cups/ppd/. Check both Automount and Read-only.

6. In the Virtual Machine, install the Adobe Universal PostScript Windows Driver (I think this may come bundled with versions of Windows newer than Windows 2000). You can download this directly in the Virtual Machine, or you can save it to the host and share the directory you saved it to (just like we did above).

7. Run the file you just downloaded (winsteng.exe), then:
   - Click Next.
   - Click ACCEPT (the EULA screen).
   - Click "It is connected to the network (Network Printer)" to add a Network Printer, then click Next.
   - Paste the address from step 4. It should look something like http://192.168.56.1:631/printers/<something>.
   - Click Yes to install the driver.
   - Click Browse to find a more suitable driver. ;)
   - Click Network, uncheck Reconnect on Login, then click Browse → expand VirtualBox Shared Folders → expand \\Vboxsvr → click \\VBOXSVR\ppd → click the OK button.
   - Click Drive → click whatever drive you just added (defaults to E:).
   - Click the printer on the left, then click the OK button.
   - Make sure the new printer driver is selected (this should be the same screen as the driver step above), then click Next.
   - Choose to print the test page and click Install.

You're done! I don't suggest you configure it unless you're special; click a few Nexts and Finish. These instructions were written with an abandonware copy of Windows 2000 Pro SP4.
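As an aside, the two checkboxes in the CUPS Administration step correspond roughly to directives in /etc/cups/cupsd.conf on the host. The exact stock file varies by CUPS version, so treat this as a sketch of the effect rather than a drop-in config:

```
# Effect of "Share printers connected to this system"
# and "Allow printing from the Internet":
Listen *:631        # accept connections on all interfaces, not just localhost
Browsing On         # advertise shared printers
<Location />
  Order allow,deny
  Allow all         # let remote hosts (e.g. the 192.168.56.x guest) print
</Location>
```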
_softwareengineering.107319
We read on Wikipedia > Iterative deepening depth-first search that:

"The space complexity of IDDFS is O(bd), where b is the branching factor and d is the depth of the shallowest goal."

Wikipedia also gives some decent pseudocode for IDDFS; I pythonified it:

def IDDFS(root, goal):
    depth = 0
    solution = None
    while not solution:
        solution = DLS(root, goal, depth)
        depth = depth + 1
    return solution

def DLS(node, goal, depth):
    print("DLS: node=%d, goal=%d, depth=%d" % (node, goal, depth))
    if depth >= 0:
        if node == goal:
            return node
        for child in expand(node):
            s = DLS(child, goal, depth - 1)
            if s:
                return s
    return None

So my question is, how does the space complexity include the branching factor? Does that assume that expand(node) takes up O(b) space? What if expand uses a generator that only takes constant space? In that case, would the space complexity still be a function of the branching factor? Are there situations where it is even possible for expand to be a constant-space generator?
Space complexity of Iterative Deepening DFS
algorithm analysis;big o
You're right; Wikipedia is wrong! Does anyone have the book referenced on Wikipedia, to find out what they mean (maybe they're talking about an optimization of some sort)?

Check out http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.91.288 and http://intelligence.worldofcomputing.net/ai-search/depth-first-iterative-deepening.html
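To make the question's generator scenario concrete, here is a sketch (a toy bounded binary tree; the expand/dls/iddfs names are hypothetical, not from the question) in which expand is a generator. Each suspended recursion level then keeps only O(1) iterator state on top of its stack frame, so the recursion itself uses O(d) space:

```python
def expand(node):
    # Hypothetical tree: node i has children 2i+1 and 2i+2, capped at 30.
    for child in (2 * node + 1, 2 * node + 2):
        if child <= 30:
            yield child

def dls(node, goal, depth):
    # Depth-limited search; the for-loop holds a generator, not a list of children.
    if depth >= 0:
        if node == goal:
            return node
        for child in expand(node):
            found = dls(child, goal, depth - 1)
            if found is not None:
                return found
    return None

def iddfs(root, goal, max_depth=10):
    # Re-run depth-limited search with an increasing depth bound.
    for depth in range(max_depth + 1):
        found = dls(root, goal, depth)
        if found is not None:
            return found
    return None

print(iddfs(0, 13))  # → 13 (found at depth 3: 0 → 2 → 6 → 13)
```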
_cstheory.20278
Consider the following empty intersection problem:

INPUT: a ground set $[n]:=\{1, \ldots, n\}$; a set family $S_1, \ldots, S_m$ s.t. $S_i\subseteq [n]$ for every $1\leq i\leq m$.

TASK: on query $X\subseteq [n]$, decide whether there exists $1\leq i\leq m$ s.t. $S_i\cap X = \emptyset$.

TASKbis: on batch queries $X_1, \ldots, X_k\subseteq [n]$, decide whether $S_i\cap X_j=\emptyset$ for each $i,j$.

I'm searching hard for references about this problem, both TASK and the batched version TASKbis: fast algorithms (sequential, parallel, randomized; any other model is appreciated), as well as complexity lower bounds or other characterizations. Observe that we can impose special values of $m$, for example $m=n^2$, or similar.
Testing empty set intersection
cc.complexity theory;ds.algorithms
null
_softwareengineering.310562
I am working with some low-level code (by that I mean code that can't use C++ exceptions and/or the standard library) that makes heavy use of classes. Basically, every class contains a bool initialize(); method that is called right after instantiation to initialize all its components, underlying objects and such. This is done because constructors in C++ can't return a value. Also, every method that allocates memory, uses a system API that may fail, etc. must be checked for a positive return value. However, this approach becomes very annoying after a while. Consider the following code:

bool createHelloWorldString(String* string)
{
    String str1;
    if (!str1.initialize())
        return false;

    String str2;
    if (!str2.initialize())
        return false;

    // Need to check the return value as this method may fail
    // because it dynamically allocates memory
    if (!str1.set("Hello "))
        return false;
    if (!str2.set("world"))
        return false;

    if (!str1.append(&str2))
        return false;

    return string->append(&str1);
}

Note: This may not be the best example, but it clearly shows where the problem is. Are there any other ways to handle errors, or am I stuck with this?
Low-level error handling
object oriented;c++
null
_webapps.25510
What is impressive about Wikipedia is that information is updated so quickly. Any big event, like the death of a celebrity or whatever, will appear on there quicker than on some news sites. I'm just wondering, though: Wikipedia also keeps current information that is relatively minor. For example, this is the page of Roger Federer (http://en.wikipedia.org/wiki/Roger_federer). Now, he's a high-profile tennis player, but he's not really famous for his doubles record. Nevertheless, his current doubles ranking is displayed. So my question is: is his current doubles ranking just updated manually by someone every time it changes, or is there some sort of automated process by which Wikipedia sources this data from somewhere in order to update this statistic?
Are some parts of Wikipedia updated automatically?
wikipedia
null
_webapps.66851
Is it possible to create an RSS feed for all the tweets listed on the start page at twitter.com?In the screenshot below I am referring to the tweets which are listed on the right side of the screen.
Twitter RSS feed for start page tweets
twitter;rss;twitterfeed
Officially no, you can't. Twitter did support RSS feeds once, but the feature was removed. (Which was an idiotic anti-user decision... but I digress.)

There is a workaround, though. You can't get an RSS feed of your entire Twitter timeline, but you can get RSS feeds for individual users and hashtags using this tutorial and a Google Apps Script that parses the output of Twitter widgets into an RSS feed.
_datascience.15448
Suppose I have a data set of different types of cards with eight features in my hand. I want to find features to predict the classes of the cards (diamonds=1, hearts=2, clubs=3, and spades=4).

| f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 |

The f1 column is for class labels and the rest of them are features. I want to classify the cards using a two times higher a priori probability only for red suits. How can I do this? Can I do this without modifying my bayes() function and parzen() function?

Source code and resources:

Training data: train.txt
Test data: test.txt

Driver program:

function errcf = bayescls(train, test, hpdf, apriori, winwidth)
% Bayes classifier
% train - training set; the first column contains the label
% test - test set; the first column contains the label
% hpdf - handle to the function used to compute probability density
% apriori - row vector of a priori probabilities for all classes
% winwidth - window width (just for the Parzen window hpdf function)
clpdf = hpdf(train, test(:,2:end), winwidth);
clpr = clpdf .* repmat(apriori, rows(test), 1);
[val lab] = max(clpr, [], 2);
errcf = mean(test(:,1) ~= lab);

PDF calculation using a Parzen window:

function pdfmx = pdfparzen(train, test, winwidth)
% computes probability density for all classes
% using Parzen window approximation
% train - train set; the first column contains the label,
%   used to compute mean and variation for all classes
% test - test set (without labels)
% winwidth - width of the Parzen window
% pdfmx - matrix of probability density for all classes;
%   class with label idx is stored in pdfmx(:,idx)
classnb = rows(unique(train(:,1)));
pdfmx = ones(rows(test), classnb);
for samp = 1:rows(test)
  for cl = 1:classnb
    clidx = train(:,1) == cl;
    indiv = zeros(sum(train(:,1) == cl), columns(test));
    for feat = 1:columns(test)
      indiv(:,feat) = normpdf(test(samp,feat), train(clidx, feat + 1), winwidth);
    end
    pdfmx(samp,cl) = mean(prod(indiv,2));
  end
end

MATLAB command line usage:

>> ercf_parzen = bayescls(train, test, @pdfparzen, 0.25 * ones(1,4), 0.1)
ercf_parzen = 0.35581
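For what it's worth, doubling the prior for the red suits should need no change to bayes() or parzen(): in the line clpr = clpdf .* repmat(apriori, ...), the priors simply multiply the likelihoods, so passing apriori = [2 2 1 1]/6 instead of 0.25*ones(1,4) would seem to do it. A small Python illustration of the idea (the likelihood numbers are made up for this example):

```python
# Hypothetical per-class likelihoods p(x | class) for one sample,
# classes ordered as diamonds, hearts, clubs, spades.
likelihood = [0.10, 0.18, 0.11, 0.30]

uniform = [0.25, 0.25, 0.25, 0.25]
# Two times higher a priori probability for red suits, normalized: [2 2 1 1]/6.
red_boost = [w / 6 for w in (2, 2, 1, 1)]

def classify(likelihood, prior):
    # Posterior is proportional to likelihood * prior; pick the argmax class.
    post = [l * p for l, p in zip(likelihood, prior)]
    return post.index(max(post))

print(classify(likelihood, uniform))    # → 3 (spades)
print(classify(likelihood, red_boost))  # → 1 (hearts wins once red suits are boosted)
```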
Classify features using two times higher apriori probability only for specific class(es)
classification;probability;naive bayes classifier
null
_webapps.92236
I have created my app page on Facebook. I have added the required settings like the Google Play package name, class name, and key hashes, but still, when I try to access that app page, it shows the error given below instead of taking us to the Play Store. I have searched a lot on the web but did not find a relevant post. I assume this issue can be fixed through Facebook settings. I would really appreciate any suggestions.
Content not found Facebook
facebook;facebook pages
null
_codereview.103856
This code hides a message in an image by altering the blue values in the top left part. This is just for fun, as I think it would be easy for a cryptographer to find and decrypt this.

Original

With the message "We will meet at London bridge at dawn"

As you can (cannot) see, as long as the message is short enough, the naked eye has difficulty noticing the difference. Zooming in on the top left corner will reveal the trick, though.

from PIL import Image
from itertools import zip_longest
import doctest

def read_and_write_after_tranformation(infile, outfile, transformation):
    # Based on a StackOverflow answer.
    im = Image.open(infile)
    list_of_pixels = list(im.getdata())
    im2 = Image.new(im.mode, im.size)
    im2.putdata(transformation(list_of_pixels))
    im2.save(outfile)

def read_rgb(infile):
    return list(Image.open(infile).getdata())

def steganographic_encrypt(message, rgb_tuples):
    """Encodes a message in the blue values of the pixels on the top left corner of the image.

    >>> steganographic_encrypt("Hello, World!", [(a, b, c, luminosity) for a, b, c, luminosity in zip(range(12, 30), range(20, 48), range(100, 128), range(100, 128))])
    [(12, 20, 72, 100), (13, 21, 101, 101), (14, 22, 108, 102), (15, 23, 108, 103), (16, 24, 111, 104), (17, 25, 44, 105), (18, 26, 32, 106), (19, 27, 87, 107), (20, 28, 111, 108), (21, 29, 114, 109), (22, 30, 108, 110), (23, 31, 100, 111), (24, 32, 33, 112), (25, 33, 113, 113), (26, 34, 114, 114), (27, 35, 115, 115), (28, 36, 116, 116), (29, 37, 117, 117)]
    """
    return [(red, green, blue if ascii_val is None else ascii_val, luminosity)
            for (red, green, blue, luminosity), ascii_val
            in zip_longest(rgb_tuples, map(ord, message))]

def steganographic_decrypt(rgb_tuples):
    """Decodes a message from the blue values of the pixels on the top left corner of the image.

    >>> steganographic_decrypt([(12, 20, 72, 100), (13, 21, 101, 101), (14, 22, 108, 102), (15, 23, 108, 103), (16, 24, 111, 104), (17, 25, 44, 105), (18, 26, 32, 106), (19, 27, 87, 107), (20, 28, 111, 108), (21, 29, 114, 109), (22, 30, 108, 110), (23, 31, 100, 111), (24, 32, 33, 112), (25, 33, 113, 113), (26, 34, 114, 114), (27, 35, 115, 115), (28, 36, 116, 116), (29, 37, 117, 117)])
    'Hello, World!qrstu'
    """
    return ''.join([chr(blue) for (red, green, blue, luminosity) in rgb_tuples])

if __name__ == "__main__":
    doctest.testmod()
    read_and_write_after_tranformation(
        "cc.png", "cc2.png",
        lambda image: steganographic_encrypt("We will meet at London bridge at dawn", image)
    )
    print(steganographic_decrypt(read_rgb("cc2.png"))[0:100])
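The heart of the encoder is the zip_longest trick: once the message is exhausted, the fill value None is paired with the remaining pixels, so they keep their original blue values. A minimal standalone illustration (toy pixel values, not taken from the images above):

```python
from itertools import zip_longest

pixels = [(10, 20, 30, 40), (11, 21, 31, 41), (12, 22, 32, 42)]
message = "Hi"  # ord('H') == 72, ord('i') == 105

# Same comprehension shape as steganographic_encrypt: replace the blue
# channel with a character code while the message lasts, else keep it.
encoded = [(r, g, b if code is None else code, a)
           for (r, g, b, a), code in zip_longest(pixels, map(ord, message))]

print(encoded)  # → [(10, 20, 72, 40), (11, 21, 105, 41), (12, 22, 32, 42)]
```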
Steganographic message hiding
python;image;cryptography;steganography
null
_unix.68147
Here is a screenshot of booting Arch. I guess the reason is that I force-powered-off my Arch Linux many times. (I force power off my Arch because my Firefox Flash plugin uses too much memory and stalls my system.)

Note: I can boot my Windows 7 system on the same disk drive, so I think it is not a disk problem; most likely it is a partition problem.

Update: I checked out more information. The partition /dev/sda9 is the /home directory, and the error is always on the same sector, 798717984. I used the DiskGenius software under Windows to check for errors and found one: that partition is not formatted. I want to recover my Arch Linux. How do I solve this? If I cannot fix this error, then how do I get the partition data out?

Update 2: I really hope to save this partition's data, because I have a lot of important things on it. I think the first step is to back up this bad partition or the whole hard drive into an image file (what image file?), then let someone who can fix this partition fix it.

More updates: After I used DiskGenius to fix the partition sector error, I used e2fsck to check and got this error:

fsck.ext4: Bad magic number in super-block while trying to open /dev/sda9.
/dev/sda9: The superblock could not be read or does not describe a correct ext2 filesystem.
VFS: can't find ext4 filesystem.

(My broken partition /home -> /dev/sda9 was ext4 when I created it.)

And I executed # mke2fs /dev/sda9 to get block information:

OS type: Linux
Block size: 4096 (log=2)
Fragment size=4096 (log=2)
Stride = 0 blocks, stripe width = 0 blocks
65536 inodes, 261888 blocks
13094 blocks (5.00%) reserved for the super user
First data block = 0
Maximum filesystem blocks = 268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 32768, 98304, 163840, 229376
I/O errors on hard disk on Linux boot
linux;boot;io
Are you able to log in and use Arch Linux, other than the error messages cluttering the console? If so, then most likely your hard drive is failing, just not completely dead yet. The line that says "I/O error" indicates that the kernel encountered an error trying to read data from the hard drive, and the lines beginning with "ata1.00" provide detail about the internals of the read request in the hardware. Windows doesn't show such messages, which is probably why you don't see any problems there, yet.

If you had file system corruption caused by killing the power, then the kernel should still be able to read the data from the drive, but wouldn't be able to interpret what files the data belongs to. That would result in a different set of errors.

Another way to tell would be to reboot and see if you get a similar error but with different details, e.g. a sector number different than 798717984. If so, that means the error is occurring somewhat randomly, which is another sign of hardware failure. Again, this is most likely your hard drive, though it is possible another hardware component could be failing.

I'd suggest making a backup and replacing the drive before it's too late.

EDIT AFTER OP UPDATE:

If only one sector is bad, you could use e2fsck -c -y as goldilocks suggested and continue to use the drive if that fixes the error. But modern drives have transparent error correction built in, and in my experience, by the time the OS starts to detect errors in the course of normal I/O, the drive is very close to the end of its life.

Regardless of what course of action you take, absolutely make sure you have a good backup of the entire drive before trying to repair anything!
_webmaster.48865
There is a .io domain name I want to buy when it becomes available in 12 months time.What is the best tool to buy this automatically?
How to backorder a .io domain name
domains;domain registration
null
_unix.61303
I have a script that syncs (mirror mode) four folders between two disks. While it is running it shows the directory it is in at that very moment, and at the end it shows bytes sent, speed, etc. I'm wondering if there is a parameter that shows, at the end, the changes that were made. For example:

Copied a, b, c from A/asd to B/asd
Deleted d, e, f from B/asd
Is there an rsync parameter to show the changes made at the end?
rsync
null
_unix.75378
I'm trying to write a Vim syntax file for SVN dump files. The part I can't figure out is a section like this:

Fooprop: Val1
Text-content-length: 20
Barprop: Val2
                        <- blank line
abcdefghiabcdefghi
Next-item-prop: Val3

How do I set up a syntax rule that says "N characters after the first blank line following 'Text-content-length: N'", where N is a number?
Vim syntax highlight length-delimited fields
vim
null
_softwareengineering.344987
In my application I am using EF. I have a service that can provide some information and clients who want to ask for that information. Let's imagine that I have a User, which has an Address, a list of Equipments, a list of Subscriptions, and a list of Balances. Each Balance has a list of Operations.

public class User
{
    public User()
    {
        Equipments = new HashSet<Equipment>();
        Balances = new HashSet<Balance>();
        Subscriptions = new HashSet<Subscription>();
    }

    public long Id { get; set; }
    public string LastName { get; set; }
    public DateTime DOB { get; set; }
    public long AddressId { get; set; }
    public Address Address { get; set; }
    public ICollection<Equipment> Equipments { get; set; }
    public ICollection<Balance> Balances { get; set; }
    public ICollection<Subscription> Subscriptions { get; set; }
}

public class Equipment
{
    public long Id { get; set; }
    public string Model { get; set; }
    public int StateId { get; set; }
    public long UserId { get; set; }
    public User User { get; set; }
}

public class Balance
{
    public Balance()
    {
        Operations = new HashSet<BalanceOperation>();
    }

    public long Id { get; set; }
    public decimal Amount { get; set; }
    public int CurrencyId { get; set; }
    public long UserId { get; set; }
    public User User { get; set; }
    public ICollection<BalanceOperation> Operations { get; set; }
}

public class BalanceOperation
{
    public long Id { get; set; }
    public DateTime OperationDate { get; set; }
    public long BalanceId { get; set; }
    public Balance Balance { get; set; }
}

public class Address
{
    public int Id { get; set; }
    public string City { get; set; }
    public int HouseNumber { get; set; }
}

public class Subscription
{
    public long Id { get; set; }
    public int ServiceId { get; set; }
    public bool IsActive { get; set; }
    public long UserId { get; set; }
    public User User { get; set; }
}

Also, I have a DTO, UserInfo, that can contain all that information. Clients want to ask for the UserDto at different places in the app, but they don't need all the information all the time. For example, sometimes I will need only the user with its Balances, without Operations. Sometimes, only Equipments and Subscriptions. Sometimes, Balances WITH Operations.

So what I want is for the client to use one method to get the info by providing some kind of includes:

int includes = (int)(IncludeEnum.Adress | IncludeEnum.Subscriptions);
UserInfo userInfo = _service.GetUserInfo(id, includes);

The problem is how to build the architecture of the server, what patterns to use, etc. Depending on the includes, I will build different queries to the DB and fill UserInfo in the proper way. The server has to provide the requested info and must not provide (or even query the DB for) information that is not needed; that would reduce the number of requests. In EF I can use Include() to get related objects, and sometimes I can build queries using LINQ (from ... join ... select). Or maybe all my thoughts are wrong and I have to do all of this in a different way?
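The flag-driven dispatch described here (one GetUserInfo entry point, a different query shape per flag combination) can be sketched in a language-neutral way. Here is a hedged Python sketch; the flag names and path strings are hypothetical stand-ins for the C#/EF equivalents, where each path would become an Include() call:

```python
from enum import IntFlag

class Include(IntFlag):
    # Hypothetical flags mirroring the asker's IncludeEnum.
    NONE = 0
    ADDRESS = 1
    EQUIPMENTS = 2
    SUBSCRIPTIONS = 4
    BALANCES = 8
    OPERATIONS = 16

def build_include_paths(includes):
    # Map the requested flags to the navigation paths the query should load.
    paths = []
    if includes & Include.ADDRESS:
        paths.append("Address")
    if includes & Include.EQUIPMENTS:
        paths.append("Equipments")
    if includes & Include.SUBSCRIPTIONS:
        paths.append("Subscriptions")
    if includes & Include.BALANCES:
        # Operations only exist nested under Balances.
        paths.append("Balances.Operations"
                     if includes & Include.OPERATIONS else "Balances")
    return paths

print(build_include_paths(Include.ADDRESS | Include.SUBSCRIPTIONS))
# → ['Address', 'Subscriptions']
```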
Creating different queries to DB for one object
c#;architecture;.net;entity framework
null
_webmaster.72071
I have a website whose name coincides with another popular domain: if you add two letters at the end of my website's name, you get the name of an already popular domain. After registering and running my website for a couple of months, I realized this when I typed my site's name into Google and the auto-suggestion tool completed my query with that particular popular domain. Since the site was running and known to a few people, I thought regular content updates would solve this issue, but still, after 6 months, Google auto-suggests that name. So my basic concern is whether the chosen name is very bad for SEO and brandability. Will I always be overshadowed by the older, popular domain? Please brief me on how to go about this, as I want the sitelinks and auto-suggestion to work for me and not against me.
Impact of domain name coinciding with other popular domain
google;domains;google search console
null
_cs.43370
Is there a way to find the nth string of characters from an alphabet, without having to store all of the combinations?

Example: alphabet $A = \{a,b,c\}$, $n=12$. All possible combinations in lexicographic order are $C = \{a, ab, abc, ac, acb, b, ba, bac, bc, bca, c, ca, cab, cb, cba\}$. You can see that there's no repetition of characters in any string and the empty string is not a member of $C$. So the nth string is $ca$.

Clearly, to find this I'm iterating over the set $C$. The problem is that I have to generate all these strings, which takes a really long time, and then search for the nth one. If the alphabet is large ($1 \leq |A| \leq 26$), the set $C$ grows too fast (I'm not sure how fast) and is impossible to store. My question is whether there exists a way to find the nth one without generating all the strings. Also, I'm not sure if this question belongs on this StackExchange site.
Get the nth lexicographic string of all possible combinations of an alphabet
algorithms
It is better if you include the empty string as the $0$th string in lexicographic order. Here is how you convert an index into a word.

First, we are going to find the first letter of the word, and an internal index among all words starting with this letter. We do this by counting how many words start with each letter, including the empty letter; I'll let you figure out how to do that. In your case, we get $1,5,5,5$. This means that $[0,1)$ corresponds to the empty word, $[1,6)$ to words starting with $a$, $[6,11)$ to words starting with $b$, and $[11,16)$ to words starting with $c$.

Given an index, you locate which interval it belongs to, and record the starting character and the internal index inside the interval. In your example, $12$ belongs to $[11,16)$ (and so the word starts with $c$), and its internal index is $1$.

If the internal index is $0$, then we are done. Otherwise, we recurse. In your example, we are left with the alphabet $\{a,b\}$, for which the intervals are $[0,1),[1,3),[3,5)$. The current index is $1$, so the word at this level of recursion starts with $a$, and the new index is $0$. That means the recursion ends here. The output is thus $ca$.
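The counting argument above can be implemented directly. A sketch (assuming distinct letters, and the convention that index 0 is the empty word, under which index 12 over {a,b,c} yields "ca", matching the question):

```python
from math import factorial

def num_words(k):
    # Repetition-free words over k letters, INCLUDING the empty word:
    # sum over lengths j of k!/(k-j)!.  num_words(2) == 5, num_words(3) == 16.
    return sum(factorial(k) // factorial(k - j) for j in range(k + 1))

def unrank(index, alphabet):
    alphabet = sorted(alphabet)
    word = ""
    while index > 0:
        index -= 1                            # skip the empty word at this level
        block = num_words(len(alphabet) - 1)  # words per first-letter interval
        letter = alphabet[index // block]     # which interval the index falls in
        word += letter
        alphabet.remove(letter)               # no repeated characters allowed
        index %= block                        # internal index for the recursion
    return word

print(unrank(12, "abc"))  # → ca
```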
_unix.36982
Is it possible to set up system mail on a Linux box to be sent via a different SMTP server, maybe even with authentication? If so, how do I do this? If that's unclear, let me give an example. If I'm at the command line and type:

cat body.txt | mail -s "just a test" [email protected]

is it possible to have that be sent via an external SMTP server, like Gmail? I'm not looking for a way to send mail from Gmail from the command line, but rather an option to configure the entire system to use a specific SMTP server, or possibly one account on an SMTP server (maybe overriding the from address).
Can I set up system mail to use an external SMTP server?
linux;smtp;email
I found sSMTP very simple to use. On Debian-based systems:

apt-get install ssmtp

Then edit the configuration file in /etc/ssmtp/ssmtp.conf. A sample configuration to use your Gmail for sending e-mails:

# root is the person who gets all mail for userids < 1000
[email protected]
# Here is the gmail configuration (or change it to your private smtp server)
mailhub=smtp.gmail.com:587
[email protected]
AuthPass=yourGmailPass
UseTLS=YES
UseSTARTTLS=YES

Note: Make sure the mail command is present on your system; the mailutils package should provide it on Debian-based systems.

Update: There are people (and bug reports for different Linux distributions) reporting that sSMTP will not accept passwords with a space or '#' character. If sSMTP is not working for you, this may be the case.
_cs.57438
I need to find an upper bound for the runtime of $f(n)$.

f(n):
{
    g(n,1)
}

g(n,k):
{
    if n <= 0 return;
    for (i = 1; i <= n; i++)
    {
        print "I love data structures!";
        k++;
        g(n-1, k);
    }
    return;
}

I tried to think of it this way:

for(i=1; i<=n; i++) $\rightarrow (n+1)c_2$
g(n-1, k) $\rightarrow ng(n-1)$

$$f(n) = g(n,1) + c_f = c_1 + (n+1)c_2 + nc_3 + nc_4 + ng(n-1) + c_f$$

I am not sure about the recursion runtime analysis: g(n-1,k) $\rightarrow ng(n-1)$. Thank you!
Converting an algorithm to a runtime function
runtime analysis;recursion
The recurrence for $g$'s running time is $$T(n)=n(T(n-1)+1)$$ where $1$ is the (constant) work done by the print statement and incrementing $k$; $g(n-1,k)$ and this constant work are executed $n$ times by the for loop. The recursion stops at $g(0,k)$, which has running time $T(0)=1$.

Solving this recurrence in Mathematica (I suspect doing it by hand would involve induction or inspection) gave the result: $$T(n)=\Gamma(n+1)+en\Gamma(n,1)$$

The complexity of this is $$T(n)=O(\Gamma(n+1)+en\Gamma(n,1))=O(n\Gamma(n,1))$$

Since $n$ is an integer, we can reduce this to $$T(n)=O(n!)$$
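A quick numerical check of the recurrence supports the $O(n!)$ bound: expanding the closed form suggests $T(n)/n!$ should approach the constant $1+e$, and computing the recurrence directly agrees.

```python
from math import e, factorial

def T(n):
    # The recurrence from the answer: T(n) = n*(T(n-1) + 1), with T(0) = 1.
    if n == 0:
        return 1
    return n * (T(n - 1) + 1)

for n in (1, 5, 10):
    print(n, T(n), round(T(n) / factorial(n), 6))
# The ratio T(n)/n! converges toward 1 + e ≈ 3.718282, so T(n) grows like n!.
```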
_webmaster.68027
I have the site www.example.com, which someone previously configured with wildcard DNS records and with no 301 redirects. As you may have expected, Google's index is now full of garbage like w.example.com, pop.example.com, ns1.example.com, foobar.example.com, etc. All of the indexed subdomains are almost exact copies of the main domain, duplicating every piece of content from the latter. I've set up 301 redirects for requests coming from everything except www.example.com, but it seems that is not enough, because almost three months after the redirects were set up, all the incorrect subdomains still pile up in Google's index. So the question is: what can I do to speed up this cleanup process? I have heard of sitemap-assisted redirects, but some people suggest that this approach is not viable, since Webmaster Tools may reject sitemaps containing URLs that cause redirects.
Cleanup incorrectly indexed subdomains due to wildcard DNS
seo;google;sitemap
The 301 redirects should resolve this issue, but if you think they are not working, you can also try using canonical URLs. By giving each page on the main site a canonical URL, you tell Google that this is the main URL you want indexed for that content. Any other URL that pulls up that page should then be treated as a duplicate, and only the canonical page will be considered for indexing and ranking purposes.
_cstheory.16508
I see here and there mentions of the $\lambda_I$-calculus (in which every variable must be used at least once) and the $\lambda_K$-calculus (in which a variable can also be unused). Are they equivalent? Why has the latter somewhat obscured the former?

EDIT: By equivalent, I mean they have the same expressive power, namely, being universal or Turing complete.
Are the $\lambda_I$-Calculus and the $\lambda_K$-Calculus equivalent?
lambda calculus
You've basically answered the question yourself. $\lambda K$ is just another name for the standard, untyped lambda calculus, and $\lambda I$ is a strict subset of $\lambda K$: $\lambda I$ doesn't allow terms where one abstracts over a variable but doesn't use it. So $$K = \lambda xy.x \in \lambda K$$ but $$K \not\in \lambda I$$

Thanks to this restriction, $\lambda I$ has some interesting properties; in particular, if $M$ has a normal form then so do all its sub-terms.

Barendregt, H. P., The Lambda Calculus: Its Syntax and Semantics, contains some notes about $\lambda I$, namely:

... the $\lambda I$ calculus is sufficient to define all recursive functions (since $K_1 := \lambda xy . yIIx$ satisfies $K_1 x c_n = x$ for each of Church's numerals $c_n := \lambda fz . f^n z$; it is also the case that for each finite set $n$ of nf's, we can find a local $K$ for $n$, $K_n$, such that $K_n M N = M$ for each $N$ in $n$). ... The $\lambda I$ calculus corresponds to the combinatory logic with primitive combinators $I$, $B$, $C$, and $S$. ...
_datascience.19897
I'm learning JS, HTML and CSS, but I doubt JS is very good for data analysis. So, what would you recommend I learn to start my career in data science? What's the best programming language for processing data? P.S. I love statistics and programming, so I think this will be fun.
Best Programming Language for Data Science
data;programming
This is no doubt a duplicate, but here's how I'd weigh in on the major languages.

R:
- Fantastic support for packages and a specialised stats analysis community; you can find a package to do just about anything you need, and it will be relatively easy to use.
- Good for throwing together prototypes and performing exploratory analysis.
- Free and open source.
- Slower than Python. Basically, don't loop over anything.
- An odd language for a programmer to use (coming from a software dev background). Clearly designed by mathematicians.
- Relatively little choice of good IDEs.

Python:
- Fast.
- Also very good as a general-purpose language, so it has 'broader' package support.
- Free and open source.
- Easy to use for Big Data applications.
- Not as streamlined for analysis as R.
- Syntax can be difficult to read (no surrounding braces to make it obvious where functions/if statements end).
- Can be particularly tedious when working with dataframes compared to R.

MATLAB:
- Generally slower.
- Has very impressive packages for signal processing/image recognition and all the cool stuff.
- Very readable and easy to comprehend generally.
- Is NOT free. Student licenses are available (it was quite complicated for me to get my hands on one, though...).
- Very good support for mathematical analysis, similar to R, but much better matrix functions.

Personal recommendation: Python. Kill two birds with one stone: learn good general-to-advanced programming concepts and data science at the same time.

Good article: https://www.linkedin.com/pulse/r-vs-python-matlab-octave-julia-who-winner-siva-prasad-katru
_unix.375907
I have a project with some Lua and some bash files. I want to loop over all the files and, depending on the shebang, run a validity check.
validity check all files in a folder depending on the shebang
shell script;files;shebang
null
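No accepted answer is recorded for this question. As an illustration only, here is a Python sketch of one way to do it: read each file's first line and dispatch a syntax checker based on the shebang. The dispatch table and the checker commands (`bash -n` and `luac -p` both parse without executing) are my assumptions, not something from the question.

```python
import os
import subprocess

# Hypothetical mapping from shebang interpreter name to a syntax-check
# command; adjust to whatever checkers you actually want to run.
CHECKERS = {
    "bash": ["bash", "-n"],
    "sh": ["sh", "-n"],
    "lua": ["luac", "-p"],
}

def checker_for(first_line):
    """Return the check command for a file whose first line is `first_line`,
    or None if it has no recognised shebang."""
    if not first_line.startswith("#!"):
        return None
    parts = first_line[2:].split()
    if not parts:
        return None
    interpreter = parts[0].rsplit("/", 1)[-1]
    if interpreter == "env":  # e.g. "#!/usr/bin/env lua"
        interpreter = parts[1] if len(parts) > 1 else ""
    return CHECKERS.get(interpreter)

def check_tree(root):
    """Walk `root` and run the matching validity check on each script."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                cmd = checker_for(f.readline().rstrip("\n"))
            if cmd:
                result = subprocess.run(cmd + [path])
                print(path, "OK" if result.returncode == 0 else "FAILED")
```

Calling `check_tree(".")` from the project root would then report one OK/FAILED line per recognised script.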
_webapps.96069
I set up a form to specify the location of household appliances in a residence. The form is based on a unique code assigned to each household (it's working properly). To avoid entry errors, I would like the user, once the unique code is entered, to see the details of the device before confirming the new location. The information is present in the associated Google spreadsheet; is it possible, via a Google script, to show it directly on the form page?

Example:
Question 1: Select ID: E000025 -> Display: You're moving the SAMSUNG 32 TV BLACK
Question 2: Select the new location: Apartment 200

Thanks for your help
Is it possible to show dynamic information in a Google Form based on a previous response?
google spreadsheets;google apps script;google forms
Short answerNo, it's not possible.ExplanationAt this time Google Apps Script doesn't include a way to modify a form when a user open it to respond. An alternative is to use Google Apps Script to create a web app.Referenceshttps://developers.google.com/apps-script/reference/forms/https://developers.google.com/apps-script/guides/triggers/https://developers.google.com/apps-script/guides/web
_unix.3086
Is there such a thing as a full-text indexing engine that can be queried from the command line and ideally wouldn't require using a GUI at all? I'm especially interested in indexing my ebooks and papers, so that's a mixture of PDF, EPUB and a few DjVu. (Open)Office docs would be nice, but much lower on my list.
Command-line-friendly full-text indexing?
command line;search
null
_webmaster.1487
How can I check Google Page Rank of all pages of a website without installing any tool on my machine?
How to check pagerank of all pages of a website?
seo;search engines;pagerank
null
_cs.29017
I need to compute the permanent of a 10*100 matrix. All the entries are either 0 or 1. All I know is that I can compute the permanent of all 10*10 submatrices and then sum them to get the desired answer. But this involves C(100,10) β‰ˆ 10^13 operations, which is too many in my case. Is there a better algorithm for this task?
computing permanent of a 0-1 rectangular matrix
algorithms;combinatorics;matrices
null
_softwareengineering.14033
Possible Duplicate:How can I get the word out about a new (open-source) library I've developed? I have hosted my latest project, a JVM-based MIDI processor/API called Mjdj MIDI Morph, on Github (here and here). Now I need to bring some interest to it, even if it's negative interest (so I can improve it). I've looked up open source list on Google and end up with such things as this page on Wikipedia, which makes it quite clear that they don't want your project if it's new. Where should I list my project? Short of adwords and talking it up in forums and trade shows, where should I submit my URLs?
Where Can I List My Opensource Project?
open source;marketing
Get it announced on Freshmeat (now Freecode; no longer accepting updates as of 2014-06-18). Also, remember to get to a usable state first, or people will only use it once and keep that impression forever...
_unix.305073
I want to extract text info from layers (like font, font style, font size and content) along with the name and number of each layer. Any command-line tool available in the standard repos is an option. I know it can be done with Photoshop scripting, but for the sake of science I would like to do it on a Unix server, and maybe later extract all the info from multiple files in a zip and process them with multiple tools.
Extract text layer from PSD ( ImageMagick or GiMP )
command line;imagemagick;gimp
GIMP has the script-fu scheme extension that can be run from the command line. This will be sketchy because I have not written any scheme in some 3-4 years, but here goes nothing.

Assuming the following script in a file called sc.sch:

(define (go-by-layers no layers)
  (while (< 0 no)
    (let* ((layer (vector-ref layers (- no 1))))
      (display "Layer name: ")
      (display (car (gimp-item-get-name layer)))
      (newline)
      (if (< 0 (car (gimp-item-is-text-layer layer)))
        (begin
          (display "This is a text layer")
          (newline)
          (display "Font: ")
          (display (car (gimp-text-layer-get-font layer)))
          (newline)
          (display "Text: ")
          (display (car (gimp-text-layer-get-text layer)))
          (newline)))
      (if (>= 0 (car (gimp-item-is-text-layer layer)))
        (begin
          (display "Not a text layer")
          (newline)))
      (set! no (- no 1)))))

(let* ((layers (gimp-image-get-layers 1)))
  (display "Number of Layers: ")
  (display (car layers))
  (newline)
  (go-by-layers (car layers) (cadr layers))
  (display "end")
  (newline))

(gimp-quit 0)

We can do:

$ gimp zz.psd -b - < sc.sch 2>/dev/null
Welcome to TinyScheme, Version 1.40
Copyright (c) Dimitrios Souflis
ts> go-by-layers
ts> Number of Layers: 2
Layer name: Background
Not a text layer
Layer name: Layer 1
Not a text layer
end
#t

This is quite hacky since we are running the batch mode from STDIN and redirecting the script in. We also get the prompt output, which is quite ugly, but it should work with most GIMP versions.

How does this work:
- Since we have only one image loaded, we know it is named 1.
- We get the layers with (gimp-image-get-layers 1).
- The layers are a fixed vector, so we walk through them using vector-ref (inside a while).
- (gimp-item-is-text-layer layer) tells us whether we can execute text-specific operations on the layer.
- The gimp-text-layer-get-* calls give us info about the text layer.
- For non-text layers we print less info.

How to get a function reference for Script-Fu? In GIMP go to Filters -> Script-Fu -> Console. And in there, next to the text field where you can insert scheme commands, there is a Browse button that opens the reference for your version of GIMP.

Disclaimer: this is poorly tested; I only have a simple two-layer (without any text) PSD to test it.
_hardwarecs.1249
I want to use my nexus 5x to make presentations, but apparently, there is no video support. I tried an asus miracast dongle, but that is spotty at best. So it appears that my only good option is to use a chromecast device, but my university uses the 802.1x protocol, which chromecast apparently does not support. Is there a way to use a travel router to do what I want? I see two options:The router does not connect to the internet and simply allows my phone and the chromecast to communicate. I don't know if chromecast needs internet access to work.It connects to the internet using the 802.1x protocol, but then provides a different protocol (11n, etc) for my phone and the chromecast. Will either of these options work? Or some other setup? Specific hardware suggestions are of course welcome.
How to access an 802.1x network with a non-802.1x device
wifi;access point
null
_hardwarecs.2747
My motherboard has no SATA3 controller. I want to purchase a separate high-quality controller that can drive 4 SSD drives simultaneously at maximum speed and performance. The controller must operate at 6 Gbps on each port simultaneously. (RAID functionality is not needed.)
PCI-e SATA III controller with 4 ports
ssd;sata;raid controller
null
_webmaster.57801
I heard that a sitemap for Google cannot exceed 10 MB and 50k URLs. This is not enough for me. What shall I do to get past this obstacle? Create a sitemap for each category?
If the size of sitemap is limited: how do I tell to Google all of my URLs?
google;sitemap
Just use Sitemap index files:

You can provide multiple Sitemap files, but each Sitemap file that you provide must have no more than 50,000 URLs and must be no larger than 10MB (10,485,760 bytes). If you would like, you may compress your Sitemap files using gzip to reduce your bandwidth requirement; however the sitemap file once uncompressed must be no larger than 10MB. If you want to list more than 50,000 URLs, you must create multiple Sitemap files.

If you do provide multiple Sitemaps, you should then list each Sitemap file in a Sitemap index file. Sitemap index files may not list more than 50,000 Sitemaps and must be no larger than 10MB (10,485,760 bytes) and can be compressed. You can have more than one Sitemap index file. The XML format of a Sitemap index file is very similar to the XML format of a Sitemap file.

The Sitemap index file must:
- Begin with an opening <sitemapindex> tag and end with a closing </sitemapindex> tag.
- Include a <sitemap> entry for each Sitemap as a parent XML tag.
- Include a <loc> child entry for each <sitemap> parent tag.

The optional <lastmod> tag is also available for Sitemap index files.

Note: A Sitemap index file can only specify Sitemaps that are found on the same site as the Sitemap index file. For example, http://www.yoursite.com/sitemap_index.xml can include Sitemaps on http://www.yoursite.com but not on http://www.example.com or http://yourhost.yoursite.com. As with Sitemaps, your Sitemap index file must be UTF-8 encoded.
_cs.13235
I am just not sure: does the empty set have a context-free grammar in Chomsky normal form? That is, for $B=\emptyset$, a context-free grammar is $S \to S$, which I think doesn't have a Chomsky normal form. I am not sure. Can someone explain?
Does the empty language have a CFG in CNF?
formal languages;context free;formal grammars
null
_vi.5313
I want to essentially be able to use a command similar to dt#, where the # represents any numerical character. My use case is modifying a script where I have a few server instances with long names that I wanted to abbreviate, from:

instance0
instance1
instance2
instance3

to

i0
i1
i2
i3

Is that possible without a regex? It would be useful to have something similar that includes a way to represent any alpha or symbol too.
How can I execute a command with a motion to the next/previous number?
cursor motions
Placing your cursor on the second character in your string (n), you could use d/\d. I suppose this does count as regex still, but looking at the documentation of t and f, they both use {char}, which does not seem to include character groups or types such as \d (which represents digits in a regex pattern).

i[n]stance0
instance1
instance2
instance3

After d/\d:

i0
instance1
instance2
instance3
_webmaster.44321
On our company site we have pages for multiple products. Some of the products become deprecated. These products are of no interest to us because they generate no income. Pages for deprecated products attract a relatively high number of visitors (more than half, combined). Keywords for deprecated and non-deprecated products have very, very little in common. Is it wise (from an SEO perspective) to move all the pages for deprecated products to other sites? For us it basically doesn't matter what happens with deprecated products and the pages related to them, but we of course don't want to affect other products in any negative way. The whole point of all this is to tell search engines that our site is more focused on what is left.
Shall I remove pages for deprecated products?
seo
From an SEO standpoint it is not wise to remove pages for deprecated products. Those pages have history and inbound links that give credibility to your website. If you remove those pages and return 404 status, or redirect those pages to your homepage, you will lose any link juice associated with inbound links to those products.I would suggest leaving the pages up, but putting notice on those pages that the products are no longer available, (maybe no longer supported as well), and putting advertisements for your current products on those pages. If you are getting visitors there, I would view it as a rich opportunity to sell more products to your loyal customers. Maybe they don't even know about the new features available in a new product line.
_unix.342349
I found this asked by someone two years ago and it worked great:

sed 's/$/ /' SUDIP1>SUDIP2

But I have found that if I add white space to every line, it messes up later sed commands. I'd like to search for EQU:

EQU 888-111-2222 T 1234

then move down one line and add the extra spaces at the end of the following type of line, leaving alone all other lines in between:

888-111-2222 T 1234      (6 white spaces here)

Several people on this site have marked my question as a duplicate, but it is not. The question they point to was about searching for a string after the letter C appears, and then replacing it with a different string. My question is different because it involves adding white space to the end of the line, not changing the text on the line. Please don't be so quick to call a question a duplicate. It affects people's reputation and points!
Add white space to next line only after specific pattern
sed
If you have GNU sed, try:

sed '/^EQU/ {n; s/$/      /}' SUDIP1>SUDIP2

Or POSIXly:

sed '/^EQU/ {
n
s/$/      /
}' SUDIP1>SUDIP2
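For illustration only (this is not part of the original answer), the same transformation can be sketched in Python. It mirrors sed's n command: when a line starts with EQU, the next line is consumed and padded, and the padding of six spaces is taken from the question's example.

```python
def pad_after_equ(lines, padding=" " * 6):
    """Append `padding` to the line that directly follows any line
    starting with 'EQU', mimicking sed's /^EQU/ {n; s/$/      /}."""
    it = iter(lines)
    out = []
    for line in it:
        out.append(line)
        if line.startswith("EQU"):
            # sed's `n` prints the current line and loads the next one,
            # which is then padded without being re-tested against /^EQU/.
            nxt = next(it, None)
            if nxt is not None:
                out.append(nxt + padding)
    return out

text = ["EQU 888-111-2222 T 1234",
        "888-111-2222 T 1234",
        "other line"]
print(pad_after_equ(text))
```

Reading from and writing to files instead of a list would reproduce the `SUDIP1>SUDIP2` behaviour.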
_unix.151860
I am on OS X trying to ssh into an Ubuntu 12.04 server. I was able to SSH in, until abruptly it stopped working. I've read online to use -v to debug this. Output is shown below. If I ssh into a different box and then ssh from that box to the server, I am able to log in. I have no idea how to debug this problem but would like to learn.

$ ssh -v me@server
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 53: Applying options for *
debug1: Connecting to server [IP] port 22.
debug1: Connection established.
debug1: identity file /Users/me/.ssh/id_rsa type 1
debug1: identity file /Users/me/.ssh/id_rsa-cert type -1
debug1: identity file /Users/me/.ssh/id_dsa type -1
debug1: identity file /Users/me/.ssh/id_dsa-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
ssh_exchange_identification: read: Connection reset by peer

So far (on the advice of message boards) I have looked for a hosts.deny file, but there is no such file on my machine:

$ cat /etc/hosts.deny
cat: /etc/hosts.deny: No such file or directory

I have admin access on the client machine but not on the server.
ssh_exchange_identification: read: Connection reset by peer
ssh
null
_cstheory.31832
SAT is NP-complete, QBF is PSPACE-complete, DQBF is NEXPTIME-complete. Is there any extension of QBF or restriction of DQBF that is EXPTIME-complete?Added later: a definition of DQBF can be found here: https://www.react.uni-saarland.de/publications/sat14.pdf
EXPTIME-complete propositional satisfiability problem
cc.complexity theory
null
_unix.371867
I want to loop over a data frame, comparing one of the elements of the current row with the next row. For example, I have a data frame that looks like this:

   V1   V2   V3   V4
1  chr1 10   1000 2000
2  chr1 10   2000 3000
3  chr1 10   4000 5000
.
.
.

I would like to compare the element of the 1st row and 4th column with the element of the 2nd row and 3rd column, and if they are the same do something; then the element of the 2nd row and 4th column with the element of the 3rd row and 3rd column, do something, and so on. So I am trying something like this:

for (i in 1:nrow(my_dataframe)) {
  if (my_dataframe[i, 4] == my_dataframe[i + 1, 3]) {
    print("OK")
  }
}

So this would give me, for example, "1 OK" with my example data frame. However, it looks like R doesn't like the i + 1, because it gives me the following error:

Error in if (tabla4subset[i, 4] > tabla4subset[i + 1, 3]) { : missing value where TRUE/FALSE needed

Does someone know how to do this?
Loop over a data frame comparing elements of the first and second row
r
null
_unix.98647
I'm trying to detect what filesystems a kernel can support. Ideally in a little list of their names but I'll take anything you've got.Note that I don't mean the current filesystems in use, just ones that the current kernel could, theoretically support directly (obviously, fuse could support infinite numbers more).
Can I list the filesystems a running kernel can support?
linux;filesystems;kernel
Can I list the filesystems a running kernel can support?

Well, the usual answer of /proc/filesystems is simply wrong: it reflects only those FSes that have already been brought into use, but there are usually many more:

ls /lib/modules/$(uname -r)/kernel/fs

Another source is /proc/config.gz, which might be absent in your distro (and I always wonder why?!).
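As a side note (my addition, not part of the answer), /proc/filesystems is a tab-separated file where the first column is either empty or the flag "nodev" (meaning the filesystem needs no backing block device). A small Python sketch to parse that format:

```python
def parse_filesystems(text):
    """Parse /proc/filesystems-style text into (name, needs_device) pairs.
    Lines look like 'nodev\tproc' or '\text4' (flag, tab, name)."""
    entries = []
    for line in text.splitlines():
        if not line.strip():
            continue
        flag, _, name = line.partition("\t")
        entries.append((name, flag != "nodev"))
    return entries

# Sample text in the /proc/filesystems format (not read from a live system).
sample = "nodev\tsysfs\nnodev\tproc\n\text4\n\txfs\n"
for name, needs_device in parse_filesystems(sample):
    print(name, "block-device backed" if needs_device else "virtual")
```

On a live system you would call `parse_filesystems(open("/proc/filesystems").read())`, and combine the result with the module listing from the answer, since /proc/filesystems only shows filesystems already registered with the kernel.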
_softwareengineering.275719
I read the recent article Longest x86 Instruction http://blog.onlinedisassembler.com/blog/?p=23

I attempted to reproduce the curious disassembly issue on a Win7 x86 development platform using masm and, as the article suggested, redundant prefixes. Talk is cheap, so here's a toy program (masm32):

.386
.model flat, stdcall
option casemap:none

includelib \x\x\kernel32.lib
includelib \x\x\user32.lib
include \x\x\kernel32.inc
include \x\x\user32.inc
include \x\x\windows.inc

.code
start:
db 0F3h
db 0F3h
db 0F3h
db 0F3h
db 0F3h
db 0F3h
db 0F3h
;...6 more bytes later
db 089h
db 0E5h
end start
invoke ExitProcess, NULL

After linking and assembling, I opened the resulting executable in windbg. To my disappointment, when I single-step, unassemble the $exentry, etc., windbg simply sees the prefixes/bytes as individual instructions, says 'to hell with it' and executes only the valid instructions. Is there something I'm missing?
Longest x86 Instruction
windows;assembly
null
_reverseengineering.13348
I am trying to list the arguments of a function using this command:

Python>from idaapi import *
Python>tif = tinfo_t()
Python>get_tinfo2(here(), tif)

But it shows False in the output. When I use this command:

cmt = GetType(ScreenEA())
print cmt

it shows None in the output. I don't know why. Maybe I have not enabled some option in IDA, or my program has no arguments. Why can't I show arguments in IDA Pro?
Why can't I list function arguments in IDA Pro?
ida;disassembly;assembly;idapython
null
_cs.66133
Let's say I have a random number generator that spits out uniform numbers from 0 to 1. Next, I have a shape defined by a series of vertices, like { [0, 0.4], [0.5, 0.2], [1, 0.4] }. In those vertices, the first value determines the position and the second the distribution weight at that point (interpolated in between). Now I need a magic function to redistribute the uniform random numbers into a distribution that would statistically match the shape of the vertices provided. Essentially, if I were to run 10000 random numbers through the function, sort them into buckets like { [0 - 0.1], [0.1 - 0.2] ... } and count the buckets so that [x = x, y = count], it should produce a shape similar to using the original shape as [x, y]. To illustrate, MS Paint at its finest. Anyway, the question: I'm looking for an algorithm to build that magic function. The shapes I want to feed it are based on real-world statistics, which often don't fit any of the neat-looking distributions like Gaussian or normal; I've seen all kinds of wonky lines.
Redistributing a set of uniformly distributed numbers to an arbitrarily defined shape
algorithms;sets;statistics
null
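No accepted answer is recorded here; the standard tool for this kind of "magic function" is inverse transform sampling: treat the vertex shape as a piecewise-linear density, build its cumulative distribution function, and push each uniform draw through the inverse CDF. A pure-Python sketch under my own assumptions (grid resolution, linear interpolation between vertices, and nearest-grid-point inversion are all choices I made, not something from the question):

```python
import bisect

def make_redistributor(vertices, grid=1000):
    """Return f: [0,1] -> [x0,xN] mapping uniform draws to the density
    whose piecewise-linear shape is given by (position, weight) vertices."""
    xs = [v[0] for v in vertices]
    ws = [v[1] for v in vertices]

    def weight_at(x):
        # linear interpolation of the weight between neighbouring vertices
        i = bisect.bisect_right(xs, x) - 1
        i = max(0, min(i, len(xs) - 2))
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ws[i] + t * (ws[i + 1] - ws[i])

    # cumulative trapezoid rule on a uniform grid, then normalise to a CDF
    pts = [xs[0] + (xs[-1] - xs[0]) * k / grid for k in range(grid + 1)]
    cdf = [0.0]
    for k in range(1, len(pts)):
        step = pts[k] - pts[k - 1]
        cdf.append(cdf[-1] + step * (weight_at(pts[k - 1]) + weight_at(pts[k])) / 2)
    total = cdf[-1]
    cdf = [c / total for c in cdf]

    def redistribute(u):
        # invert the CDF: find the grid point whose CDF first reaches u
        k = min(bisect.bisect_left(cdf, u), len(cdf) - 1)
        return pts[k]  # interpolate between grid points for smoother output
    return redistribute

f = make_redistributor([(0, 0.4), (0.5, 0.2), (1, 0.4)])
```

Feeding 10000 uniform draws through `f` and histogramming the results should then reproduce the V-shaped bucket counts described in the question.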
_softwareengineering.344666
In our company, we have large ear files to which developer groups contribute jars. This is a many-to-many relation, where each ear has 30-50 jars and some of the jars are within 10 or more ear files. We are thinking about introducing more automatic testing to move in the direction of CI. Integration tests usually happen on the ear level, not for individual jars. So my dilemma is as follows: When a developer checks in code, the CI server builds and unit-tests his jar. Then, there would be the need for integration tests. How should I handle this?I could build the jar as SNAPSHOT and then trigger a SNAPSHOT build for the ear(s) as well. This is nice and easy, but SNAPSHOT builds are really only for development and should not be promoted to a release stage.I could construct a minimal test ear for each group of jars. If a jar is built, the test ear dependency list is updated with the newest version number and then built. These tests would be meaningful, but they would not test the real ears, just test ears.I could automatically update dependencies in the real ears, which could confuse people.Any ideas?
Continuous integration - jar as part of an ear
java;continuous integration;maven
null
_unix.136871
I am trying to scan wifi networks via Linux terminal running on a virtual machine. I am running virtual machine on Mac OS. On MAC terminal, I can see all Wi-Fi networks using (airport) command and can connect to one network.The NetworkAdapter setting for Virtual machine is set to Share with MAC.On Linux terminal, When I do ifconfig -a, I geteth0 & loHowever, when I type : sudo iwlist eth0 scan , I get error message:sudo iwlist eth0 scaneth0 Interface doesn't support scanning.Can someone explain how I can do that with Linux running on virtual machine? What I am doing wrong?
scan and connect to wifi from linux terminal on virtual machine
linux;osx
What you're doing wrong is to assume that whatever software you're using to run the virtual machine will pass the WiFi extension through to your VM. If you ran lspci in the terminal of your VM, you'd most likely find that it sees an Intel, Realtek or AMD wired adapter.
_unix.84577
I'm trying to set up a tmux (1.9) key binding to swap two windows. Unfortunately, I can't seem to figure out a way to send a variable for the swap. For example, if you hit C-b . it will then prompt where to move the window to. How can I get the same behavior with swap? I have:

bind-key > swap-window -t {something else here...}
Send variable through tmux binding
tmux
I figured it out. The best way to do it is to use the command-prompt feature.bind > command-prompt -p swap with swap-window -t '%%'
_unix.123511
I have a font with the name Media Gothic. How can I find the file name of that font in Linux? I need to copy that file to another system. I've tried:find /usr/share/fonts/ -name '*media*'But this gives no results. gothic gives some other fonts. TTF is a binary format so I can't use grep.
Find font file from font name on Linux
find;fonts;search;file search
Have you tried ?fc-list | grep -i mediaAlso give a try to fc-scan, fc-match
_webapps.37152
I want to print the words or statements that I've run through Google Translate. Is there any way to export them to a spreadsheet, say Google Docs?
How can I export my history in Google Translation?
google translate
Google Translate doesn't keep a history of things you've translated. Since this is the case, there is no way to export your previous translation history.
_unix.319594
I am using this command to find the folders I want and count up the size:

find . -type d -name 'tmp_c*' | xargs du -hcs {} \; +

My version of find does not support -exec. But this works. However, I am not sure it's giving me the right totals for the directories that contain my search string. When I run the command and pipe to less, I see it counting up each folder's size, and then it outputs a total every so often. Like this:

140K ./r/g/userid/attach/tmp_c_241091464_
268K ./r/g/userid/attach/tmp_c_58367014_undefined
2.3G total

If I redirect the output to a file and then grep on total, I get this:

2.3G total
978M total
1.1G total
2.0G total
1.1G total

I think this is giving me the right numbers. But how can I take this command one step further and sum up the totals into a grand total on one line?
Find size of directories recursively and get a total
find;directory;disk usage
null
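No accepted answer is recorded here. One way to get the grand total is to sum the per-run totals in a small script; the sketch below is my own illustration, and the suffix handling is an assumption (du -h uses powers of 1024 for K/M/G/T).

```python
UNITS = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}

def parse_size(token):
    """Turn a du -h size like '2.3G' or '978M' into a byte count."""
    if token[-1] in UNITS:
        return float(token[:-1]) * UNITS[token[-1]]
    return float(token)  # plain byte count, no suffix

def grand_total(du_lines):
    """Sum the sizes of lines ending in 'total', as produced by `du -c`."""
    return sum(parse_size(line.split()[0])
               for line in du_lines if line.endswith("total"))

# The five totals quoted in the question.
report = ["2.3G total", "978M total", "1.1G total", "2.0G total", "1.1G total"]
print("%.1fG" % (grand_total(report) / 1024 ** 3))
```

Piping the grep output into such a script (reading `sys.stdin` instead of the hard-coded list) would print the one-line grand total the question asks for. Note that using `du -k` or `du -b` instead of `du -h` avoids the suffix parsing entirely.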
_webapps.31983
I'm using the Munich theme for Tumblr, but it doesn't naturally show tags. I've followed the instructions here and at the followup Tumblr FAQ for adding tags, but when the tags show up, they're far separated from the rest of the post and are in a chunky Times New Roman 12-point blue, not the style of the rest of the blog. Has anyone had any luck adding tags to the Munich theme on Tumblr?
Showing tags in Tumblr in Munich theme
tumblr;tumblr themes
null
_softwareengineering.297899
In the GNU Affero General Public License, there is a section that reads:The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.So clearly I couldn't make a modified version of a piece of software and then put it on the public internet for everyone to access (assuming network software living on a server) without also making the source code available.But what is defined by public here? Do subscription based services where only select paying customers have access count as public?
In the GNU Affero GPL, do paying customers count as the 'public'?
licensing;legal;gnu
The text you quoted is part of the preamble, it is not part of the legally binding terms and conditions.The relevant part is article 13, which states that you have to make the source code available to everybody to whom you make the object code available. Note that this is no different from the normal GPL. The difference between the normal and the Affero versions of the GPL is not to whom you make the source code available, but rather what counts as making the object code available. In the normal GPL, only giving the object code away counts, whereas in the Affero GPL also making it available as a service counts.To repeat: no version of the GPL requires you to make the source code available to the public. In fact, no version of the GPL requires you to make the source code available at all.All versions of the GPL only require you to make the source code available iff you also make the object code available and only to those people to whom you make the object code available.What the Affero GPL adds is the idea that Software-as-a-Service is also a form of making the object code available and thus also requires making the source code available to the users of the service.
_codereview.159940
I am doing some practice interview questions for class, one of the questions is:Given an undirected graph G, find the minimum spanning tree. Function should take in and output an adjacency list.Here is the code that I have which works using Kruskal's algorithm:### Question 3 main function and helper functions.# Within code v = vertice, r = root, e = edge, u = union, m = make, f = find.# Global Variables for simplifying code.parent = dict()rank = dict()# Find vertices.def f(v): if parent[v] != v: parent[v] = f(parent[v]) return parent[v]# Make vertices.def m(v): parent[v] = v rank[v] = 0# Creates union between vertices.def u(v1, v2): r1 = f(v1) r2 = f(v2) if r1 != r2: if rank[r1] > rank[r2]: parent[r2] = r1 else: parent[r1] = r2 if rank[r1] == rank[r2]: rank[r2] += 1# Main Function.def Question3(G): for v in G['vertices']: m(v) edges = list(G['edges']) MST = set() for e in edges: v1, v2, weight = e if f(v1) != f(v2): u(v1, v2) MST.add(e) return MSTG = { 'vertices': [0, 1, 2, 3, 4, 5, 6, 7], 'edges': set([ (1, 6, 5), (3, 5, 2), (5, 4, 9), (4, 2, 3), (1, 1, 8), (0, 2, 1), (2, 3, 6), (2, 5, 4), (2, 4, 9), (2, 1, 7), ]) }print Minimum Spanning Tree of G:print Question3(G)# Expected Output.# Minimum Spanning Tree of G:# set([(1, 6, 5), (2, 4, 9), (2, 5, 4), (2, 1, 7), (2, 3, 6), (0, 2, 1)])print ---End Question 3---Are there any unnecessary bits of code in my solution and/or is there a more efficient way I could be going about this problem?
Finding minimal spanning tree of a graph using Kruskal's algorithm
python;algorithm;tree;interview questions;graph
The first two comments already imparted a negative impression of the code, something you definitely don't want to do in an interview.

# Within code v = vertice, r = root, e = edge, u = union, m = make, f = find.
# Global Variables for simplifying code.

The first comment indicates that your function names are horrible. The second comment indicates that you don't know how to use variables with the correct scope.

G = {
    'vertices': [0, 1, 2, 3, 4, 5, 6, 7],
    'edges': set([
        (1, 6, 5),
        (3, 5, 2),
        ...,
        (2, 1, 7),
    ])
}

Specifying the vertices is redundant, right? All of the relevant vertices should already be mentioned as one of the endpoints of an edge. (And if that is not the case, then you have a lone disconnected vertex, and it would be impossible to make a spanning tree. Such is the case, in fact, with vertex 7 in your example.)

Furthermore, your Question3() function returns a graph as a set of edges. It would also make sense that it accepts a graph as a set of edges.

The Question3 function should look more like this:

def kruskal_mst(graph_edges):
    vertex_parents = {
        v: v
        for v in itertools.chain(*[(v1, v2) for v1, v2, w in graph_edges])
    }
    vertex_ranks = {v: 0 for v in vertex_parents}
    mst = set()
    # Kruskal's algorithm must consider edges in order of increasing weight.
    for e in sorted(graph_edges, key=lambda e: e[2]):
        v1, v2, weight = e
        if find(v1, vertex_parents) != find(v2, vertex_parents):
            union(v1, v2, vertex_parents, vertex_ranks)
            mst.add(e)
    return mst

print kruskal_mst([(1, 6, 5), (3, 5, 2), (5, 4, 9), ..., (2, 1, 7)])

Your code ignores edge weights completely. (Note that your solution uses the (2, 4, 9) edge rather than the lower-cost (4, 2, 3) edge. Also, to connect vertex 3 into the MST, it uses the (2, 3, 6) edge rather than (3, 5, 2).)
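The reviewed solution calls `find` and `union` helpers that are not shown in the answer. For reference, a complete, runnable Python 3 reconstruction of that dict-based approach; the helper signatures are assumptions inferred from the calls, and the sort by weight is the correction the review asks for.

```python
import itertools

def find(v, parents):
    # iterative find with path halving on the dict-based forest
    while parents[v] != v:
        parents[v] = parents[parents[v]]
        v = parents[v]
    return v

def union(v1, v2, parents, ranks):
    # union by rank: attach the shallower root under the deeper one
    r1, r2 = find(v1, parents), find(v2, parents)
    if r1 == r2:
        return
    if ranks[r1] < ranks[r2]:
        r1, r2 = r2, r1
    parents[r2] = r1
    if ranks[r1] == ranks[r2]:
        ranks[r1] += 1

def kruskal_mst(graph_edges):
    # every vertex mentioned by an edge endpoint starts as its own root
    parents = {v: v for v in itertools.chain(*[(a, b) for a, b, _ in graph_edges])}
    ranks = dict.fromkeys(parents, 0)
    mst = set()
    for v1, v2, weight in sorted(graph_edges, key=lambda e: e[2]):
        if find(v1, parents) != find(v2, parents):
            union(v1, v2, parents, ranks)
            mst.add((v1, v2, weight))
    return mst

# The edge list from the question (the self-loop (1, 1, 8) is skipped
# automatically because both endpoints share a root).
edges = [(1, 6, 5), (3, 5, 2), (5, 4, 9), (4, 2, 3), (1, 1, 8),
         (0, 2, 1), (2, 3, 6), (2, 5, 4), (2, 4, 9), (2, 1, 7)]
print(kruskal_mst(edges))
```

With the weight sort in place, the MST picks the cheap (4, 2, 3) and (3, 5, 2) edges that the question's unsorted version missed.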
_unix.19924
Is there a way to return the load averages, excluding any load caused by nice'd processes?We have a load balancing mechanism in place that checks the load of multiple Linux servers, and submits a job to the server having the lowest load. We had a scenario where all servers had too high a load and so no server could be selected in the load balancing. However, I noticed that the servers were handling a bunch of nice'd processes, so although the load averages were high, it was still safe to submit another job.Let me know if clarification is needed. Thanks.
Get load average excluding niced processes
linux;load;nice
You can write up your own script that uses ps to list all processes in the run/runnable state without a nice value greater than 0. The specific syntax you need to use will differ based on your version of ps. Something like this may work:ps -eo state,nice | awk 'BEGIN {c=0} $2<=0 && $1 ~ /R/ { c++ } END {print c-2}'It runs ps collecting the state and nice level of all processes and pipes the output to awk which sets a count variable c and increments it whenever the second column (nice) is less than or equal to 0 and the first column includes R (for runnable). Once it's done it prints out the value of c after subtracting 2. I subtract 2 because the ps and the awk commands will almost always be considered runnable for the duration of the command's execution. The end result will be a single number which represents the number of processes that were runnable at the time that the script executed excluding itself and processes run nicely, which is essentially the instantaneous load on the machine. You'd need to run this periodically and average it over 1, 5, and 15 minutes to determine the typical load averages of the machine.
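The same filter can be sketched in Python for illustration (this is my translation of the awk logic above, not part of the original answer; it assumes the two-column `ps -eo state,nice` output described there).

```python
def count_unniced_runnable(ps_lines):
    """Count processes that are runnable (state contains 'R') and not
    niced (nice <= 0), mirroring the awk filter in the answer."""
    count = 0
    for line in ps_lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        state, nice = fields[0], fields[1]
        if not nice.lstrip("-").isdigit():
            continue  # header line, or kernel thread reporting '-' for nice
        if "R" in state and int(nice) <= 0:
            count += 1
    return count

# Sample two-column output: state then nice level.
sample = ["R    0", "S    0", "RN  10", "R   -5", "SN  19"]
print(count_unniced_runnable(sample))  # prints 2
```

On a live system you would feed it `subprocess.check_output(["ps", "-eo", "state,nice"]).decode().splitlines()` and, as the answer notes, subtract the processes belonging to the measurement itself.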
_datascience.8195
A basic assumption in machine learning is that training and test data are drawn from the same population, and thus follow the same distribution. But, in practice, this is highly unlikely. Covariate shift addresses this issue. Can someone clear the following doubts regarding this?How does one check whether two distribution are statistically different?Can kernel density estimate (KDE) be used to estimate the probability distribution to tell the difference?Let's say I have 100 images of a specific category. The number of test images is 50, and I'm changing the number of training images from 5 to 50 in steps of 5. Can I say the probability distributions are different when using 5 training images and 50 testing images after estimating them by KDE?
Difference between training and test data distributions
machine learning;classification;dataset;image classification
null
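No accepted answer is recorded here. On the first doubt: a common way to check whether two samples plausibly come from different distributions is the two-sample Kolmogorov-Smirnov test. A minimal pure-Python sketch of the KS statistic (the significance threshold is omitted; in practice `scipy.stats.ks_2samp` computes both the statistic and a p-value):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Maximum absolute difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    stat = 0.0
    for point in a + b:
        cdf_a = bisect.bisect_right(a, point) / len(a)
        cdf_b = bisect.bisect_right(b, point) / len(b)
        stat = max(stat, abs(cdf_a - cdf_b))
    return stat

print(ks_statistic([1, 2, 3], [1, 2, 3]))    # identical samples -> 0
print(ks_statistic([1, 2, 3], [10, 11, 12]))  # disjoint samples -> 1
```

Applied to the question's setup, one would compute the statistic between the training and test feature samples: values near 0 suggest the two splits are distributionally similar, while large values flag covariate shift. With only 5 training images, though, any such test (or a KDE comparison) will have very little power.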
_codereview.51004
I have this code that's pretty much straightforward: it slides a box out at the bottom of the page and keeps another element fixed when you scroll past it. What I am wondering is if there's any need for me to throttle the scroll event/handler for performance, considering that it's such a small amount of code. I'm also wondering if it was necessary for me to check for the class (sticky) before I added/removed it.

var $window = $(window);
var $slidebox = $('#slidebox');
var $topShare = $('#top-share');
var distance = $('#top-share').offset().top - 8;

$window.scroll(function(){
    if ($window.scrollTop() + $window.height() > $(document).height() - 200) {
        $slidebox.animate({'right': '0px'}, 300);
    } else {
        $slidebox.stop(true).animate({'right': '-326px'}, 100);
    }
    if ($window.scrollTop() >= distance) {
        if (!$topShare.hasClass('sticky')) {
            $topShare.addClass('sticky');
        }
    } else {
        if ($topShare.hasClass('sticky')) {
            $topShare.removeClass('sticky');
        }
    }
});
Sliding a box out the bottom of a page
javascript;performance;jquery
null
_cstheory.32444
Given two point clouds $A,B\subset\mathbb Z^d$, let $A\oplus B$ be their Minkowski sum, defined as the set $\{ a + b : a\in A, b\in B \}$. Is there any known result for the following problem?

Given a point cloud $S\subset\mathbb Z^d$, are there two point clouds $A$ and $B$, neither of them reduced to a single point, such that $A\oplus B = S$?

I am in particular interested in the case $d=2$. A related result is the following: given a convex set of points $S\subset\mathbb Z^2$, it is $\mathsf{NP}$-hard to decide whether it can be written as the sum of two convex sets $A$ and $B$ [1]. As far as I can tell, this does not imply a hardness result for the above problem, since the equality is on the convex hulls only, not on the point clouds. In the terminology I used above, that problem only requires that $A\oplus B\subset S$. The problem on point clouds may well be easier (not $\mathsf{NP}$-hard in particular) since it is much more constrained.

[1] Gao, S., & Lauder, A. G. (2001). Decomposition of polytopes and polynomials. Discrete & Computational Geometry, 26(1), 89-104.

A subset $S\subset\mathbb Z^d$ is said to be convex if all points of $S$ lie on the boundary of the convex hull of $S$.
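A one-dimensional toy instance (my own example, not from the question) of the decision problem:

```latex
A=\{0,1\},\quad B=\{0,2\}
\quad\Longrightarrow\quad
A\oplus B=\{0+0,\;0+2,\;1+0,\;1+2\}=\{0,1,2,3\},
```

so $S=\{0,1,2,3\}$ is decomposable. By contrast, $S=\{0,1,3\}$ is not: translating so that $\min A=\min B=0$, the maxima must satisfy $\max A+\max B=3$, forcing $\{\max A,\max B\}=\{1,2\}$ and hence $2=0+2\in A\oplus B\setminus S$.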
Minkowski decomposition of lattice point cloud
reference request;cg.comp geom
null
_webapps.84880
I'm trying to use DGET() to match values across several sheets. If I try and match criteria that are times, it always fails. If I replace the criteria with words, it is successful. Here is an image as an example:The one that failed was trying to match the time of 7:00 AM and the one that worked is matching the words StuffStuff. If I do an IF statement between the two times, it is TRUE.How do I get DGET to work with times?
DGET is not successfully matching criteria in Time format
google spreadsheets;formulas
You can use it without =& like this:

=DGET(F60:M61, "Sat", {"Start Time"; F67})
_codereview.30155
I have been trying to parse results and make sure when I am doing so that I don't repeat the same data. I tried to use a few different options built into PHP with no luck. I did find an example of a recursive array search that seems to work, but it is very intensive and adds a lot of time to the script. What I need: does anyone know a better way to handle this, without changing the array that I supply, using something built in like in_array or array_search?

ARRAY EXAMPLE:

array (size=3)
  0 =>
    array (size=3)
      'author' => string 'Jim Beam' (length=8)
      'id' => string '1' (length=1)
      'md5' => string 'f2ebf4d4f333c31ef1491a377edf2cc4' (length=32)
  1 =>
    array (size=3)
      'author' => string 'Jack Daniels' (length=12)
      'id' => string '2' (length=1)
      'md5' => string 'd1839707c130497bfd569c77f97ccac7' (length=32)
  2 =>
    array (size=3)
      'author' => string 'Jose Cuervo' (length=11)
      'id' => string '3' (length=1)
      'md5' => string '64e989b4330cc03dea7fdf6bfe10dda1' (length=32)

CODE EXAMPLE:

function recursive_array_search($needle, $haystack) {
    foreach ($haystack as $key => $value) {
        $current_key = $key;
        if ($needle === $value OR (is_array($value) && recursive_array_search($needle, $value) !== false)) {
            return $current_key;
        }
    }
    return false;
}

$agentArray = array(
    array('author' => 'Jim Beam',     'id' => '1', 'md5' => 'f2ebf4d4f333c31ef1491a377edf2cc4'),
    array('author' => 'Jack Daniels', 'id' => '2', 'md5' => 'd1839707c130497bfd569c77f97ccac7'),
    array('author' => 'Jose Cuervo',  'id' => '3', 'md5' => '64e989b4330cc03dea7fdf6bfe10dda1')
);

$fakeMD5 = '84d7dc19766c446f5e4084e8fce87f82'; // StackOverflow MD5
$realMD5 = 'd1839707c130497bfd569c77f97ccac7'; // Jack Daniels MD5

echo '<b>In_Array:</b> <br/>';
$faketest = in_array($fakeMD5, $agentArray);
$realtest = in_array($realMD5, $agentArray);
var_dump($faketest, $realtest);

echo '<b>Search_Array:</b> <br/>';
$faketest2 = array_search($fakeMD5, $agentArray);
$realtest2 = array_search($realMD5, $agentArray);
var_dump($faketest2, $realtest2);

echo '<b>Custom Recursive Array Seach Function:</b> <br/>';
$faketest3 = recursive_array_search($fakeMD5, $agentArray);
$realtest3 = recursive_array_search($realMD5, $agentArray);
var_dump($faketest3, $realtest3);

RESULTS:

In_Array:
Fake: boolean false
Real: boolean false

Search_Array:
Fake: boolean false
Real: boolean false

Custom Recursive Array Seach Function:
Fake: boolean false
Real: int 1
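(Added sketch, not part of the original question: on PHP 5.5+, one built-in route is to search a single column rather than recursing. `array_column()` pulls out just the `md5` values, and `array_search()` then returns the matching row index.)

```php
<?php
// Sketch using built-ins only (assumes PHP >= 5.5 for array_column).
$agentArray = [
    ['author' => 'Jim Beam',     'id' => '1', 'md5' => 'f2ebf4d4f333c31ef1491a377edf2cc4'],
    ['author' => 'Jack Daniels', 'id' => '2', 'md5' => 'd1839707c130497bfd569c77f97ccac7'],
    ['author' => 'Jose Cuervo',  'id' => '3', 'md5' => '64e989b4330cc03dea7fdf6bfe10dda1'],
];

$realMD5 = 'd1839707c130497bfd569c77f97ccac7'; // Jack Daniels MD5
$fakeMD5 = '84d7dc19766c446f5e4084e8fce87f82'; // StackOverflow MD5

// array_column() extracts the md5 column; array_search() with strict
// comparison returns the row index, or false when nothing matches.
$real = array_search($realMD5, array_column($agentArray, 'md5'), true);
$fake = array_search($fakeMD5, array_column($agentArray, 'md5'), true);

var_dump($real); // int(1)
var_dump($fake); // bool(false)
```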
Searching multidimensional arrays
php;array
null
_unix.52699
Mono 3.0 was released yesterday. I am really excited by this release and am curious to know when it will be available in Debian testing (Wheezy). Is there a standard timeline set by the Debian project for when the newest release of a piece of software will be made available in testing, or in general any of the branches except stable?
How soon do new releases get packaged into Debian Testing?
debian;packaging;mono
Currently, Debian testing is in a freeze state. This means that new uploads must be approved by the release team, and generally must fix RC (release critical) bugs. It is very rare for the release team to accept new upstream releases (rather than patches specifically for RC bugs) after the freeze.

So the answer to this question is: after the following has occurred:

1. The Mono team packages and uploads Mono 3.0 to unstable
2. Wheezy is released as stable and Jessie becomes the new testing
3. 2-10 days have passed since the upload to unstable (depending on the urgency set on the package)

In addition to this, if an RC bug is filed against the unstable package before it migrates to testing, the RC bug will block migration. The severity of the bug will need to be downgraded, or a new version of the package which fixes the RC bug will need to be uploaded.

Outside of a time in which testing is frozen, the answer to your question is 2-10 days after the maintainer or team has time to do the work and upload to unstable. Maintainers or teams own packages in Debian, and they are all volunteers, so it is really dependent on the individuals involved.

Unfortunately, I do not know of any direct sources where this process is clearly laid out. I have this knowledge from years of working with the OS and lurking around the development community.
_softwareengineering.354438
We have some states, depending on which window is currently active:

public enum LoginState
{
    NONE,
    LOGGIN_IN,
    LOGIN_SUCCESS,
    LOGIN_FAILED
}

public enum LobbyState
{
    NONE,
    FINDING_MATCH,
    FOUND_MATCH
}

I need to notify the listeners about state changes through something like:

UIManager.Instance.Notify(LobbyState.FINDING_MATCH)

so the subscribers get notified about the change. The problem with this approach is that I have multiple state types like LoginState, LobbyState, MatchState, and I need to be able to subscribe only to specific ones. For instance, I need to listen for login changes in the lobby, but I don't need to listen for lobby changes in the login window. The first thing I came up with was a global event system where I register state listeners and dispatch changes to subscribers. For example:

public interface StateListener<T>
{
    void OnStateChanged(T state);
}

which can be used with different state types like:

class Lobby : StateListener<MatchState>, StateListener<ProfileState>
{
    public Lobby(){
        EventSystem.Register(this);               // here's the issue no. 1
        EventSystem.Register<MatchState>(this);   // or this
        EventSystem.Register<ProfileState>(this); // or this
        ...
    }

    void OnStateChanged(MatchState state){
        switch(state) {
            // do things
        }
    }

    void OnStateChanged(ProfileState state){
        switch(state) {
            // do things
        }
    }
}

The problem with this approach is that it becomes a mess when I register different listeners, as you can see in the constructor above. Also, I have to provide a generic type twice (in the class declaration and in the constructor). I could make 10 different containers for all state types like:

List<StateListener<MatchState>> matchStateListeners;
List<StateListener<LoginState>> loginStateListeners;
...

public void RegisterListener(StateListener<MatchState> listener){
    // add
}

public void RegisterListener(StateListener<LoginState> listener){
    // add
}
...

But then it becomes a real mess. Is there a better approach to achieve something like this?
How should I code a multi-type observer pattern?
c#;design patterns;state;observer pattern
null
_unix.112295
I'm trying to monitor serial port communication using the command strace -s9999 -o serialtrace.log -eread,write,ioctl. After a few normal logging messages I got a huge number of --- SIGIO (I/O possible) @ 0 (0) --- messages. What does this mean? How can I get normal information instead of these lines?
strace got message --- SIGIO (I/O possible) @ 0 (0) ---
strace
null
_unix.345212
I want to open port 443 on my Debian 8 server, but I get a permission denied error. My rules.v4 file looks like:

# Generated by iptables-save v1.4.21 on Wed Feb 15 14:42:03 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [208710:151335680]
-A INPUT -p icmp -m comment --comment "000 accept all icmp" -j ACCEPT
-A INPUT -i lo -m comment --comment "001 accept all to lo interface" -j ACCEPT
-A INPUT -m comment --comment "002 accept related established rules" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22 -m comment --comment "099 allow ssh access" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80,443 -m comment --comment "100 allow http and https access" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 1122 -m comment --comment "150 allow phpmyadmin access" -j ACCEPT
-A INPUT -m comment --comment "999 drop all" -j DROP
COMMIT
# Completed on Wed Feb 15 14:42:03 2017

After making the changes in /etc/iptables/rules.v4 I tried to save with

sudo iptables-save > /etc/iptables/rules.v4

I get the error message -bash: /etc/iptables/rules.v4: Permission denied. I tried with sudo bash -C iptables-save > /etc/iptables/rules.v4 and I get "no such file or directory" when the file exists. I also tried with tee:

sudo tee iptables-save > /etc/iptables/rules.v4

and

sudo sh -c iptables-save > /etc/iptables/rules.v4

When I do netstat -tulnp | grep 443 I get no output.
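A note on why these attempts fail (my own sketch, not part of the question): in `sudo iptables-save > file`, the redirection is performed by the unprivileged calling shell before sudo ever runs, which is exactly what produces the "Permission denied". A generic demo of the quoting difference, with no sudo or iptables involved:

```shell
#!/bin/sh
# The usual fixes look like one of these (shown as comments, since they
# need root):
#   sudo sh -c 'iptables-save > /etc/iptables/rules.v4'        # quote the WHOLE command
#   sudo iptables-save | sudo tee /etc/iptables/rules.v4 >/dev/null
# Demo: quoting moves the redirection into the child shell.
tmp=$(mktemp -d)
sh -c "echo hello > $tmp/out"   # redirection happens inside the child shell
cat "$tmp/out"                  # prints: hello
rm -r "$tmp"
```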
Permission denied when saving iptables rules in Debian 8
linux;bash;debian
null
_webapps.20351
I just noticed a side-ribbon flyout of YouTube that, when the logo is clicked, expands to ask What would you like to play? I don't recall adding any extension to Google+ or Chrome that would add this. What is it for? Can it be removed?
What is the YouTube flyout in Google+?
youtube;google plus
This is Google shipping Google in your Google.

http://googleblog.blogspot.com/2011/11/shipping-google-in-google.html

Similar to the Facebook-inspired Ticker meme:

"yo dawg, i heard you like facebook so we put a facebook in your facebook so you can facebook while you facebook"

Or as Google puts it:

"We wanted to bring YouTube directly into Google+ as well as make it easier to watch and share your favorites, so we're launching a YouTube slider in the stream."

I am afraid there is no way to prevent/remove it that I am aware of; I will look out for extensions in the meantime.
_unix.324308
Is there a way to resize a /root partition while running OS which resides on it? I am running FreeBSD. Thanks in advance!
Resizing /root partition
partition;freebsd
There's a fairly comprehensive guide to growing a disk under FreeBSD in the handbook. For UFS (the default for FreeBSD), you can grow online from kernel version 10.0 onward. However, changing the partitions around mounted filesystems might lead to data loss or inconsistencies, which might first be discovered when the system tries to boot the next time. Also remember to check up on your bootloader.

Growing a filesystem basically boils down to this:

1. Adjust your partition table. Be very careful with this step, as messing up here can lead to corrupting your entire disk. You'd probably want to unmount all partitions that are not actively needed to run a minimal system, most notably the swap partition. From your question, I assume you know how to use gpart to accomplish this. After having made the adjustments, you can safely remount all partitions and re-enable swap.
2. Actually grow the FS. For FreeBSD, this is as easy as issuing growfs <blockdevice>, with <blockdevice> being the partition that you resized.

For shrinking, you have to do these steps in reverse:

1. Shrink the filesystem to make sure you won't overwrite anything important: growfs -s <new_size> <blockdevice>. Make sure that you choose a size less than your new target filesystem size, i.e. if you want to go down from a 100GB partition to a 70GB one, resize the FS to about 60-65GB at this step. Overshooting more means more headroom, but usually also means more relocations and thus longer wait time.
2. Adjust the partition table. Unlike before, at this step you choose your exact target size. You'd probably want to unmount additional partitions at this step too, especially if you move other partitions around (in fact, in this case you have to unmount these).
3. Grow the filesystem. This gets rid of the headroom you left at the end of the (new) partition: growfs <blockdevice>. Omitting the size parameter tells it to use the entire partition.
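As a dry-run sketch of the grow sequence above (the device name `ada0` and partition index `2` are placeholders I chose, and `run` only echoes the commands, since resizing a live disk is destructive):

```shell
#!/bin/sh
# Dry-run sketch (FreeBSD >= 10.0 for online UFS growth).  `run` only
# echoes; swap it for direct execution once you are sure of the targets.
disk=ada0   # placeholder device
part=2      # placeholder partition index
run() { echo "+ $*"; }

run gpart recover "$disk"            # only if the disk itself grew (restores backup GPT)
run gpart resize -i "$part" "$disk"  # step 1: extend the partition
run growfs -y "/dev/${disk}p${part}" # step 2: grow UFS into the new space
```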
_cs.70756
Is it true that any language that can be accepted by a polynomial time alternating Turing machine with existential and universal states can be accepted by a polynomial time alternating Turing machine with only existential and negating states?It is known that any language that can be accepted by an alternating Turing machine with existential, universal, and negating states can be accepted by an alternating Turing machine with only existential and universal states.
Polynomial time alternating Turing machines with only existential and negating states
complexity theory
null
_codereview.68443
I would like to find all binary sequences from the specified range, which has low peak sidelobe level of its autocorrelation function.Here is the solution (this is simplified version of the real code) using GCC Inline Assembly with AT&T syntax. Binary sequences are represented as sequence of bits in the integer number. The example is a complete C program which finds and prints the 13-position Barker Code.#include <stdint.h>#include <stdio.h>#include <stdlib.h>// A helper function, which is performed very very rare.void SaveCode (const uint8_t length, const uint64_t code);int main (int argc, char * argv []){ const uint8_t sideLobeLimit= 1; const uint8_t length = 13; const uint64_t beginCode = 1ull << (length - 1); const uint64_t endCode = (1ull << length) - 1; const uint64_t mask = (1ull << (length - 1) ) - 1ull; __asm__ __volatile__ ( INIT: \n\t // Prepare for computation. movb %[length], %%r8b \n\t // Load length of sequences into CPU register. movq %[code], %%r9 \n\t // Load first sequence of the range into CPU register. movq %[maxcode], %%r10 \n\t // Load last sequence of the range into CPU register. movb %[limit], %%r11b \n\t // Load maximum allowed level of sidelobes into CPU register. movq %[mask], %%r12 \n\t // Load mask for extracting significant bits into CPU register. CHECK_CODE: \n\t // Body of loop through sequence (like the do-while loop). movb $1, %%cl \n\t // Set the offset value. movq %%r12, %%r13 \n\t // Set mask into mutable variable. NEXT_SHIFT: \n\t // Beginning of loop through shift of sequence (like the do-while loop). movq %%r9, %%rdi \n\t // Shifting sequence. shrq %%cl, %%rdi \n\t // Shift. xorq %%r9, %%rdi \n\t // Counting level of sidelobes. andq %%r13, %%rdi \n\t // Remove extra bits. popcntq %%rdi, %%rax \n\t // al = n (number of the different bits). shlb $1, %%al \n\t // al = 2 * n. subb %%r8b, %%al \n\t // al = 2 * n - l (l - length of the sequence). addb %%cl, %%al \n\t // al = o + 2 * n - l (o - current offset). 
jge ABS \n\t // al =|o + 2 * n - l| (now al contains the sidelobe level). negb %%al \n\t // . ABS: \n\t // . cmpb %%r11b, %%al \n\t // Check if the sidelobe level is acceptable. jg NEXT_CODE \n\t // If it is not, then go to the next sequence. incb %%cl \n\t // Increment the offset for creating the next shifted sequence. cmpb %%cl, %%r8b \n\t // Check if it is the last offset. jbe SAVE_CODE \n\t // If it is, then save the current sequence. shrq $1, %%r13 \n\t // Shift mask for next shifted sequence. jmp NEXT_SHIFT \n\t // End of loop through shift of sequence. NEXT_CODE: \n\t // Control of loop through sequence. incq %%r9 \n\t // Set next sequence. cmpq %%r10, %%r9 \n\t // Check if the sequence is inside the range. jbe CHECK_CODE \n\t // If it is, then go to the beginning of the loop's body. jmp QUIT \n\t // If it is not, then go to the end of the procedure. SAVE_CODE: \n\t // Saving a sequence with an accepted level of sidelobes. pushq %%r8 \n\t // Store registers. pushq %%r9 \n\t // . pushq %%r10 \n\t // . pushq %%r11 \n\t // . pushq %%r12 \n\t // . movl %%r8d, %%edi \n\t // . movq %%r9, %%rsi \n\t // . call SaveCode \n\t // Calling external function for saving the sequence. popq %%r12 \n\t // Restore registers. popq %%r11 \n\t // . popq %%r10 \n\t // . popq %%r9 \n\t // . popq %%r8 \n\t // . jmp NEXT_CODE \n\t // Continue testing sequences. QUIT: \n\t // Exit of procedure. nop \n\t // . : : [length ] m (length), [code ] m (beginCode), [maxcode] m (endCode), [limit ] m (sideLobeLimit), [mask ] m (mask) : %rax, %rcx, %rdi, %rsi, %r8, %r9, %r10, %r11, %r12, %r13 ); return EXIT_SUCCESS;}void SaveCode (const uint8_t length, const uint64_t code){ uint8_t i = 0; for (i = 0; i < length; ++i) { (code >> i) & 0x01 ? 
printf ("+") : printf ("-"); } printf ("\n");}

It can be built with:

gcc main.c -o main

I'd like to hear any suggestions about how to:

- improve the algorithm
- improve performance (use some tricks or instruction reordering or something else)
- improve comments (content and translation)
- improve readability (set aliases for CPU registers if possible, or something else)
- any other suggestions
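For readers who want the check in plain C before looking at the assembly: this sketch of mine mirrors the same XOR/popcount trick (GCC's `__builtin_popcountll` standing in for `popcnt`; `peak_sidelobe` is a name I chose, not part of the program above). For offset o the overlap has length l-o, so with n disagreeing bits the correlation is (l-o)-2n:

```c
#include <stdint.h>

/* Plain-C sketch of the test the assembly performs (reference only,
 * not a drop-in replacement).  Bit i of `code` is element i of the
 * sequence (+1 for a 1 bit, -1 for a 0 bit); valid for 1 < length <= 64.
 * Returns the peak absolute aperiodic-autocorrelation sidelobe. */
static int peak_sidelobe(uint64_t code, int length)
{
    int peak = 0;
    for (int offset = 1; offset < length; ++offset) {
        uint64_t mask = (1ull << (length - offset)) - 1; /* keep overlap bits */
        /* n = positions where the sequence disagrees with its shifted copy */
        int n = __builtin_popcountll((code ^ (code >> offset)) & mask);
        /* overlap length is length-offset, so correlation = (l-o) - 2n */
        int level = (length - offset) - 2 * n;
        if (level < 0)
            level = -level;
        if (level > peak)
            peak = level;
    }
    return peak;
}
```

Run on the 13-position Barker code in the SaveCode bit convention (code 5535 = 0b1010110011111, length 13), it returns 1, the defining Barker property.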
Find binary sequences with low peak sidelobe level of autocorrelation function
performance;c;mathematics;assembly;signal processing
null
_webapps.97647
I searched Google Photos for all of my pictures of a person by clicking on the search bar and then clicking on the picture of the person for whom I'm searching. That worked really nicely β€” it returns a result with a bunch of pictures of that person organized by date. Now, I want to create an album so that I can share this batch of pictures with my spouse. But I can't figure out how to do that.
How do I turn a Google Photos search result into an album?
search;google photos;album
1. Perform your search
2. Click on the check mark of the first photo so that it is selected
3. Scroll down to the bottom of the batch of photos
4. While holding down the shift key, click the check mark of the last photo
5. Now all of the photos are selected (up to 500; if you have more than that you may need to do this multiple times)
6. Click the + at the top of the screen (next to where the search bar usually is) and choose Album
7. All of the selected photos are now in an album. Name the album as you like, re-arrange your photos, etc.
_cstheory.4765
I just read the Is integer factorization an NP-complete problem? question ... so I decided to spend some of my reputation :-) asking another question $Q$ having $P(\text{Q is trivial}) \approx 1$:If $A$ is an oracle that solves integer factorization, what is the power of $P^A$?I think it makes RSA-based public-key cryptography insecure ... but apart from this, are there other remarkable results?
P with integer factorization oracle
cc.complexity theory;oracles;factoring
I don't have an answer to you question, but I know that a similar notion has very recently been studied, under the name of angel-based security.The first paper studying this concept is Prabhakaran & Sahai (STOC '04). In particular, they wrote in the abstract:[... we give the] adversary access to some super-polynomial computational power.Another important paper which discusses this notion is that of Canetti, Lin, & Pass (FOCS 2010). I watched some parts of their conference presentation (on techtalks), and if I recall correctly, they start with an example similar to what you mentioned in the question.
_unix.234294
I use a SOCKS5 proxy for daily Internet browsing to bypass Internet censors (it is actually a Windows application called Freegate, which I run under Wine; it doesn't need accounts or anything and is free for users in China). I use it on both of my Linux laptops under Wine 1.6.2, and it was perfect until 3-4 days ago. Now, on one of them, I cannot watch YouTube videos anymore; YouTube gives me "an error occurred, please try again later" for every single video. Freegate, when first started, generates an *.ini file that contains its configs. I even copied the ini file from the laptop on which YouTube works, but it didn't help.

Here are the things I have done to try to get YouTube working again, none of which worked:

- tried multiple browsers, e.g. Chromium, Firefox, Google Chrome, Opera
- tried Flash Player and HTML5
- cleaned cache and cookies
- changed DNS
- used the exact same config on both PCs
- deleted the ~/.wine folder to let Wine rebuild a fresh one
- installed unscd to clean the DNS cache(!?) and ran # /etc/init.d/unscd restart
- changed the SOCKS proxy port
- waited 3-4 days and hoped it would get fixed by itself
- rebooted the system

Please note that I use the same version and the same config on both PCs: one of them opens YouTube great, but the other gives me "an error occurred, please try again later" for every single video. Any suggestions to fix this situation?
Identical situation, different results! Cannot watch YouTube over proxy
proxy;socks;youtube
null
_unix.28107
I want to create a bootable EFI USB to install Ubuntu and Windows 7 (maybe with utilities like PartedMagic). I did that using MultiSystem previously. However, I am using GPT, and the Windows installer needs to be launched in EFI mode to install on a GPT system. I suppose I must use GRUB EFI instead? If there's no app like MultiSystem that creates a GRUB EFI bootable USB, how can I create one myself? I suppose I will format my USB as GPT and install GRUB EFI on it (how?). Then I need to configure GRUB EFI to load the Ubuntu and Windows 7 installers in EFI? How can I do these?

UPDATE

Here's what I tried:

1. Create 2 partitions on my USB (GPT; 100+MB FAT32, /dev/sdc1, set boot flag; the rest FAT32, /dev/sdc2, for installs)
2. Extract the Windows 7 and Ubuntu 11.10 ISOs into the installer partition, in 2 different folders
3. Tried using sudo elilo -b /dev/sdc1 --autoconf --efiboot -v

jiewmeng@JM:~$ sudo elilo -b /dev/sdc1 --autoconf --efiboot -v
elilo: backing up existing /etc/elilo.conf as /etc/elilo.conf-
Loaded efivars kernel module to enable use of efibootmgr
elilo: Checking filesystem on /dev/sdc1...
elilo: Mounting /dev/sdc1...
elilo: 44298KB needed, 78781KB free, 42192KB to reuse
elilo: Installing primary bootstrap /usr/lib/elilo/elilo.efi onto /dev/sdc1...
elilo: Installing /tmp/elilo.k8NWXX on /dev/sdc1...
elilo: Installing /vmlinuz on /dev/sdc1...
elilo: Installing /vmlinuz.old on /dev/sdc1...
elilo: Installing /initrd.img on /dev/sdc1...
elilo: Installing /initrd.img.old on /dev/sdc1...
elilo: Updating EFI boot-device variable...
Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.
Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.
elilo: An error occured while updating boot menu, we'll ignore it
Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.
Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.
Fatal: Couldn't open either sysfs or procfs directories for accessing EFI variables.
Try 'modprobe efivars' as root.
elilo: Installation complete.

I did sudo modprobe efivars; it gave no output, but I got the same error afterwards. I think it's because I am not booted into EFI Ubuntu? Next, I'll try using USB Startup Disk Creator to boot into a live system in EFI mode to try again.

UPDATE

I am so lost. Is installing Windows first the fault? I formatted and made a bootable USB for Ubuntu Alternate with UNetbootin, and it failed too, with the same "no available kernel" error. If I make an Ubuntu Desktop USB I get "cannot configure apt sources". The syslog for the Ubuntu Desktop install: http://pastebin.com/CdbUPXax

I feel I had better not waste time and revert back to MBR soon... that will mean I have to somehow back up all my data first, which is why I am delaying it to the last resort... any ideas?

UPDATE

I tried booting Ubuntu 11.10 Alternate in BIOS mode (non-EFI); it installed fine except that I could not install a boot loader. It says "fatal error". I then installed GRUB by booting the USB in recovery mode. That works, but it does not boot: it gives a blank screen on boot. If I try to enter recovery mode (on the HDD, where Ubuntu is installed), the keyboard seems to fail, though the mouse has light.
Create a Bootable (UEFI GRUB) USB for Ubuntu & Windows 7 Install
ubuntu;usb;windows;system installation;bootable
I'm working on an update to this question/answer. This doesn't work without errors, but as I worked with @jiewmeng I uncovered that the goal was to use a USB to install both Windows and Ubuntu onto one hard drive, UEFI. It has taken a while and I've found the solution, but we need to clean the question and answer. Maybe the original question can be answered as well, but since the goal was more on the install side, the single-boot UEFI USB seemed less important. I'm presently using two USB sticks: one for Windows, one for Ubuntu.

This is a WIP to be updated ASAP. I've been working on this for a few days, a spare hour here and there, and finally have a single USB that will boot and offer installation of Windows 7 and Ubuntu.

My config is 64-bit specific. You could try to change it to accommodate a 32-bit install, but there are many differences in filenames. Please follow up if you need 32-bit. That said...

You cannot install Windows 7 from a GPT-formatted USB. You can use gdisk, or parted, and create a GPT USB, which will boot via UEFI. You'll be able to configure the UEFI boot manager to load the Windows installer from the USB, but the installer will search for files and data needed to perform the installation and it won't recognize the GPT USB, while it will find an MBR USB. However, this is of little consequence as UEFI looks at the MBR/GPT and the EFI partition; see the Wikipedia entry on UEFI Booting. In spite of using a std MBR for the USB, one can install via UEFI to a GPT disk.

The following worked using 64-bit installs, on a 64-bit UEFI ASUS Sabertooth. The firmware on each motherboard is very specific, and each motherboard's UEFI firmware searches for UEFI boot differently. You may have issues with your motherboard finding boot data, but the following works on my ASUS.

Here's how I made a bootable USB with an installable copy of the Windows 7 64-bit DVD and an Ubuntu ISO (in this example, the 11.10 64-bit desktop ISO). Using a 16G USB, which is all I had at hand. My USB installed as /dev/sdc; change the relevant references to the appropriate device for your USB. Make sure you have 7zip installed.

fdisk /dev/sdc
# create new MBR, 'o' command
# create new partition, part 1, size 8G, type ef, set bootable, write
mkfs.vfat -F32 /dev/sdc1
mkdir /mnt/USB
mount /dev/sdc1 /mnt/USB
# insert Windows 7 x64 DVD; again, mine appeared as /media/UDF\ Volume, you need to change references below
# Extract/Copy the entire Windows DVD to the USB
cp -r /media/UDF\ Volume/* /mnt/USB
# I don't know what effect the following rename has, I copied blindly from another webpage.
mv /mnt/USB/sources/ei.cfg /mnt/USB/sources/ei.cfg_
cd /mnt/USB/efi/microsoft/boot/
7z e /mnt/USB/sources/install.wim 1/Windows/Boot/EFI/bootmgfw.efi
cp -r /mnt/USB/efi/microsoft/boot /mnt/USB/efi/
mv /mnt/USB/efi/boot/bootmgfw.efi /mnt/USB/efi/boot/bootx64.efi
# At this point I booted the USB, and installed Windows 7 to a GPT SSD
# Upon reboot I noticed the Windows Boot loader in my UEFI boot list (actually it made itself 1st).
# So, here we have a standalone Windows 7 UEFI installer that will function correctly (64bit ASUS, at least).
# Now, on to adding Ubuntu
cd /mnt/USB
7z x /path2iso/ubuntu-11.10-desktop-amd64.iso
# If 7z finds preexisting files with the same name, just allow always overwrite
# (Y)es / (N)o / (A)lways / (S)kip all / A(u)to rename all / (Q)uit? A
# At this point I booted the USB, and installed Ubuntu x64 to a GPT SSD
# We have a standalone Ubuntu 64bit installer that installs Ubuntu 64
# Now, on to add a boot manager that will allow us to select between Windows 7 and Ubuntu
# Get the target UUID of the USB partition, using either blkid or the following command
grub-probe --target=fs_uuid /mnt/USB/efi/Microsoft/Boot/bootmgfw.efi
# will print YOUR_UUID -- substitute it into the following references to YOUR_UUID
# Append the following menuentry to /mnt/USB/boot/grub/x86_64-efi/grub.cfg

menuentry "Microsoft Windows x86_64 UEFI-GPT Setup" {
    insmod usbms
    insmod part_gpt
    insmod part_msdos
    insmod fat
    insmod search_fs_uuid
    insmod chain
    search --fs-uuid --no-floppy --set=root YOUR_UUID   # <- CHANGE THIS TO YOUR UUID
    chainloader (${root})/efi/Microsoft/Boot/bootmgfw.efi
}

And voila! A working USB stick that uses GRUB as boot manager, allowing installation to GPT disks with UEFI install. If you have any errors, don't hesitate to msg me, and I'll look into it.
_softwareengineering.136216
I've read through the NetworkX tutorial and documentation, but was unable to find real world answers for this question. Usually when using NetworkX, I might use strings to define nodes, then set several attributes, e.g.

G = nx.Graph()
G.add_node('John Doe', haircolor = 'brown')
G.node['John Doe']['age'] = 22

However, it seems like declaring a class with members instead of attributes is better in practice, especially when there are many attributes and readability might be an issue.

class Person:
    name = None
    age = None

Person p
p.name = 'John Doe'
p.age = 22
G.add_node(p)

Could someone be kind enough to validate my reasoning? I lack the foresight to see if NetworkX node/edge attributes would be preferable.
Networkx / Python : Is using a class for a node better practice than defining multiple attributes?
python;graph
This doesn't answer your question. However. It seems necessary.

class Person:
    name = None
    age = None

Doesn't do what you're suggesting. Those are two class-level attributes. They're emphatically not instance variables.

Also. You don't declare attributes at all. You don't declare them like that. Person p isn't proper Python.

p = Person()
p.name = 'John Doe'
p.age = 22

This emphatically does not set the class level attributes created as part of the class. It creates additional instance-level attributes.

This may answer your question. Networkx allows you to have any object associated with a node. Feel free to do this:

class Person( object ):
    def __init__( self, name, age ):
        self.name = name
        self.age = age

G.add_node('John Doe', data = Person( name='John Doe', age=22 ) )

Now you have all of your node data in a single object associated with the 'data' attribute.

For trivial name-value kinds of things, this is not obviously creating any real value. But, if you have some node-specific method (rare in graph problems, but possible) then you'd have method functions associated with a node. In graph theory problems, many of the algorithms work on the graph -- as a whole -- and you'll rarely find a use for a class with method functions.

Since the change is a trivial piece of syntax, it's probably simpler to start with

G.add_node('John Doe', age=22 )

and migrate to

G.add_node('John Doe', data = Person( name='John Doe', age=22 ) )

when you absolutely need to.
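A runnable sketch of the two styles side by side (my own example; a plain dict stands in for the graph's node-attribute mapping so it doesn't require networkx to be installed):

```python
class Person:
    """Node payload: one object holds all of this person's data."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

# Stand-in for G.node: node key -> attribute dict (the shape networkx uses).
nodes = {}

# Attribute style: many loose key/value pairs.
nodes['John Doe'] = {'haircolor': 'brown', 'age': 22}

# Object style: a single 'data' attribute carrying a Person instance.
nodes['Jane Roe'] = {'data': Person('Jane Roe', 30)}

# Both styles round-trip cleanly:
assert nodes['John Doe']['age'] == 22
assert nodes['Jane Roe']['data'].age == 30
```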
_cs.10906
Here is the problem: we are given vectors $v_1, \ldots, v_k$ lying in $\mathbb{R}^n$ which are orthogonal. We assume that the entries of $v_i$ are rational, with numerator and denominator taking $K$ bits to describe. We would like to find vectors $v_{k+1}, \ldots, v_n$ such that $v_1, \ldots, v_n$ is an orthogonal basis for $\mathbb{R}^n$. I would like to say this can be done in polynomial time in $n$ and $K$. However, I'm not sure this is the case. My question is to provide a proof that this can indeed be done in polynomial time.

Here is where I am stuck. Gram-Schmidt suggests the following iterative process. Suppose we currently have the collection $v_1, \ldots, v_l$. Take the basis vectors $e_1, \ldots, e_n$, go through them one by one, and if some $e_i$ is not in the span of $v_1, \ldots, v_l$, then set $v_{l+1} = P_{{\rm span}(v_1, \ldots, v_l)^\perp} e_i$ (here $P$ is the projection operator). Repeat. This works in the sense that the number of additions and multiplications is polynomial in $n$. But what happens to the bit-sizes of the entries? The issue is that the projection of $e_i$ onto, say, $v_1$ may have denominators which need $2K$ bits or more to describe, because $P_{v_1}(e_i)$ is $v_1$ times its $i$'th entry, divided by $||v_1||$. Just $v_1$ times its $i$'th entry may already need $2K$ bits to describe. By a similar argument, it seems that each time I do this, the number of bits doubles. By the end, I may need $2^{\Omega(n)}$ bits to describe the entries of the vector. How do I prove this does not happen? Or perhaps should I be doing things differently to avoid this?
Can you complete a basis in polynomial time?
algorithms;linear algebra
The result of the Gram-Schmidt process can be expressed in determinantal form; see Wikipedia. This shows that the output of the Gram-Schmidt process has polynomial size. It suggests that if you run the classical Gram-Schmidt process, then all intermediate entries are also of polynomial size (even in LLL, all intermediate entries are of polynomial size). However, even if that is not the case, then using efficient algorithms for computing the determinant (see my other answer), you can compute the Gram-Schmidt orthogonalization in polynomial time.
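To make this concrete, here is a minimal sketch (my own, not from the answer) of the completion procedure from the question, run in exact rational arithmetic with Python's `fractions` module. The point is only that every intermediate entry stays an exact rational, so its bit size can be inspected directly; this is not a claim about the tightest bound.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def complete_basis(vectors, n):
    """Extend a list of mutually orthogonal rational vectors to an
    orthogonal basis of R^n, projecting each e_i onto the orthogonal
    complement of the span of the current collection."""
    basis = [[Fraction(x) for x in v] for v in vectors]
    for i in range(n):
        if len(basis) == n:
            break
        e = [Fraction(int(j == i)) for j in range(n)]
        # since the basis vectors are pairwise orthogonal, we can subtract
        # the projection of e_i onto each of them independently
        w = e[:]
        for b in basis:
            c = Fraction(dot(e, b), dot(b, b))
            w = [wj - c * bj for wj, bj in zip(w, b)]
        if any(w):           # e_i was not already in the span
            basis.append(w)
    return basis
```

For example, `complete_basis([[1, 1, 0]], 3)` returns three pairwise orthogonal vectors, and each entry is a `Fraction` whose numerator and denominator can be measured with `bit_length()`.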
_cstheory.24943
Rather than empirical evidence, by what formal principles have we proved that quantum computing will be faster than traditional/classical computing?
Is there a formal proof that quantum computing is or will be faster than classical computing?
quantum computing
This is a question that is a little bit difficult to unpack if you are not familiar with computational complexity. Like most of the field of computational complexity, the main results are widely believed but conjectural.

The complexity classes typically associated with efficient classical computation are $\mathsf{P}$ (for deterministic algorithms) and $\mathsf{BPP}$ (for randomized). The quantum counterpart of these classes is $\mathsf{BQP}$. All three classes are subsets of $\mathsf{PSPACE}$ (a very powerful class). However, our current methods of proof are not strong enough to definitively show that $\mathsf{P}$ is not the same thing as $\mathsf{PSPACE}$. Thus, we do not know how to formally separate $\mathsf{P}$ from $\mathsf{BQP}$ either: since $\mathsf{P \subseteq BQP \subseteq PSPACE}$, separating those two classes is harder than the already formidable task of separating $\mathsf{P}$ from $\mathsf{PSPACE}$. (If we could prove $\mathsf{P \ne BQP}$, we would immediately obtain a proof that $\mathsf{P \ne PSPACE}$, so proving $\mathsf{P \ne BQP}$ has to be at least as hard as the already-very-hard problem of proving $\mathsf{P \ne PSPACE}$.) For this reason, within the current state of the art, it is difficult to obtain a rigorous mathematical proof showing that quantum computing will be faster than classical computing.

Thus, we usually rely on circumstantial evidence for complexity class separations. Our strongest and most famous evidence is Shor's algorithm, which allows us to factor in $\mathsf{BQP}$. In contrast, we do not know of any algorithm that can factor in $\mathsf{BPP}$, and most people believe one doesn't exist; that is part of the reason why we use RSA for encryption, for instance. Roughly speaking, this implies that it is possible for a quantum computer to factor efficiently, but suggests that it may not be possible for a classical computer to factor efficiently. For these reasons, Shor's result has suggested to many that $\mathsf{BQP}$ is strictly more powerful than $\mathsf{BPP}$ (and thus also more powerful than $\mathsf{P}$).

I don't know of any serious arguments that $\mathsf{BQP = P}$, except from those people that believe in much bigger complexity class collapses (which are a minority of the community). The most serious arguments I have heard against quantum computing come from stances closer to the physics and argue that $\mathsf{BQP}$ does not correctly capture the nature of quantum computing. These arguments typically say that macroscopic coherent states are impossible to maintain and control (e.g., because there is some yet-unknown fundamental physical roadblock), and thus the operators that $\mathsf{BQP}$ relies on cannot be realized (even in principle) in our world.

If we start to move to other models of computation, then a particularly easy model to work with is quantum query complexity (the classical version that corresponds to it is decision tree complexity). In this model, for total functions we can prove that (for some problems) quantum algorithms can achieve a quadratic speedup, although we can also show that for total functions we cannot do better than a power-6 speedup, and we believe that quadratic is the best possible. For partial functions, it is a totally different story, and we can prove that exponential speedups are achievable. Again, these arguments rely on a belief that we have a decent understanding of quantum mechanics and that there isn't some magical unknown theoretical barrier stopping macroscopic quantum states from being controlled.
_webmaster.2584
I always get asked questions by friends about why a site doesn't rank very well in Google, even though it is better quality. How can I improve the searchability of my friend's site (drakesterling.com)?

http://www.sterlingcurrency.com.au/
http://www.bluesheet.com.au/
http://www.drakesterling.com/

Any help is appreciated.
SEO - Website comparison - Why is this site so good?
seo
At least one site was showing 50+ W3C validation errors. As John said, this isn't the only measure of success, but I personally believe that meeting standards is vitally important as a professional web developer. A quick scan of another site showed it had 583 records in Google; that's a LOT considering that many other sites might only have 15-20. As for improving quality, that requires a full SEO analysis, as John said, something you either educate yourself about or pay to have done.

Google Webmaster Tools and Analytics are a good place to start. Make sure the site is visible in Webmaster Tools and showing no errors. Analytics will tell you where people are coming from, where they're going, and how they travel through. Follow the patterns long enough and you'll figure out which pages need to be improved.

I can't say enough about cross-promoting your sites via media other than just Google. Bing just took over Yahoo's search; have you used their analyzer tool yet? How about social media? To do a good job, you have to be persistent and consistent, posting inbound links (in a non-spammy, non-annoying manner) regularly. My most successful customers actually get more inbound links from other sites and social media than from Google, by a large margin... but they also never stop trying to improve either avenue.

As I tell every potential client, there's no silver bullet. It takes patience, work, and attention to detail to be on top of the rankings.
_codereview.43569
For this problem, I came up with this ugly solution of appending the characters to the output and, in case there is an adjacent duplicate, deleting it from the output. Considering StringBuilder.deleteCharAt(i) is O(N), performance is O(N) + O(N) = O(N). Please critique this, if this is the right way.

    public static String removeDuplicate(String s) {
        StringBuilder builder = new StringBuilder();
        char lastchar = '\0';
        for (int i = 0; i < s.length(); i++) {
            String str = builder.toString();
            if (!str.equals("") && (str.charAt(str.length() - 1) == s.charAt(i))) {
                builder.deleteCharAt(str.length() - 1);
            } else if (s.charAt(i) != lastchar)
                builder.append(s.charAt(i));
            lastchar = s.charAt(i);
        }
        return builder.toString();
    }

Sample inputs and outputs:

    Input: azxxzy         Output: ay
    Input: caaabbbaacdddd Output: (empty string)
    Input: acaaabbbacdddd Output: acac
Remove all adjacent duplicates
java;algorithm;performance
The problem is not that StringBuilder.deleteCharAt() is O(n); you only ever use it to strip the last character. Rather, it's your builder.toString() that is problematic: it's an O(n) operation that is invoked in a loop, up to n times.

Rather than using a StringBuilder, I recommend manipulating a char[] array directly. The problem, as you pointed out, is that StringBuilder.deleteCharAt(i) is O(n) because it shifts the rest of the string over. By doing your own accounting, you can just build the string correctly the first time.

    public static String removeDuplicates(String s) {
        if (s.isEmpty()) {
            return s;
        }
        char[] buf = s.toCharArray();
        // character of the run currently being removed; '\0' when no run is pending
        char lastchar = '\0';
        // i: index of input char
        // o: index of output char
        int o = 1;
        for (int i = 1; i < buf.length; i++) {
            if (o > 0 && buf[i] == buf[o - 1]) {
                // collision with the output: cancel the whole matching run
                lastchar = buf[o - 1];
                while (o > 0 && buf[o - 1] == lastchar) {
                    o--;
                }
            } else if (buf[i] == lastchar) {
                // Don't copy to output: trailing member of the removed run
            } else {
                buf[o++] = buf[i];
                lastchar = '\0'; // a kept character ends the removed run
            }
        }
        return new String(buf, 0, o);
    }
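Not part of the original answer: a Python rendition of the same single-pass accounting, checked against the sample cases from the question. One detail worth making explicit is that the run-tracking variable starts as a sentinel and is reset whenever a character is kept; otherwise inputs like 'aaba' would wrongly lose the trailing 'a'.

```python
def remove_duplicates(s):
    if not s:
        return s
    buf = list(s)
    last_removed = '\0'   # character of the run currently being removed
    o = 1                 # next output index
    for i in range(1, len(buf)):
        if o > 0 and buf[i] == buf[o - 1]:
            # collision with the output: cancel the whole matching run
            last_removed = buf[o - 1]
            while o > 0 and buf[o - 1] == last_removed:
                o -= 1
        elif buf[i] == last_removed:
            pass                  # trailing member of the removed run; drop it
        else:
            buf[o] = buf[i]       # keep the character...
            o += 1
            last_removed = '\0'   # ...and note the removed run has ended
    return ''.join(buf[:o])

print(remove_duplicates('azxxzy'))          # ay
print(remove_duplicates('caaabbbaacdddd'))  # (empty string)
print(remove_duplicates('acaaabbbacdddd'))  # acac
```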
_cs.28435
Does anyone know of a non-trivial reduction from XORSAT to 2-sat since they are both in P? (By non-trivial I mean one that does not just solve the instance of XORSAT and map it to a fixed instance of 2-sat. Rather I'm looking for a way to solve XORSAT by a different method other than just using Gaussian Elimination or some other method of linear algebra.)
A Reduction from XORSAT to 2-SAT
complexity theory;reductions;satisfiability;polynomial time
Your problem can be posed more formally as follows: Is there a weak reduction from XORSAT to 2SAT? Here weak can be, for example, logspace or AC$^0$.We know that 2SAT is NL-complete under AC$^0$ reductions, while restricted versions of XORSAT (say 3XORSAT) are $\oplus$L-complete under AC$^0$ reductions, see this paper proving a refined Schaefer dichotomy theorem. Although NL$\subseteq \oplus$L non-uniformly (see this answer), we don't expect the converse to hold, and in particular we don't expect there to be a weak reduction from $\oplus$L to NL. Unfortunately I'm unaware of any concrete results in this direction.Summarizing, it is expected that there is no weak reduction from XORSAT to 2SAT, but I'm not sure there's much technical work supporting this conjecture.
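For context (this is my own illustration, not part of the answer above): the tractability of XORSAT that the question takes as given comes from Gaussian elimination over GF(2), which a few lines of bit-twiddling implement. The open part is only whether this can be replaced by a weak reduction to 2SAT.

```python
def solve_xorsat(eqs, n):
    """eqs: list of (mask, b); bit i of mask set means x_i is in the clause,
    and the clause asserts that the XOR of those variables equals b.
    Returns a satisfying 0/1 assignment as a list, or None if inconsistent."""
    pivots = []                              # (pivot_col, mask, rhs)
    for mask, b in eqs:
        for col, pmask, pb in pivots:        # reduce against known pivots
            if (mask >> col) & 1:
                mask ^= pmask
                b ^= pb
        if mask == 0:
            if b:
                return None                  # derived 0 = 1: unsatisfiable
            continue                         # redundant equation
        col = (mask & -mask).bit_length() - 1
        pivots.append((col, mask, b))
    x = [0] * n                              # free variables default to 0
    for col, mask, b in reversed(pivots):    # back-substitution
        val = b
        rest = mask & ~(1 << col)
        while rest:
            c = (rest & -rest).bit_length() - 1
            val ^= x[c]
            rest &= rest - 1
        x[col] = val
    return x
```

For example, the system x0^x1=1, x1^x2=1, x0^x2=0 is satisfied by the returned assignment, while x0^x1=1 together with x0^x1=0 yields None.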
_reverseengineering.15309
I am trying to analyse a binary .so file on the Android platform, written with cocos2d, but I can't find a good way of doing it. I was trying Frida: I was able to trace open calls using frida-trace and the Python bindings, but I am not able to deal with the other methods I wanted to. The game is using its own crypto and networking APIs. I can trace the Android networking APIs, but they never see anything decrypted; everything they get is encrypted by its custom SSL implementation. How can I trace a function whose address I know from IDA?
How can I use IDA address of functions of a stripped binary into frida?
android;binary;instrumentation;shared object
null
_unix.328207
As I run yum makecache on CentOS 6.1, I get this error:

    epel-source/other_db | 1.6 MB 00:40
    Could not retrieve mirrorlist http://apt.sw.be/redhat/el6/en/mirrors-rpmforge error was
    14: PYCURL ERROR 22 - The requested URL returned error: 404
    http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - The requested URL returned error: 404
    Trying other mirror.
    Error: Cannot retrieve repository metadata (repomd.xml) for repository: rpmforge. Please verify its path and try again

So there is a problem with rpmforge, and in fact http://apt.sw.be is down. Is there any way to fix that?
mirrors for rpmforge repository
centos;yum;repository
null
_softwareengineering.50973
I've been asked to create a web page from which users can access several other applications created using Oracle Forms and JSF; this will also include SSO. I can't think of an easy way to do it. What I was thinking was that the user should register and enter all his usernames/passwords for each application, after which he will deal only with the username/password he created for this page. What I wanted to ask is whether this is a good idea, or is there a better way to deal with this?
A single access point for several applications
sso
Don't do your idea!

See Is an 'if password == XXXXXXX' enough for minimum security? for some basic password advice for applications. In virtually all cases, a password should be hashed so that the value cannot be retrieved.

Important: if you do continue with your idea, you've just stored retrievable passwords for several applications in a single place, which is much, much worse than violating the retrievability principle for one application. So even if those applications had implemented good security practices, this interception step breaks all of them.

Single sign-on is usually implemented either through some method of sharing an authentication token (usable where each application is produced by the same vendor, shares similar database fields, and can authenticate the validity of that token), through a single authentication server (OpenID), or through one of the other means listed in that article. It's always very complicated and requires a very advanced understanding of this topic and of security in general.

So your first impression was correct: if SSO is involved, there is no easy way to do it. The SSO requirement makes this project sound like it's beyond your current means, especially if you were asked to create a web page that provides SSO.

A single web page with links to different applications is obviously trivial, and I suggest you complete that as a first step. Next, I'd suggest you research SSO, understand how it relates to your applications, and take this back to your manager.

If your manager would like to continue, I would look at existing SSO products. Building a secure in-house solution is probably more than you want to take on. If you do decide to build it in-house, get learning.
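To make the "hashed so that the value cannot be retrieved" point concrete, here is a minimal sketch using only Python's standard library (PBKDF2). The function names, salt size, and iteration count are my own illustrative choices, not a complete credential store.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=200_000):
    """Return (salt, digest); the password itself is never stored anywhere."""
    if salt is None:
        salt = os.urandom(16)          # per-user random salt
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=200_000):
    """Recompute the digest from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, rounds)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison
```

Note the asymmetry: you can verify a password against the stored (salt, digest) pair, but you cannot recover the password from it, which is exactly the property the proposed password-interception page would destroy.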
_scicomp.4972
I have the following system of first-order equations and need to solve them using Maple 12. There are unspecified initial conditions (S[1], S[2], S[3]) which can only be estimated through the Newton-Raphson method. My problem is how to implement this so that the equations are numerically solved with a 4th-order Runge-Kutta (RK4) scheme. I tried BVPsolve but it doesn't work. This is in line with work by Makinde on MHD boundary-layer flow and mass transfer past a vertical plate in a porous medium with constant heat flux.

    > k1 := diff(X[1](t), t) = X[2](t);
    > k2 := diff(X[2](t), t) = M*(X[1](t)-1)-(2*(eta+b))*X[2](t);
    > k3 := diff(X[3](t), t) = X[4](t);
    > k4 := diff(X[4](t), t) = (2*Sc*Du*(eta+b)*X[6](t)-Du*lambda*X[5](t)-2*Pr*(eta+b)*X[4](t)-Pr*Ec*X[2](t)^2-Pr*Ec*M*(X[1](t)-1)^2)/(1-Du*Sr);
    > k5 := diff(X[5](t), t) = X[6](t);
    > k6 := diff(X[6](t), t) = (lambda*X[3](t)+2*Pr*Sr*(eta+b)*X[4](t)+Pr*Sr*Ec*X[3](t)^2+Pr*Sr*Ec*M*(X[1](t)-1)^2-2*Sc*(eta+b)*X[6](t))/(1-Du*Sr);
    > ICS := X[1](0) = 0, X[2](0) = S[1], X[3](0) = 1, X[4](0) = S[2], X[5](0) = 1, X[6](0) = S[3];
System of non-linear ODEs and estimating unspecified initial conditions on Maple 12
ode
null
_webmaster.92656
I was trying to do some SEO improvements on my website and accidentally set the Expires header to -1. I changed it back after a couple of days. Would the Expires: -1 header be the cause of my number of visitors dropping from a daily average of about 3800 to 1300, quite a drastic cut?

What's the best value to put in Expires, or should I leave that well alone? I've noticed a competitor uses a month, so I now did the same.

Will Google revisit my Expires: -1 pages eventually, or will they forever be ignored by Googlebot?

    HTTP/1.1 200 OK
    Cache-Control: no-cache, max-age=0, no-cache, no-store, must-revalidate
    Content-Encoding: gzip
    Content-Type: text/html
    Date: Thu, 21 Apr 2016 18:02:06 GMT
    Expires: -1

The following resulted in the Expires header being set:

    Response.Cache.SetCacheability(HttpCacheability.NoCache);

I also updated all the images, making them smaller by compressing them with ImageMagick so that they got a good rating with webpagetest.org. Would that also have caused a significant drop?
Set Expires: -1 HTTP Header by mistake and visitors dropped?
html;asp.net;headers
> Would the Expires: -1 be the cause of why my number of visitors has dropped from an average of about 3800 to 1300?

No. The Expires header simply controls caching. A value of -1 (strictly an invalid value, which should really be in HTTP-date format) will simply be seen as expired. So users will always make server requests.

Note that the Cache-Control: max-age header is also set. This actually takes priority over the Expires header in compliant browsers (every browser in the last few years and then some).

In fact, with caching disabled, you are likely to see an increase in visitors (or at least hits), not a decrease! (If you see any change at all.) You will see more hits because your server will be receiving more requests.

> What's the best value to put in Expires?

What you set for your cache headers is entirely dependent on your content and how often it changes. If it rarely changes, then set a long(er) cache time. If it changes very often, then set a short time or even disable caching (like you did).

> Will Google revisit my Expires: -1 pages eventually?

Yes. If anything, an Expires: -1 header might encourage Google to crawl more often.

Nothing that you've stated in your question would necessarily result in a traffic drop. It's something else.
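For illustration (the helper below is my own, not from the answer): a well-formed caching pair uses an HTTP-date for Expires and a max-age in seconds, and the two should agree since compliant browsers prefer max-age when both are present. Python's standard library can generate the date format.

```python
import time
from email.utils import formatdate

def cache_headers(max_age_seconds):
    """Build a consistent Cache-Control/Expires pair for a cacheable response."""
    now = time.time()
    return {
        'Cache-Control': 'public, max-age=%d' % max_age_seconds,
        # Expires wants an HTTP-date, not -1
        'Expires': formatdate(now + max_age_seconds, usegmt=True),
    }

# e.g. a one-month policy, like the competitor's site
headers = cache_headers(30 * 24 * 3600)
```

The resulting Expires value looks like the Date header in the question ('Thu, 21 Apr 2016 18:02:06 GMT'), just shifted a month into the future.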
_webapps.71518
Is there any way to list all the comments received on the videos I uploaded on YouTube? On https://www.youtube.com/comments it seems that I can only get the list of published comments, which is only a small subset of all comments.Basically I am looking for a way to moderate those comments efficiently (list all comments received + an easy way to remove the ones I don't want).
Listing all the comments received on the videos I uploaded on YouTube
youtube;comments
There is no way to list all the comments received on the videos you uploaded on YouTube.
_unix.210589
So somehow the command ls seems to be showing me two identical files in a directory. I am certainly not doing something right here. Behold:

    $ ls -Blah /System/Library/LaunchDaemons
    total 32
    drwxr-xr-x 266 root wheel 8.8K Jun 18 10:41 .
    drwxr-xr-x  79 root wheel 2.6K Mar 31 12:28 ..
    [redacted]
    -rw-r--r--   1 root wheel 715B Jun 18 10:36 tftp.plist
    -rw-r--r--   1 root wheel 715B Jun 18 10:35 tftp.plist

I can move, rename, edit etc. one of the files, but the other one does not even seem to be there. bash tab completion even shows identical files. For example, entering the following and then hitting TAB:

    $ sudo mv /System/Library/LaunchDaemons/tftp
    tftp.plist  tftp.plist

If I rename the file:

    $ sudo mv /System/Library/LaunchDaemons/tftp.plist /System/Library/LaunchDaemons/tftp.plist.derp

the tab completion still shows the file:

    $ ls -Blah /System/Library/LaunchDaemons/tf
    tftp.plist  tftp.plist.derp

But the original, unmodified file does not appear to ls:

    $ ls -Blah /System/Library/LaunchDaemons/tftp.plist
    ls: /System/Library/LaunchDaemons/tftp.plist: No such file or directory

However, if I just list the files as in the first code snippet above, behold:

    $ ls -Blah /System/Library/LaunchDaemons
    total 32
    drwxr-xr-x 266 root wheel 8.8K Jun 18 10:41 .
    drwxr-xr-x  79 root wheel 2.6K Mar 31 12:28 ..
    [redacted]
    -rw-r--r--   1 root wheel 715B Jun 18 10:35 tftp.plist
    -rw-r--r--   1 root wheel 715B Jun 18 10:36 tftp.plist.derp

Any idea what's going on here and how I can get rid of this ghost file? This is a Mac running OS X, if that adds any info to the problem.
I was using sed on this file just before the craziness began.

Edit: I have used both the blah and the Blah ls flags with no change in apparent output.

Edit 2: Additional info requested in comments:

    $ echo tftp* | xxd
    0000000: 7466 7470 2e70 6c69 7374 2020 7466 7470  tftp.plist  tftp
    0000010: 2e70 6c69 7374 2e64 6572 700a            .plist.derp.

Moar:

    $ printf '<%q>\n' tftp*
    <tftp.plist\ >
    <tftp.plist.derp>

Even more:

    $ locale
    LANG=en_US.UTF-8
    LC_COLLATE=en_US.UTF-8
    LC_CTYPE=en_US.UTF-8
    LC_MESSAGES=en_US.UTF-8
    LC_MONETARY=en_US.UTF-8
    LC_NUMERIC=en_US.UTF-8
    LC_TIME=en_US.UTF-8
    LC_ALL=en_US.UTF-8
'ls' showing two identical files in a directory
bash;osx;ls
You either have trailing whitespace or a corrupt filesystem. Try:

    for i in tftp.plist*
    do
        echo "'$i'"
    done

That should output something like:

    'tftp.plist'
    'tftp.plist '

note the quotes and the extra space. If it outputs the exact same thing twice, you likely have a corrupt filesystem.

Try:

    ls -i tftp.plist*

This will give you the inode numbers of the files. If they're the same, you have the same file twice in your directory. That would be Really Bad(tm), and you should run fsck asap. But I doubt that's the problem; it's more likely the whitespace thing.
_cs.42568
My teacher asked me to convert the regular expression $(a | b)^*$ to an NFA using Thompson's algorithm. Well, I'm well aware of how this algorithm works, but since I'm not good at memorising details, I produced a slightly different NFA, and the teacher said it was wrong. Yeah, I know that it's not what the algorithm produces, but I guess it's not wrong either; what do you guys think?

This is the NFA that I produced:

Thompson's algorithm produces an empty transition from state 7 to 2 instead of the one from 8 to 1. Is there anything wrong with my construction? If so, could you please show me?

N.B.: The accepting state (8) is not marked because of a difficulty with the drawing tool.
Thompson's Construction Algorithm produces a different NFA
automata;finite automata;regular expressions;simulation
Your construction is correct, in that your FA accepts $(a\mid b)^*$. In fact, the construction is exactly the one that is used in a popular (and very good) text by Peter Linz. However, your instructor might have objected to the fact that it wasn't the one used in the Thompson paper, which of course it wasn't, as you noted. The upshot is that there are different, equivalent, ways of going from a regular expression to an NFA. As the sage said, There are many roads to the top of the mountain, grasshopper.(For instance, one might also propose the 1-state FA for this language, which also would be correct.)
_softwareengineering.288762
Below are the recommendations from section 5.1 of this essay. While Java is not a pure object-oriented language, it is possible to program in a pure object-oriented style by obeying the following rules:

1) Classes only as constructors. A class name may only be used after the keyword new.

2) No primitive equality. The program must not use primitive equality (==). Primitive equality exposes representation and prevents simulation of one object by another.

3) In particular, classes may not be used as types to declare members, method arguments or return values. Only interfaces may be used as types. Also, classes may not be used in casts or to test with instanceof.

This is generally considered good object-oriented style. For instance, this part of the JDK does not follow the third point above: the return type of the toArray method in java.util.AbstractCollection,

    public Object[] toArray() { /* whatever */ }

or the return type of the toString() method,

    public String toString() { /* whatever */ }

while this part of the JDK does follow it:

    public Iterator<E> iterator() { return new Itr(); }

What advantages do we find in following these recommendations? Can some design patterns be pulled in (automatically) by following such recommendations?
Advantages of these recommendations in ooprogramming using Java
java;design;design patterns;object oriented;interfaces
null
_codereview.25747
I'm trying to improve my JavaScript and have been using the revealing module pattern to good effect, but I noticed that code in the module that is not inside a function or the returned object is executed on creation. I have utilised this, such as:

    var MyModule = function ($) {
        setUp();

        function setUp() {
            // execute code to be run when module created
        }

        function doStuff() {
        }

        return {
            doStuff: doStuff
        };
    }

I can see this is creating a form of constructor for the function, which relies on it being called by var myModule = new MyModule(jQuery).

Any feedback on the rights or wrongs of this is appreciated.
Revealing module implicitly calling an internal function - is this a smell
javascript;revealing module pattern
null
_softwareengineering.352578
I'm working at a company on a project for their Sales department. It's my first professional programming job, but I've been coding by myself and learning for years. Part of the project involves taking some data, combining it with input to produce a graph, then saving the data... so on and so forth. So I wrote the code for this in a little under a day.

The next day I showed my project supervisor, and he liked it, but "what if we had this", and he wanted me to add something to the graph. This was not a huge change to the look or function of the program, but it drastically changed how I needed to be storing data, processing it, etc. Again, it took me about a day to restructure the database table and rewrite the code basically from scratch to support this new request. I took it back to him again, and the exact same thing happened: he requested something else which drastically changed how I needed to process the data, so I had to rewrite it again. Finally he signed off on it, and hopefully I won't have to rewrite it again.

Just to be clear, I'm not bashing my manager or anything like that. He's a great guy, and the things he was requesting were not out of this world; they just were incompatible with what I had previously done.

I'm just wondering if there's anything I can do in the future to avoid complete rewrites. I understand making flexible code and was trying to do so, but I would just like to know of any practices or things I could have done differently to make this easier, so in the future I don't spend 3 days on something that should've taken 1.
How to avoid rewriting parts of an application
programming practices;requirements;rewrite
As I commented, I have the strong feeling that the requirements were not clear the first time, or that you probably missed some important details. Not everything can be addressed with better code, best practices, design patterns or OOP principles. None of them will prevent you from redoing the whole application if the implementation is based on false assumptions or wrong premises.

Don't rush into coding the solution. Before typing down a single LOC, spend some time on clarifying the requirements. The deeper you delve into the requirements, the more what-if questions appear. Don't wait for the manager to surprise you with the next what-if; anticipate things yourself. This little exercise can reduce the surprise factor significantly.

Don't be afraid to ask as many times as you need. Sometimes the trees (details) don't let us see the forest (the overall picture). And it's the forest that we need to see first.

When requirements are clear, it's easier to make better decisions during the design phase. Finally, remember that the overall picture is a goal. The route to this goal is neither plain nor straightforward. Changes will continue to happen, so be agile.
_webapps.30946
I need to see who favorited my tweet. Which Twitter API call will tell me who favorited a tweet of mine, along with those users' details?
How do I find out all the users who favorited a particular tweet of mine via the Twitter API?
twitter;favorites
null
_unix.76655
When I say sudo sh, TAB stops working as autocomplete signal on my Debian.How can I enable TAB key autocomplete after I say sudo sh ?
TAB autocomplete on sudo sh
shell;sudo;autocomplete
Try using sudo bash instead of sudo sh.
_unix.381211
How can I have surfraw use google as a default search engine?Or how could I shorten surfraw google foo to surfraw g foo?
Use google by default in surfraw
command line
Set:

    SURFRAW_customsearch_provider=google

Then use:

    sr S foo

If setting it via a shell variable, make sure to export it.
_unix.184651
I'm using scp to copy files from a remote server to a local one. What's really uncomfortable is that I need to type the file path precisely. I'm used to relying on autocompletion, because file names and folder structures can be long. I want to be able to see the names of files in each directory and autocomplete just like when browsing files locally.Now, I could do SSH separately, find the file names, and use them in the SCP command. But of course, that would be a huge waste of effort. Also, I could use a GUI, but I prefer to avoid that because a command line is more lightweight.Any way to use SCP without having to remember file names exactly?
Seeing file names in SCP
ssh;autocomplete;scp
bash-completion (which is available in Cygwin, Debian, Ubuntu and no doubt many other distributions) supports scp auto-completion, as long as the shell can access the required server with no prompting (it uses ssh in batch mode; see the ssh_config(5) manual page for details).

The easiest way to enable this is to use ssh-agent. This is probably enabled by default by your desktop environment; simply try

    ssh-add

to add your default key to the currently-running agent (if any). If no agent is running, you can start one by running

    eval $(ssh-agent)

Once your key is known to the agent, you'll be able to auto-complete scp commands involving servers you can access with the key.

I'm pretty sure zsh also supports scp auto-completion with the same caveats; the necessary support is in the zsh-common package in Debian. It needs to be enabled in your .zshrc though, with something like

    autoload -U compinit && compinit

(which loads all the supported completions).
_softwareengineering.255305
In Chapter 3 of his book The Art of Unit Testing: with Examples in C#, Roy Osherove describes the concept of testing state change of a system. The example code under test he uses looks like this:

    public class LogAnalyzer
    {
        public bool WasLastFileNameValid { get; set; }

        public bool IsValidLogFileName(string filename)
        {
            WasLastFileNameValid = false;
            if (string.IsNullOrEmpty(filename))
            {
                throw new ArgumentException("filename has to be provided");
            }
            if (filename.EndsWith(".SLF", StringComparison.InvariantCultureIgnoreCase))
            {
                WasLastFileNameValid = true;
                return true;
            }
            return false;
        }
    }

and we want to test the state of the WasLastFileNameValid property. To this end, the author uses the following test:

    [Test]
    public void IsValidFileName_WhenCalled_ChangesWasLastFileNameValid()
    {
        LogAnalyzer la = MakeAnalyzer();
        la.IsValidLogFileName("badname.foo");
        Assert.False(la.WasLastFileNameValid);
    }

However, I see the following issues with this test:

- The 'outcome' part of the test name is ChangesWasLastFileNameValid, but the test doesn't really check whether the property value changes; it may have been false even before the call to IsValidLogFileName.
- The test is only testing the one case where the last call was an invalid filename.

I would use the following test instead (using xunit.net):

    [Theory]
    [InlineData(true, "fileWithValidExtension.SLF", true)]
    [InlineData(true, "fileWithBadExtension.FOO", false)]
    [InlineData(false, "fileWithValidExtension.SLF", true)]
    [InlineData(false, "fileWithBadExtension.FOO", false)]
    public void IsValidLogFileName_WhenCalled_ChangesWasLastFileNameValid(
        bool preState, string filename, bool postState)
    {
        LogAnalyzer analyzer = new LogAnalyzer();
        analyzer.WasLastFileNameValid = preState;

        analyzer.IsValidLogFileName(filename);

        Assert.Equal<bool>(postState, analyzer.WasLastFileNameValid);
    }

Here I test whether the value changes, and I also test all scenarios. Is this a better test?
State Change Tests
unit testing
To answer your specific concerns:

1. Whether a state change occurs is not necessarily relevant, only that the correct state exists at the time of testing. Both states should be tested against their relevant conditions, not just one or the other.

2. It's probably a better software design if the IsValidLogFileName() method is called at the moment that an evaluation of the file name is needed, rather than trying to maintain a state variable with that information. It should be redesigned to be stateless, in other words (all other things being equal).
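A sketch of the stateless redesign from point 2, in Python for brevity (the C# version would be analogous; the function name is my own): the validity check becomes a pure function, and the tests no longer have to reason about prior state at all.

```python
def is_valid_log_file_name(filename):
    """Pure replacement for IsValidLogFileName: no WasLastFileNameValid state."""
    if not filename:
        raise ValueError('filename has to be provided')
    return filename.lower().endswith('.slf')

# each input case is now covered directly, with no pre-state to set up
assert is_valid_log_file_name('fileWithValidExtension.SLF') is True
assert is_valid_log_file_name('fileWithBadExtension.FOO') is False
```

With no mutable flag, the four-row pre-state/post-state theory from the question collapses to two input cases plus the empty-filename error case.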
_codereview.39444
Which of these names is better: time.getHours() or time.hours()? And why?

    public class Time implements Serializable {

        private final int hours;
        private final int minutes;

        public static Time from(Calendar calendar) {
            int hoursIn24HourFormat = calendar.get(Calendar.HOUR_OF_DAY);
            int minutes = calendar.get(Calendar.MINUTE);
            return new Time(hoursIn24HourFormat, minutes);
        }

        public Time(int hours, int minutes) {
            this.hours = hours;
            this.minutes = minutes;
        }

        // or maybe the name getHours() is better?
        public int hours() {
            return hours;
        }

        // or maybe the name getMinutes() is better?
        public int minutes() {
            return minutes;
        }

        @Override
        public boolean equals(Object obj) {
            if (obj == null) {
                return false;
            }
            if (obj == this) {
                return true;
            }
            if (!(obj instanceof Time)) {
                return false;
            }
            Time other = (Time) obj;
            return (hours == other.hours) && (minutes == other.minutes);
        }

        @Override
        public int hashCode() {
            return toMinutesOfDay();
        }

        private int toMinutesOfDay() {
            return hours * 60 + minutes;
        }

        @Override
        public String toString() {
            return twoDigitString(hours) + ":" + twoDigitString(minutes);
        }

        private static String twoDigitString(int timeComponent) {
            return (timeComponent < 10) ? ("0" + timeComponent) : String.valueOf(timeComponent);
        }

        public boolean before(Time time) {
            return toMinutesOfDay() < time.toMinutesOfDay();
        }

        public boolean after(Time time) {
            return toMinutesOfDay() > time.toMinutesOfDay();
        }

        public boolean within(Time fromIncluded, Time toIncluded) {
            return (!fromIncluded.after(this)) && (!this.after(toIncluded));
        }
    }
Proper naming for a Time class
java;datetime
I'd name the class TimeOfDay to be absolutely clear about its purpose. Consider adding validation to the constructor (0 ≀ hours ≀ 23 and 0 ≀ minutes ≀ 59). Because before() and after() are predicates, I'd rename them to isBefore(TimeOfDay t) and isAfter(TimeOfDay t). I prefer getHours() over hours(). If you prefer hours(), you might consider just exposing public final int hours, minutes; instead. Normally, you want to reserve the flexibility to change the internal representation of your object (for example, to store just minutesOfDay instead of hours and minutes). However, since those fields are final, and you're more or less committed to the representation due to serialization, there's not much to be gained by wrapping the value in a method, other than consistency with tradition β€” and you would already violate tradition slightly anyway by choosing hours() over getHours().
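A minimal sketch of those suggestions put together β€” the TimeOfDay name, a range-checked constructor, and predicate-style isBefore/isAfter. The unchanged members (equals, hashCode, toString, and so on) are omitted here; they would carry over from the original as-is:

```java
import java.io.Serializable;

// Sketch of the suggested changes; members not shown would be
// carried over unchanged from the original Time class.
class TimeOfDay implements Serializable {
    private final int hours;
    private final int minutes;

    TimeOfDay(int hours, int minutes) {
        // Reject anything outside a valid 24-hour clock reading.
        if (hours < 0 || hours > 23) {
            throw new IllegalArgumentException("hours out of range: " + hours);
        }
        if (minutes < 0 || minutes > 59) {
            throw new IllegalArgumentException("minutes out of range: " + minutes);
        }
        this.hours = hours;
        this.minutes = minutes;
    }

    int getHours()   { return hours; }
    int getMinutes() { return minutes; }

    private int toMinutesOfDay() { return hours * 60 + minutes; }

    // Predicates named as questions, per the review note.
    boolean isBefore(TimeOfDay other) {
        return toMinutesOfDay() < other.toMinutesOfDay();
    }

    boolean isAfter(TimeOfDay other) {
        return toMinutesOfDay() > other.toMinutesOfDay();
    }
}
```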
_unix.234744
When viewing PDFs that are set up for print, I often want to view facing-page spreads. Typically PDFs are typeset so that the first page of the file/part/chapter always starts on a right-hand page, so facing-page spreads are always even-odd number pairs. Unfortunately, Zathura's dual display mode shows everything in odd-even pairs starting with 1-2. Is there some way to set up display of facing pages the way most PDF readers default to in their 2-up modes?
Can Zathura's dual page mode use a page offset?
pdf;zathura
Yes, this is possible, it's just not well documented. Or documented at all, for that matter. The only reference I found was in an issue report. Once a file is opened, you can change the first page location by running :set first-page-column 2 from the command line. This same line can be added to the rc file at ~/.config/zathura/zathurarc to make it the default. Note: since zathura 0.3.5 the format for this setting has changed. It is no longer a single fixed value for all page configurations but can be set independently for different numbers of columns. As a result, in order to set this for a two-column layout you actually need to set the second entry of the setting; the above format will have no effect. The new format looks like this: :set first-page-column 1:2 Or, if you want to set layouts for 3- and 4-column views as well: :set first-page-column 1:2:1:2 That will start 3-column views in the first column but skip to the second column to start 2- and 4-page spreads.
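Putting the pieces above together, a ~/.config/zathura/zathurarc for zathura β‰₯ 0.3.5 that defaults to book-style even-odd spreads might look like this (in the rc file the leading : of the interactive command is dropped; the pages-per-row line is optional and just makes two-up the default view):

```
# Start two-column layouts in the second column so spreads pair even-odd
set first-page-column 1:2
# Optionally open documents in dual-page mode by default
set pages-per-row 2
```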
_softwareengineering.213206
I'm about to take on a new project for a client, designing a server-side Python program that will poll a number of XML streams at regular intervals and populate a PostgreSQL database with the results. The system will basically act as an XML caching daemon and populate a database. That database will then be accessed by a separate program which will serve that data as JSON to a public-facing website. I'm wondering if anyone has suggestions as to the high-level design of the Python caching/polling program. I'm thinking of something class-based, where a number of classes (one for each XML stream to be polled) descend from a common parent class, check for new content, and then grab any if found. I'm thinking of having a table for each of the XML streams in the database, a column for each of the XML fields, and another table to keep track of the last access for each stream. For this I'm planning on using Python 2.7, psycopg2, and ElementTree for XML processing. Does anyone have any thoughts or criticism on my design or toolchain at this point? Is parsing my XML values to store in a Postgres database more work than is worthwhile? The front-end program that will be serving that data will be turning it into JSON rather than XML, by the way.
Suggestions for designing an XML polling daemon in Python
python;xml;postgres
I'm not sure why you refer to the daemon as caching; it is probably better to think of it as a polling-translation or polling-adaptation daemon. I'm often surprised at how subtle naming differences can change how you think about the problem.

Your desire to get the XML data out of XML is a good one. I've yet to see any case where XML is useful for anything other than well-defined data exchange, and then only when XML is expected by one side or the other. Put another way, if I want to look for the average weight of an unladen swallow, digging through a pile of XML is about the worst way I can think of to get at that data. This points to another place where your writing is inverted compared to my thinking: the data schema is preeminent; XML is just a container. When you say "a column for each of the XML fields", I think "a column for each data field". Are these different enough to matter? Probably.

You talk of classes and inheritance. People try to use inheritance far more than is probably worthwhile (which is its own diatribe), but I think you might be over-engineering the adaptation part. I can see no reason to bother constructing a Python class representation of input items when the next step is to turn it into a SQL statement and send it away. From what little you've given, instantiating a class for each row could be convenient or it could be overkill; I'd assume overkill unless you can demonstrate otherwise.

Your tool choice is obviously correct with one exception: unless there is a reason you've not given for preferring Python 2, use Python 3. Your future self will thank you in five years.
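As a rough sketch of the "no class per row" point: the adaptation step can be a plain function from XML to tuples, ready to hand to cursor.executemany. The feed layout here (item, title, weight elements) is hypothetical, not taken from the question; ElementTree is from the standard library, and the psycopg2 side is only indicated in a comment:

```python
import xml.etree.ElementTree as ET

def rows_from_feed(xml_text):
    """Flatten <item> elements into plain tuples, ready for e.g.
    cursor.executemany("INSERT INTO items (title, weight) VALUES (%s, %s)", rows).
    Element and field names are illustrative only.
    """
    root = ET.fromstring(xml_text)
    rows = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        weight = float(item.findtext("weight", default="0"))
        rows.append((title, weight))
    return rows

# A tiny hypothetical feed, standing in for one polled XML stream.
sample = """<feed>
  <item><title>unladen swallow</title><weight>0.02</weight></item>
  <item><title>coconut</title><weight>1.4</weight></item>
</feed>"""
```

No intermediate objects are built; each row goes from XML straight to a tuple the database driver can consume.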
_cogsci.4462
Is long-term alcohol use really capable of permanently changing one's thought processes? In what ways is this possible, and through what physical changes in the brain does this occur?
Does long-term alcohol use permanently change one's thought processes?
cognitive neuroscience;alcohol
null
_reverseengineering.13886
I have some dead pixels in the top couple of rows of my cheap Windows 10 tablet. To alleviate that, I wanted to use the custom resolution option in the Intel graphics management panel; however, I cannot choose the advanced options in the custom resolution settings, and it is driving me crazy. I remembered that Intel GFX drivers use some *.inf to enable certain options, and I believe it is igdlh64.inf. Have any of you tried to modify this, or do you have some knowledge about it? (And no, there is no documentation provided anywhere.)
igdlh64.inf modify custom resolution
driver;intel
DTD Calculator (http://www.avsforum.com/forum/26-home-theater-computers/947830-custom-resolution-tool-intel-graphics-easier-overscan-correction.html) did the job for me. Using it, I was able to create valid EDIDs and add them by modifying the [NonEDIDMode_AddSwSettings] section of the said .inf file (simply update the total number of DTDs and add your EDID; do not forget to add the 2 bytes of flags, 37 01). If you need step-by-step instructions or detailed explanations of the fields, please check https://software.intel.com/en-us/articles/custom-resolutions-on-intel-graphics. Or you can try your luck with the registry-hack option of DTD Calculator as well; it did not work for me, though.
_unix.154291
Is there a way to change inside of a comment with Vim? I know you can change inside brackets and quotes with i] and i". For example, if you are on a quoted string you can press ci"my new text<Esc> and this will replace the text inside of the quotes with the phrase "my new text", but how can I do the same thing with C comments, which are enclosed with /* and */?
Change inside comments with Vim
vim;keyboard shortcuts;block comment
null
_unix.269553
I am trying to provide Internet access to a laptop from either of two phones. I can do it only over wifi, but that drains the battery quickly. When the same laptop ran Linux Mint 17.1-3 XFCE, this worked without problems (on both smartphones). I am able to connect by bluetooth, and the phone pops up a message asking to authorize the laptop to use the Internet, which I accept; the phone then says that it is sharing the connection with 1 device (at this point Linux Mint had Internet). However, the laptop just loops between retrying the connection and, after each 10 seconds, "connection failed". I can send and receive files by bluetooth without problems. Connecting either phone over USB and enabling tethering, as in Mint, makes the laptop recognize the connection as ethernet (which is okay, I guess), but the Internet still cannot be accessed. I can access the phone's files over USB without problems, and it gets recognized as the right device. I have tried, on both phones, with both bluetooth and USB, to ping 8.8.8.8, and it gives me "Network unreachable". Tethering by wifi works fine. How can I tether by USB and bluetooth?

EDIT: I know that 8.8.8.8 and 8.8.4.4 are Google's DNS servers, and that they will know which hostnames I want to resolve if I use their DNS. However, I had already tried adding 8.8.8.8 as a name server in the connection settings, and it didn't work β€” how is this file different? But yes, adding nameserver 8.8.8.8 and nameserver 8.8.4.4 to this file solved the issue for bluetooth. I still cannot tether through USB: the laptop creates "Wired connection 1" with a type of Wired Ethernet, but it cannot really connect to it (last used: never). I am unable to force this connection to be established; in the connection editor the "connect" option is disabled for this connection, and the connection manager in the tray doesn't even show it. I have tried removing it and also adding it again from the two separate phones, without luck. It may also be worth noting that before adding those nameservers to the file, bluetooth couldn't even connect as a Network Access Point, but now that works fine.

Remaining questions:
1. How do I establish the connection through USB?
2. How is the nameserver in this file different from the DNS setting in the connection settings?
USB/bluetooth tether to Kubuntu 15.10
kubuntu
I had the same problem. Try opening /etc/resolv.conf and, where it says nameserver 127.***, change it to 8.8.8.8, then add nameserver 8.8.4.4. Save it and reboot, then try your connection again. Hope it helps; it worked for me.
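For reference, the resulting /etc/resolv.conf would contain something like the fragment below. One caveat worth knowing: many distributions regenerate this file at boot or on reconnect (resolvconf, NetworkManager), so the edit may not survive a reboot unless that is accounted for:

```
# Google public DNS resolvers
nameserver 8.8.8.8
nameserver 8.8.4.4
```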
_webmaster.74352
I know you can't create goals retrospectively in Analytics and AdWords. But I have been converting most of my AdWords customers in my showroom, not online through the e-commerce part of the website. I now want them all to visit a certain page, which I will set as a conversion goal. Will the goals be recorded if I created this goal after they first got tagged via AdWords? The actual goal completion will occur, after I create the goal, when they now visit this special conversion-recording page.
Can one create a new goal for previous AdWords visitors if they now go to the goal page?
google analytics;google adwords;conversions;goals
null
_codereview.21131
Anyone have any comments? Just trying to better my code if it can be done.public class SacWidget extends AppWidgetProvider { String roseUrl; AQuery aq; public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { final int N = appWidgetIds.length; for (int i = 0; i < N; i++) { int appWidgetId = appWidgetIds[i]; Intent intent = new Intent(context, DangerRose.class); PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0); RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget_layout); views.setImageViewBitmap(R.id.ivdangerrose, getRose()); appWidgetManager.updateAppWidget(appWidgetId, views); } } private Bitmap getRose() { Bitmap bitmap = null; File f = new File("/storage/emulated/0/acr/sac/dangerrose.png"); try { bitmap = BitmapFactory.decodeStream(new FileInputStream(f)); } catch (FileNotFoundException e) { e.printStackTrace(); } return bitmap; }}
Android widget code
java;android;file
I'm not too familiar with Android, so the following are just some generic Java notes:

The fields roseUrl and aq seem unused, as does the pendingIntent local variable. You might remove them. You could use a for-each loop: for (final int appWidgetId : appWidgetIds) ... It isn't the best idea to use printStackTrace() for exceptions on Android. The getRose() method creates a stream (new FileInputStream(f)); I'm not sure whether BitmapFactory.decodeStream closes it before it returns or not. If not, you should close it.

Notes for the edit:

You should close the stream in a finally block. If BitmapFactory.decodeStream throws an exception, it won't be closed. See "Guideline 1-2: Release resources in all cases" in the Secure Coding Guidelines for the Java Programming Language. The following two lines are duplicated: File ext = Environment.getExternalStorageDirectory(); File file = new File(ext, "acr/sac/dangerrose.png"); You could extract them out to a method.
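On the stream-closing point, here is a sketch using try-with-resources, which closes the stream in all cases β€” including when an exception is thrown mid-read. Since BitmapFactory is an Android API, a plain byte read stands in for the decode call here; in the widget, BitmapFactory.decodeStream(in) would take its place:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

final class RoseLoader {
    // Reads the file's bytes; the stream is closed automatically by
    // try-with-resources, even if an exception escapes the try body.
    static byte[] loadBytes(File f) {
        try (InputStream in = new FileInputStream(f)) {
            // In getRose(), this line would be: return BitmapFactory.decodeStream(in);
            return in.readAllBytes();
        } catch (IOException e) {
            // Matches the original's null-on-failure behavior; a real app
            // should log this (e.g. Log.e on Android) rather than swallow it.
            return null;
        }
    }
}
```

Compared with a finally block, this reads more directly and cannot forget the close on any exit path (requires Java 7+; readAllBytes requires Java 9+).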