id (5–27 chars) | question (19–69.9k chars) | title (1–150 chars) | tags (1–118 chars) | accepted_answer (4–29.9k chars, may be null) |
---|---|---|---|---|
_datascience.22124 | I have had some basic experience with Machine Learning in my classes at school. However, I have never dealt with a large amount of log data before. In my PaaS company, we have customers who buy our product. After the purchase, they go through an implementation phase where they set up our product to uniquely fit their needs (this can take many months). Once this is done, they enter the production phase where their employees can start logging on and using our product. For each customer, I currently have timestamped log data for each day, and firmographic information such as number of employees, company size, etc. The log data in particular contains information for each day for each customer, such as memory used, tasks performed, etc. Goal: to predict the date when a new customer will go into the production phase. This will allow us to allocate our servers and hardware more efficiently if we know that date in advance. I currently have labels for all our customers with the dates they went into the production phase. I am not sure how I should deal with features, or which algorithms to use, when I have a bunch of logs for each customer. Thank you! | Machine Learning from log files and firmographics | machine learning;bigdata | null |
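No accepted answer is recorded for this row, but the setup it describes — variable-length daily logs per customer plus static firmographics, with a known production date as the label — is often framed by collapsing each customer's log history into fixed-size aggregate features and regressing on the number of days until production. The sketch below is only an illustration of that framing: every column name, the toy numbers, and the choice of a random-forest regressor are assumptions, not part of the original question.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Toy stand-ins for the two data sources described in the question;
# all column names and values here are invented for illustration.
logs = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "day":         [1, 2, 3, 1, 2],
    "memory_used": [10, 12, 15, 40, 42],
    "tasks":       [3, 4, 6, 20, 25],
})
firmo = pd.DataFrame({
    "customer_id":        [1, 2],
    "employees":          [50, 900],
    "days_to_production": [120, 45],   # label: observed implementation length
})

# Collapse each customer's variable-length log history into fixed-size features.
features = logs.groupby("customer_id").agg(
    mean_memory=("memory_used", "mean"),
    total_tasks=("tasks", "sum"),
    active_days=("day", "nunique"),
).reset_index()

train = features.merge(firmo, on="customer_id")
X = train.drop(columns=["customer_id", "days_to_production"])
y = train["days_to_production"]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(model.predict(X))   # predicted days until production for the toy customers
```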
_webapps.30806 | I see that Google and Blogger and maybe other have a customized Google Plus URL.Ex:http://plus.google.com/+google/posts http://plus.google.com/+blogger/posts | How can I get Custom Vanity Google plus URLs? | google;google plus | Google is only beginning to roll out these vanity URLs. Currently, only a few thousand verified and well known people and brands have it, although they will be adding more over time. See this post for (not much) more information.In the meantime, you should be able to create a redirect from your own website to your profile. (It may be argued that this is a better solution, even.) You can also setup a Google+ Badge to let visitors to your website go to your profile or follow you directly from your site. |
_codereview.812 | I have a function that takes a column title, and a response.body from a urllib GET (I already know the body contains text/csv), and iterates through the data to build a list of values to be returned. My question to the gurus here: have I written this in the cleanest, most efficient way possible? Can you suggest any improvements?def _get_values_from_csv(self, column_title, response_body): retrieves specified values found in the csv body returned from GET @requires: csv @param column_title: the name of the column for which we'll build a list of return values. @param response_body: the raw GET output, which should contain the csv data @return: list of elements from the column specified. @note: the return values have duplicates removed. This could pose a problem, if you are looking for duplicates. I'm not sure how to deal with that issue. dicts = [row for row in csv.DictReader(response_body.split(\r\n))] results = {} for dic in dicts: for k, v in dic.iteritems(): try: results[k] = results[k] + [v] #adds elements as list+list except: #first time through the iteritems loop. results[k] = [v] #one potential problem with this technique: handling duplicate rows #not sure what to do about it. return_list = list(set(results[column_title])) return_list.sort() return return_list | Getting lists of values from a CSV | python;csv | Here's a shorter function that does the same thing. It doesn't create lists for the columns you're not interested in.def _get_values_from_csv(self, column_title, response_body): dicts = csv.DictReader(response_body.split(\r\n)) return sorted(set(d[column_title] for d in dicts)) |
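The accepted answer's two-line version can be exercised on a toy response body; the sample data below is invented purely to show the call in a self-contained, runnable form.

```python
import csv

def get_values_from_csv(column_title, response_body):
    # Parse the CSV body and return the sorted, de-duplicated values
    # found under the requested column header.
    rows = csv.DictReader(response_body.split("\r\n"))
    return sorted({row[column_title] for row in rows})

# Toy body standing in for the raw GET output described in the question.
body = "id,name\r\n1,alpha\r\n2,beta\r\n3,beta\r\n4,alpha"
print(get_values_from_csv("name", body))   # ['alpha', 'beta']
```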
_cs.52989 | I am trying to optimize what we call AJAX request polling frequency in the domain of web design. Here's a general version of the problem in simple lingo. Problem statement: Suppose there are 3 persons - a consumer (C), a producer (P) and a messenger (M). The consumer and producer are some distance apart and can't communicate directly. The messenger - who works for the consumer - takes an order from the consumer, walks up to the producer, places the order and comes back to the consumer. The messenger needs some finite amount of time to make the trip between P and C. Now, different orders require different amounts of time for processing. After an order is placed, M and C have no way to know how long the order processing will take. Hence, the messenger is required to make repeated trips to the producer to check if the order is ready. It works this way: Step 1: the messenger waits for W amount of time. Step 2: M goes to the producer to see if the order is ready. Step 3: a) order ready - come back with the produced item; b) not ready - come back and repeat from step 1. Now, I tabulated the time taken by each order and plotted it against the number of orders taking that time. [ASCII histogram of number of orders vs. completion time omitted.] As you can see, most orders take around 35 to 40 units of time (let's say seconds) to complete, whereas some orders get completed in 5-10 seconds and some take 80-90 seconds. My goal: it is costly to have a high-frequency W. Right now my messenger is checking back every W seconds. Instead of keeping this checking frequency W constant, can we find some other checking frequency - maybe a variable one where we adjust the frequency as we go along - that would ensure the least overall waiting time, combining consumer and producer waiting time together. As you can tell, I am obviously not very good at articulating the problem. So, let me know if you need clarifications in the comments. ~~~ EDIT - Defining an Evaluation Model ~~~ As some of the comments suggest, it's important to include W in the evaluation model. I think what I am really trying to do here is to minimize unsuccessful pollings (i.e. polls that come back empty handed because the producer hasn't finished producing). Say the messenger takes W amount of time to do one poll. If I take a given sample set of N pollings and only a fraction S of those polls were successful, then my intuition tells me that S = function(N, W). So the question is - given a specific N, how do I find a W that maximizes S? | Algorithm to optimize polling frequency between producer and consumer | algorithms;optimization;linear programming | null |
_datascience.16921 | I would like to ask if Savitzky-Golay filtering can be applied to real-time data. I have used it on a fixed array size, but would like to extend it to output values for real-time sensor data. Can anyone refer me to an appropriate implementation, or give a hint towards an online (streaming) implementation? Thanks. | Real time noise removal using Savitzky-Golay Method | time series;preprocessing;online learning;data stream mining;noisification | null |
_unix.382981 | I installed some self-built deb packages (PAM) withsudo dpkg --force-all -i /opt/bzr/build-area/*.debtree is:/opt/bzr/build-area . libpam0g_1.1.8-3ubuntu1_amd64.deb libpam0g-dev_1.1.8-3ubuntu1_amd64.deb libpam-cracklib_1.1.8-3ubuntu1_amd64.deb libpam-doc_1.1.8-3ubuntu1_all.deb libpam-modules_1.1.8-3ubuntu1_amd64.deb libpam-modules-bin_1.1.8-3ubuntu1_amd64.deb libpam-runtime_1.1.8-3ubuntu1_all.deb pam_1.1.8-3ubuntu1_amd64.build pam_1.1.8-3ubuntu1_amd64.changes pam_1.1.8-3ubuntu1.diff.gz pam_1.1.8-3ubuntu1.dsc pam_1.1.8.orig.tar.gzImportant: What happens when a new official Ubuntu PAM version is released and I run apt dist-upgrade -y? Will it overwrite my own packages?Optional: Do I need --force-all in dpkg -i?Optional: https://code.launchpad.net/~ubuntu-core-dev/pam/ubuntu looks like a dev branch to me? Does a stable branch exist? How to get it with bzr?Related to: https://unix.stackexchange.com/a/382363/239596 | What happens to self-built packages when I `apt dist-upgrade`? | ubuntu;apt;dpkg | Yes, they will, if they use a higher version number than your packages, unless you adjust your pin priorities appropriately (or hold the packages, as pointed out by cas). Note that the repositories packages are liable to overwrite yours if you use the same version number, so you should really increment your version when you rebuild locally (typically 1.1.8-3ubuntu1.1).No, and you should avoid it unless absolutely necessary. If you think its necessary, theres probably something wrong with your packages and you should fix that instead.That code repository hasnt been updated since 2014, so I doubt its an active development repository. You can see the various pam branches on Launchpad, and clone them using for example bzr branch lp:ubuntu/vivid/pam.The way I go about dealing with this type of situation is as follows:check out the source code (debcheckout or apt-get source)if its in a repository, create a new branch with the patch Im interested inif its not, apply the patch manuallyin both cases, increment the version with an appropriate changelog entry using dch -n (without committing it, to avoid merge issues)build the package and install it (in my case, via a local package repository)When a new version of the package is released, I repeat the above; in the case of a source repository, I rebase the patch instead of starting from scratch. |
_unix.316024 | I'm new to Linux kernel. Recently I tried to install Red Hat-2.6.9-EL on VirtualBox and I compiled Linux-2.6.9 and installed on the Red Hat system. When I booted with the new kernel, I got some errors like:device-mapper: table ioctl failed: Invalid argument device-mapper: table ioctl failed: Invalid argument 2 logical volume(s) in volume group VolGroup00 now activemount: error 6 mounting ext3mount: error 2 mounting noneswitchroot: mount failed: 22unmount /initrd/dev failed: 2Kernel panic - not syncing: Attempted to kill init!I'm using SCSI controller for the hard disk(8G) on the virtual machine.Can someone help me? | Kernel panic - not syncing: Attempted to kill init! | rhel;boot;linux kernel;virtualbox | null |
_datascience.15085 | Most problems have a curve whereby the results improve as data are added but level off at some point.Are there research papers or industry results that discuss the correlation between data set size and prediction accuracy for natural language identification? | Diminishing returns in language identification data set size? | dataset;nlp;language model | null |
_vi.7905 | When I run the following ex command: ":! some_command", the message "Press ENTER or type command to continue" is somewhat annoying. Is there a way to automatically return to the current buffer without pressing ENTER? | Autoreturn after external command | external command | A way to do this is to use the silent command: ":silent !ls". This will return to normal mode right after the command (here ls) has been executed. In case the command produces output, you may want to force a redraw of the screen with the redraw command: ":execute "silent !ls" | redraw!". You can even create a new command that does this for you: "command! -nargs=+ Silent execute 'silent <args>' | redraw!" and use it like so: ":Silent !ls" |
_codereview.120452 | When I'm writing Python, I wanna be able to pull up an interpreter quickly, and import what I need easily. Previously, this involved:bash4.3 $ python3>>> import os, sys, readline, lxmlI was irked by the amount of typing in that, so I wrote a couple of tiny bash functions that live in my .bashrc, so I can just do (for example):bash4.3 $ from os import path>>> from os import path: success!>>> or:bash4.3 $ import os, sys, readline, lxml>>> imported os, sys, readline, lxml>>> Perhaps this might be regarded by some as a non-issue, but for my workflow when I just want to test something quickly, it's really handy.The functions: function import () { ARGS=$* ARGS=$(python3 -c import re;print(', '.join(re.findall(r'([\d\w]+)[, ]*', '$ARGS')))) echo -ne '\0x04' | python3 -i python3 -c import $ARGS &> /dev/null if [[ $? != 0 ]]; then echo sorry, junk module in list else echo imported $ARGS fi python3 -i -c import $ARGS}function from () { ARGS=$* ARGS=$(python3 -c import re; s = '$ARGS'.replace(',', ', '); args = ' '.join(re.findall(r'^([\d\w]+) import ([\d\w, ]+)$', s)[0]).split(' '); print('from', args[0], 'import', ' '.join(args[1:]).replace(',', '').replace(' ', ','))) echo -ne '\0x04' | python3 -i python3 -c $ARGS &> /dev/null if [[ $? != 0 ]]; then echo junk module in list else echo $ARGS: success! fi python3 -i -c $ARGS}I'm aware python3 -m module exists, however, it requires module to be a fully-fledged module, with an __init__.py and stuff. If I'm writing a small script that isn't really designed to be a module (i.e, that has module-level expressions that will be run if the module is imported), then I want to import it normally, not with -m.I'm also aware I could use && and || for short-circuiting over if [[ ]]; then; fi but I consider this more readable.I'm looking for responses about any part, really, but I'd be most interested to hear whether I'm unknowingly abusing bash in some way (quoting, perhaps?) and about the probable inefficiency of the Python code I'm using to process / sanitise the args.I'd also be happy to hear about corner cases not caught by my regex / parsing job, and how they sudo rm -rf --no-preserve-root /-ed you. | Start the Python interpreter with an import statement | python;python 3.x;bash | null |
_scicomp.23445 | I understand the basic idea of imaginary time propagation method:The wavefunction $\psi(x,t)$ as a superposition of energy eigenstates $\phi_m(x)$:$$\psi(x,t)=\sum_m \phi_m(x)e^{-iE_mt/\hbar}$$In imaginary time, ${t}\rightarrow{-it}$,$$\psi(x,-it)=\sum_m \phi_m(x)e^{-E_mt/\hbar}=e^{-E_ot/\hbar}\bigg(\phi_o+\phi_1e^{-(E_1-E_o)t/\hbar}+\phi_1e^{-(E_2-E_o)t/\hbar}+........\bigg)$$As time $t$ increases the terms with greater exponents will decay more rapidly, leaving behind the ground state.But how do i make a program or subroutine to find the ground state wavefunction using imaginary time propagation for a basic one dimensional problem, preferably in Fortran ?Ref to 7.12 in an introduction to computational physics by Pang.The macroscopic state of BoseEinstein condensate is described by theGrossPitaevskii equation:$$i\hbar\frac{\partial \Psi(x,t)}{\partial t}=\frac{-\hbar^{2}}{2m}\frac{\partial^2 \Psi(x,t)}{\partial x^2}+V_{ext}\Psi(x,t)+g|\Psi(x,t)|^2\Psi(x,t)$$where $\psi(x,t)$ is the macroscopic wavefunction, $m$ and $g$ are two system-related parameters, and $V_{ext}(x)$ is the external potential. Develop a program that solves this equation numerically. Consider various potentials, such as a parabolic or a square well.Can I apply Finite-difference method or Crank-Nickelson method, similar to simple diffusion equation, by substituting for $\frac{\partial \Psi(x,t)}{\partial t}, \frac{\partial^2 \Psi(x,t)}{\partial x^2}, |\Psi(x,t)|^2\Psi(x,t), \Psi(x,t)$, after inputting the imaginary time to get rid of '$i$' ?or how do I apply imaginary time evolution to Schrodinger equation of very basic 1D system ?Note: It'd be very helpful if someone can give me advice on where to start. | imaginary time propagation to find ground state wavefunction | approximation algorithms;quantum mechanics | null |
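The imaginary-time recipe spelled out in the question can be tried directly with a plain finite-difference sketch: take small explicit Euler steps ψ ← ψ − dτ·Hψ and renormalise after every step, so all excited components decay away. The snippet below is only a minimal illustration for the linear 1D Schrödinger equation with a parabolic well (ħ = m = 1); the grid, step size and iteration count are arbitrary choices, and the Gross–Pitaevskii nonlinearity g|ψ|²ψ would simply be added to Hψ inside the loop. A Crank–Nicolson or split-step update, as the question mentions, is the more robust way to do the same thing.

```python
import numpy as np

# Grid and parabolic trap, in units where hbar = m = 1 (all values illustrative).
n = 400
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
V = 0.5 * x**2           # external potential V_ext(x)
dtau = 1e-3              # imaginary-time step (small, for explicit-Euler stability)

psi = np.exp(-x**2)      # arbitrary initial guess with some ground-state overlap
psi /= np.sqrt(np.sum(psi**2) * dx)

def H_apply(psi):
    # -(1/2) d^2/dx^2 psi + V psi via central differences; psi is ~0 at the box edges.
    lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx**2
    lap[0] = lap[-1] = 0.0
    return -0.5 * lap + V * psi      # add g * psi**3 here for the Gross-Pitaevskii case

for _ in range(20000):
    psi = psi - dtau * H_apply(psi)              # Euler step in imaginary time
    psi /= np.sqrt(np.sum(psi**2) * dx)          # renormalise every step

E0 = np.sum(psi * H_apply(psi)) * dx
print("ground-state energy ~", E0)               # close to 0.5 for the harmonic well
```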
_unix.5054 | I find this a highly annoying feature on a wide screen monitor that my mostly used apps - terminal and gedit always open directly under the top-left corner of my screen and I have to drag them to my eye position each and every-time.I have tried installing the CompizConfig Settings Manager and using the feature to position windows centre, but this has had no effect - the force feature here isn't working for me either:Window Management -> place windows -> Fixed Window Placement -> Windows with fixed positionsexample: gedit 200 200 keep-in-work-area-to-yesI can use e.g. gnome-terminal --geometry=140x50+50+50 for the terminal but this doesn't work for gedit.Any ideas?Thanks | Gnome - windows always open top left | gnome;windows;gedit;compiz | null |
_softwareengineering.67945 | I am fairly new to programming, I have studied in computer science for 3 years at college, but as you know, school is only 2% of what really makes one a fully-fledged programmer.I have a lot of trouble understanding why people say language x is more efficient that language y. I only understand when it comes to pre-compiled vs runtime compiled. I understand defining data types like a constant in code is bound to be faster than letting the computer/language figure it out(like php or ruby), but when it comes to using C or Java what is it that makes C faster? Aren't they both going to be compiled into machine language in the most efficient way possible?To me, it seems as if the only difference between using a language like C or Java is; a higher level language like java would be easier to organise and write/maintain large applications with classes and inheritance. But I feel as if it should really make no difference when once it is compiled. Can someone explain?btw i only know higher level languages like php, java, ruby, vb, c#. Maybe that's why it is hard for me to imagine? the next language i want to explore is most probably C | Programming languages differences and efficiency, does it matter? | java;c;programming languages;efficiency;language features | null |
_unix.56801 | Why is float missing in awk on my RHEL 5.8? Was it replaced by some other function?On Solaris:echo Foampile=123 | awk -F= '{ print float($2) <-> $1 }'returns123<->Foampileon RHEL 5.8awk: (FILENAME=- FNR=1) fatal: function `float' not defined | Missing float function in awk on RHEL 5.8 | rhel;awk | Solaris /usr/bin/awk is a very old awk that doesn't support user functions. So in that awk, float(123) is the same as anything(123) or anything 123, that is the concatenation of the value of the float or anything variable (empty if not set) and 123. So, it's not an error, but it does nothing.Had you writtenecho Foampile=123 | awk -F '=' '{float=x; print float($2) <-> $1}'you'd have seenx123<->FoampileI don't think that there is any awk implementation that has a float function.What would you expect that function to do anyway?On the other hand, modern POSIX awks like Solaris /usr/xpg4/bin/awk or nawk or gawk do support user functions, so in those, unless you define the float or anything function, you'll see that error.echo Foampile=123 | awk -F '=' '{ print $2 <-> $1 }'would work exactly the same (and would work with modern awks).In modern awks, to disambiguate between a function call and the concatenation of a variable and something within brace, you need to add at least one extra space:$ echo x | awk '{print foo ($1)}'x$ echo x | awk 'function foo(x) {return y}; {print foo($1)}'y |
_datascience.5287 | Any suggestion on what kind of dataset, let's say $n \times d$ ($n$ rows, $d$ columns), would give me the same eigenvectors? I believe it should be one with the same absolute value in each cell, like alternating +1 and -1, but it seems to work otherwise. Any pointers? | Dataset to give same eigenvectors? | machine learning;data mining;python | null |
_unix.198370 | I have set a good ksh environement on old unixPATH=$PATH:/usr/lib/acct:/usr/sbin:/sbin:/usr/ucbexport PATHEDITOR=viFCEDIT=viexport EDITORexport FCEDITHOSTNAME=`uname -n`HISTSIZE=500LOGNAME=mynameTERM=386ATPS1=\$LOGNAME@\$HOSTNAME:\$PWD\$ set -o emacsstty 38400 intr ^C kill ^U tabs ixon ixoff ixanysetcolor white black alias type=whence -valias __A=`echo \020` # up arrow = ^p = back a commandalias __B=`echo \016` # down arrow = ^n = down a commandalias __C=`echo \006` # right arrow = ^f = forward a characteralias __D=`echo \002` # left arrow = ^b = back a charactoealias __H=`echo \001` # home = ^a = start of linealias __Y=`echo \005` # end = ^e = end of lineWith this i have search history with arrows,etc,my question is: is possible to make an alias for ctrl+r search history?Old ksh support search history?I'm on unix svr4 ATT | KSH on old unix systemV: search history | keyboard shortcuts;ksh;command history;svr4 | To search backward in your ksh command history, Ctrl-R in emacs mode ought to work, even if you're running an old version such as ksh88. It is not an incremental character-by-character search like in bash. You have to type Ctrl-R, then the string you want to search for, then Enter. |
_codereview.132004 | GoalsI want to be able to call async code using the MVVM pattern I want tobe able to add loading animations/screens without having to addproperties for each command on my viewmodelsI may want to be able to cancel these operationsThis is what I came up withpublic abstract class CommandBase : ICommand{ private readonly Func<bool> _canExecute; public CommandBase() { } public CommandBase(Func<bool> canExecute) { if (canExecute == null) throw new ArgumentNullException(nameof(canExecute)); _canExecute = canExecute; } public bool CanExecute(object parameter) { return _canExecute == null || _canExecute(); } public abstract void Execute(object parameter); public void RaiseCanExecuteChanged() { CanExecuteChanged?.Invoke(this, EventArgs.Empty); } public event EventHandler CanExecuteChanged;}public class AsyncCommand : CommandBase, INotifyPropertyChanged{ private readonly Func<CancellationToken, Task> _action; private CancellationTokenSource _cancellationTokenSource; private bool _isRunning; public bool IsRunning { get { return _isRunning; } set { _isRunning = value; OnPropertyChanged(); } } private ICommand _cancelCommand; public ICommand CancelCommand => _cancelCommand ?? (_cancelCommand = new RelayCommand(Cancel)); public AsyncCommand(Func<CancellationToken, Task> action) { if (action == null) throw new ArgumentNullException(nameof(action)); _action = action; } public AsyncCommand(Func<CancellationToken, Task> action, Func<bool> canExecute) : base(canExecute) { if (action == null) throw new ArgumentNullException(nameof(action)); _action = action; } private void Cancel() { _cancellationTokenSource?.Cancel(); } public override async void Execute(object parameter) { IsRunning = true; try { using (var tokenSource = new CancellationTokenSource()) { _cancellationTokenSource = tokenSource; await ExecuteAsync(tokenSource.Token); } } finally { _cancellationTokenSource = null; IsRunning = false; } } private Task ExecuteAsync(CancellationToken cancellationToken) { return _action(cancellationToken); } public event PropertyChangedEventHandler PropertyChanged; [NotifyPropertyChangedInvocator] protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); }}It can be used like this:<Button Content=Start Command={Binding UpdateDisplayTextCommand}/><Button Content=Cancel Command={Binding UpdateDisplayTextCommand.CancelCommand}/><ProgressBar IsIndeterminate={Binding UpdateDisplayTextCommand.IsRunning} />Any faults on this implementation? I'm also open to ideas on features which could be added to this class. | AsyncCommand using MVVM and WPF | c#;wpf;mvvm | null |
_unix.346081 | Part of the job of every Linux system administrator is to log in to a lot of production servers, which means that something can easily go wrong and break a production machine. There are a lot of cases where someone is tired, or gets a call during the night, and runs rm -rf test instead of rm -rf test.1, for example. There are ways to prevent something like that by blocking some commands at the user or kernel level, but doing it with root access would not be good practice. Also, when you connect to a lot of machines (100-200), it is a hard task to configure such restrictions on all of them. So I am looking for some way this can be achieved easily and safely. It should work only when you are logged in via SSH, and it should be possible to remove it at times. I am thinking of something like a custom command in the preferences of the terminal/terminator/SSH agent or some other command-line tool. I am not sure if it is possible, but I think it would be useful for a lot of people. | Restrict list of commands over SSH | linux;ssh;command line | null |
_webapps.1372 | Is there a way to set a default view on Google News? I like reading Google News' Business and Science Tech sections and I would like to make it so by default one of these pages opens. Is this possible? | Is there a way to set a default view on Google News? | google news | No this is not possible.Closest you can do is bookmark the section or rearrange it so that it is one of the upper most sections on the Google News home page. |
_softwareengineering.286547 | Currently I'm building a large framework whose purpose is to run several algorithms in sequence.Many of these algorithms have interdependencies in data structure - the output of one algo is the input of another algo, for instance MagicVectorXY. The algorithms need to be modular and compact, because some of these algorithms will be reused in many other separate frameworks. And we want to avoid unreachable code as much as possible.Our current solution is to compile each algorithm into its own .dll, while shared interfaces/3rd party math libraries/shared data structures reside in a CommonInterfaces.dll. However, this leads to proliferation of Projects in the Visual Studio Solution, and there's still significant unreachable code in the CommonInterfaces.dll.What other strategies are there to manage such a modular framework?PS: We foresee ~10 algorithms, and things like database layer, WCF, scheduling and Unit Tests for each project are all their own projects as well. So around 25-30 projects estimated in that single solution. | Strategies to manage a modular C# framework | c#;visual studio;dependency management | The information here is a bit scarce...First of all (if you haven't already) you may want to check out SOLID. Follwing these rules will give your sofware a good dependency structure. A nice dependency structure is key to being able to swap modules in and out. (And you will not need to reference the datalayer to compile your algorithms)What strikes me the most in your description is shared interfaces. To make each algorithm a true module you want each module to individually define the interfaces it uses. See ISP. Then the implementation of the different interfaces can be in a single class that references the algorithm .dll:s but you can also provide different implementations for different frameworks if needed. Or maybe you have a default implementation that you can package with the module.Also, don't be afraid to add projects. And I would recommend thinking about structuring the code by functionality (module/algorithm/etc.) instead of just different layers as that will allow you to for example pick the parts of the data layer you need. |
_unix.334486 | I have a recent.sql.gz file on a remote server. What I try to accomplish is the following: I want to open the file, push the content over SSH to my local machine, use zcat to uncompress the content and pipe it to mysql. Something like this:ssh user@remote 'cat recent.sql.gz' | zcat | mysql | Get contents of a .gz file over SSH and pipe it to zcat and mysql | ssh;cat | null |
_webmaster.42628 | I have a module in my site which can be accessed like sitename.com/module/This module has pagination.In the first page i have two paragraphs which describe about the site. This will appear in all pages and pagination links can be crawled.So page 1 will have two para and a set of listing and page 2 will have the same para with totally different set of listing.So here the paragraph is duplicated were as the listing below the paragraph are nonidentical that is they are unique for all pages.Also the title tag and description are duplicated for all pages....Assuming that I am adding page 2, page 3 to title tags and description... will it make a difference... will that too be considered duplicate content... (because only the words page 2 are page 3 is extra in the title and description of all pages... The rest of the title text ares same... like 'once upon a time page 1' for first page and once upon a time page 2 for the second page and so on.I assume to following to not to be penalized by search engines as duplicate.I can make that paragraph appear only in first page and I can hide for all other pagesIf I do the above then the title, description will be duplicate though I add extra text like 'page 2' in the title and description and in the content above the to paraOr instead of the above 2 points i can use canonical URL for pagination.This this correct? besides my questions are:Can I have duplicate title tags while using canonical URL?While using canonical URL and duplicate content in title with the string 'page 2'... will it makes a difference. for example the title is 'once there lived a king' and then other pages will have 'once there lived a king page 2', etc...If using canonical URL can I have a paragraph in all pages or should I show it on only first page to prevent duplicate content.Of I am not using canonical URL and showing the same static para in all pages with different listing and 'page 2', page 3 in title... will it become duplicate content?I hope this makes sense, if you find anything is confusing then please specify in comments so that i will update my question according to your suggestion. | pagination and duplicate content with one paragraph in each page with totally different listing | seo;duplicate content;canonical url;pagination | What are you hoping for from pagination?Users to click through multiple pages to find what they are looking for?Googlebot to be able to find all your items and crawl their pages?A way to distribute pagerank to each of your products so that the individual product pages work well?Organic search engine traffic to each of the paginated pages?These are typically what people who implement paginated listing pages are hoping for. Unfortunately, none of those will happen. I will address them one at a time.Users Using PaginationMy experience is that users don't like using pagination. When I've looked into the usage data on sites that have pagination, 2% to 10% of users even click on the pagination. Other users tend to find something on page 1, use search to find what they are looking for, or prefer using filters to narrow down the choices.Googlebot Will Find EverythingThis is one place where pagination can still be used. Googlebot will be able to find everything by crawling all the paginated pages. However, it is not the best way to tell Googlebot about your site. A sitemap.xml file will serve the same purpose. Passing PagerankPagination is an extremely bad and inefficient way to pass pagerank around your site. 
Take the case where page 1 has 10 products and a link to page 2. 90% of the pagerank from page 1 will go to the products. 10% will go to page 2. The 10 products listed on page 2 will have 10% of the pagerank that the products on page 1 had. Page 3 will have 1% of the pagerank that page 1 had. By the time you get to page 4, there is not enough pagerank to matter.Getting Search Traffic to Land on Each PageFirst there is the pagerank issue. The only page in the pagination chain that will have enough pagerank to rank for anything is page 1. Then there is the duplication issue which you are asking about. Why would Google index two pages with so much similar content? Then there is the keyword targeting issue. Are you going to target page 2 at a different keyword than page 1? If it is targeted at the same keyword then the first page with more pagerank will just rank. If its targeted at a different keyword, why wouldn't you build a better landing page targeted at that keyword?So What To DoImplement sitemap.xmlImplement filters to allow users to narrow the list of items they seeLink your product pages to each other to more efficiently pass pagerank and let Googlebot find all your products through links. You can use similar products, related products, people who looked at this product also looked at X.If what I have written here doesn't convince you to get rid of pagination entirely then:Instead of canonical, use the recommended rel=prev and rel=next to let Googlebot know about the paginationUse the same title and same text on each page of the pagination. Googlebot won't index and send traffic to page 2, but there isn't much you can do to make that happen anyway. Even if the pages aren't indexed, they will still be crawled and Googlebot will be able to use them to find all your products. |
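The dilution argument in the answer above is easy to make concrete. Using the answer's own simplification — ten product links per page, so roughly 90% of a page's PageRank goes to products and 10% to the "next page" link — each successive page passes on an order of magnitude less:

```python
# Follows the answer's simplified model: 10 products per paginated page,
# 90% of the page's PageRank split among them, 10% flowing to the next page.
page_rank = 1.0                      # arbitrary value for page 1
for page in range(1, 5):
    per_product = 0.9 * page_rank / 10
    print(f"page {page}: each product receives {per_product:.4f}")
    page_rank *= 0.1                 # only 10% reaches the next page
```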
_cs.69362 | Given the alphabet $\Sigma = \{a, b\}$, construct a FSA for the following language: $$L=\{\omega \in \Sigma^* : |\omega|_a \text{ is even and } |\omega|_b \text{ is odd}\}$$ where $|\omega|_x$ is the number of occurrences of $x$ in $\omega$. Basically, only strings where the number of $a$'s is even and the number of $b$'s is odd. My attempt was the following [diagram omitted], whereas this is my teacher's solution [diagram omitted]. I wonder if mine is correct, too. I tried to follow the paths and I think that mine works. | Is the FSA for that language right? Comparison between two solutions | formal languages;finite automata | null |
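Neither state diagram is reproduced in this row, but both can be checked the same way: the language has a canonical four-state recogniser whose state is just the pair (parity of a's, parity of b's), accepting exactly when that pair is (even, odd). A correct FSA — the asker's or the teacher's — must agree with the following reference simulation on every input; the test strings below are arbitrary.

```python
def accepts(word):
    # State = (parity of a's, parity of b's); start at (0, 0),
    # accept when the a-count is even and the b-count is odd.
    a_par, b_par = 0, 0
    for ch in word:
        if ch == 'a':
            a_par ^= 1
        elif ch == 'b':
            b_par ^= 1
        else:
            return False            # symbol outside the alphabet {a, b}
    return (a_par, b_par) == (0, 1)

for w in ["", "b", "ab", "aab", "abab", "bbb", "aabbb"]:
    print(repr(w), accepts(w))
```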
_datascience.13114 | I am trying to find the common topics between articles read using the respective tags attached to each article. Background of my mini project:The problem I am trying to solve involves looking at articles read by a group of readers who have searched the same keyword, in order to gain better understanding on the nature of content they are interested in.As I have understood, topic models are commonly used for topic extraction. I'd like some advice on whether this would be suitable for my problem, given that I already have a dataset that contains the tags ('topics') of the articles. Or would a simple probability model be more suitable?Illustration for simple probability model:Keyword searched: lifestyleArticles read by User 1: fashion, health, organic food, clean eatingArticles read by User 2: fitnessArticles read by User 3: recipes, diet plan, clean eatingOutcome: 25% clean eating, 12.5% diet plan etc...Sorry, I hope my explanation isn't confusing! | Topic modelling or simple case of calculating probabilities? | text mining;topic model;probability | null |
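The "simple probability model" illustrated in the question is just the relative frequency of each tag over all articles read for the keyword, and a few lines reproduce the numbers given in the example (the lists below are copied from the illustration):

```python
from collections import Counter

# Tags of the articles read by the three users who searched "lifestyle",
# taken from the illustration in the question.
reads = [
    ["fashion", "health", "organic food", "clean eating"],   # user 1
    ["fitness"],                                              # user 2
    ["recipes", "diet plan", "clean eating"],                 # user 3
]

tags = [tag for user in reads for tag in user]
counts = Counter(tags)
total = len(tags)
for tag, n in counts.most_common():
    print(f"{tag}: {n / total:.1%}")
# clean eating -> 25.0%; every other tag -> 12.5%, matching the stated outcome
```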
_cs.7559 | I had an interview today, and the interviewer has told me about a theorem (of someone called Hill- or Hell-something) which states that for a non-deterministic algorithm there exists a deterministic algorithm of some time complexity and a space complexity of no more than the original space complexity times log(n).I am looking for that theorem (couldn't find it on Google). Thanks! | Logarithmic space difference between deterministic and non-deterministic algorithms | algorithms;space complexity;nondeterminism | null |
_unix.171951 | Is there a utility software (or an easy method to do it from shell script) to display a serial port's status i. e. blinking RXD, TXD, DCD, DTR, DSR, RTS, CTS? Particularly, I need to monitor whether DCD line is set most of the time and momentarily cleared on some interval. The port doesn't need to be sniffed, it's okay to open it exclusively.In DOS and Windows world, it's usual for terminal emulator and other modem-related software to display pin status, either in GUI or in console applications. However, I couldn't find an alternative even for Linux (although some say it may be possible to examine /proc/tty/driver/serial by hand, if it exists), not to mention FreeBSD, which is my actual target. Common tools like cu and minicom only display port settings at most, not the status. | Viewing (monitoring) line status of a serial port | freebsd;serial port | null |
_unix.252954 | sh-3.2# yum updateLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile* epel: www.muug.mb.ca* base: mirror.its.sfu.ca* updates: mirror.its.sfu.ca* webtatic-el5: us-east.repo.webtatic.com* extras: mirror.its.sfu.ca* addons: mirror.netflash.netepel | 3.6 kB 00:008596812757300b1d87f2682aff7d323fdeb5dd8ee28c11009e5980cb5cd4be14-primary.sqlite.bz2 | 1.2 kB 00:00http://dev.centos.org/centos/5/testing/x86_64/repodata/8596812757300b1d87f2682aff7d323fdeb5dd8ee28c11009e5980cb5cd4be14-primary.sqlite.bz2: [Errno -3] Error performing checksumTrying other mirror.Error: failure:repodata/8596812757300b1d87f2682aff7d323fdeb5dd8ee28c11009e5980cb5cd4be14-primary.sqlite.bz2 from c5-testing: [Errno 256] No more mirrors to try.CentOS release 5.3 (Final)I can't find what is broken. I've tried 'yum clean all' and adding new repositories. | CentOs 5.3, Yum Update Fails | yum;upgrade | null |
_webapps.19619 | I would like to have an introductory video on my homepage, and I'm considering embedding a YouTube player.A problem I see is that, at the end of the video, YouTube will be suggesting other videos. This is a good chance for users to escape the website.Can I somehow modify this end of video page so that it displays a blank screen, or something? | Modifying YouTube end of video page | youtube | Yes. There's a rel=0 parameter for precisely this purpose, that was introduced in June 2007. Read the YouTube embedded player parameter documentation for details. |
_codereview.167585 | OverviewI am designing one api that will be integration with another system, however I stumbled upon one code which I know it can be improved but I don't know how to do it properly.Code [HttpPost] [Route(Reserve)] public IHttpActionResult Reserve([FromUri]ReserveData reserveData, [FromBody]BetsData bet) { var result = new ReserveResponse(); // Question 1. if (reserveData == null || !IsDataForReserveValid(reserveData)) { // Question 2. return Ok(_responseStringBuilder.BuildWrongRequestResponse().Create()); } // Question 3. if (!_customerService.CheckIfCustomerExists(reserveData.cust_id)) { return Ok(_responseStringBuilder.BuildCustomerNotFoundResponse().Create()); } if (_customerService.IsCustomerRestricted(reserveData.cust_id)) { return Ok(_responseStringBuilder.BuildRestrictedCustomerResponse().Create()); } if (!_reserveService.ReserveAmount(reserveData.cust_id, reserveData.amount)) { return Ok(_responseStringBuilder.BuildInsufficientFundsResponse().Create()); } if (bet != null) { } _reserveService.InsertReserve(reserveData); return Ok(_responseStringBuilder.BuildNoErrorsResponse().Create()); }RequirementsThis is simple action that should return Ok with string in its response content. QuestionsIs there any better what of checking for valid input? I've createdhierarchy of methods validating input of each action in thiscontroller(because most of the input for different actions is actually repeating) asprivate methods in the controller.I've created string builder pattern and I build each time my request with needed properties. Is this the best way? There is some business logic which I've put in the controller, but I am not sure if it should be in the service layer or not Should I leave it like that or invoke single method from a service which does all the logic by itself there and simply returns result to me if there are errors or not?Example response would be:...(some data)error_code=NoErrors\r\nerror_message=There were no errors\r\n | Designing better api controller | c#;design patterns;controller;asp.net web api | null |
_webmaster.3452 | Is there a site out there that has a bunch of typical HTML for seeing what the CSS looks like once applied? I mean, something full of table, a, div, span, p, whatever. You go there, paste in your CSS and see the results. | CSS style viewer/tester | css;html | I just stumbled across http://colorschemedesigner.com/.It's great for choosing colors and will show you example pages to see what the color scheme would look like applied to a typical page.Update: the website is now http://paletton.com |
_codereview.122063 | I found another user asking for an application to be written, it sounded simple enough and a good way to work out my C# muscles and try to get them into a better shape. I really liked the idea of learning the difference between people's ages so I decided I would code this, I did enlist the help of Stack Overflow of which I have given attribution for the code I used.Here is the original criteriaFor example:Please enter the number of siblings: 3 Please enter date of birth of sibling 1: 01-01-1990 Please enter date of birth of sibling 2: 05-03-1995Please enter date of birth of sibling 3: 08-05-1998 Age of sibling 1 is: 25 years 2 months 19 days Age of sibling 2 is: 20 years 0 months 15 days Age of sibling 3 is: 16 years 10 months 12 days Difference between sibling 1 and 2 is: 05 years 2 months and 4 days Difference between sibling 2 and 3 is: 03 years 02 months and 3 daysThis is only a console application (so far).I started out writing the code in Main and soon realized that I wanted some custom objects, so I first created a Human Interface (nothing special in it, yet) which holds a simple DOB, in the future this describe other things and may possibly need to be named something else (most likely). The Sibling Class implements the Human Interface (and not much more, yet).Human.csinterface Human{ DateTime DOB { get; set; }}Sibling.csclass Sibling : Human{ public DateTime DOB { get; set; } public Sibling (DateTime dateOfBirth) { this.DOB = dateOfBirth; }}I also made use of some DateTime manipulation, along with a struct of someone else's creation (it came in rather handy)DateTimeExtensions.csstatic class DateTimeExtensions{ public static int AgeInYears (DateTime age1, DateTime age2) { return Math.Abs(age1.Year - age2.Year); } public static int AgeInMonths (DateTime age1, DateTime age2) { return ((age1.Year - age2.Year) * 12) + (age1.Month - age2.Month); } public static int AgeInDays (DateTime age1, DateTime age2) { return Convert.ToInt32(((age1.Year - age2.Year) * 365.25) + (age1.DayOfYear - age2.DayOfYear)); } public static int AgePartYears(DateTime age1, DateTime age2) { return DateTimeSpan.CompareDates(age1, age2).Years; } public static int AgePartMonths(DateTime age1, DateTime age2) { return DateTimeSpan.CompareDates(age1, age2).Months; } public static int AgePartDays(DateTime age1, DateTime age2) { return DateTimeSpan.CompareDates(age1, age2).Days; }}DateTimeSpan Structresides in DateTimeExtensions currently/// <summary>/// http://stackoverflow.com/a/9216404/1214743/// http://stackoverflow.com/users/189950/kirk-woll/// answered Feb 9 '12 at 18:14/// </summary>public struct DateTimeSpan{ private readonly int years; private readonly int months; private readonly int days; private readonly int hours; private readonly int minutes; private readonly int seconds; private readonly int milliseconds; public DateTimeSpan(int years, int months, int days, int hours, int minutes, int seconds, int milliseconds) { this.years = years; this.months = months; this.days = days; this.hours = hours; this.minutes = minutes; this.seconds = seconds; this.milliseconds = milliseconds; } public int Years { get { return years; } } public int Months { get { return months; } } public int Days { get { return days; } } public int Hours { get { return hours; } } public int Minutes { get { return minutes; } } public int Seconds { get { return seconds; } } public int Milliseconds { get { return milliseconds; } } enum Phase { Years, Months, Days, Done } public static DateTimeSpan CompareDates(DateTime date1, 
DateTime date2) { if (date2 < date1) { var sub = date1; date1 = date2; date2 = sub; } DateTime current = date1; int years = 0; int months = 0; int days = 0; Phase phase = Phase.Years; DateTimeSpan span = new DateTimeSpan(); while (phase != Phase.Done) { switch (phase) { case Phase.Years: if (current.AddYears(years + 1) > date2) { phase = Phase.Months; current = current.AddYears(years); } else { years++; } break; case Phase.Months: if (current.AddMonths(months + 1) > date2) { phase = Phase.Days; current = current.AddMonths(months); } else { months++; } break; case Phase.Days: if (current.AddDays(days + 1) > date2) { current = current.AddDays(days); var timespan = date2 - current; span = new DateTimeSpan(years, months, days, timespan.Hours, timespan.Minutes, timespan.Seconds, timespan.Milliseconds); phase = Phase.Done; } else { days++; } break; } } return span; }}I haven't really dove into the Struct yet to see if there was anything that I could make better, I really want to create some unit tests so that I don't have to run the code every time I want to test something in the code.and last but not least, the ugliest part of any application the Main.Program.csstatic void Main(string[] args){ Console.WriteLine(Please enter the number of siblings); var siblingCount = new int(); int.TryParse(Console.ReadLine(), out siblingCount); var siblings = new List<Sibling>(); for (int i = 1; i < siblingCount + 1; i++) { Console.Write(Please Enter the date of birth for sibling + i.ToString() + :); siblings.Add(new Sibling(DateTime.Parse(Console.ReadLine()))); } siblings = siblings.OrderByDescending(x => x.DOB).ToList(); for (int i = 1; i < siblings.Count + 1; i++) { var diff = DateTimeSpan.CompareDates(siblings[i - 1].DOB, DateTime.Now); Console.WriteLine(Age of sibling + i + is: + diff.Years + years, + diff.Months + months and + diff.Days + days.); } for (var i = 1; i < siblingCount; i++) { var diff = DateTimeSpan.CompareDates(siblings[i - 1].DOB, siblings[i].DOB); Console.WriteLine(Difference between sibling + i.ToString() + and + (i + 1).ToString() + is + diff.Years + years, + diff.Months + months and + diff.Days + days.); } Console.ReadLine();}What do you think?How can I clean up the Console.WriteLine()'s all over the place?I know that the next step is exception handling, is my code supportive of this next step?Is there anything that I could do to get my code ready for unit testing? | Age Calculations (First Draft) | c#;beginner;datetime | Human.csWhy is there a set method? I don't expect date of birth to change. This is a big deal, state and data are very different things. Always default to making things immutable/readonly.I don't like DOB, use DateOfBirth here. A good rule of thumb is that length of names should be proportional to scope. As a variable in a method dob may be fine, as a member in a public interface not so much.The convention in C# is naming interfaces IHuman. 
IDateOfBirth may be a better name btw.Good that the interface is minimal, this will allow for it to be used on things like public class Dog : IDateOfBirthSibling.csGiven we make date of birth readonly we can add some validation (if it makes sense in your application):internal class Sibling : IDateOfBirth{ public Sibling(DateTime dateOfBirth) { if (dateOfBirth > DateTime.UtcNow) { throw new ArgumentException(Date of birth cannot be in the future, nameof(dateOfBirth)); } if (dateOfBirth < DateTime.UtcNow.AddYears(-150)) { throw new ArgumentException(Date of birth must be less than 150 years from now, nameof(dateOfBirth)); } this.DateOfBirth = dateOfBirth; } public DateTime DateOfBirth { get; }}The convention in C# is to specify the internal explicitly. Not important, just mentioning it.DateTimeSpan.csGood that you link the source in a comment.With C#6 you can clean it up using get only properties. Nothing wrong with public readonly fields either, might be controversial idk.Skipping the rest of the code from the SO answer.Program.csvar siblingCount = new int(); use var siblingCount = 0; as it reads clearer.You may want to handle the case where the user inputs an illegal number. `int.TryParse() returns a bool so you can do:if (!int.TryParse(Console.ReadLine(), out siblingCount)){ // handle invalid input}Avoid concatenating strings with +. With C#6 you can use string interpolation like this:for (var i = 0; i < siblingCount; i++){ Console.Write($Please Enter the date of birth for sibling {i + 1}:); var line = Console.ReadLine(); DateTime dateOfBirth; if (!DateTime.TryParse(line, out dateOfBirth)) { // handle invalid input } siblings.Add(new Sibling(dateOfBirth));}DateTime has TryParse and TryParseExact which may be a better fit for this. Parse will crash your program with a FormatExceptionif user inputs an invalid date.Storing var line = Console.ReadLine(); in a variable is nice for debugging.Prefer to loop over the collectionfor (var i = 0; i < siblings.Count; i++){ var sibling = siblings[i]; var diff = DateTimeSpan.CompareDates(sibling.DateOfBirth, DateTime.Now); Console.WriteLine($Age of sibling {i + 1} is: {diff.Years} years, {diff.Months} months and {diff.Days} days.);}TestingThe logic in DateTimeExtensions and DateTimeSpanis very easy to test.SummaryThe public set for date of birth is the biggest issue with this code. |
_cs.11872 | Is there a common code metric for code redundancy or code cloning? I think I read a definition somewhere, where the LOC (lines of code) of the redundant code was measured. I also searched for references, but didn't find a paper that seemed like a good or trustworthy reference. | Code metric for code redundancy or code cloning | reference request;empirical research | null |
_unix.235187 | I have an embedded system running Debian. It is supposed to auto-update in order to pull in security fixes. But Debian's default kernel lacks a hardware driver that this system needs, so it will need to run a self-compiled kernel.Is there a way to request that, whenever a new kernel-source package comes out, the system should make oldconfig with the addition of that one config change we need, and install and use the resulting kernel binary?Alternatively, if that's impossible for Debian, is there a distribution with similar long-term-stable qualities (so... not exactly Gentoo) that allows doing this? | What's the Debian way to retain a kernel config change through updates? | debian;kernel;package management | null |
_unix.129191 | I have a problem with the output of a program. I need to launch a command in bash and take its output (a string) and split it to add new lines in certain places. The string looks like this:battery.charge: 90 battery.charge.low: 30 battery.runtime: 3690 battery.voltage: 230.0 device.mfr: MGE UPS SYSTEMS device.model: Pulsar Evolution 500basically it is an xxx.yy.zz: value, but the value may contain spaces.Here's the output I'd like to getbattery.charge: 90battery.charge.low: 30battery.runtime: 3690battery.voltage: 230.0device.mfr: MGE UPS SYSTEMSdevice.model: Pulsar Evolution 500 I have an idea to search for first dot and then look back from that position for space to put a new line there, but I'm not sure how to achieve it in Bash. I'm still a beginner. | How to split a string into an array in bash | bash;string;split | Pure bash solution, no external tools used to process the strings, just parameter expansion:#! /bin/bashstr='battery.charge: 90 battery.charge.low: 30 battery.runtime: 3690 battery.voltage: 230.0 device.mfr: MGE UPS SYSTEMS device.model: Pulsar Evolution 500'IFS=: read -a fields <<< $strfor (( i=0 ; i < ${#fields[@]} ; i++ )) ; do f=${fields[i]} notfirst=$(( i>0 )) last=$(( i+1 == ${#fields[@]} )) (( notfirst )) && echo -n ${f% *} start=('' $'\n' ' ') colon=('' ': ') echo -n ${start[notfirst + last]}${f##* }${colon[!last]}doneechoExplanation: $notfirst and $last are booleans. The part before the last space ${f% *} isn't printed for the first field, as there is no such thing. $start and $colon hold various strings that separate the fields: at the first item, notfirst + last is 0, so nothing is prepended, for the rest of the lines, $notfirst is 1, so a newline is printed, and for the last line, the addition gives 2, so a space is printed. Then, the part after the last space is printed ${f##* }. Colon is printed for all lines except the last one. |
_codereview.158217 | I've written a class that allows me to easily create sprites on the screen, and then do things like: setting an image or an animation; move the sprite as a platformer or as a top down game; and test for many things like collision and orientation. So far I've found this to be very useful, and it has made it really easy to make games. Now I've finished getting everything to work, and writing a text file of the documentation I thought that I'd post it here for review (I'll post the documentation if requested).The idea is that it would be imported into a file using import classes.sprite.Sprite. With classes being the folder that it's in, sprite.py the name of the file, and Sprite the name of the class.The folder structure would be:Projects/Pygame/Game/main.pyProjects/Pygame/Game/classes/sprite.pyProjects/Pygame/Game/data/The data folder is used to store any images or text files that the game will need.import os, pygamefrom pygame.locals import *from math import *pygame.init()WIDTH = NoneHEIGHT = NoneSCREEN = NoneFPS = Nonepath = os.path.abspath(os.path.join(os.path.dirname( __file__ ), '..', 'data'))def load_rotated_image(sprite): '''Loads an image for the rotate functions in Sprite class''' sprite.surface = pygame.transform.rotate(sprite.image, sprite.angle) sprite.rect = sprite.surface.get_rect(center = (sprite.rect.x + sprite.rect.width / 2, sprite.rect.y + sprite.rect.height / 2))def init(surface, fps): '''Initialises the sprite class''' global WIDTH, HEIGHT, SCREEN, FPS #I know globals are bad but I don't know how to avoid them here SCREEN = surface WIDTH, HEIGHT = SCREEN.get_size() FPS = fpsclass Sprite(object): '''Creates a sprite object''' def __init__ (self, x = 0, y = 0, width = 50, height = 50, colour = (255, 255, 255), boundry = None, Clamp = False, flip = False): '''Initialises the sprite object''' self.x = x self.y = y self.width = width self.height = height if boundry == None: self.boundry = (0, 0, WIDTH, HEIGHT) else: self.boundry = boundry self.clamp = Clamp self.colour = colour self.do_flip = flip self.angle = 0 self.animation = False self.appear = True self.flip = False self.moving = False self.surface = pygame.Surface((self.width, self.height)) self.image = self.surface self.rect = self.surface.get_rect() self.rect.x = x self.rect.y = y self.surface.fill(colour) self.xvel = 0 self.yvel = 0 def set_image(self, image, scale = 1, size = (0, 0), colourkey = (255, 0, 255)): '''Initilizes a sprite to display an image''' self.surface = pygame.image.load(os.path.join(path, image)).convert() #self.surface = pygame.image.load(image).convert() if not colourkey == None: self.surface.set_colorkey(colourkey) if size != (0, 0): self.width = round(size[0]) self.height = round(size[1]) else: self.width = round(scale * self.surface.get_width()) self.height = round(scale * self.surface.get_height()) self.image = pygame.transform.scale(self.surface, (self.width, self.height)) self.surface = self.image self.rect = self.surface.get_rect() self.rect.x = self.x self.rect.y = self.y def set_animation(self, frames = [], scale = 1, fps = 1, animate_on_move = False, idle_frame = None, colourkey = (255, 0, 255)): '''Initilizes a sprite to display an animation''' self.animation = True self.frames = [] self.animate_on_move = animate_on_move self.idle_frame = idle_frame self.change = max(round(FPS / abs(fps)), 1) self.tick = 0 for image in frames: self.frame = pygame.image.load(os.path.join(path, image)).convert() if not colourkey == None: self.frame.set_colorkey(colourkey) self.width 
= round(scale * self.frame.get_width()) self.height = round(scale * self.frame.get_height()) self.frames.append(pygame.transform.scale(self.frame, (self.width, self.height))) self.rect = self.frames[0].get_rect() self.rect.x = self.x self.rect.y = self.y self.frame = 0 def show(self): '''Show this sprite''' self.appear = True def hide(self): '''Hide this sprite''' self.appear = False def move(self, speed = 5, colliders = [], platformer = False, jump_height = 1): '''Allows the arrow keys to control the sprite''' keys = pygame.key.get_pressed() if keys[K_a] or keys[K_LEFT]: self.rect.x -= int(abs(speed)) self.flip = False self.moving = True if self.rect_collide(colliders): self.rect.x += int(abs(speed)) if keys[K_d] or keys[K_RIGHT]: self.rect.x += int(abs(speed)) self.flip = True self.moving = True if self.rect_collide(colliders): self.rect.x -= int(abs(speed)) if (keys[K_a] or keys[K_LEFT]) and (keys[K_d] or keys[K_RIGHT]) : self.moving = False if platformer == True: if not (keys[K_a] or keys[K_LEFT] or keys[K_d] or keys[K_RIGHT]): self.moving = False self.rect.y += 10 if (keys[K_w] or keys[K_UP] or keys[K_SPACE]) and self.rect_collide(colliders) and self.yvel < 1: self.yvel = int(abs(speed)) * (2 + ((jump_height - 1) / 2)) self.rect.y -= 15 if self.rect_collide(colliders): self.yvel = 0 self.rect.y += 15 self.rect.y -= 10 self.rect.y -= self.yvel if self.rect_collide(colliders): if self.yvel > 0: self.rect.y += self.yvel + 1 else: self.rect.y += self.yvel self.yvel = 0 else: if not (keys[K_a] or keys[K_LEFT] or keys[K_d] or keys[K_RIGHT] or keys[K_w] or keys[K_UP] or keys[K_s] or keys[K_DOWN]): self.moving = False if (keys[K_w] or keys[K_UP]) and (keys[K_s] or keys[K_DOWN]) : self.moving = False if keys[K_w] or keys[K_UP]: self.rect.y -= int(abs(speed)) self.moving = True if self.rect_collide(colliders): self.rect.y += int(abs(speed)) if keys[K_s] or keys[K_DOWN]: self.rect.y += int(abs(speed)) self.moving = True if self.rect_collide(colliders): self.rect.y -= int(abs(speed)) def render(self, colour = None, frame = None): '''Renders the sprite''' if not colour == None: self.colour = colour self.surface.fill(self.colour) if self.clamp: self.rect.x = min(max(self.rect.x, self.boundry[0]), self.boundry[0] + self.boundry[2] - self.rect.width ) self.rect.y = min(max(self.rect.y, self.boundry[1]), self.boundry[1] + self.boundry[3] - self.rect.height) if self.appear: if self.animation: self.tick += 1 if self.tick == self.change: if self.frame + 1 == len(self.frames): self.frame = 0 else: self.frame += 1 self.tick = 0 if not frame == None: self.frame = frame % len(self.frames) if self.animate_on_move: if not self.moving: self.frame = self.idle_frame self.rect.width = self.frames[self.frame].get_width() self.rect.height = self.frames[self.frame].get_height() if self.do_flip: if self.flip: SCREEN.blit(pygame.transform.flip(self.frames[self.frame], True, False), (self.rect.x, self.rect.y)) else: SCREEN.blit(self.frames[self.frame], (self.rect.x, self.rect.y)) else: SCREEN.blit(self.frames[self.frame], (self.rect.x, self.rect.y)) else: if self.do_flip: if self.flip: SCREEN.blit(pygame.transform.flip(self.surface, True, False), (self.rect.x, self.rect.y)) else: SCREEN.blit(self.surface, (self.rect.x, self.rect.y)) else: SCREEN.blit(self.surface, (self.rect.x, self.rect.y)) def wrap(self): '''Wraps the sprite around the edge of the screen''' self.wrap_around = False if self.rect.x < 0: self.rect.x += WIDTH self.wrap_around = True if self.rect.x + self.width > WIDTH: self.rect.x -= WIDTH 
self.wrap_around = True if self.rect.y < 0: self.rect.y += HEIGHT self.wrap_around = True if self.rect.y + self.height > HEIGHT: self.rect.y -= HEIGHT self.wrap_around = True if self.wrap_around: if self.animation: if self.do_flip: if self.flip: SCREEN.blit(pygame.transform.flip(self.frames[self.frame], True, False), (self.rect.x, self.rect.y)) else: SCREEN.blit(self.frames[self.frame], (self.rect.x, self.rect.y)) else: SCREEN.blit(self.frames[self.frame], (self.rect.x, self.rect.y)) else: if self.do_flip: if self.flip: SCREEN.blit(pygame.transform.flip(self.surface, True, False), (self.rect.x, self.rect.y)) else: SCREEN.blit(self.surface, (self.rect.x, self.rect.y)) else: SCREEN.blit(self.surface, (self.rect.x, self.rect.y)) self.rect.x %= WIDTH self.rect.y %= HEIGHT def rect_collide(self, sprites): '''Returns True if two sprites are colliding''' self.rect = pygame.Rect(self.rect.x, self.rect.y, self.rect.width, self.rect.height) if self.appear: for sprite in sprites: if self.rect.colliderect(sprite) and sprite.appear: return True return False def clamp(self, boundry = None, Clamp = None): '''Clamps a sprite to a boundry''' if not Clamp == None: self.clamp = Clamp else: if self.clamp == True: self.clamp = False if self.clamp == False: self.clamp = True if not boundry == None: self.boundry = boundry def Collect(self, player): '''Makes an object collectable''' if player.rect_collide([self]): self.hide() return True def mouse_hover(self): '''Returns True if the mouse is over the sprite''' mouse_pos = pygame.mouse.get_pos() if self.rect.collidepoint(mouse_pos): return True else: return False def mouse_click(self, button = 1): '''Returns True if the sprite is clicked''' mouse_pos = pygame.mouse.get_pos() mouse_pressed = pygame.mouse.get_pressed() if self.rect.collidepoint(mouse_pos) and mouse_pressed[button - 1]: return True else: return False def move_in_direction(self, magnitude, direction = None): '''Moves the sprite a certain distance in the direction it's facing''' if direction != None: self.rect.x += magnitude * cos(radians(direction)) self.rect.y -= magnitude * sin(radians(direction)) else: self.rect.x += magnitude * cos(radians(self.angle)) self.rect.y -= magnitude * sin(radians(self.angle)) def point_in_direction(self, direction): '''Points a sprite in a specific direction''' self.angle = direction load_rotated_image(self) def point_towards(self, pos = (0,0), sprite = None, anchor = 'center'): '''Points sprite towards another sprite''' if sprite == None: self.angle = 360 - atan2(pos[1] - self.rect.centery, pos[0] - self.rect.centerx) * 180 / pi else: exec('self.angle = 360 - atan2(sprite.rect.' + anchor + '[1] - self.rect.centery, sprite.rect.' + anchor + '[0] - self.rect.centerx) * 180 / pi') load_rotated_image(self) def distance_to(self, pos = (0, 0), sprite = None, anchor = 'center'): '''Returns the distance to a sprite or point''' exec('self.pos = (self.rect.' + anchor + '[0], self.rect.' + anchor + '[1])') if sprite == None: return sqrt(abs(self.pos[0] - pos[0])**2 + abs(self.pos[1] - pos[1])**2) else: return sqrt(abs(self.pos[0] - sprite.rect.x)**2 + abs(self.pos[1] - sprite.rect.y)**2) def turn(self, angle): '''Turns the sprite a certain amout of degrees''' self.angle += angle load_rotated_image(self) | Multifunction sprite class for pygame | python;object oriented;python 3.x;pygame | null |
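For context, it may help reviewers to see how the class is meant to be driven. The following is a minimal, untested usage sketch based only on the signatures shown above (init(), the Sprite constructor, move() and render()); the window size, colours and sprite layout are my own assumptions, not part of the original project.

import pygame
from classes import sprite   # the module under review

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()
sprite.init(screen, 60)                      # hand the module the screen surface and FPS

player = sprite.Sprite(x=100, y=100, width=40, height=60, colour=(200, 50, 50))
ground = sprite.Sprite(x=0, y=550, width=800, height=50, colour=(50, 200, 50))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))
    player.move(speed=5, colliders=[ground], platformer=True)   # arrow keys / WASD + jump
    ground.render()
    player.render()
    pygame.display.flip()
    clock.tick(60)

pygame.quit()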
_unix.83275 | I just downloaded the most recent FreeBSD CD image, and put it in VirtualBox. It looks like it installs fine but then it reboots and boots from the CD image again. When I make it boot from the hard-drive by pressing F12 at VirtualBox's boot splash screen and selecting the hard drive, it says:gptboot: No /boot/loader on 0:ad(0p2)gptboot: No /boot/kernel/kernel on 0:ad(0p2)FreeBSD/x86 bootDefault: 0:ad(0p2)/boot/kernel/kernelboot: _What am I doing wrong?Changing the chipset to ICH6 didn't work, increasing the RAM to 512 MB didn't work either.PC-BSD doesn't work either (VirtualBox specific image) | FreeBSD reboots during install and leaves invalid install | freebsd;virtualbox | null |
_webapps.79005 | It sounds strange, but I'm unable to sign up to Twitter.When I try to use Google Chrome, the sign-up form contains only name and e-mail fields. Even though all of them are filled in and contain valid values (notice the check-marks), I get a Please complete all required fields error every time I click Sign Up:When I'm using Opera, I'm able to see an additional username field (but still no sign of a password field), and I hit the wall with the same error message, even though -- again -- all fields are correct:When I try to use Internet Explorer, I can finally see all four fields, but the effect is exactly the same:What am I missing? I know how much stupidity is in this question, but... how should I sign up to Twitter?BTW: In all my attempts (in all browsers) I'm trying to use my own e-mail address, and the result is the same every time, if that matters. When I tried to use some fake, probably non-existing e-mail address, I managed to pass through this step. I closed the web page after that, because -- quite naturally -- I want my Twitter account on my real e-mail, not on some fake one. | How to sign up to Twitter? | twitter | This problem is directly linked to this question. I was trying to register a Twitter account whose name contains .com (which has recently been disallowed).There's a bug in Twitter which causes it to display the error:Please complete all required fieldsand to mark the name field as having a correct value, while it actually should mark it as invalid (because it contains the prohibited .com part) and display a correct error message, for example:Account creation failed: Name must not contain URLsor something along those lines.There's still, of course, an open question: when did they change their rules (I managed to create an account containing .com in the name about three months ago), and why do they limit the prohibited domains to only a certain group (you can use .pl and probably many more ccTLDs in a Twitter name)?
_codereview.70923 | This is the batch file I have created which zips the particular month logs and deletes those logs from the source folder after successful zipping. If it does not find the zip file, then it again starts to zip the logs.Please suggest any changes to be done.@echo offset zip=C:\Program Files\7-Zip\7z.exerem Findout Month, Year and DateFOR /F tokens=1,2,3,4 delims=/ %%A in ('Date /t') do set year=%%DFOR /F tokens=1,2,3,4 delims=/ %%A in ('Date /t') do set month=%%BFOR /F tokens=1,2,3,4 delims=/ %%A in ('Date /t') do set day=%%Crem Solve problem Year start and adding 0 in monthif %month% EQU 01 ( set month=12 set /a year=%year%-1) else ( set /a month=%month%-1)if %month% LSS 10 set month=0%month%rem Set file names for last month file.set lastmonthfiles=server.log.%year%-%month%-rem To compress the file.:compress%zip% -tzip a -y bkp-%lastmonthfiles%.zip %lastmonthfiles%*ping -n 5 192.168.100.44 > nul if Not exist bkp-%lastmonthfiles%.zip ( echo zipping failed pause) else ( DEL %lastmonthfiles%* ) pause | Batch file to zip logs date wise | batch | Well, first off, I must commend you on the readability of your code. You've employed nice spacing and enough comments to be able to find different sections easily without a lot of reading. I think I could learn a thing or two from you here. Nice job!There are a few things you can do to improve your code, though. Let's start with the general practices first.General practice adviceUnless your script is intended to set environment variables for other scripts or applications, you should always use setlocal. Even if the script you're writing is intended to append a new directory to %PATH%, you should still setlocal at the top until your internal flow is complete and you're ready to commit the change to %PATH%. This way you don't pollute your environment with a bunch of variables that only have meaning within a particular script -- or worse, have meaning in a different script that expects the variable not to be defined yet. Whenever you @echo off, setlocal should automatically be the next thing you type.When setting variables to string values, it's good practice to set varname=string with the quotation marks surrounding both the variable and its value. That way, whenever you use your variable later, there's no ambiguity whether your variable=value or variable=value. Also, in a future script, you might capture special characters into a variable, like an ampersand or a percent. Get into the habit of set variable=value now and you won't have to change your coding style for special cases like that, and you'll spend less time debugging.Likewise, in your if statements, you should enclose the items on both sides of your comparison operator. if %foo% equ %bar%, or if %%~xI==.exe. I can't count all the times as a rookie scripter I would struggle with errors when %foo% contained a space, causing blah was unexpected at this time because I didn't use quotes.set /a has some shorthand syntax you might find helpful. Interestingly, when you're doing math with variables, you don't have to use % around the variable names. For example, instead of set /a year=%year%-1 you can set /a year=year-1. You can also combine operator and assignment like +=, *=, /=, etc. So instead of set /a year=%year%-1 you can set /a year-=1.Now, there are a few issues specific to this script that can be improved.Script-specific suggestions%date% and date /t are ambiguous. Some locales list date as MM/DD/YYYY, while others use DD/MM/YYYY, and still others use YYYY/MM/DD. 
(more information.) A more agnostic way of scraping the date would be to use wmic. See Method 2 on this page for a way to put the date into variables that should work more universally.When using del with a wildcard, consider adding the /q switch to suppress confirmation, unless you intentionally want your script to ask the user to confirm deletion.Consider changing ping -n 5 192.168.100.44 > nul to ping -n 5 0.0.0.0 >NUL. Having an actual IP there might (at at glance) prompt the reader to wonder whether the script will behave differently whether the host does or does not respond; whereas 0.0.0.0 makes it obvious that you're using the ping command as nothing more than a period of sleep.if Not exist bkp-%lastmonthfiles%.zip <-- If this is ever true, you are going to pause twice, then exit. Examine your logic here. Did you leave out a goto compress?What it looks like you intended to do is attempt to zip; then if the zip file doesn't exist, echo a notice to the user, pause, then loop back to :compress to try again. Otherwise, assume everything went fine and delete all the old stuff and exit. What happens if a file is in use and locked, and 7-zip skips archiving it but was otherwise successful with the other log files? bkp-%lastmonthfiles%.zip still exists, and your script could potentially delete the file that was skipped.If I may make yet another suggestion, you should rewrite the end of your script to take advantage of 7-zip's exit codes. Try this instead. (Note: %zip% is in quotes on the assumption that you followed General practice advice #2 above.):compress%zip% -tzip a -y bkp-%lastmonthfiles%.zip %lastmonthfiles%* && ( DEL /q %lastmonthfiles%* echo Zipping complete. Press any key to exit. pause >NUL goto :EOF) || ( if ERRORLEVEL 2 ( echo Zipping failed ^(exit status %ERRORLEVEL%^). Trying again in 5 seconds... ) else ( echo Zip completed with warnings ^(most likely because a file was locked by another echo process and had to be skipped^). Trying again in 5 seconds... ) del bkp-%lastmonthfiles%.zip >NUL 2>&1 ping -n 6 0.0.0.0 >NUL goto compress)A note about the && and || notation there: That's shorthand code for testing the exit code of the command preceding it. program.exe && success || fail. See conditional execution for more details on how this works. |
_webmaster.36512 | I am currently migrating my site to HTML5, at the same time designing pages so that they are fluid and are equally presentable for a mobile or a large screen.I took the fluid approach so as not to have to develop a separate application for mobile devices and I'm pleasantly surprised with the results that look equally as good on an iPhone as they do on a large screen.Then I went into the Google Webmaster Tools facility and became aware of the Google Mobile Index. I'm confused now as HTML5 doesn't seem to be supported by Google Mobile Indexing.Does this mean that when I go live with my new pride and joy HTML5 site on a mobile it won't appear on any Google searches as it's not in the Google Mobile Index? | HTML 5, Fluid Pages and Google Mobile Index | google search console;html5;mobile;google index | null |
_softwareengineering.289359 | Here is a similar question: Where to validate domain model rules that depend on database content?I am asking this new question because I have more details and I don't want to change the previous question mentioned above.ProblemI have a Form aggregate root that has a Fields collection. On my form I have the SetCustomField(fieldname, value) method. I also have default properties on the form, e.g. DateOfBirth, FirstName, and LastName.public DateTime DateOfBirth{ get { return _dob; } set { if(value > 2000) throw new ArgumentException(null, You must be 15 yrs old or more); _dob = value; }}This works when creating the form. But when you want to edit or view the form, the validation kicks in too. This throws an exception for fields that are empty or have invalid values - perhaps changed in the db or loaded from Excel. The expected behavior is to allow the form to be shown on updating so the Admin can correct those fields.CQRSI have been reading about CQRS but so far I don't think it would solve this problem. If I want to edit the form, the Form aggregate root still needs to be loaded, modified and updated://repository needs to set the DateOfBirth here and throws exceptionvar form = repo.GetFormById(1); form.DateOfBirth = Some value;repo.Save(form);Since I need to reconstruct the model, the CQRS Read-Model might not help. If I have to use CQRS, how would you reconstruct an Aggregate Root from the Read-Model?Thanks for your contribution. | Loading Aggregate Root from Database with Validations | domain driven design;validation;cqrs;aggregate | null
_cs.52302 | I came across someone else's old post here Designing a DFA that accepts strings such that nth character from last satisfies conditionAnd decided to try it. Apparently it was a homework problem but it was from 2013 so I assume it is okay to discuss it openly by now.The problem is:Design a DFA that accepts strings having 1 as the 4th character from the end, on the alphabet {0,1}The regular expression that does this I believe is (0|1)*1(000|001|010|011|100|101|110|111).My NFA:Used Google to see how DFAs work and tried this translation:Was this a valid solution to the problem?(Blue arrow = points to starting state, green states are accepting states) | Designing a DFA for binary strings having 1 as the fourth character from the end | regular languages;automata;finite automata | You have not created a DFA.What you have created is a second slightly less messy NFA. (Sort of; as mentioned in comments elsewhere you need the explosion of internal states out from (2) that you have out from (9) in the original NFA)So let's pretend that we have an NFA that is made by chopping off state (2) in the second picture and adding state (9) from the first picture in its place.Now, let's walk through the powerset construction on that new NFA:Start in state {1}. So let's call the DFA state corresponding to the singleton set {1}, DFA_1.From DFA_1 you can see either a 0 or a 1. On 0, DFA_1 goes to {1}, so DFA_1 again. On 1, it goes to {1,9}. Call that DFA_2.From DFA_2, on 0 it goes to {1,10,13,16,19}. Call that DFA_3. On 1 it goes to {1,9,22,25,28,31}. Call that DFA_4.For DFA_3, on 0 it goes to {1,11,14} (since the other nodes only have transitions out on 1). Call that DFA_5. On 1, from DFA_3 we go to {1,9,17,20}. Call that DFA_6.For DFA_4, on 0 it goes to {1,10,13,16,19,23,26} - call that DFA_7. On 1, DFA_4 goes to {1,9,22,25,28,29,31,32}. Call that DFA_8.For DFA_5, on 0 it goes to {1,12}. Call that DFA_9. Note that DFA_9 is an accepting state. On 1 it goes to {1,9,15}. Call this DFA_10, also an accepting state.For DFA_6, on 0 it goes to {1,10,13,16,18,19}. Call this DFA_11. On 1, we get {1,9,21,22,25,28,31}. Call this DFA_12. Note that DFA_11 and DFA_12 are accepting states.For DFA_7, on 0 it goes to {1,11,14,24}. Call that DFA_13. On 1, we get {1,9,17,20,27}. Call that DFA_14. (DFA states 13 and 14 are accepting states)For DFA_8, on 0 it goes to {1,10,13,16,19,23,26,30}. Call that DFA_15. On 1, it goes to {1,9,22,25,28,29,31,32,33}. Call that DFA_16. (DFA states 15 and 16 are accepting states)For DFA_9, on 0 it goes to {1}. We've seen this before! That's DFA_1. On 1 it goes to {1,9}, which is DFA_2.For DFA_10, on 0 it goes to {1,10,13,16,19}, that is, DFA_3. On 1 it goes to {1,9,22,25,28,31}, that is DFA_4.For DFA_11, on 0 it goes to {1,11,14}, which is DFA_5. On 1 it goes to {1,9,17,20}, that is, DFA_6.For DFA_12, on 0 it goes to {1,10,13,16,19,23,26}, or DFA_7. On 1, it goes to {1,9,22,25,28,29,31,32}, or DFA_8.For DFA_13, on 0 it goes to {1,12} (DFA_9). On 1 it goes to {1,9,15} (DFA_10).For DFA_14, on 0 it goes to {1,10,13,16,18,19} (DFA_11). On 1, we get {1,9,21,22,25,28,31} (DFA_12)For DFA_15, on 0 it goes to {1,11,14,24} (DFA_13) On 1, we get {1,9,17,20,27} (DFA_14).For DFA_16, I hope you can see the pattern by now. 
On 0 it goes to DFA_15 and on 1 to DFA_16.So, in summary:
DFA state # | Next for 0 | Next for 1 | Accepting?
---------------------------------------------------------
  1 |  1 |  2 |
  2 |  3 |  4 |
  3 |  5 |  6 |
  4 |  7 |  8 |
  5 |  9 | 10 |
  6 | 11 | 12 |
  7 | 13 | 14 |
  8 | 15 | 16 |
  9 |  1 |  2 | yes
 10 |  3 |  4 | yes
 11 |  5 |  6 | yes
 12 |  7 |  8 | yes
 13 |  9 | 10 | yes
 14 | 11 | 12 | yes
 15 | 13 | 14 | yes
 16 | 15 | 16 | yes
As a side note, I think this process might have been simpler had you started with an NFA of only 5 states corresponding to the regex (0|1)*1((0|1)(0|1)(0|1)).(Though you'd still end up needing 16 DFA states - it can be proven that that's the minimal number of DFA states needed for this problem)
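To double-check the table, here is a small Python sketch (my own addition, not part of the original answer) that drives the 16-state DFA from the summary table and asserts that it accepts exactly the binary strings whose fourth character from the end is 1:

# state -> (next state on '0', next state on '1'), taken from the table above
TRANS = {1: (1, 2), 2: (3, 4), 3: (5, 6), 4: (7, 8),
         5: (9, 10), 6: (11, 12), 7: (13, 14), 8: (15, 16),
         9: (1, 2), 10: (3, 4), 11: (5, 6), 12: (7, 8),
         13: (9, 10), 14: (11, 12), 15: (13, 14), 16: (15, 16)}
ACCEPTING = set(range(9, 17))   # states 9..16 are the accepting ones

def dfa_accepts(s):
    state = 1                   # DFA_1 is the start state
    for ch in s:
        state = TRANS[state][int(ch)]
    return state in ACCEPTING

from itertools import product
for n in range(10):             # brute-force all binary strings up to length 9
    for bits in product("01", repeat=n):
        s = "".join(bits)
        expected = len(s) >= 4 and s[-4] == "1"
        assert dfa_accepts(s) == expected
print("transition table matches the specification")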
_unix.360966 | I have created a multi-boot USB stick with a couple of distros, and everything is working fine. Now I want to add Windows 10 to the stick somehow.It seems most apps, like MultiSystem, MultiBootUSB and YUMI, do not support this.Is there maybe a manual way of adding Windows? I thought about adding a second partition, but it won't be possible to boot from there, right? The stick is in FAT32 format. | Multiple OS boot usb | boot loader;live usb;bootable | null
_webapps.27297 | I have exported a board from one Trello account. Now I want to import that same board into another account, but I can't see any option to import a board. How can I do that? | How to import task lists into trello, exported from some other account of trello? | trello | null
_cogsci.3387 | Is there a way to make oneself remember facts from an audiobook (containing, for instance, a list of countries and their capitals) played out loud while sleeping? | Is learning facts via audio while sleeping possible? | learning;sleep | null
_unix.366821 | My intent is to place the text section at a specific location in memory (0x00100000).SECTIONS{ . = 0x00100000; .text : { *(.text*) }} Although the linker does do this (note the 0x01000000 Addr field):$ readelf -S file.elf There are 12 section headers, starting at offset 0x104edc:Section Headers: [Nr] Name Type Addr Off Size ES Flg Lk Inf Al [ 0] NULL 00000000 000000 000000 00 0 0 0 [ 1] .text PROGBITS 00100000 100000 000e66 00 AX 0 0 4 [ 2] .eh_frame PROGBITS 00100e68 100e68 000628 00 A 0 0 4...it also places ~1MB of zeros before the .text section in the ELF file (note the .text section's offset is 1MB). Shown another way: $ hexdump -C file.elf00000000 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|00000010 02 00 03 00 01 00 00 00 0c 00 10 00 34 00 00 00 |............4...|00000020 dc 4e 10 00 00 00 00 00 34 00 20 00 02 00 28 00 |.N......4. ...(.|00000030 0c 00 0b 00 01 00 00 00 00 00 00 00 00 00 00 00 |................|00000040 00 00 00 00 90 14 10 00 96 04 4f 00 07 00 00 00 |..........O.....|00000050 00 00 20 00 51 e5 74 64 00 00 00 00 00 00 00 00 |.. .Q.td........|00000060 00 00 00 00 00 00 00 00 00 00 00 00 07 00 00 00 |................|00000070 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|00000080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|*00100000 02 b0 ad 1b 03 00 00 00 fb 4f 52 e4 8b 25 90 04 |.........OR..%..|00100010 4f 00 50 53 e8 88 00 00 00 fa f4 eb fc 55 89 e5 |O.PS.........U..|00100020 83 ec 10 c7 45 f8 00 80 0b 00 c7 45 fc 00 00 00 |....E......E....|00100030 00 eb 24 8b 45 fc 8d 14 00 8b 45 f8 01 d0 8b 4d |..$.E.....E....M|How can this be prevented? Am I improperly using the location counter (dot notation) syntax? | GNU linker producing useless spacing between sections in ELF file | gnu;linker;elf | null |
_unix.374754 | How can I invoke grub-install from Ubuntu in a way that it installs its files to a custom directory and not, for example, to /EFI/ubuntu? Every method I try ends up (still) putting some .efi and .cfg files into the /EFI/ubuntu folder. I want to redirect this folder somewhere else (properly). | How can I install GRUB to a different folder than /EFI/ubuntu? | ubuntu;grub2 | null |
_codereview.60379 | The problem asks you to take an integer (debit amount) and a double (credit or initial balance amount) and process the requested debit verifying that 1 it's a multiple of the minimum denomimation amount of $5 and that it's also smaller than the credit/balance. If either is untrue, it is supposed to return the initial deposit amount otherwise it will return the new balance.Full problem descriptionI have created 3 objects for this problem:Transaction - This object reads in the two initial values given and then is used in ATMATM - Takes the transaction and applies them to the account and then displays the new balance.Account - This object keeps track of the current account balance and updates the balance if the ATM passes it a value.Limitations:I understand that it can only process a single account, but that is more a limitation set by the problem description than it is me not accounting for multiple accounts. Also no error is returned if the balance cannot be updated, but it is not a requirement. I also understand I made a mountain out of a molehill with this problem as it can be solved by much less code.In what ways can I improve this code other than the limitations mentioned?#include <istream>#include <iostream>#include <iomanip>class Account {public: Account() : mBalance(0.0) {} void updateBalance(double transaction) { mBalance += transaction; } double getBalance() { return mBalance; }private: double mBalance;};class Transaction {public: Transaction() : mDebit(0) , mCredit(0.0) {} int getDebit() { return mDebit; } double getCredit() { return mCredit; } friend std::istream& operator>>(std::istream& input, Transaction& transaction) { input >> transaction.mDebit; input >> transaction.mCredit; return input; }private: int mDebit; double mCredit;};class ATM {public: ATM() : mAccount() , mMinDenomination(5) , kWithdrawal_fee(0.50) {} void processTransaction(Transaction& transaction) { credit(transaction); debit(transaction); } void displayBalance() { std::cout << mAccount.getBalance() << '\n'; }private: Account mAccount; int mMinDenomination; const double kWithdrawal_fee; bool debit(Transaction& transaction) { if(isWithdrawable(transaction.getDebit())){ mAccount.updateBalance(-1*(transaction.getDebit() + kWithdrawal_fee)); return true; } return false; } void credit(Transaction& transaction) { if(transaction.getCredit() > 0) { mAccount.updateBalance(transaction.getCredit()); } } bool isWithdrawable(int transaction) { if(transaction % mMinDenomination == 0) { return mAccount.getBalance() >= transaction + kWithdrawal_fee; } return false; }};int main() { std::iostream::sync_with_stdio(false); std::cout << std::setprecision(2) << std::fixed; Transaction transaction; ATM atm; std::cin >> transaction; atm.processTransaction(transaction); atm.displayBalance(); return 0;} | Clean code attempt at ATM problem on codechef.com | c++;beginner;c++11;programming challenge;finance | Design.You use a mixture of int and doubles to represent monatary units. This is not a good idea. double (like all fixed with decimal representations, can not hold all values exactly). You should use an integer like type (where all values are represented exactly). If you are in America and using dollars and cents then I would use an integer but the balance of the account is held in cent. When you print it out you can then place the decimal point in the correct place.Code ReviewIn:class Account {I always think getters are wrong. They break encapsulation. Looking forward in your code you use them for two reasons. 
1) Printing. 2) To test if the account has enough funds for withdraw. In both cases you should add explicit methods. double getBalance() { return mBalance; }I would replace the above with: friend std::ostream& operator<<(std::ostream& s, Account const& data) { // Assuming you changed (as suggested above to hold account balance in cent. s << $ << data.mBalance / 100 << . << data.mBalance % 100; } virtual bool canWithdraw(double amount) { return mBalance > amount; }This logic protects you against future improvements to the system. What happens if you add the ability of some accounts to go overdrawn (for a fee). Then in your code you have to find all locations where the balance is being checked and modify those. In the method I propose you only need to modify one place (the Account class). You have localized the test for whether the account can withdraw money.In:A debit is an integer and a credit is a double.I don't understand the logic here. int mDebit; double mCredit;They should be the same. If you have some compelling reason for the difference then I need a big comment about why they are different (you may have a good reason, but you will need to explain it in the code).Personally I would just have an amount. A negative amount is a debt and positive amount a credit.Getters. Ahhh. horrible. int getDebit() { return mDebit; } double getCredit() { return mCredit; }Again the only use is do tests and fiddling that should be part of the Accounts responsibility. You should send the transaction to the account which may reject the transaction if it fails any of the account specific validations (ie you can have a negative balance).Like this. friend std::istream& operator>>(std::istream& input, Transaction& transaction) { input >> transaction.mDebit; input >> transaction.mCredit; return input; }But usually when you have an input stream reader you also have an output stream writer that mirrors the reader. So when you persist to a stream the class can also read the value in.In ATM:Interesting. You have a debit action and credit action applied for every transaction. Does this mean that a transaction can perform both operations? void processTransaction(Transaction& transaction) { credit(transaction); debit(transaction); }Its OK to have a print method(). void displayBalance() { std::cout << mAccount.getBalance() << '\n'; }But usually it is best for this to just call the stream operator. void displayBalance() { std::cout << mAccount; // The account should know how to serialize itself. }This shows how bad an idea it is to have functions that have success state. bool debit(Transaction& transaction) { if(isWithdrawable(transaction.getDebit())){ mAccount.updateBalance(-1*(transaction.getDebit() + kWithdrawal_fee)); return true; } return false; }You do it all correctly yet it is still broken. Because the calling code does not check the return value. Yes internally within a class it is absolutely fine to return status codes (because you do not expose the interface publicly). But you must also make sure you do actually test the result codes.Note: It is never (very rarely) OK to expose status codes that need checking publicly. As we can see in the C world (were this practice is the norm)it is so easy to not check the error codes and thus invalidate any following code. You should write code so it can not be used incorrectly which means forcing your users to do the correct thing (or the program exits (exceptions)). |
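The review's earlier advice to hold the balance in an integer number of cents is language-agnostic; as a quick illustration (sketched in Python rather than C++, purely for brevity, and not part of the original review), repeated arithmetic on binary floating-point dollar amounts drifts, while integer cents stay exact:

balance = 0.0
for _ in range(10):
    balance += 0.10              # deposit ten cents, ten times
print(balance == 1.0)            # False: IEEE-754 doubles cannot represent 0.10 exactly

balance_cents = 0
for _ in range(10):
    balance_cents += 10          # same deposits, held as integer cents
print(balance_cents == 100)      # True: every value is exact
print("$%d.%02d" % (balance_cents // 100, balance_cents % 100))   # prints $1.00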
_softwareengineering.205251 | Are there any languages where something like the following might be possible?people = [ ... a list of people ...]Person jake = Person(Jake, 165, ...)jake is Tallpeople.add(jake)for Person person in people where Tall: // ... do something terrible to themjake is not Tall // ... Jake no longer wishes to be tallI hope that makes sense - basically, dynamic adjectives that affect something about an object's properties or methods. | Are there any programming languages that make use of adjectives? | programming languages;language design | Adjectives are really just attribute evaluation. Here's how I might handle it in JavaScript.function PersonAdjectiveConstructor(person) { this.isTall = (person.height >= 6); this.isRich = (person.pocketMoney >= 1000000); this.isSmart = (person.iq > person.shoeSize); this.isInsufferable = ( this.isTall && this.isRich && this.isSmart );}function PersonConstructor(personAttributes){ for(var x in personAttributes){ this[x] = personAttributes[x]; } var adjectives = new PersonAdjectiveConstructor(this); for(var x in adjectives){ this[x] = adjectives[x]; }}var bob = new PersonConstructor({ shoeSize:11, iq:12, height:6.25, pocketMoney:2000000 });console.log(bob.isInsufferable);//true
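The same attribute-evaluation idea can also be expressed with an explicit, mutable set of adjective tags; here is a short Python sketch of that variant (my own illustration, mirroring the pseudocode in the question - the names are made up):

class Person:
    def __init__(self, name, height_cm):
        self.name = name
        self.height_cm = height_cm
        self.adjectives = set()          # adjectives attached and removed at runtime

    def is_(self, adjective):
        return adjective in self.adjectives

people = []
jake = Person("Jake", 165)
jake.adjectives.add("Tall")              # jake is Tall
people.append(jake)

for person in (p for p in people if p.is_("Tall")):
    print("doing something terrible to", person.name)

jake.adjectives.discard("Tall")          # Jake no longer wishes to be tall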
_unix.266586 | I've enabled tap-to-click in Gnome but it doesn't work on GDM.I tried running dconf-editor as root to modify the setting but to no avail.I also tried running sudo -u gdm gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true but I get the following error(process:16560): dconf-WARNING **: failed to commit changes to dconf: Error spawning command line 'dbus-launch --autolaunch=long-number-here --binary-syntax --close-stderr': Child process exited with code 1How do I enable tap-to-click on GDM? | GDM - how to enable touchpad tap-to-click | arch linux;gdm3 | You have to export $(dbus-launch) and set the gsettings backend (tested on archlinux with gdm 3.18.2): switch to a VT (e.g. Ctrl+Alt+F3), log in as root and run: su - gdm -s /bin/sh to switch user to gdm. Then run: export $(dbus-launch) and: GSETTINGS_BACKEND=dconf gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true. Run exit or hit Ctrl+D to return to the root account. Restart the display manager: systemctl restart gdm. Reverting is pretty much the same, just change true to false in the gsettings command.
_unix.223882 | I always write stdin redirection after the command, because for me it's more natural to have first the command and then the redirections (if any):some-command < input-file > output-fileFor years, I've seen people writing the stdin redirection before the command, to have some flow direction:< input-file some-command > output-file (or without spaces after < and > )Is this ordering accepted by POSIX, or just accepted by many shells (in my Fedora 21 it is accepted by bash, dash, tcsh, ksh and zsh)? | posix placement of stdin redirection ( | shell;io redirection;posix | That behavior was defined by POSIX here:If more than one redirection operator is specified with a command, the order of evaluation is from beginning to end.and here:A simple command is a sequence of optional variable assignments and redirections, in any sequence, optionally followed by words and redirections, terminated by a control operator.This was already the case in the Bourne shell, which POSIX used as a basis:Before a command is executed its input and output may be redirected using a special notation interpreted by the shell. The following may appear anywhere in a simple-command or may precede or follow a command and are not passed on to the invoked command.Unlike the original Bourne shell, POSIX doesn't allow redirection to precede a complex command like while ... done, ( ... ), etc.Note that the order of the redirections is important, because it controls your command's behavior and protects you from weird results upon failure. Example:command <input >outputIf the command fails to read input (because of permissions, a non-existent file, ...), it is terminated without creating an empty output file - which would not be the case if you swapped the redirection positions:command >output <input
_webmaster.104244 | So I am trying to understand this confusing documentation from schema.org and want to fix my structured data. Let's say I offer music recording and production services. So, to show it through HTML, here is an example without structured data:<ul>// this is the main<h1>Music Recording and Production</h1>// these are subcategories<li>voice recording</li><li>musical instrument recording </li><li>mixing and mastering tracks</li><li>blah blah</li><li>blah blah</li><li>blah blah</li> </ul>So if I were to inject structured data, would it be: <ul> <h1 itemtype itemscope=https://schema.org/Service>We offer **Music Recording and Production**</h1> <li itemprop=hasOfferCatalog>voice recording</li> <li itemprop=hasOfferCatalog>musical instrument recording </li> <li itemprop=hasOfferCatalog>mixing and mastering tracks</li> <li itemprop=hasOfferCatalog>blah blah</li> <li itemprop=hasOfferCatalog>blah blah</li> <li itemprop=hasOfferCatalog>blah blah</li> </ul>and then, if those subcategories had some description, which itemprop would they use? | How to use structured data for service categories? | schema.org;microdata | null
_unix.377817 | Looking at other questions, I've done the following:chmod g+s MEDIA setfacl -R -d -m g::rwx MEDIA setfacl -R -d -m o::rwx MEDIA NOTE: MEDIA is a folder I'm looking to set up so that all files/folders added have the same user/group as the parent folder.In this example MEDIA is owned by the user Bob and group SharedFiles. The goal is for newly created files/folders to retain this ownership (both Bob and SharedFiles:MEDIA Bob SharedFilesMEDIA/NewFolder Bob Bob <BADMEDIA/NewFolder Bob SharedFiles <GOODIf I create a subfolder whilst logged in as user 'Bob' that folder is owned by Bob:SharedFiles with [rwxrwxrwx] permissions (as intended). All Good!If I login as Sue, the new folder becomes part of Sue:Sue with [rwxr-xr-x].If I login from a different machine via a mounted drive in KDE (user Sue) the folder becomes part of Bob:Bob with [rwxr-xr-x].Now both Bob and Sue are part of SharedFiles, where am I going wrong. I want all users in SharedFiles group to have RWX permissions and I want all files/folders created by users in SharedFiles group to have the same user/group as the parent folder, why does this only happen with the owner on the machine itself.getfacl MEDIA/returns# file: MEDIA/# owner: Bob# group: SharedFiles# flags: -s-user::rwxgroup::rwxother::rwxdefault:user::rwxdefault:group::rwxdefault:other::rwxsamba.conf contains:[MEDIA] read only = no locking = yes path = /mnt/local/int001/MEDIA guest ok = yes create mask = 0775 directory mask = 0775 | Ensure newly created files/folders are owned by parent folder's user AND group | debian;users | null |
_codereview.114397 | I developed a PHP function to get a not formatted address and split it in a street name and number.Following are some patterns of received addressesStreetName NumberSrtreetName, NumberNumber StreetNameNumber-Number StreetNameStreetName, Number, ComplementStreetName Number/NumberStreetName Number - ZipCode (ZipCode could be ignored)StreetName (without number)I'm using regex to identify the pattern and then splitting it. Here is the function (the code is commented for better understanding):<?php function getInfoAddress ($address) { $return = array('street'=>NULL, 'number'=>NULL, 'complement'=>NULL); //firstly, erase spaces of the strings $addressWithoutSpace = str_replace(' ', '', $address); //discover the pattern using regex if(preg_match('/^([0-9.-])+(.)*$/',$addressWithoutSpace) === 1) { //here, the numbers comes first and then the information about the street $info1 = preg_split('/[[:alpha:]]/', $addressWithoutSpace); $info2 = preg_split('/[0-9.-]/', $address); $return['number'] = $info1[0]; $return['street'] = end($info2); } elseif(preg_match('/^([[:alpha:]]|[[:punct:]])+(.)*$/',$addressWithoutSpace) === 1) { //here, I have a alpha-numeric word in the first part of the address if(preg_match('/^(.)+([[:punct:]])+(.)*([0-9.-])*$/',$addressWithoutSpace) === 1) { if(preg_match('/,/',$addressWithoutSpace) === 1) { //have one or more comma and ending with the number $info1 = explode(,, $address); $return['number'] = trim(preg_replace('/([^0-9-.])/', ' ', end($info1)));//the last element of the array is the number array_pop($info1);//pop the number from array $return['street'] = str_replace(,, ,implode( ,$info1));//the rest of the string is the street name } else { //finish with the numer, without comma $info1 = explode( , $address); $return['number'] = end($info1);//the last elemento of array is the number array_pop($info1);//pop the number from array $return['street'] = implode( ,$info1);//the rest of the string is the street name } } elseif(preg_match('/^(.)+([0-9.-])+$/',$addressWithoutSpace) === 1) { //finish with the number, without punctuation $info1 = explode( , $address); $return['number'] = end($info1);//the last elemento of array is the number array_pop($info1);//pop the number from array $return['street'] = implode( ,$info1);//the rest of the string is the street name } else { //case without any number if (preg_match('/,/',$addressWithoutSpace) === 1) { $return['number'] = NULL; $endArray = explode(',', $address); $return['complement'] = end($endArray);//complement is the last element of array array_pop($endArray);// pop the last element $return['street'] = implode( , $endArray);//the rest of the string is the name od street } else { $return['number'] = NULL; $return['street'] = $address;//address is just the street name } } } return ($return); } $address = $_POST['address']; $addressArray = getInfoAddress($address); print_r($addressArray);?>This is working in the most cases (enough for me for while), so I'd like to improve some points:Improve Readability: I care with readable code, but in this case I think that I couldn't be a good job. Are there some useless if/else block for example?Improve Reliability: The code fails in some cases (when the street name includes number like in 5thAvenue or when the complement is before number like rue de la montagne BL2 52, for exemple). Are there some way to improve the reliability?I also would like some suggestions of improvement without the use of regex, although I have not been able to figured out anything in this way. 
| Address filter with PHP using regex | php;strings;regex | Looking very quickly, it seems like a frail code.Still, it works as expected.But I saw a few things to improve:You called your function getInfoAddress().It sounds like it wil fetch the address somewhere, but that isn't the case...The address is being parsed. A name like parseAddress() seems better.But, your function casing is wrong, in my opinion.PHP isn't case-sensitive regarding function names.If you write parseaddress(), you may have problems in the future, if you need to change something.My recommendation goes on parse_address()Be explicit regarding your regular expressions.Avoid this: /^([[:alpha:]]|[[:punct:]])+(.)*$/Be explicit. I have no idea what punct means. It is ponctuation?You over-use preg_match.You have this line:if (preg_match('/,/',$addressWithoutSpace) === 1){You should use strpos() for this:if (strpos($addressWithoutSpace, ',') !== false){This will improve the performance by quite a bit.Please, don't mix Portuguese with English.Your $endereco variable should have other name.Please, only and only give English names to your variables.Everybody will thank you.Right on top, you normalize your input:$addressWithoutSpace = str_replace(' ', '', $endereco);But you use that $endereco variable everywhere. Maybe it was by mistake?Avoid closing the PHP tag on a file that only has PHP codeThis will avoid frustrations due to a forgotten whitespace after the closing tag.Many services, like Github, add a newline to the end.PHP automatically ignores 1 and only 1 whitespace after ?>, but not more.If you leave 1 more newline by mistake, you can seriously break stuff everywhere.Just remove the ?> at the end. |
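As a cross-language aside (an illustration added here, not part of the original post or of any review of it): the listed patterns can often be covered by a single regular expression with named groups instead of a cascade of if/else branches. The following Python sketch makes that idea concrete; the pattern and the sample addresses are assumptions for demonstration only, not a drop-in replacement for the PHP function.

import re

# optional leading number, then a street name without digits/commas,
# then an optional trailing number and an optional complement
ADDRESS_RE = re.compile(
    r"^\s*(?:(?P<num_first>[\d./-]+)\s+)?"      # "42 Main Street" style
    r"(?P<street>[^\d,]+?)"                     # street name
    r"(?:[,\s]+(?P<num_last>[\d./-]+))?"        # "Main Street, 42" style
    r"(?:\s*,\s*(?P<complement>.+))?\s*$"       # ", apt 3" style complement
)

def parse_address(raw):
    m = ADDRESS_RE.match(raw)
    if not m:                                   # e.g. street names starting with a digit
        return {"street": raw.strip(), "number": None, "complement": None}
    return {
        "street": m.group("street").strip(" ,"),
        "number": m.group("num_first") or m.group("num_last"),
        "complement": m.group("complement"),
    }

for sample in ["Main Street 42", "42 Main Street", "Main Street, 42, apt 3", "Main Street"]:
    print(parse_address(sample))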
_codereview.125330 | Part of a project I'm getting started on requires access to the Stack Exchange API for certain data, as a result I built a .NET implementation to interact with it.The implementation is pretty simple, in my opinion. You can also find it on GitHub.It features a main Configuration class, in which you put a lot of the API configuration information./// <summary>/// Represents a Stack Exchange API configuration for use with API requests./// </summary>public class Configuration{ /// <summary> /// Represents the base endpoint for the Stack Exchange API url. /// </summary> public const string ApiUrlBase = {Protocol}://api.stackexchange.com/{Version}/; /// <summary> /// This is the upper bound of the Page Size for <b>most</b> requests. Currently 100. /// </summary> public const int MaxPageSize = 100; /// <summary> /// The application API key. Can be <code>null</code> for anonymous requests. /// </summary> public string Key { get; set; } /// <summary> /// If true then the HTTPS protocol will be used, otherwise the HTTP protocol will be used. Defaults to true. /// </summary> public bool UseHttps { get; set; } = true; /// <summary> /// Determines what version of the API will be used. This should never be modified unless absolutely necessary. Defaults to 2.2. /// </summary> public string Version { get; set; } = 2.2; /// <summary> /// Returns the <see cref=ApiUrlBase/> formatted with the provided parameters. /// </summary> public string FormattedUrl => ApiUrlBase.Replace({Protocol}, UseHttps ? https : http).Replace({Version}, Version); /// <summary> /// Appends the current <see cref=Key/> to the provided url. /// </summary> /// <param name=url>The URL to append to. Should be the result of <see cref=FormattedUrl/>, then the API </param> /// <returns></returns> public string AppendKey(string url) => string.IsNullOrWhiteSpace(Key) ? url : url + (url.Contains('?') ? '&' : '?') + key= + Key; /// <summary> /// Returns the fully formatted URL for Stack Exchange API requests. /// </summary> /// <param name=requester>The fully filled <see cref=IRequest/> making the request.</param> /// <returns>The formatted url.</returns> public string GetFormattedUrl<T>(IRequest<T> requester) where T : IBaseModel => AppendKey(FormattedUrl + requester.FormattedEndpoint);}You don't actually have to put anything in any of these fields. You can access the API with a default instance of this class as an anonymous user.Then, as you can see, there is an IRequest<T> interface, which specifies what a request object must contain./// <summary>/// This representes a generic request against the Stack Exchange API. Though this does not make use of the type parameter intrinsically, it's necessary for generic inference and type constraints./// </summary>/// <typeparam name=T>A <see cref=IBaseModel/> representing the returned model from the request. When used with <see cref=Handler.SubmitRequest{T}(IRequest{T}, bool)/> this will return a type of <see cref=Wrapper{TObject}/> where <code>TObject</code> is this type.</typeparam>public interface IRequest<T> where T : IBaseModel{ /// <summary> /// The basic endpoint for the <see cref=IRequest{T}/>. /// </summary> string EndpointUrl { get; } /// <summary> /// Gets the formatted endpoint for the <see cref=IRequest{T}/>. This should <b>NOT</b> contain the Stack Exchange API base URL or key. /// </summary> string FormattedEndpoint { get; } /// <summary> /// This should verify that all the provided parameters required for the <see cref=IRequest{T}/> are present. 
/// </summary> /// <returns>True if the required parameters pass verification, false otherwise.</returns> bool VerifyRequiredParameters(); /// <summary> /// This should return a message to be used to indicate to the user what the verification should be. /// </summary> string VerificationError { get; }}This interface allows us to interact more generically with an API request. It provides a few features we need to guarantee that we are at least attempting to submit a valid API request.Next we have a Handler class which actually submits and processes the API request./// <summary>/// Fires and processes the actual SE API requests./// </summary>public class Handler{ /// <summary> /// The <see cref=Configuration/> to use for general API access. /// </summary> public Configuration Configuration { get; } /// <summary> /// Creates a new instance of the <see cref=Handler/> with the specified <see cref=Configuration/>. /// </summary> /// <param name=configuration>The <see cref=Configuration/> to use for SE API requests.</param> public Handler(Configuration configuration) { Configuration = configuration; } /// <summary> /// Submits a request to the SE API. /// </summary> /// <typeparam name=T>The type of the object to be returned. This should be inferred from the <see cref=IRequest{T}/>.</typeparam> /// <param name=request>The <see cref=IRequest{T}/> being performed.</param> /// <param name=throwVerificationExceptions>If true, verification errors will result in exceptions. If false, a <code>null</code> object will simply be returned.</param> /// <returns>A <see cref=Wrapper{TObject}/> for the API request.</returns> public Wrapper<T> SubmitRequest<T>(IRequest<T> request, bool throwVerificationExceptions = true) where T : IBaseModel { if (!request.VerifyRequiredParameters()) { if (throwVerificationExceptions) { throw new ArgumentException($At least one of the required parameters for {nameof(request)} was invalid., new ArgumentException(request.VerificationError)); } else { return null; } } var response = ; var url = Configuration.GetFormattedUrl(request); var webRequest = (HttpWebRequest)WebRequest.Create(url); webRequest.Headers.Add(HttpRequestHeader.AcceptEncoding, gzip,deflate); webRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate; using (var webResponse = webRequest.GetResponse()) using (var sr = new StreamReader(webResponse.GetResponseStream())) { response = sr.ReadToEnd(); } return DataContractJsonSerialization.Deserialize<Wrapper<T>>(response); }}In order for all these type constraints to work, we have a generic IBaseModel which does nothing except provide a type constraint./// <summary>/// Represents a generic model for a Stack Exchange API request. Only required (at the moment) for type constraints./// </summary>public interface IBaseModel{}We have a Wrapper<T> model which represents the base result from an SE API request. All responses from the API (as of now) return this Wrapper with the Items field set to strong-types of the data returned. So, we use a generic implementation so we can provide the strong type ourselves./// <summary>/// This is a general wrapper for Stack Exchange API request models. All API requests should contain these basic fields./// </summary>/// <typeparam name=T>The type of the object list/array returned by the API request.</typeparam>[DataContract]public class Wrapper<T> : IBaseModel where T : IBaseModel{ /// <summary> /// A list of the objects returned by the API request. 
/// </summary> [DataMember(Name = items)] public List<T> Items { get; set; } /// <summary> /// Whether or not <see cref=Items/> returned by this request are the end of the pagination or not. /// </summary> [DataMember(Name = has_more)] public bool HasMore { get; set; } /// <summary> /// The maximum number of API requests that can be performed in a 24 hour period. /// </summary> [DataMember(Name = quota_max)] public int QuotaMax { get; set; } /// <summary> /// The remaining number of API requests that can be performed in the current 24 hour period. /// </summary> /// <remarks> /// As far as I know, this resets to <see cref=QuotaMax/> at 00:00:00 UTC+0000. /// </remarks> [DataMember(Name = quota_remaining)] public int QuotaRemaining { get; set; } /// <summary> /// The optional number of seconds that the programme making the API requests should stop submitting requests for. /// </summary> /// <remarks> /// Programmes that fail to follow this backoff may be subject to being banned from making API requests for any period of time. /// </remarks> [DataMember(Name = backoff)] public int? Backoff { get; set; }}Then, we have some type of IRequest<T> object which contains the actual data for the API request.Here are all the currently implemented IRequest<T> models./// <summary>/// Represents an API request for Stack Exchange Site information./// </summary>public class InfoRequest : IRequest<Info>{ private const string _endpointUrl = info?; /// <summary> /// The destination endpoint for the API request. /// </summary> public string EndpointUrl => _endpointUrl; /// <summary> /// The Stack Exchange Site to query the <see cref=Info/> for. /// </summary> public string Site { get; set; } /// <summary> /// The final endpoint URL that should be appended to the Stack Exchange API base url. /// </summary> public string FormattedEndpoint { get { var values = new Dictionary<string, string>(); values.Add(nameof(Site).ToLower(), Site); var qs = StringExtensions.BuildQueryString(values); return EndpointUrl + qs; } } /// <summary> /// Returns whether or not the <see cref=Site/> passed verification. /// </summary> /// <returns>True if <see cref=Site/> is not a null, empty or whitespace string, false otherwise.</returns> public bool VerifyRequiredParameters() => !string.IsNullOrWhiteSpace(Site); /// <summary> /// Gets a message indicating how <see cref=Site/> is validated. /// </summary> public string VerificationError => $The value for {nameof(Site)} must be a valid, non-null, and non-whitespace string.;}/// <summary>/// Submits a request to the sites API endpoint, to return a <see cref=Site/> object./// </summary>/// <remarks>/// Endpoint URL is <see cref=EndpointUrl/>./// </remarks>public class SitesRequest : IRequest<Site>{ private const string _endpointUrl = sites?; /// <summary> /// The destination endpoint for the API request. /// </summary> public string EndpointUrl => _endpointUrl; /// <summary> /// Determines how many sites will be returned for each page. /// </summary> /// <remarks> /// As of this writing, this endpoint value is unbounded. Defaults to 1000. /// </remarks> public int PageSize { get; set; } = 1000; public int Page { get; set; } = 1; /// <summary> /// Returns the fully formatted endpoint for this <see cref=SitesRequest/> instance. 
/// </summary> public string FormattedEndpoint { get { var values = new Dictionary<string, string>(); values.Add(nameof(PageSize).ToLower(), PageSize.ToString()); values.Add(nameof(Page).ToLower(), Page.ToString()); var qs = StringExtensions.BuildQueryString(values); return EndpointUrl + qs; } } /// <summary> /// Verifies that the <see cref=PageSize/> is a valid value. /// </summary> /// <returns>True if <see cref=PageSize/> is greater than 0, false otherwise.</returns> public bool VerifyRequiredParameters() => PageSize > 0; /// <summary> /// Gets a message indicating how <see cref=PageSize/> is validated. /// </summary> public string VerificationError => $The value for {nameof(PageSize)} must be an integer greater than 0.;}public class BadgeRequest : IRequest<Badge>{ private const string _endpointUrl = badges?; public string EndpointUrl => _endpointUrl; public OrderType Order { get; set; } public SortType Sort { get; set; } public string Site { get; set; } public int PageSize { get; set; } = 10; public int Page { get; set; } = 1; public string Min { get; set; } public string Max { get; set; } public DateTime? FromDate { get; set; } public DateTime? ToDate { get; set; } public string FormattedEndpoint { get { var values = new Dictionary<string, string>(); values.Add(nameof(Order).ToLower(), Order == OrderType.Ascending ? asc : desc); values.Add(nameof(Sort).ToLower(), Sort.ToString().ToLower()); values.Add(nameof(Site).ToLower(), Site); values.Add(nameof(PageSize).ToLower(), PageSize.ToString()); values.Add(nameof(Page).ToLower(), Page.ToString()); if (Min != null) { values.Add(nameof(Min).ToLower(), Min.ToString()); } if (Min != null) { values.Add(nameof(Max).ToLower(), Max.ToString()); } if (ToDate != null) { values.Add(nameof(ToDate).ToLower(), DateTimeExtensions.ToEpoch(ToDate.Value).ToString()); } if (FromDate != null) { values.Add(nameof(FromDate).ToLower(), DateTimeExtensions.ToEpoch(FromDate.Value).ToString()); } var qs = StringExtensions.BuildQueryString(values); return EndpointUrl + qs; } } public string VerificationError => $The value for {nameof(Site)} must be a valid, non-null, and non-whitespace string; the value for {nameof(PageSize)} must be greater than 0 and less than or equal to {Configuration.MaxPageSize}.; public bool VerifyRequiredParameters() => !string.IsNullOrWhiteSpace(Site) && PageSize > 0 && PageSize <= Configuration.MaxPageSize; public enum SortType { Rank, Name, Type, }}The OrderType is a simple enum:public enum OrderType{ Ascending, Descending,}It's in a different file as I may need to use it in other requests.Finally, there is some type of model that is returned by the API request. 
Here are all the currently implemented models:/// <summary>/// Represents a badge from the Stack Exchange API./// </summary>/// <remarks>/// http://api.stackexchange.com/docs/types/badge/// </remarks>[DataContract]public class Badge : IBaseModel{ /// <summary> /// See <code>award_count</code> /// </summary> [DataMember(Name = award_count)] public int AwardCount { get; set; } /// <summary> /// See <code>badge_id</code> /// </summary> [DataMember(Name = badge_id)] public int BadgeId { get; set; } /// <summary> /// See <code>badge_type</code> /// </summary> [DataMember(Name = badge_type)] public string BadgeType { get; set; } /// <summary> /// See <code>link</code> /// </summary> [DataMember(Name = link)] public string Link { get; set; } /// <summary> /// See <code>name</code> /// </summary> [DataMember(Name = name)] public string Name { get; set; } /// <summary> /// See <code>rank</code> /// </summary> [DataMember(Name = rank)] public string Rank { get; set; } /// <summary> /// See <code>user</code> /// </summary> [DataMember(Name = user)] public ShallowUser User { get; set; }}/// <summary>/// Represents certain statistical data about a Stack Exchange <see cref=Site/>./// </summary>/// <remarks>/// http://api.stackexchange.com/docs/types/info/// </remarks>[DataContract]public class Info : IBaseModel{ /// <summary> /// See <code>answers_per_minute</code> /// </summary> [DataMember(Name = answers_per_minute)] public decimal AnswersPerMinute { get; set; } /// <summary> /// See <code>api_revision</code> /// </summary> [DataMember(Name = api_revision)] public string ApiRevision { get; set; } /// <summary> /// See <code>badges_per_minute</code> /// </summary> [DataMember(Name = badges_per_minute)] public decimal BadgesPerMinute { get; set; } /// <summary> /// See <code>new_active_users</code> /// </summary> [DataMember(Name = new_active_users)] public int NewActiveUsers { get; set; } /// <summary> /// See <code>questions_per_minute</code> /// </summary> [DataMember(Name = questions_per_minute)] public decimal QuestionsPerMinute { get; set; } /// <summary> /// See <code>total_accepted</code> /// </summary> [DataMember(Name = total_accepted)] public int TotalAccepted { get; set; } /// <summary> /// See <code>total_answers</code> /// </summary> [DataMember(Name = total_answers)] public int TotalAnswers { get; set; } /// <summary> /// See <code>total_badges</code> /// </summary> [DataMember(Name = total_badges)] public int TotalBadges { get; set; } /// <summary> /// See <code>total_comments</code> /// </summary> [DataMember(Name = total_comments)] public int TotalComments { get; set; } /// <summary> /// See <code>total_questions</code> /// </summary> [DataMember(Name = total_questions)] public int TotalQuestions { get; set; } /// <summary> /// See <code>total_unanswered</code> /// </summary> [DataMember(Name = total_unanswered)] public int TotalUnanswered { get; set; } /// <summary> /// See <code>total_users</code> /// </summary> [DataMember(Name = total_users)] public int TotalUsers { get; set; } /// <summary> /// See <code>total_votes</code> /// </summary> [DataMember(Name = total_votes)] public int TotalVotes { get; set; }}/// <summary>/// Represents a site relation to a <see cref=Site/> in the <see cref=Site.RelatedSites/> list./// </summary>/// <remarks>/// http://api.stackexchange.com/docs/types/related-site/// </remarks>[DataContract]public class RelatedSite : IBaseModel{ /// <summary> /// See <code>api_site_parameter</code> /// </summary> [DataMember(Name = api_site_parameter)] public string 
ApiSiteParameter { get; set; } /// <summary> /// See <code>name</code> /// </summary> [DataMember(Name = name)] public string Name { get; set; } /// <summary> /// See <code>relation</code> /// </summary> [DataMember(Name = relation)] public string Relation { get; set; } /// <summary> /// See <code>site_url</code> /// </summary> [DataMember(Name = site_url)] public string SiteUrl { get; set; }}/// <summary>/// Represents a partial user on the Stack Exchange API./// </summary>/// <remarks>/// http://api.stackexchange.com/docs/types/shallow-user/// </remarks>[DataContract]public class ShallowUser{ /// <summary> /// See <code>accept_rate</code> /// </summary> [DataMember(Name = accept_rate)] public int? AcceptRate { get; set; } /// <summary> /// See <code>display_name</code> /// </summary> [DataMember(Name = display_name)] public string DisplayName { get; set; } /// <summary> /// See <code>link</code> /// </summary> [DataMember(Name = link)] public string Link { get; set; } /// <summary> /// See <code>profile_image</code> /// </summary> [DataMember(Name = profile_image)] public string ProfileImage { get; set; } /// <summary> /// See <code>reputation</code> /// </summary> [DataMember(Name = reputation)] public int? Reputation { get; set; } /// <summary> /// See <code>user_id</code> /// </summary> [DataMember(Name = user_id)] public int? UserId { get; set; } /// <summary> /// See <code>user_type</code> /// </summary> [DataMember(Name = user_type)] public string UserType { get; set; }}/// <summary>/// Represents a Stack Exchange site./// </summary>/// <remarks>/// http://api.stackexchange.com/docs/types/site/// </remarks>[DataContract]public class Site : IBaseModel{ /// <summary> /// See <code>aliases</code> /// </summary> [DataMember(Name = aliases)] public List<string> Aliases { get; set; } /// <summary> /// See <code>api_site_parameter</code> /// </summary> [DataMember(Name = api_site_parameter)] public string ApiSiteParameter { get; set; } /// <summary> /// See <code>audience</code> /// </summary> [DataMember(Name = audience)] public string Audience { get; set; } /// <summary> /// See <code>closed_beta_date</code> /// </summary> [DataMember(Name = closed_beta_date)] public long? ClosedBetaDate { get; set; } /// <summary> /// A .NET DateTime? representing the <see cref=ClosedBetaDate/>. /// </summary> public DateTime? ClosedBetaDateTime { get { return DateTimeExtensions.FromEpoch(ClosedBetaDate); } set { ClosedBetaDate = DateTimeExtensions.ToEpoch(value); } } /// <summary> /// See <code>favicon_url</code> /// </summary> [DataMember(Name = favicon_url)] public string FaviconUrl { get; set; } /// <summary> /// See <code>high_resolution_icon_url</code> /// </summary> [DataMember(Name = high_resolution_icon_url)] public string HighResolutionIconUrl { get; set; } /// <summary> /// See <code>icon_url</code> /// </summary> [DataMember(Name = icon_url)] public string IconUrl { get; set; } /// <summary> /// See <code>launch_date</code> /// </summary> [DataMember(Name = launch_date)] public long LaunchDate { get; set; } /// <summary> /// A .NET DateTime representing the <see cref=LaunchDate/>. 
/// </summary> public DateTime LaunchDateTime { get { return DateTimeExtensions.FromEpoch(LaunchDate); } set { LaunchDate = DateTimeExtensions.ToEpoch(value); } } /// <summary> /// See <code>logo_url</code> /// </summary> [DataMember(Name = logo_url)] public string LogoUrl { get; set; } /// <summary> /// See <code>markdown_extensions</code> /// </summary> [DataMember(Name = markdown_extensions)] public List<string> MarkdownExtensions { get; set; } /// <summary> /// See <code>name</code> /// </summary> [DataMember(Name = name)] public string Name { get; set; } /// <summary> /// See <code>open_beta_date</code> /// </summary> [DataMember(Name = open_beta_date)] public long? OpenBetaDate { get; set; } /// <summary> /// A .NET DateTime? representing the <see cref=OpenBetaDate/>. /// </summary> public DateTime? OpenBetaDateTime { get { return DateTimeExtensions.FromEpoch(OpenBetaDate); } set { OpenBetaDate = DateTimeExtensions.ToEpoch(value); } } /// <summary> /// See <code>related_sites</code> /// </summary> [DataMember(Name = related_sites)] public List<RelatedSite> RelatedSites { get; set; } /// <summary> /// See <code>site_state</code> /// </summary> [DataMember(Name = site_state)] public string SiteState { get; set; } /// <summary> /// See <code>site_type</code> /// </summary> [DataMember(Name = site_type)] public string SiteType { get; set; } /// <summary> /// See <code>site_url</code> /// </summary> [DataMember(Name = site_url)] public string SiteUrl { get; set; } /// <summary> /// See <code>styling</code> /// </summary> [DataMember(Name = styling)] public Styling Styling { get; set; } /// <summary> /// See <code>twitter_account</code> /// </summary> [DataMember(Name = twitter_account)] public string TwitterAccount { get; set; }}/// <summary>/// Represents the <see cref=Site.Styling/>./// </summary>/// <remarks>/// http://api.stackexchange.com/docs/types/styling/// </remarks>[DataContract]public class Styling : IBaseModel{ /// <summary> /// See <code>link_color</code> /// </summary> [DataMember(Name = link_color)] public string LinkColor { get; set; } /// <summary> /// See <code>tag_background_color</code> /// </summary> [DataMember(Name = tag_background_color)] public string TagBackgroundColor { get; set; } /// <summary> /// See <code>tag_foreground_color</code> /// </summary> [DataMember(Name = tag_foreground_color)] public string TagForegroundColor { get; set; }}Using the API is super simple, which is how I wanted it. It's pretty much four lines of code (if you don't do inline construction) to send an anonymous request:var config = new Configuration();var handler = new Handler(config);var request = new SitesRequest();var sites = handler.SubmitRequest(request);To send a non-anonymous request with an application key, simply put the key in the config object:var config = new Configuration();config.Key = Some SE API Key.;var handler = new Handler(config);var request = new SitesRequest();var sites = handler.SubmitRequest(request);You can also compress things into fewer lines pretty easily.var sites = new Handler(new Configuration { Key = Some SE API Key. }).SubmitRequest(new SitesRequest());Feel free to comment on anything and everything. I want to make sure I'm doing this in an understandable, modular and powerful way before I proceed. | Accessing the Stack Exchange API | c#;object oriented;.net;stackexchange;polymorphism | Overall I like your code. It is understandable, (almost) easy to read and well structured. But I have some small pointers I would like to address. 
In the Handler.SubmitRequest<T>() method you have var url = Configuration.GetFormattedUrl(request); where I would preffer using the explicit type so one just looking at your code doesn't need to check that GetFormattedUrl() returns a string. Another thing is the name of that method because it isn't only submitting a request but is processing the response as well. Maybe ProcessRequest would be a better name. If one can use the Configuration class without setting any property he/she should have the possibility to use the Handler class without having to pass a Configuration to its constructor. You could add an parameterless constructor which passes a new Configuration() to the current constructor. The FromattedEnpointUrl property of the InfoRequest looks odd to me. Whats the need for a dictionary if it contains only one key and value ? Why not let the StringExtensions.BuildQueryString() method have 2 string parameters ? Some of the properties which aren't autoimplemented ones are hard to read because you placed both the getter and setter on the same line. This leads to the need to scroll to the right which is removing readability. Like this public DateTime? OpenBetaDateTime { get { return DateTimeExtensions.FromEpoch(OpenBetaDate); } set { OpenBetaDate = DateTimeExtensions.ToEpoch(value); } } IMO this would be better like so public DateTime? OpenBetaDateTime { get { return DateTimeExtensions.FromEpoch(OpenBetaDate); } set { OpenBetaDate = DateTimeExtensions.ToEpoch(value); } } Btw, I hope the DateTimeExtensions.ToEpoch() method can handle nullable's well. I don't like the AppendKey() method of the Configuration class either. Having 2 ternary expressions in a oneliner is IMO too much. |
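To make the BuildQueryString point concrete, here is a minimal sketch of the two-string overload I mean. The dictionary-based method below is only a stand-in for whatever your real implementation does, so treat this as an illustration rather than a drop-in change:

using System;
using System.Collections.Generic;
using System.Linq;

public static class StringExtensions
{
    // Suggested convenience overload for the single key/value case.
    public static string BuildQueryString(string key, string value)
    {
        return BuildQueryString(new Dictionary<string, string> { { key, value } });
    }

    // Stand-in for the dictionary-based method you already have;
    // your real implementation may differ.
    public static string BuildQueryString(IDictionary<string, string> values)
    {
        return string.Join("&", values.Select(p =>
            Uri.EscapeDataString(p.Key) + "=" + Uri.EscapeDataString(p.Value)));
    }
}

A request whose query string only ever has one parameter could then return EndpointUrl + StringExtensions.BuildQueryString(key, value) directly, without building a dictionary first.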
_unix.12489 | Is there a Linux equivalent of the note-taking software Notational Velocity? | Notational Velocity on Linux? | linux;software rec;notes | NVpy is a cross-platform simplenote-syncing note-taking app inspired by Notational Velocity.Looking at screenshots NVpy looks like a closer match than TomBoy.Here's a little write-up of NVpy on Lifehacker.Though this guy didn't like it and tweaked gvim a bit instead.Looking more, I found NVim which is a clone of the mac app Notational Velocity in vim. It is designed for fast plain-text note-taking and retrieval. |
_unix.117508 | I have an Arduino with an Ethernet shield that is controlling a relay, which controls the lights. I usually just use the phone to send an HTTP GET request from a widget in Tasker, but I would like to be able to just type light on or light_on in the command line/terminal and have it turn the lights on. I found that I could use wget to send the request (wget 192.168.1.177?s=1), but wget then listens for output afterwards, and the Arduino doesn't output anything; it just listens for the GET. So in conclusion, I need to be able to make a custom command, and a command to customize, to send an HTTP GET request. It doesn't have to use wget; that just seemed easiest since it was already installed. Oh, and my OS is Ubuntu 12.04 LTS. | Custom command wget without response | ubuntu;command line;wget;arduino | null
_unix.352805 | I have managed to install Ubuntu on a 128 GB SD card (class 10), dual booting with Windows 10. However, Ubuntu runs very slowly: it boots slowly, it tends to halt, and overall performance is poor. The bigger problem is that the Wi-Fi connection disconnects every two minutes. Does this slowness mean that it is a bad idea to install Ubuntu on an SD card, or does it mean that something is wrong somewhere? Here is the read/write speed of the SD card:
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB, 80 MiB) copied, 0.219472 s, 382 MB/s | Why is Ubuntu so slow on the SD card? | sd card | null
_unix.12501 | For some reason, today, every time I hit Tab in the terminal this shows up:
cat bash: warning: setlocale: LC_CTYPE: cannot change locale (en_CA)
Display all 150 possibilities? (y or n)
This particular one happens when I type cat and then hit Tab. I never changed any setting or anything. Does anyone know what's going on? | Weird Stuff in Terminal When I Hit Tab | bash;autocomplete;locale | null
_unix.57895 | A while back, I installed Backtrack 5 R2 to Dual Boot on a Windows 7 PC. It worked fine for a while, but now when I try to load it, the screen gets all messed up (tried to get a picture with my phone, but it wasn't working too well). The last line before it freezes says:fb: conflicting fb hw usage: nouveaufb vs VESA VGA - removing generic driverSince it last worked, the only changes I can think of are that the computer is now connected via the ethernet cable to the router, and I upgraded the graphics card (to Nvidia). Because of the last line, I personally would put my money on the latter, but I still have no clue how to fix this. Can someone help me? | Backtrack 5 R2 Dual-Boot w/Windows 7 No Longer Loads | windows;dual boot | null |
_softwareengineering.122150 | UPDATEI work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour which is insufficient.Team members directly modify code on theserver. This keeps code out of sync. Requiring comparison just to be sure you are working with the latest code. And complex merge problems ariseTime estimates offered by developers exclude time required to fix any of these problems. So, if I say nono it will take 10x longer...I have to constantly explain these issues and risk myself because now management may perceive me as slow.The physical files on the server differ in unknown ways over ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation which I am not able to obtain.Other projects are falling out of sync. Developers continue to have a distrust of source control and therefore compound the issue by not using source control.Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly mis-used and source control continually bypassed, it is error prone indeed. Therefore, the evidence speaks for itself in their view.Developers argue that directly modifying server code, bypassing TFS saves time. This is also difficult to argue. Because the merge required to synchronize the code to start with is time consuming. Multiply this by the 10+ projects we manage. Permanent files are often stored in the same directory as the web project. So publishing (full publish) erases these files that are not in source control. This also drives distrust for source control. Because publishing breaks the project. Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging as these locations are not set in web.config and often exist across multiple code points.So, the culture persists itself. Bad practice begets more bad practice. Bad solutions drive new hacks to fix much deeper, much more time consuming problems. Servers, hard drive space are extremely difficult to come by. Yet, user expectations are rising.What can be done in this situation? | How can I convince cowboy programmers to use source control? | version control;teamwork;team foundation server;cowboy coding | It's not a training issue, it's a human factors issue. They do not want to, and are creating road blocks. Deal with the broken group dynamics, what is the route cause of their objection - usually fear, is it just fear of change, or is it more sinister. No professional developer today, or for the last 20 years, has resisted source control. Once, about 30 or 40 years ago, when computers were slow, source control even slower and programs consisted of one 500 line file, it was a pain and there were valid reasons not to use it. These objections can only be coming from the worst kind of cowboys I can think of.Is the system forced on them making their lives difficult in some way? Find out what it is, and change the system to invalidate the objection. Repeat until done. I suggest looking at GIT or Mercurial. 
You can create a repository in each source code tree, they won't even notice and can keep on hacking the way they do now. You can track the changes they have hacked into the code base, make commits, merge them into other source trees etc. When they see you do a merge of a weeks worth of effort into another product in seconds, they might change their ideas. When you roll back one of their screw ups with one command, and save their ass, they might even thank you (don't count on it). At worst, you look good in front of the boss and, for a bonus, make them look like the cowboys they are. Merging would take not only a great knowledge of the projectIn that case, I am afraid you're up the proverbial creek with no paddle. If merging is not an option, neither is implementing it, so you are saying that you can no longer add features you already have in one branch (term used loosely) to another. If I were you I would reconsider my career prospects... |
_codereview.69571 | I got this background console script running, which pulls all necessary XML files from the server and stores them locally for faster performance. These XML files contain Graphs full of data, which will be shown inside the application. Right now the user can get to the graphs in about 45 seconds if he directly goes towards them. And that is just about enough time for the files to download before the user can reach them. But as years go by, more graphs will become available and they might not be done downloading by the time the user gets there.Is it possible to speed up the process of fetching those files? Comments on naming and readability are also appreciated.using System;using System.Collections.Generic;namespace MonavisaBackGroundLoading{ // Request form class to log into monavisa public class MonavisaRequestForm { #region variables public readonly string username; public readonly string password; public string url; public System.Net.WebClient webclient; #endregion public MonavisaRequestForm(string username, string password, string url, ref System.Net.WebClient webclient) { this.username = username; this.password = password; this.url = url; this.webclient = webclient; } } class Program { static System.DateTime dt = System.DateTime.Now; static bool firstToTrigger = false; static void Main(string[] args) { //Check if Pre-Fetch has been run before, if so the application will keep this in mind and re-load the last month //So that this data is fully up to date string[] text = null; if(System.IO.File.Exists(string.Format(@{0}\Nioo Graph Data {1}.xml, C:/Users/LAB2-Computer/Desktop/nioo 2.0/NIOO V3.1, last pull date as registerd by apllication.txt))) text = System.IO.File.ReadAllLines(string.Format(@{0}\Nioo Graph Data {1}.xml, C:/Users/LAB2-Computer/Desktop/nioo 2.0/NIOO V3.1, last pull date as registerd by apllication.txt)); Console.WriteLine(text); System.Threading.Thread.Sleep(5000); //Start download of files, from start year till current year. Console.WriteLine(Initiating downloading XML); for(var i = 2013; i <= dt.Year; i++) { Console.WriteLine(i); for (var ii = 0; ii < 12; ii++) { string dateAndTime; //Modified string for the file name string fileDateAndTime; //As the date presumes a end date we have to set 12 to 1 so that the 12th month gets pulled //For the rest we just need to do +2 if (ii == 12) { dateAndTime = string.Format({0}-{1}-{2}+12:00, 01, 01, i); fileDateAndTime = string.Format({0}-{1}-{2}, 01, 01, i); } else { dateAndTime = string.Format({0}-{1}-{2}+12:00, 01, ii + 2, i); fileDateAndTime = string.Format({0}-{1}-{2}, 01, ii + 1, i); } //Check if the file already exists if (System.IO.File.Exists(string.Format(@{0}\Nioo Graph Data {1}.xml, System.IO.Directory.GetParent(AppDomain.CurrentDomain.BaseDirectory).ToString(), fileDateAndTime))) { //If it exists but is the last downloaded in the previous download, download it again to update that month if (text != null) { if (i == Convert.ToInt32(text[1]) && ii == Convert.ToInt32(text[0])) { CreateRequest(dateAndTime, fileDateAndTime); continue; } else continue; } continue; } else { //File does not exist, create a download request. CreateRequest(dateAndTime, fileDateAndTime); continue; } } } //Update the previous download sesion with the latest date. 
string[] lines = {dt.Month.ToString(),dt.Year.ToString(),End of program}; System.IO.File.WriteAllLines(string.Format(@{0}\Nioo Graph Data {1}.xml, System.IO.Directory.GetParent(System.IO.Directory.GetParent(AppDomain.CurrentDomain.BaseDirectory).ToString()).ToString(), last pull date as registerd by apllication), lines); } //Create a login request for monavisa with login info for the date requested private static System.Net.WebClient myWebClient = new System.Net.WebClient(); private static void CreateRequest(string dateAndTime, string fileDateAndTime) { MonavisaRequestForm myRequest = new MonavisaRequestForm ( foo, bar, string.Format(http://www.monavisa.info/CreateGraphData?graphs=1&graph[0]=1226&todate={0}&period={1}&step={2}&b_id=194&inter=1&other_graph=false, dateAndTime, 3, 1), ref myWebClient ); MonavisaFetch.instance.PreObtainData(ref myRequest, dateAndTime, fileDateAndTime); } } public class MonavisaFetch { private static MonavisaFetch fetchmonavisa; private MonavisaFetch() { } public static MonavisaFetch instance { get{ if (fetchmonavisa == null){ fetchmonavisa = new MonavisaFetch(); } return fetchmonavisa; } } private Queue<MonavisaRequestForm> requestQueue = new Queue<MonavisaRequestForm>(); private System.Timers.Timer timer = new System.Timers.Timer(3000); private bool initialized = false; //Start a timer for the fetch public void initialize() { timer.AutoReset = true; timer.Elapsed += new System.Timers.ElapsedEventHandler(onTimerElapsed); timer.Start(); } //If timer ends qeue form private void onTimerElapsed(object sender, System.Timers.ElapsedEventArgs args) { if (requestQueue.Count > 0) { MonavisaRequestForm tmpRequest = requestQueue.Peek(); GetData(ref tmpRequest); } } public void GetData(ref MonavisaRequestForm request) { if (!initialized) initialize(); //Fetch the document using local php login try { if (request.username != null) { if (!request.webclient.IsBusy && requestQueue.Count == 0) { request.url = request.url.Replace(&, %26); request.url = request.url.Replace(+, %2B); Uri uri = new Uri(string.Format(http://localhost/login.php?username={0}&password={1}&request={2}, request.username, request.password, request.url)); request.webclient.DownloadStringAsync(uri); } else if (!request.webclient.IsBusy && requestQueue.Count > 0) { Uri uri = new Uri(string.Format(http://localhost/login.php?username={0}&password={1}&request={2}, requestQueue.Peek().username, requestQueue.Peek().password, requestQueue.Peek().url)); requestQueue.Peek().webclient.DownloadStringAsync(uri); requestQueue.Dequeue(); } else { requestQueue.Enqueue(request); } } else { Uri uri = new Uri(request.url); request.webclient.DownloadStringAsync(uri); } } catch (System.Net.WebException ex) { if (ex.Status != System.Net.WebExceptionStatus.ProtocolError) { throw; } } } public void PreObtainData(ref MonavisaRequestForm request, string dateAndTime, string fileDateAndTime) { if (!initialized) initialize(); try { if (!request.webclient.IsBusy && requestQueue.Count == 0) { request.url = request.url.Replace(&, %26); request.url = request.url.Replace(+, %2B); Uri uri = new Uri(string.Format(http://localhost/login.php?username={0}&password={1}&request={2}, request.username, request.password, request.url)); request.webclient.DownloadFile(uri, @Nioo Graph Data + fileDateAndTime + .xml); } else if (!request.webclient.IsBusy && requestQueue.Count > 0) { Uri uri = new Uri(string.Format(http://localhost/login.php?username={0}&password={1}&request={2}, requestQueue.Peek().username, requestQueue.Peek().password, 
requestQueue.Peek().url)); requestQueue.Peek().webclient.DownloadStringAsync(uri); requestQueue.Dequeue(); } else { requestQueue.Enqueue(request); } } catch (System.Net.WebException ex) { if (ex.Status != System.Net.WebExceptionStatus.ProtocolError) { throw ; } } } }} | A faster way to obtain multiple XML files from a server | c#;performance | If you call continue; in an if block the else isn't needed any more. So thisif (text != null){ if (i == Convert.ToInt32(text[1]) && ii == Convert.ToInt32(text[0])) { CreateRequest(dateAndTime, fileDateAndTime); continue; } else continue;}continue; could be replaced by if (text != null){ if (i == Convert.ToInt32(text[1]) && ii == Convert.ToInt32(text[0])) { CreateRequest(dateAndTime, fileDateAndTime); }}continue; or much better if we see the whole if..else beast //Check if the file already existsif (System.IO.File.Exists(string.Format(@{0}\Nioo Graph Data {1}.xml, System.IO.Directory.GetParent(AppDomain.CurrentDomain.BaseDirectory).ToString(), fileDateAndTime))) { //If it exists but is the last downloaded in the previous download, download it again to update that month if (text != null) { if (i == Convert.ToInt32(text[1]) && ii == Convert.ToInt32(text[0])) { CreateRequest(dateAndTime, fileDateAndTime); } }}else{ //File does not exist, create a download request. CreateRequest(dateAndTime, fileDateAndTime);}Thisstring dateAndTime;//Modified string for the file namestring fileDateAndTime;//As the date presumes a end date we have to set 12 to 1 so that the 12th month gets pulled//For the rest we just need to do +2if (ii == 12){ dateAndTime = string.Format({0}-{1}-{2}+12:00, 01, 01, i); fileDateAndTime = string.Format({0}-{1}-{2}, 01, 01, i);}else{ dateAndTime = string.Format({0}-{1}-{2}+12:00, 01, ii + 2, i); fileDateAndTime = string.Format({0}-{1}-{2}, 01, ii + 1, i);} can be replaced by string dateAndTime = string.Format({0}-{1}-{2}+12:00, 01, ii + 2, i);string fileDateAndTime = string.Format({0}-{1}-{2}, 01, ii + 1, i);because ii will never be 12Basically you should for optimization purpose store results of the same operation inside variables like Convert.ToInt32(text[1]) avoid checks where the result won't change like if (text != null) So your main() method can be simplified to static void Main(string[] args){ int year = -1; int month = -1; string[] text = null; if (System.IO.File.Exists(string.Format(@{0}\Nioo Graph Data {1}.xml, C:/Users/LAB2-Computer/Desktop/nioo 2.0/NIOO V3.1, last pull date as registerd by apllication.txt))) { text = System.IO.File.ReadAllLines(string.Format(@{0}\Nioo Graph Data {1}.xml, C:/Users/LAB2-Computer/Desktop/nioo 2.0/NIOO V3.1, last pull date as registerd by apllication.txt)); if (text != null && text.Length == 3) { month = Convert.ToInt32(text[0]); year = Convert.ToInt32(text[1]); } } String directory = System.IO.Directory.GetParent(AppDomain.CurrentDomain.BaseDirectory).ToString(); for (var i = 2013; i <= dt.Year; i++) { Console.WriteLine(i); for (var ii = 0; ii < 12; ii++) { string dateAndTime = string.Format({0}-{1}-{2}+12:00, 01, ii + 2, i); string fileDateAndTime = string.Format({0}-{1}-{2}, 01, ii + 1, i); string fileName = string.Format(@{0}\Nioo Graph Data {1}.xml, directory, fileDateAndTime); if (!System.IO.File.Exists(fileName) || (i == year && ii == month)) { CreateRequest(dateAndTime, fileDateAndTime); } } } string[] lines = { dt.Month.ToString(), dt.Year.ToString(), End of program }; System.IO.File.WriteAllLines(string.Format(@{0}\Nioo Graph Data {1}.xml, directory, last pull date as registerd by 
apllication), lines);} |
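One more thought on the actual speed question, separate from the clean-up above and only a sketch: if the Monavisa server tolerates concurrent requests (your 3-second timer suggests it may be rate limited, so check that first), the monthly files could be downloaded in parallel instead of one after another. The helper below is hypothetical; it assumes you build the full login/request URL per month the same way you already do and pass in the URL-to-file pairs:

using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;

public static class ParallelDownloadSketch
{
    // urlsToFiles maps each request URL to the local XML file it should be saved as.
    public static async Task DownloadAllAsync(IDictionary<string, string> urlsToFiles)
    {
        var tasks = new List<Task>();
        foreach (var pair in urlsToFiles)
        {
            tasks.Add(DownloadOneAsync(pair.Key, pair.Value));
        }

        // All downloads run concurrently; this returns when the last one finishes.
        await Task.WhenAll(tasks);
    }

    private static async Task DownloadOneAsync(string url, string file)
    {
        // One WebClient per transfer: a single instance cannot run
        // several operations at the same time.
        using (var client = new WebClient())
        {
            await client.DownloadFileTaskAsync(new Uri(url), file);
        }
    }
}

Even a modest degree of overlap (two or three transfers at a time) usually shortens the total wall-clock time considerably when the server round trips are the bottleneck.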
_unix.232307 | For the purpose of building an index I am searching for some words in a bunch of LaTeX files. This process is complicated by the fact that LaTeX has a discretionary hyphen command \-, which indicates to LaTeX at which places it can break a word. I want to include it in my search, but so far I have not succeeded in doing so. For example, I would need an expression that would match all of
pdapracra
pda\-pracra
p\-da\-pra\-c\-ra
or differently hyphenated instances of the same word. I understand that to match the backslash beginning a LaTeX command one has to type four backslashes, such as $ grep \\\\mycommand *tex. In vim I can search for such an expression via /p\(\\-\)*da\(\\-\)*pra\(\\-\)*cra, so I thought in grep it would be something like grep p\(\\\\-\)\?da *tex (and so on, but already this one didn't match anything). | How to grep for a word taking into account hyphenation? | grep;latex | null
_opensource.2260 | I'm working on a project I would like to release as an open source project in the future. In my work I'm using a number of third party libraries, either from open source projects or released to the public domain. SQLite, Log4Net, OpenJPEG and OpenSSL are four typical examples. Can I freely distribute the required libraries with my project as long as I include the related licensing documents, and how does it influence my own licensing options? I would like to use a BSD licensing model. | How to deal with third party libraries? | license compatibility;license recommendation;bsd | You have to make sure that linking to the library doesn't introduce incompatibilities/extra restrictions. I.e., according to the FSF, just arranging for the program to link against a GPLed library forces the whole to be distributed under the GPL (there is no consensus on this, nor are there binding legal precedents). Check the licences carefully, ask e.g. here for detailed analysis, and possibly retain a lawyer to look into the matter.
_cstheory.38048 | Could we find a fast integer factorization algorithm for any large semiprime $n=pq$, if we know that $p \mid q-1$? | Factorizing semiprime $n=pq$ with $p \mid q-1$ | ds.algorithms;factoring | null |
_unix.1429 | I recently migrated from vixie-cron to fcron on my laptop. I have a question: how does fcron know if a particular job was run in a given period of time? As a file can change while running, what constitutes a different job? If I change, for example, from running daily to weekly, will it be re-run? | How does fcron know whether the job was run? | cron | During initialization (and at any time a user runs fcrontab -z), fcron loads and compiles the fcrontabs. Fcron then computes the time before the next job execution and sleeps for that time. The time remaining until the next execution is saved each time the system is stopped.
_webmaster.17846 | According to Webmaster Tools (sitemaps), the number of pages for one of my sites in the Google index has been steadily declining over the last few days. In the space of a week the number of pages has dropped from over 1000 to just over 400. Why might this be? Has anyone else experienced something similar recently? | Number of pages in Google web index falling? | google | null
_unix.232728 | All I need to do is, search for CCSID in this file, wherever it finds CCSID, the CHAR in that line should be replaced with NCHAR and VARCHAR in that line should be replaced with NVARCHAR2.I tried using sed and awk. But I couldn't find a perfect way to solve this issue. CREATE TABLE JCR.ICMSTSYSCONTROL ( LIBRARYSERVERID INTEGER NOT NULL , LANGUAGECODE CHAR(3) CCSID 37 NOT NULL , SYSSEGMENTID SMALLINT NOT NULL , SYSSEGMENTTHRESHLD INTEGER NOT NULL , ACLBINDINGLEVEL SMALLINT NOT NULL , LIBRARYACLCODE INTEGER NOT NULL , PUBACCESSENABLED SMALLINT NOT NULL , DFLTACLCHOICE SMALLINT NOT NULL , SMSCHOICE SMALLINT NOT NULL , TRACELEVEL SMALLINT NOT NULL , MAXUSERS INTEGER NOT NULL , MAXUSERACTION SMALLINT NOT NULL , CURRENTUSERS INTEGER NOT NULL , MAXLOGONRETRY SMALLINT NOT NULL , PASSWORDDURATION SMALLINT NOT NULL , SYSADMINEVENTFLAG SMALLINT NOT NULL , SYSTEMFLAG SMALLINT NOT NULL , DATABASETYPE SMALLINT NOT NULL , MAXTXDURATION INTEGER NOT NULL , MAXRESULTSETSIZE INTEGER NOT NULL , ALLOWTRUSTEDLOGON SMALLINT NOT NULL , DOCROUTINGUPDATE INTEGER NOT NULL , DOCROUTINGFREQ SMALLINT NOT NULL , PLATFORM SMALLINT NOT NULL , SYSTIMEOUT SMALLINT NOT NULL , TIEUSERID CHAR(175) CCSID 37 DEFAULT NULL , TIEPASSWORD CHAR(72) FOR BIT DATA DEFAULT NULL , DATABASENAME VARCHAR(128) CCSID 37 NOT NULL , DBSCHEMANAME VARCHAR(128) CCSID 37 NOT NULL , TRACEFILENAME VARCHAR(128) CCSID 37 DEFAULT NULL , ENCRYPTIONKEY VARCHAR(128) FOR BIT DATA NOT NULL , KEEPTRACEOPEN SMALLINT NOT NULL , MULTIPLETRACEFILES SMALLINT NOT NULL , MAXTRACEFILESIZE INTEGER NOT NULL , PATHICMROOT VARCHAR(128) CCSID 37 NOT NULL , PATHICMDLL VARCHAR(128) CCSID 37 NOT NULL , SUSPENDSERVERTIME TIMESTAMP DEFAULT NULL , RMSTATUSINTERVAL SMALLINT NOT NULL , RMSTATUSTIMEOUT SMALLINT NOT NULL , TIEINTERVAL SMALLINT NOT NULL , LSCURRENTVERSION VARCHAR(128) CCSID 37 NOT NULL , TRACEUSER CHAR(175) CCSID 37 DEFAULT NULL , DIMSGDIGESTALGO SMALLINT NOT NULL DEFAULT 0 , DIENCRYPTIONALGO SMALLINT NOT NULL DEFAULT 0 , CONSTRAINT JCR.ICMSTSYSCONTROLPK PRIMARY KEY( LIBRARYSERVERID ) ) ;The output should be like:CREATE TABLE JCR.ICMSTSYSCONTROL ( LIBRARYSERVERID INTEGER NOT NULL , LANGUAGECODE NCHAR(3) CCSID 37 NOT NULL , SYSSEGMENTID SMALLINT NOT NULL , SYSSEGMENTTHRESHLD INTEGER NOT NULL , ACLBINDINGLEVEL SMALLINT NOT NULL , LIBRARYACLCODE INTEGER NOT NULL , PUBACCESSENABLED SMALLINT NOT NULL , DFLTACLCHOICE SMALLINT NOT NULL , SMSCHOICE SMALLINT NOT NULL , TRACELEVEL SMALLINT NOT NULL , MAXUSERS INTEGER NOT NULL , MAXUSERACTION SMALLINT NOT NULL , CURRENTUSERS INTEGER NOT NULL , MAXLOGONRETRY SMALLINT NOT NULL , PASSWORDDURATION SMALLINT NOT NULL , SYSADMINEVENTFLAG SMALLINT NOT NULL , SYSTEMFLAG SMALLINT NOT NULL , DATABASETYPE SMALLINT NOT NULL , MAXTXDURATION INTEGER NOT NULL , MAXRESULTSETSIZE INTEGER NOT NULL , ALLOWTRUSTEDLOGON SMALLINT NOT NULL , DOCROUTINGUPDATE INTEGER NOT NULL , DOCROUTINGFREQ SMALLINT NOT NULL , PLATFORM SMALLINT NOT NULL , SYSTIMEOUT SMALLINT NOT NULL , TIEUSERID NCHAR(175) CCSID 37 DEFAULT NULL , TIEPASSWORD CHAR(72) FOR BIT DATA DEFAULT NULL , DATABASENAME NVARCHAR2(128) CCSID 37 NOT NULL , DBSCHEMANAME NVARCHAR2(128) CCSID 37 NOT NULL , TRACEFILENAME NVARCHAR2(128) CCSID 37 DEFAULT NULL , ENCRYPTIONKEY VARCHAR(128) FOR BIT DATA NOT NULL , KEEPTRACEOPEN SMALLINT NOT NULL , MULTIPLETRACEFILES SMALLINT NOT NULL , MAXTRACEFILESIZE INTEGER NOT NULL , PATHICMROOT NVARCHAR2(128) CCSID 37 NOT NULL , PATHICMDLL NVARCHAR2(128) CCSID 37 NOT NULL , SUSPENDSERVERTIME TIMESTAMP DEFAULT NULL , RMSTATUSINTERVAL SMALLINT NOT NULL , 
RMSTATUSTIMEOUT SMALLINT NOT NULL , TIEINTERVAL SMALLINT NOT NULL , LSCURRENTVERSION NVARCHAR2(128) CCSID 37 NOT NULL , TRACEUSER NCHAR(175) CCSID 37 DEFAULT NULL , DIMSGDIGESTALGO SMALLINT NOT NULL DEFAULT 0 , DIENCRYPTIONALGO SMALLINT NOT NULL DEFAULT 0 , CONSTRAINT JCR.ICMSTSYSCONTROLPK PRIMARY KEY( LIBRARYSERVERID ) ) ;How to solve this? | Find and replace a string if particular pattern is found in a line | shell script;text processing;sed;awk | With sed:sed '/CCSID/ { s/ CHAR(/ NCHAR(/; s/ VARCHAR(/ NVARCHAR2(/ }' fileThe first pattern searches for lines with containing CCSID. Then the part inside {...} takes effect.s/ CHAR(/ NCHAR(/; replaces CHAR( (with a leading space) with NCHAR(.s/ VARCHAR(/ NVARCHAR2(/ and replace VARCHAR( with NVARCHAR2(. |
_webmaster.65405 | I was thinking of making a 'load on scroll' image gallery plugin, i.e. I do not want the image to load until it is in view in the viewport. That got me thinking in terms of implementation.I don't want to use markup that would mean that the images were not indexed by Google, but the question is, what can you do?You can't have img tags but then stop the files from being loaded because that defeats the purpose completely. You could not care about the markup and use image site maps instead, but that makes life harder for the person using the plugin. What, if anything, can one do? | Does Google index images in anything other than an img tag? | seo;html;images | Google indexes images that are either:In an image tag -- <img src=foo.jpg>The target of link -- <a href=foo.jpg>If you want to remove an image from the page but still have it indexed from that page, make a link to it on that page.This is a very good technique for image search optimization anyway. Google ranks very large images better than smaller images. The larger size images are harder to fit on page and take longer for users to download (slowing down the entire page load). A technique that works well is to use the thumbnail in the img tag but link to the larger image: <a href=/img/full/foo.jpg><img src=/img/thumb/foo.jpg></a> |
_unix.42749 | I was looking through the list of packages with dselect, but pressed Return twice by mistake, thereby making it confirm and quit the [S]elect option.When I go to the [I]nstall option, it's now suggesting to install a number of new packages that I don't want (and that have nothing to do with what I was looking for in the first place).Since I haven't proceeded with the installation itself, is there a way to reset the selection to what it was before I selected new packages, without going through the list one by one and pressing - for each package? (It doesn't matter if it's done via dselect or via another related command.)EDIT: (Adding an example)I've tried on another machine where dselect is installed. Let's assume that package gnugo isn't installed (that's just an example).Launch delect and choose [S]elect to get the list.Search for gnugo in this list (using /, if you're not familiar with dselect).Select it with +.Press Return to validate the suggestions and Return again to validate go back to the main menu (the mistake I made).Go to [I]nstall. It will now say:Reading package lists... DoneBuilding dependency tree Reading state information... DoneThe following NEW packages will be installed gnugo:i386 libgpm2:i386 libncurses5:i386 libreadline6:i386 libtinfo5:i3860 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.Need to get 1,926 kB of archives.After this operation, 9,634 kB of additional disk space will be used.Do you want to continue [Y/n]? Here, choose not to continue (n).Quit dselect.dpkg --get-selections | grep gnugo yields nothing at all.Start dselect again, go straight to [I]nstall again, the packages will still be selected for installation.Of course, I can go back into the [S]elect list, search for gnugo, press _ to deselect it, but in a more complex case, you may have to go through the new packages list one by one.[I]nstall in dselect is visibly a front-end to apt-get install, but I'm not sure where it gets its selection from. It appears dpkg --get-selection is not it.As far as dselect is concerned, I'd like to reset it in a state where everything maked with *** stays at it was, but what's now marked with only ** (and not installed yet) goes back to __, without having to go manually through the suggested list from the [I]nstall menu.EDIT 2:This is clearly related to the content of /var/lib/dpkg/status, which contains this entry:Package: gnugoStatus: install ok not-installedPriority: optionalSection: gamesIf I change this manually to Status: deinstall ok not-installed, it disappears from the selection in dselect (which makes sense).What I would like is a general way of turning whatever says Status: install ok not-installed into Status: deinstall ok not-installed (leaving packages saying Status: install ok installed unaffected). | Reset dselect selection before package installation | ubuntu;debian;package management;apt | It turns out dpkg --get-selections doesn't list what's marked for installation and not yet installed, but dpkg -l '*' does, and starts these lines with in.As a result, the following line resets these selections:dpkg -l '*' | grep '^in ' | awk '{ print $2 deinstall }' | dpkg --set-selections |
_codereview.60061 | I'm just trying to be more pythonic in my coding style and was wondering if this is a good way to get a list of seven days leading up to and including endDate:daysDelta = 6lastSeven = [endDate - datetime.timedelta(days=daysDelta)]for x in range(1,7): myOffset = daysDelta - x lastSeven.append(endDate - datetime.timedelta(days=myOffset)) | Getting a date range | python;datetime | You've hard-coded both 6 and 7, which means that if you either need to change the range, you'll have to change both numbers. That's a bug waiting to happen.The most Pythonic way would be to use a list comprehension, so that you define the entire list at once, rather than appending one element at a time.daysDelta = 6lastSeven = [endDate - datetime.timedelta(days=days) for days in range(daysDelta, -1, -1)] |
_unix.197736 | I am a little stumped on this one and maybe I have been staring at this for too long...but I cannot get my Mac (10.10.3) machine to connect to my Oracle Linux 7 (CentOS/RH 7) server with its firewall up. (I am trying to configure for NFSv3 only; I don't need v4)I have verified that NFS is working by issuing this command on the Mac (firewall OFF on OL 7 server) showmount -e myserver.home And I get this back:Export list for myserver:/var/www 192.168.10.0/24If I try connecting with Command-K and enter nfs://myserver.home it makes the connection and I can browse, edit and delete files as expected.Next, I enable the firewall on the OL7 server. I also open the ports as specified by Oracle OL 7 Documentation and when I issue the showmount command again, I get this error message:showmount: Cannot retrieve info from host: localhost: RPC: Program not registeredIf I turn off the firewall and it works again.So...what ports did I enable?#firewall-cmd --list-ports32803/tcp 662/udp 2049/udp 662/tcp 111/udp 32769/udp 892/udp 2049/tcp 892/tcp 111/tcpI checked to see what RPC was listening on (according to the Admin guide link above, it should be 2049 and 111)# rpcinfo -pprogram vers proto port service100000 4 tcp 111 portmapper100000 3 tcp 111 portmapper100000 2 tcp 111 portmapper100000 4 udp 111 portmapper100000 3 udp 111 portmapper100000 2 udp 111 portmapper100024 1 udp 47793 status100024 1 tcp 52921 status100005 1 udp 20048 mountd100005 1 tcp 20048 mountd100005 2 udp 20048 mountd100005 2 tcp 20048 mountd100005 3 udp 20048 mountd100005 3 tcp 20048 mountd100003 3 tcp 2049 nfs100003 4 tcp 2049 nfs100227 3 tcp 2049 nfs_acl100003 3 udp 2049 nfs100003 4 udp 2049 nfs100227 3 udp 2049 nfs_acl100021 1 udp 32769 nlockmgr100021 3 udp 32769 nlockmgr100021 4 udp 32769 nlockmgr100021 1 tcp 32803 nlockmgr100021 3 tcp 32803 nlockmgr100021 4 tcp 32803 nlockmgrAnd finally my /etc/sysconfig/nfs file:# Note: For new values to take effect the nfs-config service# has to be restarted with the following command:# systemctl restart nfs-config## Optional arguments passed to in-kernel lockd#LOCKDARG=# TCP port rpc.lockd should listen on.LOCKD_TCPPORT=32803# UDP port rpc.lockd should listen on.LOCKD_UDPPORT=32769MOUNTD_PORT=892STATD_PORT=662## Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)RPCNFSDARGS=--port 2049# Number of nfs server processes to be started.# The default is 8. #RPCNFSDCOUNT=16## Set V4 grace period in seconds#NFSD_V4_GRACE=90## Set V4 lease period in seconds#NFSD_V4_LEASE=90## Optional arguments passed to rpc.mountd. See rpc.mountd(8) RPCMOUNTDOPTS=## Optional arguments passed to rpc.statd. See rpc.statd(8)STATDARG=## Optional arguments passed to sm-notify. See sm-notify(8)SMNOTIFYARGS=## Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)RPCIDMAPDARGS=## Optional arguments passed to rpc.gssd. See rpc.gssd(8)RPCGSSDARGS=## Enable usage of gssproxy. See gssproxy-mech(8).GSS_USE_PROXY=yes## Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)RPCSVCGSSDARGS=## Optional arguments passed to blkmapd. See blkmapd(8)BLKMAPDARGS=I am throughly confused on this one and a point in the right direction would be greatly appreciated. | NFS Port Blocking Firewall Issue | centos;firewall;nfs | I have solved this issue and wanted to post the answer here in case anyone else had the same difficulties as the documentation on Oracle's Website is incomplete.We need to open a port for the mountd service. 
To do this, issue the following commands:firewall-cmd --permanent --zone=<zone> --add-service mountdMake sure to enter your zone name. Mine was public but you also have the option of leaving it out and it will select the default zone.This part was missing from the Oracle documentation. Once I did that, I was able to connect my iMac to my NFS share with no problems. |
_unix.61093 | How can I change a remote host primary IP address without getting disconnected at all (without being in a no IP addr state). The matter is poorly discussed on Internet (according to my research). The best resource I found is a little bit tricky. EXAMPLE : change 10.0.0.11/24 to 10.0.0.15/241. ssh [email protected]. ip addr add 10.0.0.15/24 dev eth0 3. logout4. ssh [email protected]. ip addr del 10.0.0.11/24 dev eth0 Problem: The last command removes both IP addresses and the connection is lost because 10.0.0.11 is primary, and it removes its secondary addresses (to which 10.0.0.15 belongs) when deleted. I know I could cheat by adding 10.0.0.11/25 (instead of 24). However, I think it is theoretically possible to do this properly. What do you think? | Change remote host IP address without losing control (Linux) | linux;ip;routing | You need to set the promote_secondaries option on the interface, or on all interfaces:echo 1 > /proc/sys/net/ipv4/conf/eth0/promote_secondariesorsysctl net.ipv4.conf.eth0.promote_secondaries=1Change eth0 to all to have it work on all interfaces.This option has been in since 2.6.12. I tested this with a dummy interface and it worked there. |
_codereview.1772 | I'm trying to figure out how to do this more dynamically. Right now I save each and every form field individually/manually. I would love to maybe have some kind of master form list that I could loop through. I'm fairly new to C#, so I don't know many tricks yet. Please let me know what you think.Here is my simplified version of my code:Note: None of the variable names are numbered in my real code. I changed them for simplicity in my example. So I can't loop through form names by iterating.//BINDED FORM COLLECTION public class FormLink{ private string _fObj1; private string _fObj2; private string _fObj3; private string _fObj4; private bool _fObj5; private bool _fObj6; private bool _fObj7; public string fObj1 { get { return this._fObj1; } set { this._fObj1 = value; } } public string fObj2 { /*...*/ } public string fObj3 { /*...*/ } public string fObj4 { /*...*/ } public bool fObj5 { /*...*/ } public bool fObj6 { /*...*/ } public bool fObj7 { /*...*/ }}//SETTINGS HANDLEpublic class Settings{ private string SettingsFile = settings.xml; FormLink form; public Settings(FormLink form) { this.form = form; } public void iStart() { if (!File.Exists(this.SettingsFile)) { this.createDefaultsFile(); } this.iLoad(); } public void iEnd() { this.alterNodeValue(this.SettingsFile, Settings, fObj1, this.form.fObj1.ToString()); this.alterNodeValue(this.SettingsFile, Settings, fObj2, this.form.fObj2.ToString()); this.alterNodeValue(this.SettingsFile, Settings, fObj3, this.form.fObj3.ToString()); this.alterNodeValue(this.SettingsFile, Settings, fObj4, this.form.fObj4.ToString()); this.alterNodeValue(this.SettingsFile, Settings, fObj5, this.form.fObj5.ToString()); this.alterNodeValue(this.SettingsFile, Settings, fObj6, this.form.fObj6.ToString()); this.alterNodeValue(this.SettingsFile, Settings, fObj7, this.form.fObj7.ToString()); } private void createDefaultsFile() { XDocument xml = new XDocument( new XElement(Settings, new XElement(fObj1, string), new XElement(fObj2, string), new XElement(fObj3, string), new XElement(fObj4, string), new XElement(fObj5, false), new XElement(fObj6, false), new XElement(fObj7, false), )); xml.Save(this.SettingsFile, SaveOptions.None); } private void iLoad() { var settings = this.getNodes(XDocument.Load(this.SettingsFile, Settings); this.form.fObj1 = Help.getDictVal(settings, fObj1); this.form.fObj1 = Help.getDictVal(settings, fObj2); this.form.fObj1 = Help.getDictVal(settings, fObj3); this.form.fObj1 = Help.getDictVal(settings, fObj4); this.form.fObj1 = Help.getDictVal(settings, Help.stringToBool(fObj5)); this.form.fObj1 = Help.getDictVal(settings, Help.stringToBool(fObj6)); this.form.fObj1 = Help.getDictVal(settings, Help.stringToBool(fObj7)); } private void alterNodeValue(string xmlFile, string parent, string node, string newVal) { /* Alters a single XML Node and saves XML File */ } private Dictionary<string, string> getNodes(XDocument xml, string parent) { /* Retrieves all Child Nodes of specified Parent and returns them in a Dictionary */ }}//Basic Utilies classpublic static class Help{ public static bool stringToBool(string BoolMe) { /* Safely converts String to Bool */ } public static string getDictVal(Dictionary<string, string> dict, string key) { /* Safely gets value from Dictionay based on Key*/ }} | XML settings implementation | c#;xml | you can just use an XML serializer? var s = new XmlSerializer(typeof(FormLink));TextWriter w = new StreamWriter(settings.xml);s.Serialize(w, form);w.Close(); |
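Reading the settings back in is just as short. A minimal sketch, assuming the same FormLink class and settings.xml file as above (XmlSerializer needs a public parameterless constructor and public read/write properties, which FormLink already has):

using System.IO;
using System.Xml.Serialization;

public static class SettingsIo
{
    // Counterpart of the Serialize() call above: loads the saved settings.
    public static FormLink Load(string path)
    {
        var serializer = new XmlSerializer(typeof(FormLink));
        using (var reader = new StreamReader(path))
        {
            return (FormLink)serializer.Deserialize(reader);
        }
    }
}

Usage is then var form = SettingsIo.Load("settings.xml");, after which the properties carry whatever was last serialized.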
_unix.18227 | I want to list all hidden files and directories and then save the result to a file. Is there any command for this? | How to recursively list all hidden files and directories? | files;directory | If using GNU find, you can do
find /path -path '*/.*' -ls | tee output-file
Edit
To avoid showing non-hidden items contained in hidden directories:
find /path -name '.*' >output-file
(as noted, tee could be avoided if you do not need to see the output, and the -ls option should be used only if required).
_softwareengineering.208540 | I'm designing the database schema for a new product feature. In my current design I have some related optional data. Rather than have nullable fields I have a separate table with a 0..1:1 relationship to the main table. I chose this design because the queries are simpler* if null data doesn't have to be taken into consideration.The team lead pointed out to me that it will complicate data binding in the UI and suggested that I just use nullable fields. I am wondering what complications the optional table approach will introduce?The obvious thing that comes to mind is the data aware controls can't bind to a closed data set so I'll need to either create a record on the fly when the user attempts to fill in the optional data or create the record at the same time as the main record, negating the purpose of have a separate table. Is there anything else I should be aware of?Clarification*Simpler, by which I refer to the dizzying amount of rules and ambiguities concerning null's behavior under different query scenarios in the SQL standard and the fact that no two vendors agree on which of these rules to implement. | What must I take into consideration when designing a UI around a 0..1:1 relationship? | database design;gui design | null |
_cs.54780 | Came across the following tile problem :Given a 2 x n board and tiles of size 2 x 1, count the number of ways to tile the given board using the 2 x 1 tiles. A tile can either be placed horizontally i.e., as a 1 x 2 tile or vertically i.e., as 2 x 1 tile.Recurrence formula is as mentioned below :count(n) = n if n = 1 or n = 2count(n) = count(n-1) + count(n-2)Explanation is as follows :count(n-1) : If the first tile is placed vertically, then problem reduces to a grid of size 2 x (n-1)count(n-2) : If the first tile is placed horizontally, then problem reduces to a grid of size 2 x (n-2), because we will need to place two tiles horizontally one above the other But my doubt is, the above recurrence is fine if we always place the tiles at the left most side of grid. What if I place the first vertical tile at center of grid instead of the left most. Should not be covering all the permutation and combinations possible on basis of where we place the first tile? | Tile Problem : Dynamic Programming | algorithms;dynamic programming | Your understanding of reducing the given problem to a subproblem is flawed. Say you have to place the tiles, if you start from the centre and keep on placing the tiles, finally the arrangement you have can also be obtained by placing tiles from the left hand side. You don't want to double count. So the recurrence relation you mentioned is correct. |
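If it helps to see the recurrence run, here is a small iterative version (C# purely for illustration; any language works the same way):

using System;

public static class DominoTiling
{
    // Ways to tile a 2 x n board with 2 x 1 tiles:
    // count(1) = 1, count(2) = 2, count(n) = count(n-1) + count(n-2).
    public static long CountTilings(int n)
    {
        if (n <= 0) throw new ArgumentOutOfRangeException(nameof(n));
        if (n <= 2) return n;

        long twoBack = 1; // count(i - 2), starts as count(1)
        long oneBack = 2; // count(i - 1), starts as count(2)

        for (int i = 3; i <= n; i++)
        {
            long current = oneBack + twoBack;
            twoBack = oneBack;
            oneBack = current;
        }
        return oneBack;
    }
}

For example, CountTilings(4) returns 5; fixing how the leftmost column is covered is exactly what guarantees each arrangement is counted once.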
_softwareengineering.119253 | Possible Duplicate: I'm graduating with a Computer Science degree but I don't feel like I know how to program
I finished university around 4 years ago with a double degree in Software Eng/Comp Sci. I got my first job at a startup in my final year, was with them for 2.5 years, then started my own business. So far everything is going great, lots of clients and steady work etc., but coming right out of uni and into a startup I never had any form of senior software engineer guiding my work or suggesting improvements. What's the best way for me to improve and learn more? Books? MS exams? Other? I develop in C#, ASP.NET/MVC.
Update
The problem isn't really with releasing products; I've released quite a few which are up and running with happy customers. It's more about quality of code and best practices: how do I know something I am coding is correct? It may work, but there may be ways of coding it much more efficiently or by adhering to some kind of standard. Cheers for any responses! Matt | Where to go from here, how to improve / learn more | c#;asp.net;education | null
_webmaster.76522 | I am using HTML-encoded symbols like — or & inside the meta description field. Will they be displayed properly by Google and other engines or should I correct the text to have only ASCII symbols? | Are HTML encoded symbols allowed in meta description? | seo;meta tags;meta description | Yes, they are allowed in the meta description tag. They are valid HTML entities and will be handled and displayed correctly by search engines. |
_unix.189300 | I am currently working on backing up an inactive user's data from a server. The data is roughly 7 TB. For the backup task, I have two 4 TB drives. So what is a reasonable way to move the data onto the two drives? I thought of creating a single volume spanning the two, but for archival purposes keeping them independent seems better. | Backing up data to two drives | backup | null
_unix.223473 | I reinstalled my UBUNTU 12.04 after facing some crashing issues with my software. I have separate partitions for / and /home.
Output of df -h:
root@sougata-SATELLITE-L750:/home# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 3.8G 23G 15% /
udev 2.0G 4.0K 2.0G 1% /dev
tmpfs 402M 860K 401M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 22M 2.0G 2% /run/shm
/dev/sda6 558G 182G 348G 35% /home
It shows 182 GB used in the /home folder but I can't find those files in /home anywhere.
Output of ls -l /home:
root@sougata-SATELLITE-L750:/home# ls -l /home
total 96
drwx------ 2 sougata sougata 16384 Oct 10 2012 lost+found
drwxr-xr-x 30 sougata sougata 4096 Aug 16 11:27 sougata
drwxr-xr-x 54 sougata sougata 73728 Aug 9 08:31 sougatapc
Are those files saved inside the lost+found folder? If so then how do I recover them & view them now? | Where are my saved files in home folder? | permissions;mount;partition;lost found | null
_unix.72875 | Imagine this example: you have direct access to the internet on eth0, and you have a VPN on tun0.You want one application to access the internet directly through eth0 (using X public IP address) and another application to access the internet through tun0 (using Y public IP). | How to specify which interface an application uses? | linux;networking | null |
_webapps.27782 | Is it possible to save the indent tags for bulleted lists? I don't want to have to constantly reset the borders for every new list I make. It's possible to save every other setting, so why not this? | How can I save indentation depth in Google Docs | google drive | null
_unix.321964 | I installed it with yum install wine.x86_64, then went to the PuTTY web page to test it and downloaded putty.exe. When I tried to run wine putty.exe I got the error mentioned in the title; here is a picture of the error (the error text I put in the title appears in Spanish in the picture). I have also tried to run vlc, with the same error. I have also tried to install it from the web with a wget command, but then I get another error. I don't know what I am doing wrong; I have tried several ways and they all come down to these two errors. | EXE format wrong error using wine in CentOS 7 | centos;wine;putty | null
_webapps.89735 | A Twitter user's page usually shows Tweets, Following, Followers, Likes, and Lists at the top right below the banner image, as seen on @shanselman's Twitter page:But it's apparently possible to hide everything but Tweets and Followers, as seen on @IAmDeveloper's page: What Twitter setting accomplishes this? | How do I hide Twitter Following/Likes? | twitter | It's not a setting. It's what happens when you don't use them. If you didn't create any lists, Lists will not show. If you don't favourite/like/heart any tweets, that Likes tab will not show on your profile either. |
_scicomp.10746 | Are consumer grade GPUs (like the NVidia Geforce series) used for solving sparse linear equations systems in a professional setting? For example, by engineers performing finite element analysis?After all, at least for iterative methods, memory bandwidth is the bottleneck, and a Geforce 780 Ti provides a lot more of this than a Tesla C2075, while also costing a lot less. | Consumer GPUs for sparse matrices (e.g. FEA) | finite element;sparse;gpu | In a professional setting usually you run commercial software only on certified hardware, i.e. hardware on which the vendor has tested his software and is willing to provide user support. Certified and supported GPU's lists are usually very short...: there is really no technical reason, just commercial ones.In my experience, high end FEA solution vendors charge such horrible high sums for licenses that clients do not care for the cost of the GPU cards and just buy the supported ones.This said, if you are going to use consumer grade GPUs, you should test if, under sustained load, accuracy of the solution and performance is preserved. |
_webapps.106311 | Is there a way to see other people's industry field on their profile?It seems to me that it used to be publicly visible in the old interface, but under the latest major UI redesign I am unable to access this information. | How can I see other people's industry on LinkedIn? | linkedin | null |
_codereview.110905 | Related to this question (and even this Stack Overflow's one) I have been trying to avoid blocking the code. So what I understood it's that is preferable to make all the methods that rely on an async one async themselve (thus not defeating the purpose of asynchronous). Therefore I have changed my code to do the following (I include here more layers than in the questions I linked as I had to change accordingly those).My API, exposing a method:[Route(api/postconfiguration)]public async Task<IHttpActionResult> PostConfiguration(configurationData configurationData){ try { await _configurationRequestHandler.HandlePostRequest(configurationData); return Ok(); } catch (Exception e) { // Log error return InternalServerError(e); }}That uses this method:public async Task HandlePostRequest(Data Data){ var configuration = await _ConfigurationService.GetConfigurationAsync(Data.AgentId); // Use that configuration}And this would be the method calling the external apipublic class ConfigurationApiClient : IConfigurationService{ private readonly HttpClient _client; private const string resourcePath = configuration/{0}; public ConfigurationApiClient(IConfigurationManager configurationManager) { _client = new HttpClient(); _client.BaseAddress = new Uri(configurationManager.GetAppSetting(ConfigurationApiUrl)); _client.DefaultRequestHeaders.Accept.Clear(); _client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(application/json)); } public async Task<Configuration> GetConfigurationAsync(int agentId) { try { var response = await _client.GetAsync(String.Format(resourcePath, agentId)); response.EnsureSuccessStatusCode(); return await response.Content.ReadAsAsync<Configuration>(); } catch (HttpRequestException ex) { throw new Exception(string.Format(Exception when getting configuration for agent: {0}, agentId), ex); } }}This is my understanding of what I should do for making it asynchronous, but would appreciate if someone points me any error.I am using the API on one client myself, which does this:public async Task DeployConfiguration(Package package, EFEnvironment environment){ using (var client = new HttpClient()) { // Prepare the configurationData var agentTasks = agents.Select(a => { var configurationPostRequest = new ConfigurationData { // Create it }; return client.PostAsJsonAsync(api/postconfiguration, configurationPostRequest); }); await Task.WhenAll(agentTasks); }}Which is just called inside a sync method:_deployApiService.DeployConfiguration(package, environment); | Async - await all the way | c#;async await;asp.net web api | Your code falls into a classic trap, which IMO is a design error by Microsoft. 99% of the time, await Foo(); should be await Foo().ConfigureAwait(false);. The semi-official guidance isTo summarize this third guideline, you should use ConfigureAwait when possible. Context-free code has better performance for GUI applications and is a useful technique for avoiding deadlocks when working with a partially async codebase. The exceptions to this guideline are methods that require the context.I think that it would have been far better for ConfigureAwait(false) to be the default and to require ConfigureAwait(true) in those cases which require context, but it's too late now for Microsoft to change this, so we should all get into the habit of using ConfigureAwait(false) by default and in those rare cases documenting the reason for leaving it off (or even explicitly using ConfigureAwait(true) and supplementing it with a comment which gives the reason). |
_unix.215927 | I'm running an Ubuntu 14.04 server. For an experiment I'm running I have the following requirements:-Receive data from multiple (3-4) wireless networks.-Machines on a network will only need to speak to the server, never a machine on a different network.-All networks will be entirely wireless with no internet access.I haven't done much networking, so any leads are welcome. I don't think I need to bridge the networks. Can I just attach several wifi adapters and hardcode them each to a different network? Can I create virtual adapters and have those simultaneously connect to different networks? Thanks in advance. | Simultaneously operate on multiple wireless networks | networking;wifi;bridge | null |
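The question above has no accepted answer. Bridging is indeed unnecessary for the stated goal; one adapter per wireless network with a static address on a separate subnet is enough. The /etc/network/interfaces fragment below is only a sketch: the SSIDs, passphrases and subnets are invented, it assumes WPA2 networks with the wpasupplicant package installed, and the details change if the server is meant to host the networks itself (hostapd) rather than join existing ones.
    # one stanza per wireless adapter, one subnet per network
    auto wlan0
    iface wlan0 inet static
        address 192.168.10.1
        netmask 255.255.255.0
        wpa-ssid experiment-net-a
        wpa-psk  passphrase-for-net-a
    auto wlan1
    iface wlan1 inet static
        address 192.168.20.1
        netmask 255.255.255.0
        wpa-ssid experiment-net-b
        wpa-psk  passphrase-for-net-b
Repeat for wlan2/wlan3; machines on each network then reach the server at that network's address.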
_unix.232296 | I have started 50 processes. In one file, say start_time, I have stored when each process started, and in another file, say end_time, I have stored when each process finished. The files look like the following. Start time file: Start time for Process A : 15/09/26 21:02:13; Start time for Process B : 15/09/26 20:06:14; Start time for Process C : 15/09/26 13:20:52; Start time for Process D : 15/09/26 11:23:46. End time file: End time for Process B : 15/09/26 21:13:38; End time for Process D : 15/09/26 12:31:29; End time for Process A : 15/09/26 22:06:11; End time for Process C : 15/09/26 12:17:10. Now I want to calculate the execution time for each process, like: Process A : 10 mins, Process B : 5 mins, etc. | subtract dates from two files for same process | regular expression | null
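No accepted answer above; a rough sketch of one way to do the subtraction with GNU date, assuming the two files are named start_time and end_time, are laid out exactly as shown, and use the YY/MM/DD HH:MM:SS format (which has to be rewritten into a form date -d accepts).
    #!/bin/bash
    # convert "15/09/26 21:02:13" (YY/MM/DD HH:MM:SS) into seconds since the epoch
    to_epoch() {
        date -d "20${1:0:2}-${1:3:2}-${1:6:2} ${1:9}" +%s
    }
    # walk the process names found in start_time (5th field of each line)
    for proc in $(awk '{print $5}' start_time); do
        s=$(grep "Process $proc :" start_time | sed 's/.*: //')
        e=$(grep "Process $proc :" end_time   | sed 's/.*: //')
        [ -n "$e" ] || continue                    # skip processes with no end record yet
        secs=$(( $(to_epoch "$e") - $(to_epoch "$s") ))
        printf 'Process %s : %d mins\n' "$proc" $(( secs / 60 ))
    done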
_unix.288156 | I am trying to work with awk. My script is: datetime=`date +\%Y\%m\%d` logdatetime=`date +\%Y\%m\%d` foldermonth=`date +\%B_\%Y` folderday=`date +\%d` inputdir=~/sboper/Standalone/fsplit/GLMS outputdir=~/sboper/StandAlone/input/$foldermonth outputdirday=~/sboper/StandAlone/input/$foldermonth/GLMS_Daily/$folderday awk -v outdir=$outputdirday 'BEGIN{ FS = "~" } ;{if( $12 == "A" ) filename="outdir/Customer_Create_Records.dat" ; print >> filename;close(filename)}' $inputdir/$sourcefile awk -v outdir=$outputdirday 'BEGIN{ FS = "~" } ;{if( $12 == "M" ) filename="outdir/Customer_Modify_Records.dat" ; print >> filename;close(filename)}' $inputdir/$sourcefile However, the variable outdir=$outputdirday is not expanding to the value of the path which is my output directory. I have declared the awk variable and did not use a shell variable inside the awk script, but it is still giving me the error: cannot open Customer_Create_Records.dat. Where am I going wrong with this script? When I put the complete path instead of 'outdir', it creates the new files and moves the record, but it doesn't work right when used in a variable. How can I create new files in the directory I want through awk? | Creating a new file inside awk but in different directory based on input field value | awk;variable | Wow. Where do I begin? You need to learn how to debug. How to debug a bash script? is a highly upvoted Stack Exchange reference on the topic, with contributions from several of our most prolific members. You can find other information on this site, and lots more on the web. For instance: Simplify. Get rid of folderday, foldermonth and the `date` assignments; just construct outputdirday with constant values. Avoid confusingly similar variable names. For example, outdir is equal to outputdirday, but outputdir is something different. Eliminate datetime, logdatetime, and outputdir, which are not even used in the script. Eliminate redundancy. You show two awk commands that are identical except for a trivial difference. Does one of them work and one of them fail? Do they fail in different ways? I don't think so. Then don't show them both. Don't use long, eight-level pathnames (e.g., ~/sboper/StandAlone/input/$foldermonth/GLMS_Daily/$folderday/Customer_Create_Records.dat) unless you really, really need to. For presentation purposes (i.e., when presenting your problem to others), avoid confusing naming conventions, like naming your output directory ".../input/...". For presentation purposes, try to avoid lines that are wider than the screen (so people have to scroll to read them in their entirety); most systems have ways to let you split commands and programs into multiple lines. Do you really have two directories, one called Standalone and one called StandAlone, in your ~/sboper directory? If the end result isn't what you want, find out what's happening in the middle of the process. Display intermediate values. For example, outputdirday: when I run your script and do echo "$outputdirday" before the awk, I get ~/sboper/StandAlone/input/June_2016/GLMS_Daily/08, i.e., because the ~ was in quotes, it didn't get expanded to my home directory. I had to change the assignment to outputdirday="$HOME/sboper/StandAlone/input/$foldermonth/GLMS_Daily/$folderday" to get it to work. outdir: you say it isn't getting set correctly, but have you tried to print it? When I do, I get the same value as I do for outputdirday. filename: this is where (part of) the problem is; see below. When you're asking for help, you should: simplify the problem, as discussed above; describe what your command is supposed to be doing; show what your input file looks like; show what you expect your output to look like; and identify the versions of the software you are using. While I believe that I understand what's going wrong with your script, I can't reproduce your errors exactly (so, I guess I'm running a different version of awk). It probably isn't really important, but you don't even say what operating system you're on. Here's the important part: if, as I suggested above, you had broken the main action block in your awk script down into shorter lines, you might have realized that it was equivalent to { if ( $12 == "A" ) filename="outdir/Customer_Create_Records.dat"; print >> filename; close(filename) } As Michael Vehrs pointed out in his answer, this sets filename conditionally, and then writes every input line to filename unconditionally. As he suggests, you need to do something like { if ( $12 == "A" ) { filename="outdir/Customer_Create_Records.dat"; print >> filename; close(filename) } } If, as I suggested above, you had printed the value of filename after setting it, you would have seen that it was outdir/Customer_Create_Records.dat, because "outdir" (in quotes) means the string "outdir", and not the value of the variable outdir. You need to do filename = outdir "/Customer_Create_Records.dat" Almost as important: you should always quote your shell variable references (e.g., "$outputdirday", "$inputdir", and "$sourcefile") unless you have a good reason not to, and you're sure you know what you're doing. That doesn't appear to be the cause of this problem, but eventually it will get you into trouble. Just for clarity, you might want to change `...` to $(...); see this, this, and this. While you should always quote your shell variable references unless you have a good reason not to, the same does not apply to backslashes. You should use them only when you have a reason to. For example, date +%Y%m%d and date +%B_%Y, etc., work just fine. As I suggested earlier, for what you appear to be doing, you don't need two awk invocations. You can do the whole thing with awk -v outdir="$outputdirday" 'BEGIN { FS = "~"; filenameA = outdir "/Customer_Create_Records.dat"; filenameM = outdir "/Customer_Modify_Records.dat" } { if ($12 == "A") print >> filenameA; if ($12 == "M") print >> filenameM }' "$inputdir/$sourcefile" or awk -v outdir="$outputdirday" 'BEGIN { FS = "~"; filenameA = outdir "/Customer_Create_Records.dat"; filenameM = outdir "/Customer_Modify_Records.dat" } $12 == "A" { print >> filenameA } $12 == "M" { print >> filenameM }' "$inputdir/$sourcefile"
_softwareengineering.208395 | I've been studying object oriented analysis and, so to get started with this in practice I've decided to first of all build my own management system to have my customers data and so on. Trying to gather requirements for the first time and write use cases for the first time led me to some doubts. I want to ask here about some doubts regarding the use cases.One of the requirements I've thought of was manage customers. From this requirement I've obtained some use cases:Register new legal person as customer;Register new natural person as customer;Update and correct customer data;Remove a customer from the system;Read customer's summary;Now, I'm greatly in doubt about all of this. First, is this right: take one single requirement and from it derive many different use cases? Or should be just one use case Manage Customers containing everything?Second, I'll give an example about what I did. I choose to start by the first one. So I wrote the following use case:Title: Register new legal person as customer;Actor: User;Scenario: The user chooses to register a new company as customer. The user informs the company data and the system validates the data informed. The user is then prompted to inform the data of the responsible for the company and the system validates the data informed. The user is then taken to the list of all legal person customers.Now, I'm pretty sure I'm not doing this right. First, this seems to simple, it's just like if we said well, the user goes and register the data. Second, all of the others use cases would be exactly like that, so I can't see how this is going to help me out. Third, from this use case I could obtain just three possible objects legal person customer and responsible for the company, so I really think this is not enough.Also, usually there are many requirements like that manage customers, manage employees or manage suppliers and it seems at first tha they will always be like that. Is this correct? Those kind of requirements always end up with this kind of uses cases that are simple like this?Am I doing this right? Is there something wrong with the uses cases I've figured out and the way I wrote that particular use case? How do we work with this kind of use cases that are so simple that the scenario is almost like just stating again the objective of the use case?I know there are many questions there, but any kind of help is appreciated. I'm starting now to work with this kind of thing and I'm unsure if I'm doing things right. Thanks very much in advance! | Is this a good use case? | object oriented;object oriented design;use case | These use cases look fine to me.What you may not be realizing is that use cases can differ based on roles... that is, some people may have the authority to review customer accounts, but others may not. The roles and responsibilities of a product manager are going to be dramatically different than those of a clerk.Consequently, writing your use cases this way will surface requirements based not only on what kind of information certain users have access to, but also what services are required and expected from the system by each user.Your use cases will also help you define the roles... If you find that a lot of the use cases are similar, you can combine their common elements into a role. Like many other software development activities, this process is iterative: you will refine your use cases as you and your customer begin to understand them better. |
_unix.169363 | I want to grep a file, and I want to get all the lines that have a certain environment variable (to be exact, $PWD).Of course, using justcat file | grep '/'$PWD'/'is not working, since $PWD contains slashes.I am trying to figure out how to do it correctly, but I come up only with weird and over-complicated solutions. What's some simple way to do this? | How to grep for something that's in an environment variable and has a forward slash? | bash;grep | Just use double quote instead of single quote, and you don't have to use cat (See UUOC):grep -F -- $PWD fileAnd remember that without -F, $PWD would be treated as a regular expression as opposed to a string to be looked for in the file. |
_cs.47718 | Draw recursion trees and use them to find big theta bounds on the solutions to the following recurrences. For each, assume that T(1) = 1 and that n is a power of the appropriate integer. ex) T(n) = 8T(n-2) + nThere are multiple equations that I have to do with the above directions. I am lost on finding big theta bounds. I would prefer examples since I learn easiest that way. Thank you for any help! | recursion trees and big theta bounds | recursion;discrete mathematics | null |
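Since the question above is unanswered, here is a hedged worked sketch for the one recurrence it actually states, T(n) = 8T(n-2) + n with T(1) = 1, assuming n is odd so the recursion bottoms out at T(1). The recursion tree has levels i = 0, 1, ..., (n-1)/2; level i contains 8^i nodes, each doing n - 2i work, and the leaf level contributes 8^{(n-1)/2} T(1). In LaTeX:
    T(n) \;=\; \sum_{i=0}^{(n-3)/2} 8^{i}\,(n-2i) \;+\; 8^{(n-1)/2}
Each term of the sum is more than twice the previous one, so the sum is dominated by its last term; together with the leaf level this gives T(n) = \Theta\!\left(8^{n/2}\right) = \Theta\!\left(2^{3n/2}\right). The other recurrences in the assignment are not shown in the question, so no claim is made about them.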
_cstheory.18572 | Since both proofs make use of the diagonal argument, I'm wondering whether there is an obscure link between the existence of uncountable infinite sets and the undecidability of the halting problem. Would the halting problem be decidable if all sets were countable? | Is there a hidden link between the existence of uncountable sets and the undecidability of the halting problem? | halting problem | It's not a hidden link but one that has been made explicit using the language of category theory, and also a very natural question to ask and study. There is a fair bit of material on the subject: a CS Theory question asking the same thing; Andrej Bauer's blog post about fixed point theorems and Cantor's theorem; "A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points", Noson Yanofsky, 2003, which gives you a gentle introduction to Lawvere's paper below; "Diagonal arguments and cartesian closed categories", F. William Lawvere, a 2006 republication of a 1969 article making these connections precise; "Incompleteness in a General Setting", John Bell, 2007; and "From Lawvere to Brandenburger-Keisler: interactive forms of diagonalization and self-reference", Samson Abramsky and Jonathan Zvesper, 2010, an extension of the Lawvere arguments to game-theoretic impossibility results.
_unix.167684 | VAR=${VAR:-value}The purpose is to set the environment variable VAR to value unless it's already set.I see it works in bash and dash, but how portable is it? | How portable is this shell assignment? | shell | This assignment is portable, it works in any POSIX shell. POSIX define this expansion as:${parameter:-word}Use Default Values. If parameter is unset or null, the expansion of word shall be substituted; otherwise, the value of parameter shall be substituted.But this also sets VAR to value if VAR was set and null. To set VAR to value unless it's already set, you must use:VAR=${VAR-value}Omitting the colon make substitution occur only if the variable is unset:In the parameter expansions shown previously, use of the colon in the format shall result in a test for a parameter that is unset or null; omission of the colon shall result in a test for a parameter that is only unset. |
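A quick illustration of the distinction drawn in the answer above; nothing here is specific to bash or dash, and any POSIX sh should print the same thing.
    VAR=""                     # set, but null
    echo "${VAR:-fallback}"    # prints "fallback": the colon form also substitutes when the value is null
    echo "${VAR-fallback}"     # prints an empty line: VAR is set, so no substitution happens
    unset VAR
    echo "${VAR-fallback}"     # prints "fallback": now VAR really is unset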
_webmaster.34980 | Possible Duplicate:When does a website require a privacy policy and/or terms of use? I finished building a website for an online chess club which I am a member of. This is my first website. The site has blogging feature so the members can log in and write blog posts and comment on other posts. The membership is limited to users of an online chess site (freechess.org) and any member of that site can join this site as well. I was wondering, is it needed to put up a terms and conditions for my new website? If so, can I have a model of that?I searched and found some models but they are all for big sites that have e-commerce etc. | Terms and conditions for a simple website | website features;terms of use | Terms and Conditions really can be very customizable. It can depends on your regions, country law, and your own preferences.Try to Google some similar website, read their T&C, take those suitable for yours, change the domain name to yours.Sample Link:http://www.businesslink.gov.uk/bdotg/action/detail?itemId=1076142035&type=RESOURCES |
_cogsci.13380 | NEURON is a software package for simulating neurons and networks in great detail. Although it's quite easy to find papers that use NEURON with a simple Google Scholar search, is there some way to find which models are publicly available? For example, some repository of NEURON models and their associated publications?If there is no standard, an answer with a few examples of recent (last 10 years) publications with a freely available NEURON model would be appreciated. | Publicly Available NEURON models | neurobiology;theoretical neuroscience;computational modeling | A repository of publicly available NEURON models can be found on ModelDB by filtering for Models that contain the Modeling Application: NEURON. |
_vi.10462 | Is there a way to get mixed indenting that looks more like Sublime Text (left in the screenshot) than the way Vim does it? I would like the nested UL (line 10 in the screenshot) to follow along with the main indentation. | Mixed PHP and HTML indenting | indentation;filetype html;filetype php | null
_cs.42263 | Curry-Howard correspondence states the equivalence between logic/deduction and types/programs.The Church-Turing thesis states the equivalence of some models of computation. Specifically, all computable functions can be written both on a Turing Machine and in Lambda Calculus.Not all logics are equivalent however. 2nd order logic can quantify over sets whilst 1st order can't.Can someone explain the dissonance? Can some computable functions be expressed at a higher-level but not at a simpler-level? [CLARIFICATION]If I define a model for computation off say 2nd order logic whilst the Turing Machine is a model for 1st order logic, then does that mean my hypothetical language can compute uncomputable functions? It appears so to me because 2nd order logic is more powerful than 1st order logic. This is clearly nonsense because Haskell is based off a flavor of 2 order logic (system F with extensions) but isn't more powerful than say C: any program achievable in Haskell is achievable in C. The mapping (or equivalence) between logic and computation does not appear to preserve the notion of strength: 2nd order logic is stronger than 1st order logic but programming languages off 2nd and 1st order logic have the same strength. Does that make sense?The only conclusion I can come to is that the relationship between logic and types is not a straightforward mapping. The Turing Machine is not based off any specific logic and that any logic/deductive system can be mapped to the Turing Machine. Can you confirm?[CLOSING WORDS]I think that my mistake is in not realizing that there is a large divide between the colloquial program and programs in the context of types.Haskell which is based on a stronger, more expressive type system does indeed provide stronger, more expressive programs. These programs are stronger in the sense that more can be guaranteed about them (in the same sense as type safe). This does not mean that Haskell can compute more; it is still merely Turing complete. | Curry Howard correspondence and Church-Turing thesis | computability;logic;functional programming;church turing thesis;curry howard | I'm not sure where you see the dissonance. The Church-Turing thesis is a hypothesis stating that Turing Machines (equiv. Lambda Calculus or Recursive Functions) can do anything that we'd think of as computable.The Curry-Howard correspondence is a much stronger statement that certain types of intuitionistic logic are structurally identical to things kind of like the Lambda Calculus. It doesn't really say which logic it corresponds to, it just shows that you can map the logical constructions in a canonical way to a (typed) computational model and vice versa. If you take different logics, you get different models and vice versa. The models may or may not be Turing-complete, although if we believe the Church-Turing thesis (it does seem to be doing alright so far), if you get a model that's more powerful than a Turing Machine, then you've got something incomputable in there.Nonetheless, the logic you deal with does limit what you can compute in the corresponding model. For example, the Calculus of Inductive Constructions, which is a type theory and the basis for Coq, only allows types that it can verify have decreasing definitions (i.e. a recursive destruction of the type will terminate), but there are perfectly computable types which it can't verify, so you're locked out of using them. |
_softwareengineering.197882 | What exactly does "security through obscurity" mean in the context of storing unencrypted passwords? I'm using a small program (I won't name it, so as not to heap further shame on its author) that uses my Google account for some tasks. I've noticed that it stores my password in a plain-text, unencrypted file: just a string, clearly visible to anyone who can drag & drop the file onto Notepad or press F3 in Total Commander. I have raised a ticket asking the program's author to fix this ASAP. I haven't got any reply yet, but my issue received one comment, consisting only of the above-mentioned link to Wikipedia's "Security through obscurity" page. How should I understand this comment? Is it for or against my issue? At first I thought it supported my request to fix this ASAP. But then I found Eric Raymond's Fetchmail example (in The Cathedral and the Bazaar), where he refused to implement config file encryption (Fetchmail stores passwords in its config file), claiming that it is up to the user to ensure security by not letting anyone from the outside access that configuration file. This statement (or refusal) is often brought up as an example of security through obscurity. Looking at it from that point of view, I'm completely wrong and the program's author is right: he does not have to encrypt the file containing my password; it can remain there, stored unencrypted, and it is I who am responsible for security, by not giving anyone access to this file or by deleting it each time I stop using the software. (Another question is how I can achieve this on a system as insecure as Windows itself.) This seems to be in complete opposition to what I've been told and have learnt for years, so I would like to ask more experienced developers: who is right here, and how exactly should I understand security through obscurity? | Security through obscurity and storing unencrypted passwords | security;encryption;passwords | The problem is that data encryption is unnecessary when the data and the key are kept on the same system. If the application encrypted your password, it would have to include the decryption algorithm and the decryption key. Anyone with access to your data could just extract the algorithm and key from the program itself and use them to decrypt the password file. That's why encrypting your password would just be security through obscurity: when I get 10 minutes alone with your computer, I just need to look at the file with the encryption key in addition to the file which stores your password to obtain your login information. The only case where it makes sense to encrypt local data is when you use a passphrase as the decryption key, one which is not stored and must be entered by the user manually every time the encrypted data is accessed. But when the only data protected by that scheme is another password, you could just have the user enter that password instead.