id | question | title | tags | accepted_answer
---|---|---|---|---|
_softwareengineering.279106 | My team is facing a couple of new challanges in the near future, as we will start developing a couple of (micro)services which will run in a cloud environment. Therefore we want to establish a continuous delivery workflow (and maybe move on to continuous deployment someday).Currently we are developing an application based on Eclipse RCP together with some web applications (Java servlets). After we provide the released artifacts, the customer is responsible for operation. We release 3-4 times a year with additional maintenance releases when needed. We use git with a trunk-based (all-on-master) workflow and additional maintenance branches. The developers commit at least once a day, which triggers a build on our CI server. The master may get unstable once in a while and contains a lot of incomplete (and not hidden!) features.As we will be responsible for operation of our services, our main goal is a always release-ready mainline in order to react fast if something is broken.Thus we thought about extending our git workflow with feature branches. But we soon dropped that idea because of several reasons (e.g. merge conflicts with features/stories that take 1-2 weeks, no satisfying CI feedback, etc.)[1-2]. Hence we decided to keep our all-on-master workflow (without maintenance branches) and work with patterns like feature-toggles, branch-by-abstraction, etc. [3-4].However, we dont have much experience with those patterns and also think that its non trivial to always keep the mainline 100% clean and release-ready. Thus we decided to have an additional development branch which is merged back when needed or at least at the end of a sprint. This ensures that the master branch really stays release-ready, while the development branch dont has to be (but of course should be).After this workflow has established, we want to move a bit further and introduce some kind of short living feature branches. In my opinion the main problems with feature branches arise from their rather long life time. Imagine you have a user story which is composed of several subtasks. A task takes about 1 day of work. When the task is done, the changes get pushed to the mainline which triggers a full build on the build server. This is CI as we know it, right? However, my commit can still lead to an unstable mainline. Of course I can run all unit tests locally for example, but the build server might run additional tests or check other quality metrics which I dont want to run locally. Thus it would be nice to have a dedicated branch and build job for my task. This enables me to verify and check my changes before I push them to the mainline. So I grab a subtask to work on, and automatically get a feature branch and build job. When the task is finished it gets pushed to the mainline and the branch+build job is cleaned up.I know this workflow is similar to 'gitflow' for example, however the main difference is the lifetime of branches as mentioned above.What do you think about this workflow? Which workflows do you use?(1) Continuous Delivery, Jez Humble and David Farley,(2) martinfowler.com, FeatureBranch,(3) martinfowler.com, FeatureToggle,(4) martinfowler.com, BranchByAbstraction, | A good workflow to start with Continuous Delivery? | continuous integration;workflows;continuous delivery;gitflow | null |
_unix.356136 | I have a bash script which starts several services. To start these services, the user runs the script with a start parameter, i.e. ./script.sh start. Over time, another way to start these services was added with another parameter, someother-start, i.e. ./script.sh someother-start. When adding this script to chkconfig, chkconfig only starts it with the start parameter. Is there any way to have chkconfig start the script with the someother-start parameter? Or is the only option to handle someother-start through a config file, i.e. the config file specifies which start mode the script should use? | Start parameter of chkconfig service | systemd;sysvinit;chkconfig | null |
_webmaster.23868 | I have an apache2 installation on www.main_domain.com (invalid underscore in domain name is intentional; this is an example), and the default page gives links for two secondary sites, also served by the same webserver since they point to the same address. The sites have lots of stuff in common, so I wouldn't want to do things using symbolic/hard links.What I would like to do is:if the client requests www.main_domain.com, I'd like to serve the current /var/www/index.html start page. That page contains links to secsite1.html and secsite2.html (the start pages for the secondary sites).if the client requests www.secondary_site_1.com, I'd like to serve the /var/www/secsite1.html start page.likewise for www.secondary_site_2.com: serve /var/www/secsite2.html as a first page.Note that I want to change only the start page; otherwise, any page/image/file should be interchangeably accessible using the same path under each domain name.Please let me know if I need to clarify more. | How do I select a different start/default page per domain? | apache2;configuration | I can't think of any reason why one would want this sort of setup, but to make the best of a bad situation, I would give the sites different docroots to avoid unnecessary duplication of content/URLs and make the sites more maintainable, e.g.:/var/www/site1/var/www/site2/var/www/site3To share assets, you ought to just keep the assets in the main site's docroot and use mod_rewrite to 301 redirect from the other domains. This will allow you to use a single folder to keep shared assets, prevent duplicate URLs, and allow visitors to share cached files between sites.In case you want to give a specific site its own version of a particular file, you just need to upload it to the respective path in its docroot, and visitors to that domain will see that version while still sharing all other assets. |
_softwareengineering.324528 | We have a method called attachDevice(Device device) which has only one argument. We had a situation to overload this method with one more parameter as like attachDevice(Device device, String deviceName). with single argumentpublic void attachDevice(Device device){ .. .. }with double argumentspublic void attachDevice(Device device, String deviceName){ .. device.setName(deviceName); genericDeviceMap.put(deviceName, device); ..}Actually, my Team Lead asked me to make these two methods into a single generic call. The overloaded method only have two additional lines than the single method (which are shown as above). I can pass empty string instead of passing the deviceName, because the invocation of overloaded method will be very lesser than the invocation single argument call. But how bad it is that if passing null value since i don't set the name if the argument will be null. Which would be the best practice for this scenario? Any suggestions are highly appreciated.Note: The actual problem I stated here is, that i) I wanted to merge overloaded methods into one method which should be generic to avoid code duplicationii) Need a recommended value to pass to the method (either empty string or null value) to stick with best coding practices.From the given answers, I got the best solution for my problem. The linked question which has been referred this question duplicate also provides info about handling null value, but not covered the first point I mentioned in the note. And I didn't write bad code by passing null as parameter, since I am aware that NPE will thrown sometimes.Thanks. | Recommended value to pass instead of String parameter for a method in java | java;clean code;method overloading | I would discourage you to ever use null since it can lead to a further NPE, which are hard to debug (and cost a lot if they occur in production code).Solution 1 (overload method)If no deviceName is provided, you can provide a default one instead. The biggest disadvantage from this approach is the danger in genericDeviceMap.put(deviceName, device) because it can silently override the entry whose key is the default name (therefore, losing track of the previous Device).public void attachDevice(Device device){ attachDevice(device, DefaultName); }public void attachDevice(Device device, String deviceName){ .. device.setName(deviceName); genericDeviceMap.put(deviceName, device); ..}Solution 2 (extract method)Maybe that with your current architecture it doesn't make sense to add an entry to genericDeviceMap when attachDevice is called without a name. If so, a good approach is to only extract the common behaviour between the two attachDevice into private methods. I personnally don't like this approach for 2 reasons:The behaviour between the two attachDevice is not the same, one has a side-effect (device.setName(deviceName)) and the other notThe side-effect in itself who often lead to subtle bugs because you alter an object who's coming from an outside scopeCode:public void attachDevice(Device device){ preAttachDevice(); postAttachDevice(); }public void attachDevice(Device device, String deviceName){ preAttachDevice(); device.setName(deviceName); genericDeviceMap.put(deviceName, device); postAttachDevice();}private void preAttachDevice(){ ...}private void postAttachDevice(){ ...}Solution 3 (remove method)My favorite, but the hardest. Ask yourself if you really need these two methods ? Does it make really sense to be able to call attachDevice either with a name or not ? 
Shouldn't you be able to say that attachDevice must be called with a name ?In this case the code is simplified to only one methodpublic void attachDevice(Device device, String deviceName){ .. device.setName(deviceName); genericDeviceMap.put(deviceName, device); ..}Or on the other hand, do you really need to maintain a Map of devices and devices names and set the device's name ? If not, you can get rid of the second method and only keep the first one.public void attachDevice(Device device){ ... ... } |
_scicomp.19902 | I'm looking for an open-source (to use and learn from) software which computes Bessel functions of integer order of real argument to double precision the fastest among all such implementations. Currently I've tried Boost.Math and GSL. From these GSL appeared to be faster for smaller arguments and much slower for larger ones.Are there any implementations specially designed to be very fast for the whole working ranges of arguments and orders, but still not neglecting precision? | What is the fastest opensource implementation of Bessel functions computation? | numerics;performance;special functions;open source | The appropriate and fastest library depends on several things. Which Bessel functions (only J, Y & Hankel or modified Bessel functions I & K too), for which types of arguments (real or complex, integer, fractional or general order)?Amos's libraries are written in Fortran-77 (there are Fortran-90 coverted versions of TOMS 644 on a mirror of Alan Miller's fortran page: http://jblevins.org/mirror/amiller/) and are numerically accurate and fast for complex argument of all Bessel functions, but they are difficult to understand the source code. He produced several versions of his library, but only the one included in SLATEC (http://www.netlib.org/slatec/) is actually open source, the other versions have a copyright associated with the transactions on mathematical software journal.http://octave-bug-tracker.gnu.narkive.com/ym1U5WEL/bessel-function-scaling-limited-rangempmath (http://mpmath.org/) uses a totally different and more general approach, by expressing all Bessel functions as special cases of hypergeometric functions. Mpmath is also arbitrary precision, so this is probably overkill if you just need double precision. This is more general, but slower. The author of mpmath has a C-based library called arb (http://fredrikj.net/arb/) that implements some of the functions in mpmath but much more quickly (since it is C-based rather than Python-based). Since arb and mpmath are arbitrary precision, they might be good for benchmarking, but probably not for speed testing.I haven't personally worked much with the Gnu Scientific Library, but I believe it is fast and accurate for what functions it covers. It may not have all the functions you need. It would probably be a better example of well-written modern code, compared to the Amos libraries. |
_webmaster.47995 | Right from the day I installed WordPress on my site, I started getting spam comments/trackbacks. I have also disabled anonymous comments. I get the following email whenever new spam is posted: New trackback on the post `<post name>` is waiting for your approval `<url>`; Website: jackettl36 (IP: 112.111.173.89, 112.111.173.89); URL: <redact>; Trackback excerpt: <strong>fake OAKLEY sunglasses...</strong>... How can I prevent such automated spam? | How to prevent common spam on a WordPress blog | email;spam prevention;blog;wordpress | I would recommend installing and enabling Akismet: http://wordpress.org/extend/plugins/akismet/ Akismet checks your comments against the Akismet web service to see if they look like spam or not, and lets you review the spam it catches under your blog's Comments admin screen. |
_webmaster.101578 | We sometimes see email listed as a traffic source in Google Analytics, but we don't send out email newsletters. This could be from our URL appearing in other people's newsletters, but I wondered if it is because our URL is in our email footers. Do URLs in email footers show up as an email source in Google Analytics? | Do URLs in email footers show up as an email source in Google Analytics? | google analytics | null |
_unix.359840 | (Wed Apr 19 14:16:47 2017) [sssd[be[xx.xx.COM]]] [sdap_get_generic_op_finished] (0x0400): Search result: Referral(10), 0000202B: `RefErr`: DSID-03100781, data 0, 1 access pointsWhat does this mean? | How to understand this SSSD log? | logs;sssd | null |
_softwareengineering.110678 | I am starting working on a web project using django. While researching whether to use Sqlalchemy or raw sql when django orm is not sufficient which is also a question I asked here Raw Sql vs SqlAlchemy when Django ORM is not enoughOne question bugged me.As the load on the website increases and we find out that sql queries ran by ORM needs tuning. But we have no control on how does an ORM executes queries. In that case we have to use raw sql queries because they give us the best control. But if we start using raw sql queries, all the benefits of using an ORM in first place are gone. We are stuck with both sql and orm code which will surely mess up the reading and maintainability of the code.It feels to me that we are opting for easy code in lieu of many problems later. No doubt our initial code will be fast and easy maintainable but this can cause later problems.I would like to know thoughts of others on it. I am not starting a question comparing orms and sql but I would like to know what are the options when we have created web app using ORM and comes the situation when its clear that ORM is not supporting our cause? | What are the options when Sql generated by ORM needs tuning? | sql;performance;orm;django | This is pretty straightforward. Two solutions :Tune the ORM. This is the application of separation of concerns and the cleaner solution in the long run. Anyway this has drawbacks.The ORM codebase could be hard to tweak.This could be hard to achieve in a given deadline.The tweak has to be ported to other versions of the ORM, or continuous obolescance will arise pretty quickly. Working with people behind the ORM is the way to go, but some company policies can prevent it.The ORM can be impossible to modify that way. It can be a proprietary software or the company organisation could prevent you from doing this.Use manually done queries mapped by the ORM.This is easy and fast to do. get the job done.Has to be unique, or you codebase will be quickly doomed with such hacks everywhere.Has to be packed in an abstraction. Make it look like standard use of the ORM. This limits the dirtyness to a small portion of code.The ORM may not provide this function (and if it the case, you probably choosed a crappy one).Depending on the conext and deadlines, solution 2 is a real world solution, even if it not clean. Technical debt can be made in a project with precaution to limit it to a small portion of code and justified by some extra technical constraints. |
_webapps.79070 | When an option in a drop-down box is selected, I need that selected option to be entered into a calculation in another field. However, although the selected option is a number, the drop-down value is formatted as text, so the calculation in the other field fails because it requires a number format. Is there a way to convert the selected drop-down option into a number so that I can use it in my calculation? | Trying to do a calculation with a text field and a number in Cognito Forms | cognito forms | null |
_webmaster.39046 | Unfortunately I read an article on how to avoid destroying your website's SEO with a redesign AFTER it was too late! Here is the article. On 20 November 12 we completely redesigned our site. We get ALL our customers from our website as we do not have a shop. Since that dreaded day a month ago the phone has pretty much stopped ringing, there are basically no emails, Google rankings are down and Google Analytics traffic has halved. Yesterday I did some research into this, as I had no idea that a redesign of a website could have such a damaging effect - yes, I am a novice and use a WYSIWYG-type web builder. There is lots of info on how to AVOID this from happening, BUT what do I do now that I have already made the mistake? Yesterday I reloaded my OLD site with my new pages in the background, hoping this would be a start. I really have no idea how to get out of this mess. | Redesigning my website has destroyed my SEO | seo;website design | null |
_cs.1647 | I have had problems accepting the complexity theoretic view of efficiently solved by parallel algorithm which is given by the class NC:NC is the class of problems that can be solved by a parallel algorithm in time $O(\log^cn)$ on $p(n) \in O(n^k)$ processors with $c,k \in \mathbb{N}$.We can assume a PRAM.My problem is that this does not seem to say much about real machines, that is machines with a finite amount of processors. Now I am told that it is known that we can efficiently simulate a $O(n^k)$ processor algorithm on $p \in \mathbb{N}$ processors.What does efficiently mean here? Is this folklore or is there a rigorous theorem which quantifies the overhead caused by simulation?What I am afraid that happens is that I have a problem which has a sequential $O(n^k)$ algorithm and also an efficient parallel algorithm which, when simulated on $p$ processors, also takes $O(n^k)$ time (which is all that can be expected on this granularity level of analysis if the sequential algorithm is asymptotically optimal). In this case, there is no speedup whatsover as far as we can see; in fact, the simulated parallel algorithm may be slower than the sequential algorithm. That is I am really looking for statements more precise than $O$-bounds (or a declaration of absence of such results). | How to scale down parallel complexity results to constantly many cores? | complexity theory;reference request;parallel computing | If you assume that the number of processors is bounded by a constant, then you are right that a problem being in NC does not mean much in practice. Since any algorithm on a PRAM with k processors and t parallel time can be simulated with a single-processor RAM in O(kt) time, the parallel time and the sequential time can differ only by a constant factor if k is a constant.However, if you assume that you can prepare a computer with more processors as the input size grows, then a problem being in NC means that as long as you can prepare more processors, running time will be very short or, more precisely, polylogarithmic in the input size. If you think that this assumption is unrealistic, compare it to the assumption of unbounded memory: actual computers have only finite amount of space, but in the study of algorithms and complexity, we almost always assume that a computational device does not have a constant upper bound on space. In practice, this means that we can prepare a computer with more memory as the input size grows, which is how we usually use computers in the real world. NC models an analogous situation in parallel computation. |
_cs.73889 | So from what I understand, if a language is recognisable then using a TM it can be accepted and halted or rejected or halted, however a language that is decidable can be accepted and always halts on rejection?I have been given the language $A_{\mathrm{DFA}} = \{\langle A \rangle\mid A \text{ is a DFA and } L(A) = \{0, 1\}^\}$ and asked if it is decidable, if so write a high level TM, if not, prove by contradiction.I am quite unsure, what I have come up with so far is that it is a recognisable language, I don't believe it can be rejected, only accepted and halted due to the fact once the read head comes across a blank it can just accept it. Is this the way I should be thinking? Am I correct in thinking it is not decidable or is this not a valid way to argue the point?What should my process of thinking be when I am trying to prove it by contradiction if it is not decidable?Thanks in advance for any tips! | Understanding how to decide whether a language for a DFA is decidable | turing machines;finite automata;undecidability;proof techniques | null |
_webmaster.88192 | I would like to get a list of uploaded files which aren't linked at all.Is there an equivalent of Special:OrphanedPages for files?Or is there a filter for Special:ListFiles for orphans?Or is there a SQL query which returns the file names? | MediaWiki: Show orphaned files | mediawiki | There is a special page Special:UnusedFiles that shows all files that are not in use. |
_unix.281049 | Can anybody please explain the following parameters found in the /etc/security/limits file? default: fsize = 2097151, core = 2097151, cpu = -1, data = 262144, rss = 65536, stack = 65536, nofiles = 2000 | AIX : /etc/security/limits file Parameters | security;aix;ulimit | null |
_codereview.16303 | It's my first window program. It simply searches for a specified file in the whole computer.File Search.h (header file that contains the prototypes of fileSearcher class methods.) #ifndef UNICODE#define UNICODE#endif#include <Windows.h>#include <queue>namespace fileSearch{class fileSearcher{public: fileSearcher(); ~fileSearcher(); //So far, this class doesn't allocate memory on the heap,therefore destructor is empty void getAllPaths(const TCHAR* fileName,std::queue<TCHAR*> &output); /*Returns all matching pathes at the current local system. Format: [A-Z]:\FirstPath\dir1...\fileName [A-Z]:\SecondPath\dir2...\fileName ... [A-Z]:\NPath\dirN...\fileName */ void findFilesRecursivelly(const TCHAR* curDir,const TCHAR* fileName,std::queue<TCHAR*> &output); //Searches for the file in the current and in sub-directories. Sets nothingFound=false if the file has been found.private: static const int MAX_LOCATIONS = 20000; bool nothingFound; void endWithBackslash(TCHAR* string);};}File Search.cpp ( ... definitions)#ifndef UNICODE#define UNICODE#endif#include File Search.husing namespace fileSearch;fileSearcher::fileSearcher(){ nothingFound = true;}fileSearcher::~fileSearcher(){}void fileSearcher::getAllPaths(const TCHAR* fileName,std::queue<TCHAR*> &output){ TCHAR localDrives[50]; TCHAR currentDrive; int voluminesChecked=0; TCHAR searchedVolumine[5]; nothingFound = true; if(!wcslen(fileName)) { output.push(TEXT(Invalid search key)); return; } GetLogicalDriveStrings(sizeof(localDrives)/sizeof(TCHAR),localDrives); //For all drives: for(int i=0; i < sizeof(localDrives)/sizeof(TCHAR); i++) { if(localDrives[i] >= 65 && localDrives[i] <= 90) { currentDrive = localDrives[i]; voluminesChecked++; } else continue; searchedVolumine[0] = currentDrive; searchedVolumine[1] = L':'; searchedVolumine[2] = 0; findFilesRecursivelly(searchedVolumine,fileName,output); } if(nothingFound) output.push(TEXT(FILE NOT FOUND));}void fileSearcher::findFilesRecursivelly(const TCHAR* curDir,const TCHAR* fileName,std::queue<TCHAR*> &output){ HANDLE hFoundFile; WIN32_FIND_DATA foundFileData; TCHAR* buffer; buffer = new TCHAR[MAX_PATH+_MAX_FNAME]; wcscpy(buffer,curDir); endWithBackslash(buffer); if(!SetCurrentDirectory(buffer)) return; //Fetch inside current directory hFoundFile = FindFirstFileEx(fileName,FINDEX_INFO_LEVELS::FindExInfoBasic,&foundFileData ,FINDEX_SEARCH_OPS::FindExSearchNameMatch,NULL,FIND_FIRST_EX_LARGE_FETCH); if(hFoundFile != INVALID_HANDLE_VALUE) { nothingFound = false; do { buffer = new TCHAR[MAX_PATH+_MAX_FNAME]; wcscpy(buffer,curDir); endWithBackslash(buffer); wcscat(buffer,foundFileData.cFileName); wcscat(buffer,TEXT(\r\n)); output.push(buffer); } while(FindNextFile(hFoundFile,&foundFileData)); } //Go to the subdirs hFoundFile = FindFirstFileEx(TEXT(*),FINDEX_INFO_LEVELS::FindExInfoBasic,&foundFileData ,FINDEX_SEARCH_OPS::FindExSearchLimitToDirectories ,NULL , NULL); if(hFoundFile != INVALID_HANDLE_VALUE ) { do { if(wcscmp(foundFileData.cFileName,TEXT(.)) && wcscmp(foundFileData.cFileName,TEXT(..))) { TCHAR nextDirBuffer[MAX_PATH+_MAX_FNAME]=TEXT(); wcscpy(nextDirBuffer,curDir); endWithBackslash(nextDirBuffer); //redundant? 
wcscat(nextDirBuffer,foundFileData.cFileName); findFilesRecursivelly( nextDirBuffer,fileName,output); } } while(FindNextFile(hFoundFile,&foundFileData)); } }void fileSearcher::endWithBackslash(TCHAR* string){ if(string[wcslen(string)-1] != TEXT('\\')) wcscat(string,TEXT(\\));}main.cpp (simple window interface for the fileSearcher class)#ifndef UNICODE#define UNICODE#endif#include <Windows.h>#include <queue>#include File Search.hstatic HWND textBoxFileName;static HWND searchButton;static HWND textBoxOutput;LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam);using namespace fileSearch;int WINAPI wWinMain(HINSTANCE hInstance,HINSTANCE hPrevInstance,LPWSTR cmdLine,int nCmdShow){ TCHAR className[] = LMain window class; WNDCLASS wc = {}; wc.lpfnWndProc = WindowProc; wc.hInstance = hInstance; wc.lpszClassName = className; RegisterClass(&wc); HWND hMainWindow = CreateWindowEx(WS_EX_CLIENTEDGE,className,LPath getter,WS_OVERLAPPEDWINDOW,CW_USEDEFAULT,CW_USEDEFAULT,600,300,NULL,NULL,hInstance,NULL); if(hMainWindow == NULL) return 0; textBoxFileName = CreateWindowEx(WS_EX_CLIENTEDGE, LEdit, NULL,WS_CHILD | WS_VISIBLE | ES_AUTOHSCROLL, 10, 10, 300, 21, hMainWindow, NULL, NULL, NULL); searchButton = CreateWindowEx(WS_EX_CLIENTEDGE,LButton,LSearch,WS_CHILD | WS_VISIBLE | ES_CENTER, 10, 41,75,30,hMainWindow,NULL,NULL,NULL); textBoxOutput = CreateWindowEx(WS_EX_CLIENTEDGE,LEdit,NULL,WS_CHILD | WS_VISIBLE | WS_VSCROLL | ES_AUTOVSCROLL | ES_MULTILINE | ES_READONLY ,10,81,500,90,hMainWindow,NULL,NULL,NULL); ShowWindow(hMainWindow,nCmdShow); MSG msg = { }; while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } return 0;}LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam){ switch (uMsg) { case WM_COMMAND: if((HWND)lParam == searchButton) { fileSearcher searcher; TCHAR key[1000]; std::queue<TCHAR*> buffer; SetWindowText(textBoxOutput,NULL); GetWindowText(textBoxFileName,key,1000); searcher.getAllPaths(key,buffer); SendMessage(textBoxOutput,EM_LIMITTEXT,WPARAM(0xFFFFF),NULL); while(!buffer.empty()) { SendMessage(textBoxOutput,EM_SETSEL,GetWindowTextLength(textBoxOutput),GetWindowTextLength(textBoxOutput)); SendMessage(textBoxOutput,EM_REPLACESEL,FALSE, (LPARAM) buffer.front()); delete [] buffer.front(); buffer.pop(); } } return 0; break; case WM_DESTROY: PostQuitMessage(0); return 0; case WM_PAINT: { PAINTSTRUCT ps; HDC hdc = BeginPaint(hwnd, &ps); HBRUSH pedzel; pedzel = CreateSolidBrush(RGB(249,224,75)); FillRect(hdc, &ps.rcPaint, pedzel); EndPaint(hwnd, &ps); } return 0; } return DefWindowProc(hwnd, uMsg, wParam, lParam);}Shortcomings that I've noticed so far:1) The window freezes during work2) Only a few queries can be done in a row - a memory after each isn't deallocated from heap (I simply don't know how to fix it). | Review my file-searching program | c++;windows | A few points:Best Practice: There's no need to define UNICODE in File Search.h since it's defined at the top of those files that include it. Consider creating a configuration header that gets included by the .cpp first and let it handle the defines and other preprocessor logic. That way if you decide to make a non-UNICODE build, you only need to change one definition, not two or three.Best Practice: Class names in C++ generally follow one of two conventions: CamelCaseStyle or underscore_style. Consider using one of these for your class.Potential Build Error / Best Practice: Good use of TCHAR and the TEXT macro to conform to the Windows function definitions. 
Be sure to use the macro instead of directly using L... and L'.', also, so that the code can compile a non-UNICODE build. Same goes for the string-manipulating functions: wcscpy (should be _tcscpy), wcscat (should be _tcscat), and wcscmp (should be _tcscmp). More on strings later, though, since all of this (including the use of TCHAR*) is the hard way to manage strings in C++. Memory Leak: In fileSearcher::fineFilesRecursivelly, buffer is allocated with new, but never deleted. Currently the function can return in two places, so you'll need to have delete [] buffer before those two places (more on strings like buffer next). [edit: buffer is allocated twice in the function. Be sure that it's freed before allocating it again.]Best Practice: You're using C-style strings in your C++ code, which means you need to allocate them yourself (like buffer), deallocate them yourself, and call the various string-managing functions. Instead, consider using C++'s std::basic_string-derived classes to handle the tedious details. I think the related changes you'd need to make are out of scope for a code review because of the heavy use of C-style strings in your code, so check out Stack Overflow's many questions and answers on the topic.Potential Runtime Error / Best Practice: You're using std::queue<TCHAR*> to make a queue of strings, but the strings are not being copied to the queue, only referenced. The string's memory is not guaranteed to be correct once the string variable goes out of scope (I'm a little surprised it works [edit: it's not as broken as I first thought, but I still recommend the change mentioned here] ). You'll want to use std::queue<std::basic_string<TCHAR>> to ensure that copies are created and destroyed. More on strings in the previous note.Shortcomings that you mentioned:The leg-work in your code is being done from WindowProc, so additional messages cannot be processed until the work is finished. You could use a separate thread to do the work, but that topic is beyond the scope of this review.I think the memory leak is buffer, as mentioned above. |
_codereview.94035 | I am creating a simple 2D canvas game engine in JavaScript. Are there any optimizations that I could make, or obvious issues (performance, semantics or otherwise) that you can see?JS Bin herevar GameCanvas = { // GameCanvas Variables animation: { requestAnimationFrame: null, halt: false }, canvas: { element: null, context: null, width: 500, height: 500, backgroundColor: '#000000' }, objects: [ ], baseObject: function(){ return { // Custom internal name of object (Mostly for debugging) name: '', position: { width: 0, height: 0, x: 0, y: 0 }, // Any custom data for this object data: { }, // the draw method draw: function(GameCanvas){ }, criticalObj: true }; }, addObject: function(obj){ this.objects.unshift(obj); }, init: function(CanvasID){ // Get Canvas by ID this.canvas.element = document.getElementById(CanvasID); if(this.canvas.element===null){ this.console.error(No valid canvas with ID \+CanvasID+\ found.); return false; } // Get Canvas dimensions this.canvas.width = this.canvas.element.width || this.canvas.width; this.canvas.height = this.canvas.element.height || this.canvas.height; // Get the context this.canvas.context = this.canvas.element.getContext('2d'); if(this.canvas.context===null){ this.console.error(Failed to get context.); return false; } // Setup Request Animation Frame // RequestAnimationFrame Shim this.animation.requestAnimationFrame = (function(){ return window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || function(callback){ //hack for RequestAnimationFrame not being there this.console.log('No support for RequestAnimationFrame.'); window.setTimeout(callback, 1000 / 60); }; })(); this.console.log(Initialized Successfully.); return true; }, console: { data: '', // Allows for custom error output error: function(data){ if(console.error!==undefined){ console.error('GameCanvas: '+String(data)); } this.data += data+\n; }, log: function(data){ if(console.log!==undefined){ console.log('GameCanvas: '+String(data)); } this.data += data+\n; } }, run:function(){ this.console.log(Starting mainLoop().); this.mainLoop(); }, mainLoop: function(){ // Fill Background this.canvas.context.fillStyle = this.canvas.backgroundColor; this.canvas.context.fillRect(0,0,this.canvas.width,this.canvas.height); //Loop through objects for(var obj in this.objects){ try{ this.objects[obj].draw(this); } catch(err){ this.console.log(Object +this.objects[obj].name+:+obj+ throws error:\n+err); if(this.objects[obj].criticalObj){ this.console.error(Critical Object +this.objects[obj].name+:+obj+ not drawn. Halting mainLoop().); this.animation.halt = true; } } } // Looping call if(!this.animation.halt){ this.animation.requestAnimationFrame.call( window, // Call function in context of window // Call the mainLoop() function in the context of the current object this.mainLoop.bind(this) ); } else{ this.console.log(mainLoop() halted.); this.animation.halt = false; } }};GameCanvas.init(canvas);var box = GameCanvas.baseObject();box.name = Box;box.draw = function(GameCanvas){ GameCanvas.canvas.context.fillStyle = #00FF00; GameCanvas.canvas.context.fillRect(this.position.x,this.position.y,this.position.width,this.position.height);};box.position.width = 50;box.position.height = 50;box.position.x = 10;box.position.y = 10;GameCanvas.addObject(box);GameCanvas.run(); | Javascript Canvas Game Engine | javascript;game;canvas | You have some weird spacing in places, like empty arrays and functions. 
For example, the below block of code, can be changed to objects: [].objects: []The same also applies to empty functions, like this:draw: function(GameCanvas){}Quite a bit of your indentation as well is inconsistent. For example, the below block of code, and ones like it with inconsistent indentation:function(callback){ //hack for RequestAnimationFrame not being there this.console.log('No support for RequestAnimationFrame.'); window.setTimeout(callback, 1000 / 60);}Should be changed to a more consistent style, like this:function(callback){ //hack for RequestAnimationFrame not being there this.console.log('No support for RequestAnimationFrame.'); window.setTimeout(callback, 1000 / 60);}I'd personally recommend either four spaces, or one tab for indentation in Javascript.In the line this.console.log(Object +this.objects[obj].name+:+obj+ throws error:\n+err);, you prefix console.log with this. In this situation, this isn't needed, and can be removed.Finally, you have many other style violations, so I'd recommend checking out a style guide, like this one for reference on how to properly style code. |
_codereview.51993 | Here's a function I wrote - am I getting DRY about right? I could add another argument to paramFilter maybe, for even less reuse. Or have I gone too overboard as it is?function goMageHack() { var ampFilter = function (str) { var amps = [&, amp%3B]; for (var j = 0; j < amps.length; j++) { str = fullReplace(amps[j],amp;,str); } return str; } var paramFilter = function(element, param) { var $element = jQuery(element); for (var i = 0; i < param.length; i++) { if ($element.attr(param[i])) { $element.attr(param[i], ampFilter($element.attr(param[i]))); $element.attr(param[i], fullReplace(amp;, &, $element.attr(param[i]))); } } } jQuery(.pages,.sort-by).find(a).each(function (_, element) { paramFilter(element, [data-param, href]); }); jQuery(.limiter).find(option).each(function(_, element){ paramFilter(element, [value]); });}For completeness, since it is called from that function:function fullReplace(needle, haystack, str) { str = String(str); var newStr; while ((newStr = str.replace(needle, haystack)) !== str) { str = newStr; } return newStr;}Additionfunction handleChangingDropBox(attrib, that) { var $this = jQuery(that); var activeSet = Number($this.children(:selected).data(checkboxkey)); var checkBoxes = checkBoxSets[activeSet]; var size; if (currentlySelectedDropDownBox !== null) { checkBoxSets[currentlySelectedDropDownBox].trigger(click); } currentlySelectedDropDownBox = activeSet; if (size = checkBoxes.size()) { var lastCheckBox = checkBoxes.last(); if (size !== 1) { if ($this.val() === choose) { var dirtyParams = splitUrlIntoParams(lastCheckBox.data('param')); delete dirtyParams[attrib]; lastCheckBox.attr(data-param, createAttribString(dirtyParams)); lastCheckBox.trigger(click); currentlySelectedDropDownBox = null; } else { var newParams = []; var not = checkBoxes.not(lastCheckBox); not.each( function (index, element) { var $element = jQuery(element); newParams.push(getTheDiffForATextBox($element, lastCheckBox, attrib)); } ); createTheDataAttribsForTheLastCheckBox(lastCheckBox, newParams, not.first(), attrib); } } else { var differentindex = (activeSet == 0) ? 1 : 0; var notCheckBox = checkBoxSets[differentindex].first(); createTheDataAttribsForTheLastCheckBox(lastCheckBox, [], notCheckBox, attrib); } lastCheckBox.trigger(click); } | Filter functions | javascript;jquery | null |
_unix.275689 | I have a problem editing a html file on a server via vim. The file is utf-8 encoded.While editing with vim (v7.3, no plugins active) I can see umlauts and editing and saving a line before the umlaut is ok. But if I edit after the umlaut it seems that the umlaut consumes two chars while only one char is visible and all edits are shifted. I can see this only after saving and reopening the file. And I can insert an umlaut but for removing I have to press x twice (the char changes meanwhile).I have no idea where to search for the issue vim, terminal or ssh connection?remote:> file index.htmlindex.html: HTML document, UTF-8 Unicode text> echo $TERMxterm-256color> locale charmapANSI_X3.4-1968> grep CHARMAP /etc/default/console-setup CHARMAP=UTF-8local:> locale charmapUTF-8 | problem editing utf8 text file with vim | ssh;vim;unicode | null |
_softwareengineering.81207 | These days is it required to test a desktop website for IE6 and IE7? Or is IE8 and IE9 enough?I heard that IE8 has replaced IE7. | These days is it required to test a desktop website for IE6 and IE7? Or is IE8 and IE9 enough? | html;css;html5;xhtml | You need to consider your where your target audience is from.For example, looking at the United Kingdom:http://gs.statcounter.com/#browser_version-GB-monthly-201005-201105Results: 1.72% IE6, and 6.66% IE7.So for websites designed for UK businesses targeting UK clients, I feel safe dropping IE6. I try to make it work in IE7 where possible, but it's fine if it's not perfect. It's much the same story for America.On the other hand, if you're looking at India:http://gs.statcounter.com/#browser_version-IN-monthly-201005-201105Results: 11.81% IE6, and 5.33% IE7.IE6 actually has higher usage than IE7. I can't comment on India, but those statistics don't look good.China makes me cry:http://gs.statcounter.com/#browser_version-CN-monthly-201005-201105Results: 40.54% IE6, and 5.64% IE7.Worldwide:http://gs.statcounter.com/#browser_version-ww-monthly-201005-201105Results: 3.84% IE6, and 6.39% IE7. |
_softwareengineering.349693 | I've got a class Shop which contains a collection of Item objects. You can create a shop in two different ways:Create a shop filled with test data (for debug purposes);Create a shop by reading the data from fileI also need to write the shop back to file. There is any pattern that can elegantly wrap this behavior?This is the implementation right now (in Java):class Item { public final String name; public Item(String n) { name = n; }}class Shop { public List<Item> items; private Shop() { items = new LinkedList<>(); } public static Shop generateTestData() { Shop shop = new Shop(); shop.items.add(new Item(Product A)); shop.items.add(new Item(Product B)); shop.items.add(new Item(Product C)); return shop; } public static Shop readFromFile(String filepath) { // read from file } public static void writeToFile(Shop shop, String filepath) { // write to file }}N.B. I considered applying a sort of Prototype pattern in this way: I create a static instance of the Shop filled with test data, then the user can get a copy of that and modify it as he/she wants. Is it a good idea? | Design Pattern: getting a collection of objects from different sources | design patterns;object oriented design | In short, you are missing the support of a dedicated layer (abstraction) addressed to facilitate the access to the data and to its differents storages. The DALThe DAL allow you to decouple your actual Shop from its storage. The layer stablishes a well-defined separation of concerns (responsabilities).The design of the DAL usually vary among projects. However, I think that DAO and Repository patterns could do the job perfectly in this specific case.In terms of abstraction, DAOs and Repositories belongs to different layers. Being the Repository the higher and the DAO the lowerBussines Layer -> Repository -> DAOIt's not necessary to implement both. Whether you implement one of them or both, the key here is the single responsability principle.Translated to your specific case, you can do something like this:ShopRepository:FileShopDAO InMemoryShopDAODataBaseShopDAOetc...For every specific situation you could use one of the DAOsTesting : InMemoryShopDAOProduction : DataBaseShopDAO or FileShopDAOThe key here is that all the DAOs implement the same interface. For instance: ShopDAO. public ShopRepository { private ShopDAO dao; public ShopRepository(ShopDAO dao){ this.dao = dao; } //Data access methods....}Choose according to the requirements and your preferences.For instance, you may decide to go with 3 different Repositories rather than 3 different DAOs. That's up to you. |
_unix.85164 | In Linux, I created a userid. After creating it, I encountered a problem: .EXE files are not opened on a simple click. They seem not to be privileged for my user account. How can I overcome this? | Privileges on Linux? | linux;permissions;executable | Assuming these .exe files were actually compiled for Linux (and your specific architecture), you need to ensure they have execute permissions: chmod +x your_file_names_here. To make sure these files are actually meant to run on Linux, check the output of: file one_file_name_here |
_codereview.10180 | This is intended to be part of a generalised solution to the problem of converting any (with some minor restrictions) CSV content into XML. The restrictions on the CSV, and the purpose of the schema should be apparent from the annotations.The main review criteria I request are:Is it suitable for non-destructive round-trip transformations from .csv to .xml and back again to .csv?Is the schema clear and readable enough?Is there a simpler way to do the same thing?Are there any obvious errors?This schema, as well as associated XSLT style-sheets, when polished, will be put to good use in the public domain with a creative commons license.Here is the schema to be reviewed: <?xml version=1.0 encoding=UTF-8?><xs:schema xmlns:xs=http://www.w3.org/2001/XMLSchema xmlns:xcsv=http://seanbdurkin.id.au/xslt/xcsv.xsd elementFormDefault=qualified targetNamespace=http://seanbdurkin.id.au/xslt/xcsv.xsd version=1.0> <xs:import namespace=http://www.w3.org/XML/1998/namespace schemaLocation=xml.xsd/> <xs:element name=comma-separated-single-line-values> <xs:annotation><xs:documentation xml:lang=en> This schema describes an XML representation of a subset of csv content. The format described by this schema, here-after referred to as xcsv is part of a generalised solution to the problem of converting general csv files into suitable XML, and the reverse transform. The restrictions on the csv content are: * The csv file is encoded either in UTF-8 or UTF16. If UTF-16, a BOM is required. * The cell values of the csv may not contain the CR or LF characters. Essentially, we are restricted to single-line values. The xcsv format was developed by Sean B. Durkin… www.seanbdurkin.id.au </xs:documentation></xs:annotation> <xs:complexType> <xs:sequence> <xs:element ref=xcsv:notice minOccurs=0 maxOccurs=1/> <xs:element name=row minOccurs=0 maxOccurs=unbounded> <xs:annotation><xs:documentation xml:lang=en> A row element represents a row or line in the csv file. Rows contain values. </xs:documentation> <xs:appinfo> <example> <csv-line>apple,banana,red, white and blue,quote this()</csv-line> <row> <value>apple</value> <value>banana</value> <value>red, white and blue</value> <value>quote this()</value> </row> </example> </xs:appinfo> </xs:annotation> <xs:choice minOccurs=1 maxOccurs=unbounded> <xs:annotation><xs:documentation xml:lang=en> Empty rows are not possible in csv. We must have at least one value or one error. </xs:documentation></xs:annotation> <xs:element name=value> <xs:annotation><xs:documentation xml:lang=en> A value element represents a decoded (model) csv value or cell. If the encoded value in the lexical csv was of a quoted form, then the element content here is the decoded or model form. In other words, the delimiting double-quote marks are striped out and the internal escaped double-quotes are de-escaped. </xs:documentation></xs:annotation> <xs:simpleType> <xs:restriction base=xs:string> <xs:pattern value=[^\n]*/> <xs:whiteSpace value=preserve/> <xs:annotation><xs:documentation xml:lang=en> Cell values must fit this pattern because of the single-line restriction that we placed on the csv values. </xs:documentation></xs:annotation> </xs:restriction> </xs:simpleType> </xs:element> <xs:group ref=xcsv:errorGroup> <xs:annotation><xs:documentation xml:lang=en> An error can be recorded here as a child of row, if there was an encoding error in the csv for that row. 
</xs:documentation></xs:annotation> </xs:group> </xs:choice> </xs:element> <xs:group ref=xcsv:errorGroup> <xs:annotation><xs:documentation xml:lang=en> An error can be recorded here as a child of the comma-separated-values element, if there was an i/o error in the transformational process. For example: CSV file not found. </xs:documentation></xs:annotation> </xs:group> </xs:sequence> <xs:attribute name=xcsv-version type=xs:decimal fixed=1.0 use=required/> </xs:complexType> </xs:element> <xs:element name=comma-separated-multiline-values> <xs:annotation><xs:documentation xml:lang=en> Similar to xcsv:comma-separated-multi-line-values but allows multi-line values. </xs:documentation></xs:annotation> <xs:complexType> <xs:sequence> <xs:element ref=xcsv:notice minOccurs=0 maxOccurs=1/> <xs:element name=row minOccurs=0 maxOccurs=unbounded> <xs:choice minOccurs=1 maxOccurs=unbounded> <xs:element name=value> <xs:simpleType> <xs:restriction base=xs:string> <xs:whiteSpace value=preserve/> </xs:restriction> </xs:simpleType> </xs:element> <xs:group ref=xcsv:errorGroup> </xs:group> </xs:choice> </xs:element> <xs:group ref=xcsv:errorGroup> </xs:group> </xs:sequence> <xs:attribute name=xcsv-version type=xs:decimal fixed=1.0 use=required/> </xs:complexType> </xs:element> <xs:element name=notice type=xcsv:notice-en /> <xs:annotation><xs:documentation xml:lang=en> This is an optional element below comma-separated-single-line-values or comma-separated-multiline-values that looks like the example. </xs:documentation> <xs:appinfo> <example> <notice xml:lang=en>The xcsv format was developed by Sean B. Durkin…www.seanbdurkin.id.au</notice> </example> </xs:appinfo></xs:annotation> <xs:complexType name=notice-en> <xs:simpleContent> <xs:extension base=xcsv:notice-content-en> <xs:attribute ref=xml:lang use=required fixed=en /> </xs:extension> </xs:simpleContent> </xs:complexType> <xs:simpleType name=notice-content-en> <xs:restriction base=xs:string> <xs:enumeration value=The xcsv format was developed by Sean B. Durkin…www.seanbdurkin.id.au/> </xs:restriction> </xs:simpleType> <xs:element /> <xs:group name=errorGroup> <xs:annotation><xs:documentation xml:lang=en> This is an error node/message in one or more languages. </xs:documentation> <xs:appinfo> <example> <error error-code=2> <message xml:lang=en>Quoted value not terminated.</message> <message xml:lang=ru> .</message> <error-data></error-data> </error> </example> <example> <error error-code=3> <message xml:lang=en>Quoted value incorrectly terminated.</message> <message xml:lang=ru> .</message> </error> </example> </xs:appinfo> </xs:annotation> <xs:element name=error> <xs:element name=message minOccurs=1 maxOccurs=unbounded type=xcsv:string-with-lang /> <xs:annotation><xs:documentation xml:lang=en> Although there can be multiple messages, there should only be at most one per language. </xs:documentation></xs:annotation> <xs:element name=error-data minOccurs=0 maxOccurs=1 > <xs:simpleContent> <xs:restriction base=xs:string> <xs:whiteSpace value=preserve/> </xs:restriction> </xs:simpleContent> </xs:element> <xs:attribute name=error-code type=xs:positiveInteger default=1 /> <xs:annotation><xs:documentation xml:lang=en> Each different kind of error should be associated with a unique error code. A map for the error codes is outside the scope of this schema, except to say the following: * one (1) means a general or uncategorised error. (Try to avoid this!) 
</xs:documentation></xs:annotation> </xs:element> </xs:group> <xs:complexType name=string-with-lang> <xs:annotation><xs:documentation xml:lang=en> This is an element with text content in some language as indicated by the xml:lang attribute. </xs:documentation></xs:annotation> <xs:simpleContent> <xs:extension base=xs:string> <xs:attribute ref=xml:lang use=required default=en /> </xs:extension> </xs:simpleContent> </xs:complexType></xs:schema>Use casesCase 1Lines ending in CR LF, including the last line.The CSV:1st name,2nd nameSean,Brendan,Durkin,<This is a place-marker for an empty row>,The XML equivalent (schema valid):<xcsv:comma-separated-values xmlns:xcsv=http://seanbdurkin.id.au/xslt/xcsv.xsd xmlns:xml=http://www.w3.org/XML/1998/namespace xcsv-version=1.0> <xcsv:notice xml:lang=en>The xcsv format was developed by Sean B. Durkin…www.seanbdurkin.id.au</xcsv:notice> <xcsv:row> <xcsv:value>1st name</xcsv:value> <xcsv:value>2nd name</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value>Sean</xcsv:value> <xcsv:value>Brendan</xcsv:value> <xcsv:value>Durkin</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value>,</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value /> </xcsv:row> <xcsv:row> <xcsv:value /> <xcsv:value /> </xcsv:row></xcsv:comma-separated-values>Case 2As case 1, but with line endings as just LF.XML as case 1.Case 3Lines ending in CR LF, including the last line.The CSV:Fruit,ColourBanana,YellowThe XML equivalent (schema valid):<xcsv:comma-separated-values xmlns:xcsv=http://seanbdurkin.id.au/xslt/xcsv.xsd xmlns:xml=http://www.w3.org/XML/1998/namespace xcsv-version=1.0> <xcsv:row> <xcsv:value>Fruit</xcsv:value> <xcsv:value>Colour</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value>Banana</xcsv:value> <xcsv:value>Yellow</xcsv:value> </xcsv:row></xcsv:comma-separated-values>Case 4Same as case 3, but last line ends in eof. In other words, the last byte of the file is the UTF-8 code for 'w'.Same XML!Case 5Empty file. The size of the file is zero.Valid XML instance:<xcsv:comma-separated-values xmlns:xcsv=http://seanbdurkin.id.au/xslt/xcsv.xsd xcsv-version=1.0 />Case 6.The file has one byte: the UTF-8 code for LF.CSV:LFValid XML instance:Same XML as case 5!Case 7CVS encoding errorsThe CSV (not valid):Fruit,ColourBanana,YellowThe valid XML instance:<xcsv:comma-separated-values xmlns:xcsv=http://seanbdurkin.id.au/xslt/xcsv.xsd xmlns:xml=http://www.w3.org/XML/1998/namespace xcsv-version=1.0> <xcsv:row> <xcsv:value>Fruit</xcsv:value> <xcsv:error error-code=2> <xcsv:message xml:lang=en>Quoted value not terminated.</xcsv:message> <xcsv:error-data></xcsv:error-data> </xcsv:error> <xcsv:value>Colour</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value>Banana</xcsv:value> <xcsv:error error-code=3> <xcsv:message xml:lang=en>Quoted value incorrectly terminated.</xcsv:message> <xcsv:error-data></xcsv:error-data> </xcsv:error> <xcsv:value>Yellow</xcsv:value> </xcsv:row></xcsv:comma-separated-values>Case 8Specific application where CSV looks like:1st name,2nd nameSean,DurkinPeter,PanIn this specific application, the header is always there, with columns in the specified order:<people> <person first-name=Sean first-name=Durkin /> <person first-name=Peter first-name=Pan /></people>Step 1. Transform .cvs into .xcvs, using a generic library XSLT style-sheet.Step 2. Transform .xcsv into the application-specific structure as above, using a trivial XSLT style-sheet.Case 9This use case demonstrates the necessary XML encoding on a lexical level for & and < and raw data. 
No special encoding is required at the XML parser API level.The CSV: Character,Name &,Ampersand <,Less thanThe equivalent schema-valid XML instance:<xcsv:comma-separated-values xmlns:xcsv=http://seanbdurkin.id.au/xslt/xcsv.xsd xmlns:xml=http://www.w3.org/XML/1998/namespace xcsv-version=1.0> <xcsv:row> <xcsv:value>Character</xcsv:value> <xcsv:value>Name</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value>&</xcsv:value> <xcsv:value>Ampersand</xcsv:value> </xcsv:row> <xcsv:row> <xcsv:value><</xcsv:value> <xcsv:value>Less than</xcsv:value> </xcsv:row></xcsv:comma-separated-values> | XML Schema for an XML representation of CSV | xml;xsd;xslt | (This is more of a comment, than an answer, but there are several longer points I'd like to address which is easier in an answer).Could you show some use cases for this? Considering that both CSV and XML are both formats for general data storage, I don't see point in converting as CSV file into a non-specifc XML format instead directly into the specific XML format of the application in use.Also, the problem with CSV is that it's not really standardized. Despite the name they don't need to use commas as value separators. Semicolons or tabs are common variants. Also some variants require quoting all values, or allow single quotes, or use backslashes to escape quotes in values, or allow line breaks in values (which is the one variant you curiously disallow). If you really need non-destructive round-trip transformations you should consider all these variants and store the features of the CSV implementation in your XML.On the other hand, you store the information if a value is quoted or not, but this isn't really part of the relevant information. Take a, for example, similar conversation: XML -> DOM -> XML. Here it is also not stored if or how a value is quoted. An XML document such as<example><![CDATA[ <&> ]]></example>after reading it into a DOM structure and then re-serializing it, it could (and often would) come out as:<example> <&> </example>because both encodings are equivalent.Similarly in your case, it shouldn't matter if a value was originally quoted or not. So if a row such asapple,banana,red, white and blue,quote this()come out asapple,banana,red, white and blue,quote this()should be irrelevant - unless the specific CSV application requires quoting. So it's more important to store that information in the XML, than whether or a single value was quoted or not. |
_unix.387787 | How do I log in to Skype with a Microsoft account from the Linux terminal in Ubuntu? I used the following command: echo username password | skype --pipelogin. But it directs the credentials to the Skype Name login tab, whereas I want to log in with a Microsoft account. Can I log in to Skype with a Microsoft account from the Linux terminal? I am using Skype 4.3.0.37 (© 2014 Skype and/or Microsoft) on an Ubuntu 16.04 machine. | How to log in to Skype with a Microsoft account from the Linux terminal? | linux;ubuntu;terminal;login;skype | null |
_codereview.63921 | I've made a function to print all paths from root to all leaves in a binary search tree. I already have an insert function, which I haven't included in the code here as I felt it was irrelevant to the question. However, assume that it works. The code provided does produce the correct results. However, is it optimal? Is there room for improvement? Also, am I correct in thinking that the time complexity of this function is \$O(n)\$?public static void printPaths(Node node,ArrayList<Integer> path) { if(node == null) { return; } path.add(node.value); if(node.leftChild == null && node.rightChild == null) { System.out.println(path); return; } else { printPaths(node.leftChild,new ArrayList<Integer>(path)); printPaths(node.rightChild,new ArrayList<Integer>(path)); } }public static void main(String[] args) { BST tree = new BST(); tree.insertNode(20); tree.insertNode(8); tree.insertNode(22); tree.insertNode(12); tree.insertNode(10); tree.insertNode(14); tree.insertNode(4); ArrayList<Integer> path = new ArrayList<Integer>(); printPaths(tree.root, path); | Print all nodes from root to leaves | java;optimization;tree;complexity | However, is it optimal? I don't see wasted operations or opportunities to simplify the main logic itself.However, optimal is a bit tricky term. To begin with, optimal in terms of what? In terms of readability, I think this is fine. In terms of performance, it's not great. Cloning the list of nodes for every path is not efficient. If performance is important to you, then you need to rethink that part. I can think of at least these alternative algorithms:Use a shared linked list passed down to all method calls, that grows and shrinks as you go deeper or come back higher in the tree. This will avoid the duplication of the entire path.Use a shared array list passed down to all method calls, and also pass the current depth n. In each method call, overwrite the n-th element, and print the list contents up until the n-th element. This will avoid the duplication of the entire path. For an extra boost, if you know in advance the depth of the tree, initialize the array list with a size big enough to contain the entire path, so it doesn't need to be resized along the way. In fact, instead of an array list, you could use a plain array for best performance.Is there room for improvement? Use interface types in method signatures and variable declarations. For example:printPaths(Node node, List<Integer> path) { ... }List<Integer> path = new ArrayList<Integer>();You can omit the return statement here:if(node.leftChild == null && node.rightChild == null) { System.out.println(path); return;} else { printPaths(node.leftChild,new ArrayList<Integer>(path)); printPaths(node.rightChild,new ArrayList<Integer>(path));}And you should add a space in front of the opening paren of the if, and after commas in argument lists, like this:if (node.leftChild == null && node.rightChild == null) { System.out.println(path);} else { printPaths(node.leftChild, new ArrayList<Integer>(path)); printPaths(node.rightChild, new ArrayList<Integer>(path));}Another alternative is to keep the return but drop the else:if (node.leftChild == null && node.rightChild == null) { System.out.println(path); return;}printPaths(node.leftChild, new ArrayList<Integer>(path));printPaths(node.rightChild, new ArrayList<Integer>(path));Also, am I correct in thinking that the time complexity of this function is O(n)?Yes. 
You visit all the n nodes exactly once. Unfortunately, no, as @Florian explains very well in his answer. You might want to consider accepting his answer instead. |
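A minimal sketch of the shared buffer plus depth idea mentioned in the answer above (illustrative only; it assumes the same Node class with value, leftChild and rightChild fields, and that the caller knows an upper bound on the tree depth):

public static void printPathsShared(Node node, int[] path, int depth) {
    if (node == null) {
        return;
    }
    path[depth] = node.value; // overwrite this level's slot instead of copying a list
    if (node.leftChild == null && node.rightChild == null) {
        // leaf reached: print path[0..depth]
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i <= depth; i++) {
            sb.append(path[i]);
            if (i < depth) {
                sb.append(", ");
            }
        }
        System.out.println(sb.append("]"));
        return;
    }
    printPathsShared(node.leftChild, path, depth + 1);
    printPathsShared(node.rightChild, path, depth + 1);
}

It would be called as printPathsShared(tree.root, new int[maxDepth], 0), where maxDepth is an assumed upper bound on the tree height; no per-path ArrayList copies are made.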
_unix.96366 | I'm trying to write a man page for a software, and would like to include some code snippets. I'm currently using the .RS and .RE macros as part of a custom-made .SAMPLE macro, but for some reason that doesn't work. Here is the man page:.TH MYMANPAGE 1 .de SAMPLE.br.RS.nf.nh...de ESAMPLE.hy.fi.RE...SH TEST SECTION HEADINGThis is a test section heading..TP.B Test Paragraph LabelThis is some test paragraph text. This is some test paragraph text. Thisis some test paragraph text. This is some indented test code:.SAMPLEint main(void) { return 42;}.ESAMPLEThis is more text after the test code. This is more text after the testcode.What ends up happening is that the text after .ESAMPLE is not indented as much as the paragraph text. Instead, it's lined up with the paragraph label. What would be the proper .[E]SAMPLE macro definitions to get them to play nice with .TP? | Properly inserting code samples in man pages | man;roff;groff | The .RE restores the default indentation level, not the current .TPindentation level. All you need to do is save and restore the actualindent in play when .RS is called. The fix below assumes you willnot nest SAMPLEs inside SAMPLEs:.de SAMPLE.br.nr saveIN \\n(.i \ double the backslash when defining a macro.RS.nf.nh...de ESAMPLE.hy.fi.RE.in \\n[saveIN]u \ 'u' means 'units': do not scale this number ..$ man ./i[...]Test Paragraph Label This is some test paragraph text. This is some test paragraph text. This is some test paragraph text. This is some indented test code: int main(void) { return 42; } This is more text after the test code. This is more text after the test code. |
_scicomp.26229 | I'm writing a library that involves some approximations of variational calculus problems. Whenever I implement routines to evaluate the derivative or Hessian of an action functional $A$, I write a test to check that these are working correctly. To do this, I evaluate the error in using the first and second derivatives of the action to compute a local quadratic approximation to the objective functional:$E = A(u + \delta v) - \left(A(u) + \delta\cdot dA(u)\cdot v + \frac{1}{2}\delta^2(d^2A(u)v)\cdot v\right)$and check that $E/\delta^3$ doesn't grow for $\delta = 2^{-1},\ldots,2^{-N}$ for some big enough $N$.This works well for moderately sized $N$ (up to 16 or so), but for $N$ too large, truncation error begins to dominate and the error in the approximation gets worse.What is a good way to test derivative approximations in the face of truncation error?The way I see it, I have only bad options:Write an initial version of the test that prints the approximation errors, find the $N$ for which truncation error takes over, then use this $N$ as the cutoff point in the final version of the unit test.Have some way of automatically diagnosing where truncation error takes over and discard all the results thereafter.Don't make my unit tests return failure unless it's really obvious. Graph the results and let the user decide.Determine analytically where truncation error is likely to occur.I don't like option 1 because it feels like cherry-picking my tests so that they will pass. I don't like option 2 because a genuinely failing implementation's inherent mathematical errors might be erroneously assessed as truncation error, so that automated regression testing would miss the bug. At that rate, I'd have to check the results manually, in which case I might as well go with option 3. But I don't like option 3 because I want automated testing. I think option 4 is practical for functions of a single variable -- evaluating $\sqrt{\epsilon|f(x)/f''(x)|}$ where $\epsilon$ is the machine epsilon gives you a pretty good upper bound -- but for big PDE problems where the unknown is a field with thousands of variables, not so much. | testing derivative approximations | testing | null |
_webapps.44614 | There is the green dot, which means the person is online from a computer. Then there is the phone icon with a number next to it saying how long they have been away. So if the phone icon appears by itself, does that mean the person is currently on Facebook with their mobile device? | What does the phone icon by itself mean? | facebook;facebook chat | So if the phone icon appears by itself, does that mean the person is currently on Facebook with their mobile device? Yes, that's pretty much what it means.
_cstheory.29117 | One of the main parameters in the construction of extractors is $k$, the min-entropy of the source distribution. In practice, suppose we want to extract randomness from a given source $S$. How do we determine $k$ in this source $S$? More generally, how do we quantify the amount of (extracted) randomness in the output distribution of the extractor. | Extractors in Practice: How to Determine the Min-Entropy in the Source Distribution | cc.complexity theory;randomness;derandomization | In practice, if you want to extract randomness from a source $S$, you don't use an extractor. Instead, you use a cryptographic hash function and hash the data from the source. (This is how cryptographic-strength pseudo-random number generators distill randomness from many non-uniformly distributed sources of randomness.) This can be proven safe in the random oracle model.Of course, if you care mostly about theoretical bounds, extractors are better because they provide provable bounds without requiring cryptographic assumptions, so theoretical work will naturally focus on extractors -- but if you want to do this in practice, that consideration is less important. If you want to do this in practice, I recommend you use a cryptographic hash function, as the assumptions are reasonable and you get something that does not waste any entropy. (And before you reject the idea of depending upon cryptographic assumptions: Hey, if it's good enough to protect e-commerce and banks and classified documents, it's probably good enough for your purposes.)If you use a cryptographic hash function, the bottom line (essentially) is that all of the entropy of the source is preserved (up to the security level of the cryptographic hash function). So, quantifying the amount of extracted randomness comes down to quantifying how much randomness is present in the source.There is no good black-box way to determine the min-entropy of the source distribution. So, in practice, you don't try to determine the min-entropy of a source through observational methods (e.g., observing many samples and then doing something). Instead, you need to have a model of how the source works, and to derive estimates of its entropy based upon that model. That model might be based upon the physics of the source or other domain-specific assumptions. This is not a question that algorithms can help you with much.Why is it hard to estimate the min-entropy of a source? Consider two sources: $S_1$ outputs a random $1000$-bit value $x$ (distributed uniformly at random); $S_2$ picks a random $128$-bit value $k$, then uses AES in CTR mode (a secure pseudorandom generator) to stretch $k$ to a $1000$-bit value $y$, and outputs $y$. Notice that, assuming AES is secure, there is no feasible way to distinguish source $S_1$ from source $S_2$ by just observing their outputs: any algorithm for distinguishing the two sources would require something like $2^{128}$ steps of computation. However, the min-entropy of $S_1$ is very different from the min-entropy of $S_2$ ($1000$ bits vs $128$ bits). This example shows that computing the min-entropy of a source is a difficult in general, so you should not expect to find any efficient algorithm to do that for you based solely on observing some outputs from the source.For a more rigorous treatment of the complexity of estimating the min-entropy, see e.g. the following paper:The Complexity of Estimating Min-Entropy. Thomas Watson. CC 2015.The bottom line is that it's hard. 
(Thanks to epsfooling for pointing me to this paper.) The literature on cryptographic pseudorandom number generators has lots more on this topic. |
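For reference, the min-entropy of a source $X$ referred to throughout this answer is the standard quantity
$$H_{\infty}(X) = -\log_2 \max_{x} \Pr[X = x].$$
Under this definition the uniform 1000-bit source $S_1$ has $H_{\infty}(S_1) = 1000$ bits, while the output of $S_2$ is a deterministic function of its 128-bit seed, so $H_{\infty}(S_2) \le 128$ bits (exactly 128 if distinct seeds give distinct outputs), matching the numbers used in the example above.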
_unix.346126 | HandBrake's new VP9 codec is grayed out and cannot be used. I am on Linux Mint. I installed HandBrake fresh from the PPA repository that is referenced on their homepage; libav and the vpx-tools are installed. What have I missed? What can I check? | HandBrake's new VP9 codec is grayed out | video encoding;codec;handbrake | VP9 is associated with the MKV container. Choose a different preset, e.g. Matroska -> VP9 720p.
_unix.91620 | I am attempting to install Arch linux to a new (and very crappy) HP Pavillion 15 Notebook.This is a UEFI-based machine. After several swings at it, I have managed to get pretty far. Legacy mode is disabled in the system setup, and I have EFI-booted to the Arch DVD I burned, and progressed through both the Arch Beginner's Guide and the more advanced Installation Guide to the point where I am installing grub.While chrooted, I execute:grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck --debugThis emits a ton of output, including:EFI variables are not supported on this systemThe first time I got to this point, I continued with the installation, not knowing if it was an actual problem. Turns out it was, as when I rebooted the machine no bootable medium could be found and the machine refused to boot. I was able at that point to go in to the UEFI setup menu and select an EFI file to boot, and the Arch Linux would boot up.But I am now going back and reinstalling again, trying to fix the problem above.How can I get GRUB to install correctly? | EFI variables are not supported on this system | uefi | The problem was simply that the efivars kernel module was not loaded.This can be confirmed by:sh-4.2# efivar-testerUEFI variables are not supported on this machine.If you are chrooted in to your new install, exit out, and then enable efivars:exitmodprobe efivars...and then chroot back in. In my case, this means:chroot /mntbut you should chroot the same way you did before.Once back in, test again:efivar-testerThis will no longer report an error, and you can install grub the same way you did before.grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck --debug |
_unix.323194 | I'd like to check to make sure a handful of commands are available. If it's not, I'd like to print an error message and then exit.I'd like to do this without checking variables, because it's a small point in the script and I don't want it to sprawl over a bunch of lines.The shape I'd like to use is basically this:rsync --help >> /dev/null 2>&1 || printf %s\n rsync not found, exiting.; exit 1Unfortunately, the exit 1 is executed regardless of the rsync result. Is there a way to use this perl-type die message in bash, or no? | Put two commands after an || | bash | null |
_unix.164540 | How can we send a SMS message from a Linux server to mobile for notifying specific process status saying when a process goes down? | Sending SMS in Linux? | linux;monitoring;notifications;sms | null |
_cstheory.25229 | Fast rates generally refers to generalization bounds interpolating between the $1/n$ consistent rate and the $1/\sqrt n$ agnostic rate. I am aware of two basic approaches for obtaining these: (1) Talagrand's Bernstein-type inequality for empirical processes and (2) PAC-Bayesian bounds. Question: are there other methods? More to the point, is there a clean, user-friendly set of notes on this, suitable for (advanced) students? | Fast rates -- cleanest proof | machine learning;lg.learning | null |
_softwareengineering.205033 | I'm writing a library which links with modified LGPL library,so two questions:Do I have to make my code LGPL in case of static linking with LGPL library?in case of dynamic linking? | Static linking with modified LGPL code | licensing;lgpl | null |
_cstheory.18410 | If you have n points in 2d space, how quickly can you report the k nearest points to every point under Euclidean distance? If it helps speed things up we can ignore points that are more than some distance away as well so potentially return fewer than k. A randomized or approximate solution would also be interesting.One solution is to build a kd tree and do an independent look up for every point. Is this as good as it gets? | Finding k nearest neighbors for all points | ds.algorithms | $O(kn+n\log n)$. SeeP.B. Callahan, S.R. Kosaraju, A decomposition of multidimensionalpoint sets with applications to k-nearest-neighbors and n-body potential elds, J. ACM 42 (1995) 6790.In some models of computation the $O(n\log n)$ part can be reduced or removed; see alsoT. M. Chan, Well-separated pair decomposition in linear time?, Inf. Proc. Lett. 2008. |
_webapps.44082 | When I am logged in, Wikipedia currently shows a notification at the top of every single page saying that some accounts will be renamed due to a technical change. Clicking on Read more doesn't really explain much to me. What is it all about? | What does Wikipedia mean when they say some accounts will be renamed due to a technical change? | wikipedia;wiki | Wikipedia is currently trying to rename accounts that are not yet global. As you know, the English Wikipedia is only one of the many wikis on the network of wikis owned by the Wikimedia Foundation. Many years ago, they decided to implement something called Single User Login (SUL) so that users can log in on one wiki and be logged in on every wiki on the network. This only works for accounts that are considered global. This should not affect many people, as most of our accounts were created after this change was implemented, which means your account may already be global. You can visit Special:Preferences to check this. If it is not, you have most likely received a personal notification about this change, and your account will be renamed automatically so that it can become a global account. Generally, you don't have to do anything, as this only affects a small number of users. If you are one of these users, you would have to wait until August 2013 for your account to be automatically renamed before you can request a proper name change on Meta. In the meantime, your username after this change would be in the form <user>~enwiki (if you have an account on the English Wikipedia).
_codereview.67611 | I'm given a hexadecimal number in string form with a leading 0x that may contain 1-8 digits, but I need to pad the number with zeros so that it always has 8 digits (10 characters including the 0x). For example: 0x123 should become 0x00000123. 0xABCD12 should become 0x00ABCD12. 0x12345678 should be unchanged. I am guaranteed to never see more than 8 digits, so this case does not need to be handled. Right now, I have this coded as: padded = '0x' + '0' * (10 - len(mystring)) + mystring[2:] It works, but feels ugly and unpythonic. Any suggestions for a cleaner method? | Padding a hexadecimal string with zeros | python;python 2.7;formatting | Perhaps you're looking for the .zfill method on strings. From the docs:
Help on built-in function zfill:
zfill(...)
    S.zfill(width) -> string
    Pad a numeric string S with zeros on the left, to fill a field of the specified width. The string S is never truncated.
Your code can be written as:
def padhexa(s):
    return '0x' + s[2:].zfill(8)

assert '0x00000123' == padhexa('0x123')
assert '0x00ABCD12' == padhexa('0xABCD12')
assert '0x12345678' == padhexa('0x12345678')
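A small additional sketch (not from the original answer, and padhexa_alt is a hypothetical name): the same padding can also be done with Python's format specification, though note it re-parses the value and lowercases the hex digits:

def padhexa_alt(s):
    # '#' keeps the 0x prefix, 010 zero-pads the whole string to 10 characters
    return format(int(s, 16), '#010x')

assert padhexa_alt('0x123') == '0x00000123'
assert padhexa_alt('0xABCD12') == '0x00abcd12'  # digits come out lowercase
assert padhexa_alt('0x12345678') == '0x12345678'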
_webapps.102567 | I have looked at adding or subtracting time for a spreadsheet, but haven't found a way to easily do this yet. Suppose I start the clock at 7am. I would like a way to calculate the accumulated minutes for the tasks to be done and display what the current time should be after having completed each task (input as minutes). In the example below, A is the input in minutes and B is the current time calculated from the minutes elapsed since 7am. It goes on down the rows. I expect some formatting manipulation would be needed as well for time.
A | B | C
0min | 7:00am | Start
10min | 7:10am | warmup
5min | 7:15am | techniques
4min | 7:19am | first drills
Hope this is something easy. I just can't quite figure out where to start looking. | Add up Time Used in Minutes for Spreadsheet or Doc Table | google spreadsheets | With A3:An formatted as number as shown and B2:Bn formatted as time as shown, then in B3: =B2+A3/1440 copied down to suit should serve.
_cs.68387 | The textbook The Nature of Computation uses the following definition of quasipolynomial time: A quasipolynomial is a function of the form $f(n) = 2^{\Theta(\log^k n)}$ for some constant $k > 0$, where $\log^k n$ denotes $(\log n)^k$. Let us define QuasiP as the class of problems that can be solved in quasipolynomial time.So presumably the definition for QuasiP could be written $TIME(\bigcup_k 2^{\Theta(\log^k n)})$. However every other definition I've found on the web, in particular the one from Wikipedia, suggests the alternative definition $TIME(\bigcup_k 2^{O(\log^k n)})$.Now I can't see how these definitions are supposed to be equivalent. In fact I can imagine that there's a function $f$ that requires $2^{\log n}$ steps for even values of $n$ and $1$ step for odd values of $n.$ $f$ would fail to be in $2^{\Theta(\log^0 n)}$ because of the even values of $n$ and it would fail to be in $2^{\Theta(\log^k n)}$ for any $k > 0$ because of the odd values of $n$. However $f$ is still in $2^{O(\log^1 n)}$.So apparently such an $f$ is in QuasiP according to the second definition but not according to the first. Did I make a mistake in the reasoning here? And if not am I correct in assuming that the definition in The Nature of Computation is erroneous? | Conflicting definitions of quasipolynomial time | terminology;time complexity;asymptotics;landau notation | The equality$Time\left(\bigcup\limits_k 2^{O\left(\log^k n\right)}\right)=Time\left(\bigcup\limits_k 2^{\theta\left(\log^k n\right)}\right)$is an equality betweeen two sets of languages decidable by certain Turing machines, and not an equality between sets of functions $f:\mathbb{N}\rightarrow\mathbb{N}$.The function you constructed is an example of a function in $\bigcup\limits_k 2^{O\left(\log^k n\right)}\setminus \bigcup\limits_k 2^{\theta\left(\log^k n\right)}$, but this does not contradict the above equality.Let $L\in Time\left(2^{O\left(\log^k n\right)}\right)$ for some $k\in\mathbb{N}$, be a language decidable by a Turing machine which runs in time $2^{O\left(\log^k n\right)}$. You can simply construct an equivalent Turing machine which runs in time $2^{\theta\left(\log^k n\right)}$ by adding redundant steps in case the computation ended too quickly, and this shows $L\in Time\left(2^{\theta\left(\log^k n\right)}\right)$. |
_codereview.155716 | I'm making my first simple project which I decided would be a calculator and I'm kind of stuck on implementing the operations. I have two classes one of which is the GUI part and the other one represents all the processes 'inside' of the calculator. I decided to use BigDecimals instead of doubles. The problem is I really like the idea of putting all the possible calculator's operations in an enum associating their names with the mathematical operations they perform and the signs they are represented by.Here is the yet incomplete class: final class Workings { private BigDecimal operand1, operand2, memory; private int precision; private JTextField screen; Workings (int precision, JTextField screen) .... } enum Operation { ADDITION(+, Ary.BINARY) { BigDecimal apply(BigDecimal op1, BigDecimal op2, int scale) { return op1.add(op2); } }, SUBTRACTION(-, Ary.BINARY) { BigDecimal apply(BigDecimal op1, BigDecimal op2, int scale) { return op1.subtract(op2); } }, MULTIPLICATION(*, Ary.BINARY) { BigDecimal apply(BigDecimal op1, BigDecimal op2, int scale) { return op1.multiply(op2); } }, DIVISION(/, Ary.BINARY) { BigDecimal apply(BigDecimal op1, BigDecimal op2, int scale) throws DivideByZeroException { if(op2.signum() == 0) throw new DivideByZeroException(); return op1.divide(op2, scale, RoundingMode.HALF_UP); } }, ...; abstract BigDecimal(BigDecimal op1, BigDecimal op2, int scale); private enum Ary { UNARY, BINARY } private final String symbol; private final Ary ary; Operation (String symbol, Ary ary) { this.symbol = symbol; this.ary = ary; } String getSymbol() { return symbol; } } }At first I didn't even know such constant-specific method implementations in enums were possible but it turns out Joshua Bloch presents them also on the example on calculator in Effective Java Second Edition (item 30). Only it gets a little bit more complicated with BigDecimals as well as unary operations (like square or square root for instance). The hard part for me is that the methods need different parameters. For example I don't need the scale unless I want to divide and I don't need two operands if i want to compute the square root. I think it's not very elegant to take unnecessary parameters. On the other hand this approach automatizes things and minimizes the space for error when expanding the class. One more thing I keep in mind is that it would be nice to have all the buttons in one enum. It would complicate things even further if i tried to add, for instance, operations on the calculator's memory since they would again need different parameters.I've come up with a few solutions but none of them is perfect. Here are two of them:The instance variables (operand1, operand2, memory) could be made static so that the methods in enum can freely use them without passing and returning themThe calculator operations could simply be instance methods of the Workings class and the 'enum' would only hold the names and symbols of operations:final class Workings {private BigDecimal operand1, operand2, memory;private int precision;private JTextField screen;Workings (int precision, JTextField screen) ...}void addition() { operand1 = operand1.add(operand2);}void subtraction() { operand1 = operand1.subtract(operand2);}...enum Operation { // new enum ADDITION(+), SUBTRACTION(-), MULTIPLICATION(*), DIVISION(/), SQUARE(x^2), SQUARE_ROOT(x), MEMORY_RECALL(MR), MEMORY_ADD(M+), MEMORY_SUBTRACT(M-), ZERO(0), ONE(1), TWO(2) ... 
; private final String symbol; Operation (String symbol) { this.symbol = symbol; } String getSymbol() { return symbol; }}}But then each of them would have to be connected manually with corresponding methods.Is there a better approach to do that? | Enum of calculator operations | java;swing;calculator;enum;static | But then each of them would have to be connected manually with corresponding methods.When you look closer on that statement you have to do this manual connection between the enum and the actual code that performs the operation anyway. When you write that code in the enums implementation of the abstract method it is exactly that: manually connecting the enum and the code. This still leaves us with the question where we should create this connection. While you use Swing I'd suggest to do it in the View/Controller (since swing does not support strict separation of them...)public class CalulatorViewController{ private final JTextComponent precisionInput = new JTextField(); private final JTextComponent operandInput = new JTextField(); private final JTextComponent calculationDisplay = new JTextArea(30,5); private BigDecimal accumulator = BigDecimal.ZERO; public CalulatorViewController(Container mainPanel){ mainPanel.setLayout(new BorderLayout()); mainPanel.add(createTextInputAndOutputPanel(),BorderLayout.CENTER); mainPanel.add(createButtonPanel(),BordrLayout.BOTTOM); } private void createTextInputAndOutputPanel(){ // not important for now } private void createButtonPanel(){ JPanel buttonPanel= new JPanel(new GridLayout(0,7)); // 7 columns, rows as needed... buttonPanel.add(new JButton(new AbstractAction(1){ public void actionPerformed(ActionEvent ae){ operandInput.setText(operandInput.getText()+1); } }); buttonPanel.add(new JButton(new AbstractAction(2){ public void actionPerformed(ActionEvent ae){ operandInput.setText(operandInput.getText()+2); } }); buttonPanel.add(new JButton(new AbstractAction(3){ public void actionPerformed(ActionEvent ae){ operandInput.setText(operandInput.getText()+3); } }); }); buttonPanel.add(new JButton(new AbstractAction(+){ public void actionPerformed(ActionEvent ae){ accumulator= accumulator.add(new BigDecimal(operandInput.getText())); } }); // continue for all the buttons you need return buttonPanel; }This might look like lots of duplicated code. And you're right.Good thing is that you can collect things with same behavior in classes:// this could be in a file of its own...class NumberButtonAction extends AbstractAction{ private final JTextComponent operandInput; private int number; NumberButtonAction(JTextComponent operandInput, String number){ super(number); this.operandInput = operandInput; } public void actionPerformed(ActionEvent ae){ operandInput.setText(operandInput.getText()+getValue(Action.NAME).toString()); }} private void createButtonPanel(){ JPanel buttonPanel= new JPanel(new GridLayout(0,7)); // 7 columns, rows as needed... buttonPanel.add(new JButton(new NumberButtonAction(operandInput, 1))); buttonPanel.add(new JButton(new NumberButtonAction(operandInput, 2)));// got the idea?The OperatorButtonActions must have an individual implementation tough because each has a different behavior. But never the less they could be placed as top level custom classes in their own files.And this is what OOP is all about: separate concerns and limit the responsibilities of the individual parts of your code.First make your code run without thinking to much about the design (which does not men that you should not think about design at all). 
Afterwards, reduce code duplication as much as possible (aka refactoring). Here you might introduce new classes and/or create parameterized methods. Writing unit tests (and doing it before the production code) will support the refactoring. |
_unix.151174 | I am trying to find the proper terminology for the problem I am having so I can hunt down a solution.I am using the nouveau driver. When the mouse pointer becomes a grabbing hand, a box forms around it that no longer shows the proper image on the screen. It shows the screensaver image (if one is set) or a black image (if there is no screen saver). It is as though the screen saver is behind the image of the desktop and I am getting a square box that tears through it to see behind the desktop.My searches keep turning up vanishing cursors, cursors that don't move, cursors using the wrong image, etc... I cannot find anyone describing the problem of seeing through to the background around the cursor.If this helps, this is a more technical description of what I am seeing, based on my understanding of how X works: When a part of the screen needs to be updated, a bounding box is defined that encompasses the area. X, eventually, updates the graphics in that area. As you move your mouse around, you tell X that it needs to redraw around your pointer - otherwise you would have a pointer smeared all over the desktop. For nearly all versions of the cursor (pointer, paragraph, scroll) this works fine. When I have a grabbing hand, such as the one when you mouse over Google Maps, the box is sent to X. Instead of updating it with the proper image, it is updating it with my screensaver image. Then, a second later, it updates it with the proper image. So, as I move my mouse over Google Maps, I see a square of my screensaver surrounding the little hand. | Nouveau cursor tearing | xorg;cursor;nouveau | null |
_softwareengineering.290316 | I have an actor which is having three asynchronous web service calls. Lets say A,B and C. All return promise objects which has their respective response. I added loggers processA,processB,processC respectively.Now I use F.Promise.sequence(promiseA, promiseB, promiseC) [Play]and applies some logic by using flatmap and for each [Java]. I added loggers logA, logB and logC respectively. What would be the order of loggers? will it be like once after complete processing a web service call, the respective operations call ? Please explain the concepts of promise and combining multiple promise through Play F.Promise.sequence. | Promise Akka Play Java Sequence | java;playframework;akka | null |
_unix.116402 | ls -laLR /home/tools > all_site1.txt I need the state of all folders and files in the subfolders of /home/tools. With this command I put it all in this file. What I need now is a way to parse from this file all_site1.txt only the lines where the owner of the directory or file is root. I tried this: awk '/root/{print $0}' all_site1.txt But I get every place where root occurs. I need only those entries where the owner is root, and the file and/or directory should be printed. | parsing file from ls -laLR | text processing;sed;find;awk | If you are looking to save the state of folders/files, getfacl is a much better command to use. You could do the following: getfacl -LR /home/tools > all_site1.txt One thing to note though is that the behaviour of the -L option is different from that of ls. It only follows symlinks to directories and not files. You could print the files owned by root like this (provided none of the paths contain newlines): awk '$2=="file:" {file_line=$0} $2=="owner:" && $3=="root" {print substr(file_line,9)}' all_site1.txt
_codereview.155359 | I have a small script that checks what the user selects, and based in his selection (1, 2 or 3) show the elements, but I'm not quite happy with the result. I believe that it could be much better the code, and repeat less times, here is my code example.$(function() { //Hide elements $('#input').hide(); $('#textarea').hide(); $('#option').hide(); //Show element based on selection $('#input_type').change(function(){ // use class or use $('select') var option_selected = $(this).val(); switch(option_selected) { case 1: $('#input').show(); $('#textarea').hide(); $('#option').hide(); break; case 2: $('#textarea').show(); $('#input').hide(); $('#option').hide(); break; case 3: $('#option').show(); $('#textarea').hide(); $('#input').hide(); break; default: $('#input').hide(); $('#textarea').hide(); $('#option').hide(); } }); }); | Show input type based on selection | javascript;jquery | null |
_unix.39211 | Context: I do not know all standard linux/unix commands, so I need manpages. I found QNX's manuals (invoked by use [command name]) terse; I prefer them to linux manpages. I can get them from QNX with a shell script, but I need to have QNX installed, and it seems the system does not have use output for all commands. Question: Is there an archive of QNX's use messages on the internet? | Archive of QNX's use messages? | man;qnx | I just found them here: http://www.esrl.noaa.gov/gmd/dv/hats/cats/stations/qnxman/
_cs.70245 | In chapter 2 of this book the map overlay algorithm, which takes as inputs two doubly connected edge lists, is explained. Basically a line intersection algorithm allows to update both the half edges records and the vertices records. After such step the detection of boundary cycles is applied, and later a graph is built, where each connected component represents a new face record. It is explained how to detect whether a boundary cycle is clockwise or not, but I'm not sure how an overall module that implements this part should work. Say I have a DCEL where the face records are not valid, but both the half-edges and the vertices are, do you know any detailed algorithm the actually build the graph? Even a pseudocode is fine. I can understand how to implement the boundary cycles detection, I would iterate through the half-edges list, navigate through the next record, remove such record from the list and when I reach the initial half-edge I would go to the next half-edge of the list and repeat. I would put such sub list of half edges into another list so I would have a list of boundary cycles. I would also label each element of the boundary cycles list as clock wise or anti-clockwise based on the trick the book explains. Once I have such a list how do I figure out what boundary cycles are incident to the same face? In practice what's the test applied? | Map overlay, any algorithm for face updating step? | graphs;computational geometry | null |
_unix.384555 | I have some issue to boot to Kali since i reboot the computer. I don't think i install some software, but i may be have run apt-get update since the last success boot.With the kernel 4.9.0-Kali4-amd64, the system fails booting with the following message:[ powerplay ] VBIOS did not find boot engine clock value in dependency table. Using Memory DPM level 0!In recovery mode the system boot stop progressing at the following step:admgpu 0000:0a:00.0: GPU pci config resetI have a multi boot with windows 10 who is still working.The system is running on a HP laptop with intel i5-5200 cpu and ADM R5 m255 GPUAny clue how i can solve my boot issue?thanks | Boot issues - GPU pci config reset | kali linux;configuration;gpu;pci | null |
_webmaster.93155 | Almost 3 months ago I've put no-index tag to about 3,000 pages on my website and 301 redirect (which is permanent redirect) for almost 15,000 pages. (The all site is about 50,000 pages)And yet as for today, all of these pages are still appearing in google index.I've also updated the sitemap.What can cause it? any advice?example of no-index page: http://www.carz.co.il/review/8186example of 301 redirect page:http://www.carz.co.il/gen/491/overview/2010/All other pages are exactly the same (same meta tags, HTTP response etc..)according to Google webmaster tools there are about 4000 pages crawled per day. | 301 and no-index tag don't work | 301 redirect;googlebot;google index | null |
_webmaster.25180 | I am getting error on FireFox connection partially encrypted other browsers show that's not encrypted... When I setup apache with ssl I use /var/www/html/ for default directory, but all scripts are in /var/www/cgi-bin/ so is there a problem? or I'am on wrong way?But when switch to /var/www/ I get an default CentOS web. (Like new server)Maybe, someone have an ideas?Thanks in advance.Strelson | connection partially encrypted, SSL for perl app | https;httpd.conf | null |
_codereview.107422 | I've been working on Bill of Materials mini schema for a while. At first I had single Part table where I've referenced itself. I was told it would be better to have separate table because we'd need to have info like quantity and that quantity is not part of Part itself. Here's the script...CREATE TABLE Part( ID INT NOT NULL PRIMARY KEY IDENTITY, PartNumber NVARCHAR(50) NULL, [Description] NVARCHAR(MAX) NULL, ListPrice DECIMAL(12,2) NULL)CREATE TABLE BOM( ID INT NOT NULL PRIMARY KEY IDENTITY, PartId INT NOT NULL, ParentId INT NULL, Quantity INT NULL)ALTER TABLE BOM ADD CONSTRAINT BOM_PartId_FKFOREIGN KEY (PartId) REFERENCES Part(ID)ALTER TABLE BOM ADD CONSTRAINT BOM_ParentId_FKFOREIGN KEY (ParentId) REFERENCES Part(ID)insert into Part (PartNumber, Description, ListPrice) values ('AAA', 'A', 250.00)insert into Part (PartNumber, Description, ListPrice) values ('AA', 'A', 100.00)insert into Part (PartNumber, Description, ListPrice) values ('BBB', 'B', 250.00)insert into Part (PartNumber, Description, ListPrice) values ('BB', 'B', 90.00)insert into Part (PartNumber, Description, ListPrice) values ('B', 'B', 40.00)insert into BOM (PartId) values (1)insert into BOM (PartId, ParentId, Quantity) values (2, 1, 5)insert into BOM (PartId, ParentId, Quantity) values (4, 3, 10)insert into BOM (PartId, ParentId, Quantity) values (5, 4, 50)insert into BOM (PartId, ParentId, Quantity) values (4, 1, 50)Would this be ok as beginner BOM schema?I've tested with the below query, this gets immediate children of BOM with ID of 1.select e.*from BOM bjoin BOM e on b.PartId = e.ParentIdwhere b.ID = 1I will recursively call this from c# to populate children from any level in the BOM, but usually top most level. | Basic Bill of Materials schema | beginner;sql;sql server | You're mixing up casing styles for T-SQL keywords - pick one: ANNOYINGCASE or readablecase, but don't use both in the same script. I have no bias or preference whatsoever for either*, all that matters is consistency.Semicolons are not required to indicate the end of a statement, but they are nonetheless a good habit to have.You inline primary keys and let the server name them. I might have a fetish for naming things, but I like my PK's named PK_TableName.Speaking of naming, the Part table should be named Parts, and should probably have a natural key in the form of a unique constraint on the PartNumber column... which I find is a redundant name - Number would be better.Foreign keys have potentially ambiguous names, or will be when your schema grows. I like naming my FK's FK_ReferencedTable_AlteredTable[_ColumnName], where the [_ColumnName] part is only needed for when there are multiple FK's on the same table referencing the same table, like when an OrderHeaders table references a FiscalCalendars table with its OrderDateCalendarId, ShipDateCalendarId and CancelDateCalendarId, which would respectively be FK_FiscalCalendars_OrderHeaders_OrderDate, FK_FiscalCalendars_OrderHeaders_ShipDate and FK_FiscalCalendars_OrderHeaders_CancelDate. So in your case that would be FK_Parts_BillOfMaterials and FK_BillOfMaterials_BillOfMaterials. Being rigorously consistent about naming foreign keys makes it easy to instantly know what tables are involved and in which direction, just by looking at the FK's name. Oh, and I agree with @RubberDuck - use BillOfMaterials, and keep bom for table aliases when querying the schema.You're not specifying the schema you're creating the tables in. 
Better be explicit; you're not specifying the seed for your IDENTITY columns - again, better be explicit.I hate to see Description be syntax-highlighted in SSMS - I like that you're avoiding that by enclosing the name in square brackets. But then, the issue could be avoided altogether by calling the column Name instead - that way you don't have to use square brackets everywhere, and Description doesn't stick out like a sore thumb by being the only name using square brackets.That said, NVARCHAR(MAX) has implications that make it an annoying field, especially if the longest description/name is going to be 200, or even 1024 characters. Don't use MAX for anything below 4000 characters.Also, I think you're abusing NULL columns. Does a part that has no number, no description and no price really make sense? Wouldn't that rather be an empty number, an empty description and a 0 price?Comma-first style makes it easier to change the order of the columns if you ever want to do that.Lastly, and again this might be just me, but I like when I can run a T-SQL script twice in a row without it blowing up or doing weird things.Putting it all together (making a few assumptions):use [database_name]; -- trust me, this one is a life savergo;/* drop FK constraints */if exists (select * from sys.foreign_keys where name = 'FK_BillOfMaterials_BillOfMaterials') alter table BillOfMaterials drop constraint FK_BillOfMaterials_BillOfMaterials;if exists (select * from sys.foreign_keys where name = 'FK_Parts_BillOfMaterials') alter table BillOfMaterials drop constraint FK_Parts_BillOfMaterials;/* drop tables */if exists (select * from sys.tables where name = 'Parts') drop table Parts;if exists (select * from sys.tables where name = 'BillOfMaterials') drop table BillOfMaterials;/* create tables */create table dbo.Parts ( Id int identity(1,1) not null ,Number nvarchar(50) not null ,Name nvarchar(250) not null ,ListPrice decimal(12,2) not null -- 2 decimals might be a little short in some cases ,constraint PK_Parts primary key clustered (Id) ,constraint NK_Parts unique (Number) -- NK == Natural Key (assumption here));create table dbo.BillOfMaterials ( Id int identity(1,1) not null ,PartId int not null ,ParentId int null ,Quantity int not null ,constraint PK_BillOfMaterials primary key clustered (Id));/* add FK constraints */alter table dbo.BillOfMaterials add constraint FK_BillOfMaterials_BillOfMaterials foreign key (ParentId) references dbo.BillOfMaterials (Id);alter table dbo.BillOfMaterials add constraint FK_Parts_BillOfMaterials foreign key (PartId) references dbo.Parts (Id);*cough |
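As a possible follow-up to the question's plan of recursing from C# (this is an addition, not part of the original answer, and the table and column names assume the rewritten schema above): a single recursive common table expression can walk the whole structure in one query, for example:

with Explosion as (
    -- anchor: top-level BOM rows
    select b.Id, b.PartId, b.ParentId, b.Quantity, 0 as Lvl
    from dbo.BillOfMaterials b
    where b.ParentId is null
    union all
    -- recursive step: children of rows already found
    select c.Id, c.PartId, c.ParentId, c.Quantity, e.Lvl + 1
    from dbo.BillOfMaterials c
    inner join Explosion e on c.ParentId = e.Id
)
select e.Lvl, p.Number, p.Name, e.Quantity
from Explosion e
inner join dbo.Parts p on p.Id = e.PartId
order by e.Lvl;

This avoids one round-trip per level and lets the server do the recursion.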
_codereview.168800 | I came across a situation where I needed code to run every n invocations of a method. Specifically, I was clearing out massive amounts of data and needed to hint to the VM that it should run garbage collection. My solution was a little more condensed than this, but this class offers reuseability and ensures accurate counts.public class ExecutionLimiter { private final AtomicInteger ai = new AtomicInteger(0); private final Runnable task; private final int runEvery; public ExecutionLimiter(Runnable task, int runEvery){ this.task = task; this.runEvery = runEvery; } public boolean tryRun(){ synchronized(ai){ if(ai.incrementAndGet() != runEvery) return false; ai.set(0); } task.run(); return true; } public static void main(String... args) throws InterruptedException{ AtomicInteger example = new AtomicInteger(0); ExecutorService service = Executors.newWorkStealingPool(5); ExecutionLimiter limiter = new ExecutionLimiter(example::incrementAndGet, 10); int invocations = 100; for(int i=0;i<invocations;i++){ service.submit(()->{ try{ Thread.sleep(Math.round(Math.random() * 1000)); }catch(InterruptedException ignore){} if(limiter.tryRun()) System.out.println(Thread.currentThread().getName() + : Successfully ran: value is + example.get()); }); } service.shutdown(); service.awaitTermination(invocations, TimeUnit.SECONDS); }}I can't help but feel that something like this exists already in Java, but I haven't been able to find it. The closest I've come is CyclicBarrier, but I don't want to hang my threads while incrementing.A few concerns:Is AtomicInteger overkill for this class? Would it be better to use an int and synchronize on a lock Object?Should I be synchronizing AtomicInteger to perform the incrementAndGet-and-set, or is there a better way to go about this?The main is just for demonstration. | Run code every n invocations | java;multithreading | synchronized(ai){ if(ai.incrementAndGet() != runEvery) return false; ai.set(0); }You should either use AtomicInteger or use synchronized. Using both together is certainly overkill. Consider private static int count = 0; private static final Object lock = new Object();and later synchronized (lock) { if (++count != runEvery) { return false; } count = 0; }Then you don't need the AtomicInteger at all. I prefer a descriptive name like count to an abbreviated type name like ai. This is especially so in this case, as my natural expansion of AI is Artificial Intelligence. You use AtomicInteger when you don't want to use a synchronized block. Perhaps int current = ai.incrementAndGet(); if (current % runEvery != 0) { return false; } int next = 0; do { if (ai.compareAndSet(current, next) { task.run(); return true; } current = ai.get(); next = current - runEvery; } while (next >= 0);Obviously this is more complicated than the synchronized version. But this is how one uses an AtomicInteger. The modulus is more reliable in case of high contention that keeps compareAndSet from returning true. If we aren't worried that the count will overflow the bounds of integer, we can get rid of the reset. Without the reset, this makes more sense: if (ai.incrementAndGet() % runEvery != 0) { return false; }That's how an AtomicInteger is supposed to work. Even if two threads call this code at the same time, only one will get past the check. Because the update happens atomically. The problem with the original code is that there are actually two updates and you need to synchronize them. 
Of course, if the range for ai is not constrained, then it will overflow eventually. The synchronized block is more reliable in that case. This usage fits synchronized better than atomic types because there's no atomic increment, compare, and reset operation. You could also use an explicit Lock here, but you don't seem to need it. The blocking behavior of synchronized better matches what you are doing. The lock would be better if it were OK to only sometimes update (when you can get the lock) and sometimes not (when you can't get the lock). But that's not what you are doing here. It would be possible to create a new class with the necessary method (countCompareReset), but it may be simpler just to use the lock variable with the synchronization block. |
_webmaster.95908 | I bought a website whose subdomains are registered as third-level domain names, so the main site is e.g. example.com and the subdomains are e.g. en.example.com, de.example.com, etc. I deleted the subdomains from DNS because I want only example.com, but the subdomains still appear in Google search results. What do I have to do to remove all subdomains from the search results? Using Search Console and submitting URLs to remove one by one would be stressful. | Remove subdomain from Google | google index | null
_codereview.25583 | I'm a bit confused if saving the information to session code below, belongs in the controller action as shown below or should it be part of my Model? I would add that I have other controller methods that will read this session value later.public ActionResult AddFriend(FriendsContext viewModel){ if (!ModelState.IsValid) { return View(viewModel); } // Start - Confused if the code block below belongs in Controller? Friend friend = new Friend(); friend.FirstName = viewModel.FirstName; friend.LastName = viewModel.LastName; friend.Email = viewModel.UserEmail; httpContext.Session[latest-friend] = friend; // End Confusion return RedirectToAction(Home);}I thought about adding a static utility class in my Model which does something like below, but it just seems stupid to add 2 lines of code in another file.public static void SaveLatestFriend(Friend friend, HttpContextBase httpContext){ httpContext.Session[latest-friend] = friend;}public static Friend GetLatestFriend(HttpContextBase httpContext){ return httpContext.Session[latest-friend] as Friend;} | Saving data to a session | c#;asp.net mvc;session | null |
_unix.227898 | Can I add the following debian wheezy repositories to the kali 2.0 sources list, or will it hurt my system? I ask because I know Kali was built on Debian wheezy.
deb http://httpredir.debian.org/debian wheezy main contrib non-free
deb-src http://httpredir.debian.org/debian wheezy main contrib non-free
deb http://httpredir.debian.org/debian wheezy-updates main contrib non-free
deb-src http://httpredir.debian.org/debian wheezy-updates main contrib non-free
deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free | Can I add official Debian repositories to the Kali 2.0 sources list? | debian;kali linux;repository | Kali Linux 2.0 is based on Jessie and is, from now on, a rolling release where they pull packages from Debian testing. This makes it a complicated situation. It could (!) work flawlessly for now to use jessie sources alongside, but you will probably run into situations where you get problems with broken packages. You could work around these problems by using the appropriate sources for your needs and keeping the impact minimal by pinning carefully.
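A minimal sketch of what such pinning could look like (the file name and the priority value are illustrative assumptions, not a recommendation): an entry under /etc/apt/preferences.d/ that keeps the added Debian packages at a low priority so Kali's own packages win whenever both repositories provide one, for example:

Package: *
Pin: release o=Debian,n=wheezy
Pin-Priority: 100

With a low priority like this, apt prefers Kali's own version of any package that exists in both places; apt-cache policy shows which source wins for a given package, and apt-get install -t wheezy somepackage pulls explicitly from the pinned release when you really want it.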
_webmaster.5599 | I asked this at superuser but I'm having no joy there. I was really hoping someone could help because I'm stumped.I use a MacBook Pro running Mac OS X 10.6.4For a few years I have used MAMP to test websites locally but suddenly and for no apparent reason when I start the MAMP servers the mysql light stays red.I'm not very clued up on how to actually run a server which is the main reason I use MAMP. I also have Sequel Pro installed for administering my databases. When I try to connect to mysql in Sequel Pro through a socket connection it saysThe socket file could not be found in any common location. Please supply the correct socket location.and thenMySQL said: Can't connect to local MySQL server through socket 'tmp/mysql.sock' (2)I can connect and access all my databases if I connect with host 127.0.0.1 but I used to just connect through the socket and all was fine. Also all my testing sites which I hosted locally are no longer being processed by PHP.I have no idea why this suddenly stopped working and any light someone could shine on the matter would be very much appreciated. A solution even more so.Thanks. | Why has mysql stopped working on localhost and how do I fix it? | php;mysql | null |
_codereview.61933 | The below idea seemed to look clean, to allow the object itself to validate its values for different scenarios. For eg: While creating the object, the value of Object1 and Object2 in SelfValidator object should be validated for not null.At one scenario like inserting the SelfValidator object to Database, validation of value1 as non-zero has to be done, which is not necessary for an Update. A new validate function: validateForInsert(), which will validate and set the error data, can be included to SelfValidator class. But I am sure, there might be some drawbacks, design flaws, which I would like to understand and improve. ErrorInfo.javapublic class ErrorInfo { String errorMessage = ; public void setErrorMessage(String errorMessage) { this.errorMessage = new StringBuffer(this.errorMessage).append(\n+).append(errorMessage).toString(); }}SelfValidator.javapublic class SelfValidator { String object1; String object2; int value1; ErrorInfo errorInfo; public ErrorInfo getErrorInfo() { return errorInfo; } public SelfValidator(String object1, String object2, int value1) { super(); this.object1 = (object1 == )?null: object1; validate(this.object1, object1); this.object2 = (object2 == )?null: object2; validate(this.object2, object2); this.value1 = value1; } private void validate(String value, String dataMember) { if(value == null){ if(errorInfo == null){ errorInfo = new ErrorInfo(); } errorInfo.setErrorMessage(dataMember + value is invalid); } }}Demonstrator.javapublic class Demostrator { public static void printValidOrNot(ErrorInfo errorInfo){ if(errorInfo == null){ System.out.println(Object valid); //Proceed with manipulating the object }else{ System.out.println(errorInfo.errorMessage); // Break and handle the error. } } public static void main(String[] args) { SelfValidator selfValidatingObject1 = new SelfValidator(Test, Test, 1); printValidOrNot(selfValidatingObject1.getErrorInfo()); SelfValidator selfValidatingObject2 = new SelfValidator(, , 1); printValidOrNot(selfValidatingObject2.getErrorInfo()); }} | Error-handling / Self-Validating mechanism | java;validation;error handling | null |
_webmaster.67767 | I have a form which submits using AJAX. My client requested analytics to be updated with a new URL when submit is successful, so they can get statistics on completion vs abandonment of that form.The analytics code they sent me looks like this<script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-00000000-1', 'auto'); ga('send', 'pageview'); </script>I'm thinking I should use an iframe with this code inside the page to trigger the analytics under a new URL. The iframe page url would have the word success in it so it can be useful in their reporting.I like the iframe approach because it allows me to call the analytics without refreshing the entire page. I could refresh when the 'success' event occurs, but in a multi-step process, with go-forward and go-back + animation, I am not able to refresh conveniently. I like the iframe solution because it accommodates that flexibility should I need it.Is that a good solution, or should I be doing it differently? | How to make analytics refresh after certain event? | google analytics;ajax;iframe | You don't need to use an iframe. This tracking code is for universal analytics. There are a few ways you can push this into Google Analytics:1) Trigger an Event on submitYou can use Javascript for this or jQuery$('#button').on('click', function() {ga('send', 'event', 'button', 'click','Form Completed');});2) Trigger a Virtual Pageview on submitga('send', 'pageview', {'page': '/form-completed','title': 'Page Name - Form Completed'});Both have pros and cons. With Events you cant really set up goal funnels. With Virtual Pageviews you will generate extra pageviews which will mess with your bounce rate.P.S.I would also look into using Google Tag Manager. It will help you set these things up without having to write too much code. Its not as easy as Google wants us to think it is but once you get used to it is great. |
_codereview.79071 | I'm using the following piece of JavaScript to emulate class extending. Is this a valid way to go or does it have some drawbacks which should definitely be fixed?var Validator = function (inputId) { use strict; var self = this; self.inputId = inputId; window.document.getElementById(inputId).onblur = function () { self.check(); };};Validator.prototype.inputId = '';/** * Validator fails by default * * @returns {Boolean} */Validator.prototype.isValid = function () { use strict; return false;};/** * Check whether the input is valid * * @returns {Validator} */Validator.prototype.check = function () { use strict; // Check and mark whether validation passes if (!this.isValid()) { this.markInvalid(); } else { this.markValid(); } return this;};/** * Mark the input as valid * * @returns {Validator} */Validator.prototype.markValid = function () { use strict; var input = window.document.getElementById(this.inputId); // Remove error class name input.className = input.className.replace( /(?:^|\s)error(?!\S)/g, '' ); return this;};/** * Mark input as invalid * * @returns {Validator} */Validator.prototype.markInvalid = function () { use strict; var input = window.document.getElementById(this.inputId); // Add error class when it is not set already if (!input.className.match(/(?:^|\s)error(?!\S)/g)) { input.className += ' error'; } return this;};Than this is used as a concrete validator implementation// Inherit from validatorvar VatCheck = Validator;/** * Validate vat number * * @note only format is checked, not whether the number is in use * * @returns {Boolean} */VatCheck.prototype.isValid = function () { use strict; var input = window.document.getElementById(this.inputId); // Check number by using Vat.js return !!checkVATNumber(input.value);};The reason I came to this solution is because only one or two methods of the parent class have different behavior, and the use of the strategy pattern seems somewhat over-engineering | Emulating class extending | javascript;inheritance | There is one bad problem in the code that can be answered by the basis of the pattern presented here:// Inherit from validatorvar VatCheck = Validator;This does not inherit anything, instead it causes VatCheck name to point to the exact same object (constructor function) as the Validator. Thus all you are doing is overwriting Validator.prototype.isValid (changing the base implementation), and aliasing the base implementation by different names.If you repeat this pattern again, the end result is that all validators have their isValid check method do the action of the last validator type that you define.Instead you need to do VatCheck.prototype = new Validator() but then you need to also redo the constructor. |
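A short sketch of the fix the answer describes, giving VatCheck its own constructor and its own prototype object (illustrative only; it reuses the question's Validator and the checkVATNumber function from Vat.js, and uses Object.create instead of the new Validator() form mentioned above so the base constructor is not run just to build the prototype):

var VatCheck = function (inputId) {
    'use strict';
    Validator.call(this, inputId); // run the base constructor on this instance
};
// inherit from Validator without overwriting Validator.prototype
VatCheck.prototype = Object.create(Validator.prototype);
VatCheck.prototype.constructor = VatCheck;

VatCheck.prototype.isValid = function () {
    'use strict';
    var input = window.document.getElementById(this.inputId);
    return !!checkVATNumber(input.value);
};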
_webmaster.2109 | I've spent the last few days comparing different payment gateway solutions, and I've seen that if I want to integrate a payment service (such as PayPal or Authorize.Net) into my website, I need to pay a monthly fee (and not only transaction costs). Is this correct? For this reason, it seems much more convenient for me to use the PayPal Standard payment method and pay only transaction fees. Could you tell me if I'm missing something, or if something is wrong? | how to avoid monthly fees for payment methods on my website | payments | You understand it correctly. The vast majority of payment services have monthly fees associated with them. This includes a merchant account monthly fee as well as a payment gateway monthly fee. If you wish to avoid these fees you can use some of the services offered by third-party processors such as PayPal Standard and Google Checkout. The downside to these services is that you lose flexibility, as they typically do not allow for a seamless integration into your website (i.e. your customer has to leave your website to make payment). It's a trade-off: no monthly fee vs. customization. If you want to avoid monthly fees, go with services like PayPal Standard that don't have them. If you want full control over the checkout process, use a service like Authorize.Net.
_codereview.109209 | My thoughts are to create a method that contains all of the variables declared. Is it better to say initialized? I would like it to be more efficient but still as clear as possible for future maintenance of the code. Any suggestions for the main bulk of information, such as methods?if (incomeDec <= 300){ taxRateDec1 = 0.15m; taxDec = incomeDec * taxRateDec1;}else if (incomeDec <= 450){ tierAmtInt = 300; taxRateDec1 = 0.15m; taxRateDec2 = 0.2m; tempValueDec = tierAmtInt * taxRateDec1; taxDec = (incomeDec - tierAmtInt) * taxRateDec2; taxDec = taxDec + tempValueDec;}else{ tierAmtInt = 300; tierAmtInt2 = 150; taxRateDec1 = 0.15m; taxRateDec2 = 0.2m; taxRateDec3 = 0.25m; tempValueDec = tierAmtInt * taxRateDec1; tempValueDec = tempValueDec + tierAmtInt2 * taxRateDec2; taxDec = (incomeDec - tempValueDec) * taxRateDec3; taxDec = taxDec + tierAmtInt;}incomeDec = incomeDec - taxDec; | Finding tiered income c-primer | c#;performance | If you want more maintainable code where you can easily add more calculations as needed, you may want to refactor it to something like this, where you have a dictionary of calc-functions that you get for each case: (it's not perfect yet, but it should show you the idea)public const decimal tierAmt1 = 300m;public const decimal tierAmt2 = 150m;public const decimal taxRate1 = 0.15m;public const decimal taxRate2 = 0.2m;public const decimal taxRate3 = 0.25m;private IDictionary<decimal, Func<decimal, decimal>> _calcFuncs = new Dictionary<decimal, System.Func<decimal, decimal>>;private decimal CalcIncome(decimal income){ // you can of course initialize this only once in a constructor if you like _calcFuncs[300] = new Func<decimal, decimal>(CalcIncome1); _calcFuncs[450] = new Func<decimal, decimal>(CalcIncome2); _calcFuncs[decimal.MaxValue] = new Func<decimal, decimal>(CalcIncome3); // replaces all if's var calcFunc = _calcFuncs.First(k => income <= k.Key).Value; return calcFunc(income);}private Decimal CalcIncome1(Decimal income){ var tax = 0m; tax = income * taxRate1; return income = income - tax;}private Decimal CalcIncome2(Decimal income){ var tax = 0m; var tempValue = 0m; tempValue = tierAmt1 * taxRate1; tax = (income - tierAmt1) * taxRate2; tax = tax + tempValue; return income = income - tax;}private Decimal CalcIncome3(Decimal income){ var tax = 0m; var tempValue = 0m; tempValue = tierAmt1 * taxRate1; tempValue = tempValue + tierAmt2 * taxRate2; tax = (income - tempValue) * taxRate3; tax = tax + tierAmt1; return income = income - tax;}CalcIncome(150);CalcIncome(320);CalcIncome(540);You don't need the Dec and Int suffixes; make your variables harder to read and understand. |
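A rough sketch of the dispatch-table idea from the answer, written here in TypeScript rather than C# for illustration. The tier limits and rates are taken from the original code, but the third-tier formula is a simplified placeholder rather than the exact rule.
// Thresholds mapped to calculator functions; the first matching tier wins.
type Calculator = (income: number) => number;

const calculators: Array<[number, Calculator]> = [
  [300, income => income * 0.15],
  [450, income => 300 * 0.15 + (income - 300) * 0.2],
  [Number.MAX_VALUE, income => 300 * 0.15 + 150 * 0.2 + (income - 450) * 0.25],
];

function calcTax(income: number): number {
  // Pick the first tier whose upper bound covers this income, then delegate.
  const [, calc] = calculators.find(([limit]) => income <= limit)!;
  return calc(income);
}

console.log(calcTax(150)); // 22.5
console.log(calcTax(320)); // 49
console.log(calcTax(540)); // 97.5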
_webmaster.15931 | I have always thought about hosting sites on a local web server with MAMP or a web server application on a reliable and fast connection, but I was wondering if it would be better to just go with web hosting? Also how would web hosting or a web server handle a sudden surge in traffic in a huge scale? | Web Server vs Web Hosting | php | Hosting on a local server requires a lot of knowledge of server administration and security, and generally you'll need more than regular residential internet service for it to work. You would need to check with your ISP to find out whether it is even allowable, as some block inbound port 80 or consider home servers to be a violation of their terms of service.On a residential line, you need to obtain a dynamic DNS account since your server's IP will be changing all the time.Unless you are very familiar with server administration and have a strong understanding of DNS, I would not recommend attempting to serve a website out of your house.(Note: I successfully run a web server out of my house on a residential DSL line, but also work as a professional Linux administrator) |
_codereview.114213 | I have created a program which works out days, hours, minutes, seconds and milliseconds until a date which the user inputs. Is it possible to shorten/make the output a little clearer?from datetime import datetimewhile True: inp = input(Enter date in format yyyy/mm/dd hh:mm:ss) try: then = datetime.strptime(inp, %Y/%m/%d %H:%M:%S) break except ValueError: print(Invalid input)now = datetime.now()diff = then - nowprint(diff, until, inp)print(diff.total_seconds(),seconds) | Time till program | python;beginner;python 3.x;database | null |
_unix.171489 | So, I was moving my laptop around (and I have the bad habit of setting things on the keyboard...) and I woke up to discover this:$Display all 2588 possibilities? (y or n)What command would display something like this?I'm using Bash. | Shell: Display all 2588 possibilities? | bash;shell | Hitting TAB key helps you to auto complete either a command or a file/directory (as long as it is executable) you want to use, depending on what you are requesting.Double hitting the TAB key helps you displaying the available stuff you could use for next.e.g.Command completition:I want to edit my crontab. Typing cront and hitting TAB then I will see my command complete: crontab.File/Directory completition:I want to backup my crontab. crontab -l >> Type some words of the destination /ho TAB then I will see: /home/, type next us TAB then I will see: /home/user/Now, when you double hit TAB key without typing something, then the prompt expects something, so it will want to help you displaying all the possibilities. With the prompt empty, it's expecting a command or a file/directory so it will want to display all the commands available for you & all the files/directories located in the directory where you are. The 2588 possibilities output, means the total amount of commands/files/directories available to type. |
_codereview.110534 | I wrote a Logger which uses the destruction of temporary objects to Log their values including a scope time logger. Lets see what i can improve here to increase the performance and everything else.class Logger { public: static inline LogMessage Log(LoggerTypes type, const std::string& file, const int& i); /** \brief Get a Timer Message to mesure the time of a scope Returns a Timer message which can be shifted ostream to with <<. The Message will be written to consol and log. The message will automatically append the time since creation and leaving the local scope. So if use without the macro create a local variable for it. \code {//start a scope LogTimer t = Logger::Timer(__FILE__,__LINE__); //do some thing wou want to mesure the time of } //here it well be logged automatically! \endcode */ static inline LogTimer Timer(const std::string& file, const int& i); static inline Logger& getInstance(); static void setLogFile(const std::string& filename); void setLogLevel(const int& i); int getLogLevel() const; void inline operator<<(const std::ostringstream& message) const; private: Logger() : m_logLevel(DEBUG) { }; ~Logger(); Logger(const Logger&) = delete; Logger& operator=(const Logger& other) = delete; static std::ofstream* m_file; static Logger m_instance; static tasking::SpinLock m_lock; int m_logLevel; };//.cppstd::ofstream* Logger::m_file = nullptr;Logger Logger::m_instance;tasking::SpinLock Logger::m_lock;void Logger::setLogLevel(const int& i){ m_logLevel = i;}int Logger::getLogLevel() const{ return m_logLevel;}void Logger::setLogFile(const std::string& filename){ if (m_file != nullptr) { m_file->flush(); delete m_file; } m_file = new std::ofstream(filename, std::ios::out | std::ios::app);}Logger::~Logger(){ //clean up if(m_file != nullptr) m_file->flush(); delete m_file;}//.hppinline LogMessage Logger::Log(LoggerTypes type, const std::string& file, const int& i){ return LogMessage(type, file, i);}inline LogTimer Logger::Timer(const std::string& file, const int& i){ return LogTimer(TIMER, file, i);}inline Logger& Logger::getInstance(){ if (m_file == nullptr) //use a fixed name so it can always be used! 
setLogFile(startup.log); return m_instance;}inline void Logger::operator<<(const std::ostringstream& message) const{ std::lock_guard<tasking::SpinLock> lock(m_lock); std::cout << message.str() << \n; if (m_file != nullptr) { *m_file << message.str() << \n; m_file->flush(); }}And a Basic Log Message:enum LoggerTypes{ ERROR_L = 0, EXCAPTION = 1, WARNING = 2, TIMER = 3, INFO = 4, DEBUG = 5, SIZE_OF_ENUM_LOGGER_TYPES};struct LoggerTypeMap{ static const char* EnumString[]; /** @return the configValue as const char* */ static std::string get(const LoggerTypes& e);};class LogMessage{public: explicit inline LogMessage(const LoggerTypes& type, const std::string& file = , const int& i = 0); ~LogMessage(); std::string currentDateTime() const; /** returns self reference */ template <typename T> inline LogMessage& operator<<(const T& m); LogMessage(const LogMessage& other); LogMessage(LogMessage&& other); LogMessage& operator=(const LogMessage& other); LogMessage& operator=(LogMessage&& other);private: std::ostringstream m_stream; LoggerTypes m_type;};const char* LoggerTypeMap::EnumString[] ={ error, excaption, warn, timer, info, debug};//check if valid numer must be the size of the enum!static_assert(sizeof(LoggerTypeMap::EnumString) / sizeof(char*) == SIZE_OF_ENUM_LOGGER_TYPES, size dont match!);std::string LoggerTypeMap::get(const LoggerTypes& e){ return EnumString[e]; //implicit convention and move}LogMessage::~LogMessage(){ //push the message at the end if (Logger::getInstance().getLogLevel() >= m_type) Logger::getInstance() << m_stream;}// Get current date/time, format is YYYY-MM-DD.HH:mm:ssstd::string LogMessage::currentDateTime() const{ auto now = time(nullptr); struct tm tstruct; char buf[80]; tstruct = *localtime(&now); strftime(buf, sizeof(buf), [%d-%m-%Y][%X], &tstruct); return buf;}LogMessage::LogMessage(const LogMessage& other) : m_type(other.m_type){ m_stream << other.m_stream.str();}LogMessage::LogMessage(LogMessage&& other) : m_stream(std::move(other.m_stream)), m_type(other.m_type) {}LogMessage& LogMessage::operator=(const LogMessage& other){ if (this == &other) return *this; m_stream << other.m_stream.str(); m_type = other.m_type; return *this;}LogMessage& LogMessage::operator=(LogMessage&& other){ if (this == &other) return *this; m_stream = std::move(other.m_stream); m_type = other.m_type; return *this;}template <typename T>inline LogMessage& LogMessage::operator<<(const T& m){ m_stream << m; return *this;}template <>inline LogMessage& LogMessage::operator<<(const bool& b){ if (b) m_stream << true; else m_stream << false; return *this;}inline LogMessage::LogMessage(const LoggerTypes& type, const std::string& file, const int& i) : m_stream(), m_type(type){ //build the logstring m_stream << [ + LoggerTypeMap::get(type) + ][File: + file + ][Line: + std::to_string(i) + ][thread: << std::this_thread::get_id() << ] + currentDateTime() + ;}And the Scopetimer:class LogTimer : public LogMessage{public: explicit inline LogTimer(const LoggerTypes& type, const std::string& file = , const int& i = 0); ~LogTimer(); template <typename T> LogTimer& operator<<(const T& m); LogTimer(const LogTimer& other); LogTimer(LogTimer&& other); inline LogTimer& operator=(const LogTimer& other); inline LogTimer& operator=(LogTimer&& other);private: std::chrono::high_resolution_clock::time_point m_start;};LogTimer::LogTimer(const LoggerTypes& type, const std::string& file, const int& i) : LogMessage(type, file, i), m_start(std::chrono::high_resolution_clock::now()) {}template <typename T>LogTimer& 
LogTimer::operator<<(const T& m){ LogMessage::operator<<(m); return *this;}inline LogTimer::LogTimer(const LogTimer& other) : LogMessage(other), m_start(other.m_start) { }inline LogTimer::LogTimer(LogTimer&& other) : LogMessage(std::move(other)), m_start(std::move(other.m_start)) { }inline LogTimer& LogTimer::operator=(const LogTimer& other){ if (this == &other) return *this; LogMessage::operator =(other); m_start = other.m_start; return *this;}inline LogTimer& LogTimer::operator=(LogTimer&& other){ if (this == &other) return *this; LogMessage::operator =(std::move(other)); m_start = std::move(other.m_start); return *this;}Macros:#define __FILENAME__ (strrchr(__FILE__, '\\') ? strrchr(__FILE__, '\\') + 1 : __FILE__)#define LOG_ERROR jimdb::common::Logger::Log(jimdb::common::LoggerTypes::ERROR_L,__FILENAME__,__LINE__)#define LOG_INFO jimdb::common::Logger::Log(jimdb::common::LoggerTypes::INFO,__FILENAME__,__LINE__)#define LOG_WARN jimdb::common::Logger::Log(jimdb::common::LoggerTypes::WARNING,__FILENAME__,__LINE__)#define LOG_EXCAPT jimdb::common::Logger::Log(jimdb::common::LoggerTypes::EXCAPTION,__FILENAME__,__LINE__)#define LOG_DEBUG jimdb::common::Logger::Log(jimdb::common::LoggerTypes::DEBUG,__FILENAME__,__LINE__)#define LOG_SCOPE_TIME jimdb::common::LogTimer t___ = jimdb::common::Logger::Timer(__FILENAME__,__LINE__); t___Sample usage:int main(){ LOG_DEBUG << some Debug Logentry; { LOG_SCOPE_TIME << Some Random Scope for(auto i = 0; i < 10000; ++i) LOG_INFO << i; }}Some real Sample outputs:[warn][File:handshake.cpp][Line:20][thread:10796][01-11-2015][13:29:49] handshake Failed[debug][File:clienthandle.cpp][Line:42][thread:10796][01-11-2015][13:29:49] Client:228 closed[warn][File:handshake.cpp][Line:20][thread:11924][01-11-2015][13:29:49] handshake Failed | Threadsafe Logger with scopetime logging | c++;performance;multithreading;thread safety;logging | null |
_webapps.867 | My wife and I have been sharing a Facebook account under my name, and we're finally going to get her her own account. The problem is that the Facebook login email is really her email account. Is there a way to change the Facebook login email without losing all my friends/content/updates? If so, will it let me re-use that original email address for a brand new Facebook account? | Can I change my Facebook login email? | facebook | Go to the settings tab, add your email address to the list of email addresses, verify your address and delete your wife's address. Voilà: you have a new login address. This works because you can use any of the email addresses associated with your Facebook account to log in.
_softwareengineering.140876 | I am choosing a license for my open source software and I've learned about GPL, EBMS and BSD. GPL seems to be the most popular one. The problems are: Would anybody kindly name a few popular open source licenses, since I do not see EBMS or BSD being nearly as popular? Is there any chart or table that lists the advantages/disadvantages of using each one? Why is the GPL so often the license developers choose, and what are its benefits? | Options for an open source license? | open source;licensing | null
_webmaster.83109 | My company's domain name (a .com) was registered with Demon Internet years ago. I use their nameservers so they provide my DNS. If I ever want to change a DNS record I have to email them and hope they do it correctly - there is no control panel.I want to get full control over my domain name by transferring it to another registrar and using their nameservers so I can manage my own DNS records. I naturally want to avoid, or at least minimise downtime when it all transfers.The DNS records point to third party IP addresses for website and email etc.... i.e. Demon only hold my domain registration and DNS records. The DNS records themselves are not going to change - I simply want to gain control of them.As I see it at the moment, the way to do it is as follows:Obtain a list of all DNS records from Demon for the domainPerform a domain transferOnce transferred, change the nameservers to those of the new domain registrarRe-create the same DNS records on the new registrar's control panelAs the DNS records themselves are not going to change, merely the place where they are stored....can I avoid downtime if I do the above?I don't want to use a provider other than my domain registrar for DNS... and the new registrar I want to use cannot provide DNS until the domain is transferred.Thanks. | Minimising downtime when transferring domain name and DNS records | domains;dns;domain registrar;top level domains | You're on the right track, you should be able to avoid downtime all together - DNS management without a control panel, sounds like a nightmare!When you say 'domain transfer' I'm assuming you're talking about moving the domain from one registrar to another. In theory it doesn't matter if this is done before or after the name servers are re-delegated - this shouldn't affect your process unless your new registrar is also your DNS provider and requires you to have the domain with them in order to have DNS services managed by them.Here's how I would do it:Get a copy of a all DNS records from current providerCreate said DNS records on new provider.Test all the DNS records - you can do this by editing your machines hosts file (sudo nano /etc/hosts* on OS X) and pointing it to the new server, these tests will obviously only work on your machine but should give you enough certainty to proceed. For reference, a hosts entry would look like this: (127.0.0.1 mydomain.com)After doing this, you should be able to ping your domain name to verify it's going to where you've pointed it to in your hosts file, you can also test DNS records in your browser.Be sure to remove the entry you added from your hosts file, after you've verified your DNS records on the new provider are working correctly - otherwise you'll be getting false results on the next step.Re-delegate name servers to new provider, depending on the domain registry this can take some time, I find .com.au domains can be resolving to the new name servers in a few hours, but .com's can take up to 7 days. The general rule is 24-48 hours for resolution.After a period of time has passed (varies depending on registry, etc) you can check and see if your domain is resolving to your new server - you'll usually need to clear your computers DNS cache, and sometimes browser cache to see the changes. On a mac, open up terminal and type dscacheutil -flushcache - this will flush your DNS cache. You can thing ping the domain and see if it's resolving, if not, repeat process with added time waiting for resolution. 
If you have previously accessed the page in a browser, you will need to clear your browser cache to see the change as well as clearing your O/S DNS cache.Keep in mind, just because it's resolving for you, does not mean it's resolving everywhere - so try and keep both services active for a significant period of time to take care of any stragglers. After the DNS is re-delegated and everything is working nicely, transfer the domain to the new registrar, this should not affect your DNS at all. This step could be done here, or at step 2 - depending on your new providers setup. |
_softwareengineering.254290 | How would I go about a ranking system for players that play a game? Basically, looking at video games, players throughout the game make critical decisions that ultimately impact the end game result.Is there a way or how would I go about a way to translate some of those factors (leveling up certain skills, purchasing certain items, etc.) into something like a curve that can be plotted on a graph?This game that I would like to implement this is League of Legends.Example: Player is Level 1 in the beginning. Gets a kill very early in the game (he gets gold because of the kill and it increases his power curve), and purchases attack damage (gives him more damage which also increases his power curve. However, the player that he killed (Player 2), buys armor (counters attack damage). This slightly increases Player 2's own power curve, and reduces Player 1's power curve. There's many factors I would like to take into account. These relative factors (example: BECAUSE Player 2 built armor, and I am mainly attack damage, it lowers my OWN power curve) seem the hardest to implement. My question is this: Is there a certain way to approach this task? Are there similar theoretical concepts behind ranking systems that I should read up on (Maybe in game theory or data mining)? I've seen the ELO system, but it doesn't seem what I want since it simply takes into account wins and losses. | Ranking players depending on decision making during a game | design;algorithms;artificial intelligence;problem solving;games | null |
_webmaster.98766 | What I have done is: created an AdSense account, created an ad unit, placed that ad unit code on my website, and after a few days removed that AdSense code from my website. Now when I open my AdSense account the following page is displayed (screenshot omitted). And when I again submit a request for account approval, I receive an email with the status: Site does not comply with Google policies, and in the recommendations: Don't place ads on auto-generated pages or pages with little to no original content. Now my question is: Is Google evaluating my website on the basis of the AdSense code? If yes, how can I get the code again, or what is the alternative? How can I delete this account and create a new one? Update: I can neither go to the My Ads tab nor delete my account; the only page displayed is shown above. | Forgot AdSense code while activating account | google adsense | null
_webapps.79382 | I'm trying to change my Hotmail password after a breach in LastPass but it doesn't seem to work. So I'm curious if Microsoft has a limit of 16 characters for a password?If so, this is very insecure and I'm getting this warning from LastPass:Note that I am using LastPass to generate a secure password. It's just if I make it longer than 16 symbols Hotmail doesn't seem to accept it. | Is it true that Microsoft doesn't allow more than 16 characters in their Hotmail password? | outlook.com;security;passwords | Yes, the hotmail password is limited to 16 characters.A few Reason for Maximum Password Length gives some reasons as to why some providers choose a maximum length.See also Why are passwords limited to 16 characters?.Source Outlook webmail passwords restricted to 16 chars - how does that compare with Yahoo and Gmail?It seems that Outlook.com won't let you have a password of longer than 16 characters. (The same was true of Hotmail). |
_cs.47296 | I was reading the chapter of van Emde Boas in CLRS (page 547 section 20.3 3rd edition) and it says:Furthermore, the element stored in min does not appear in any of the recursive $vEB( \sqrt[\downarrow]{u})$ trees that the cluster array points to. The elements stored in a $vEB(u)$ tree V, therefore, are V.min plus all the elements recursively stored in the $vEB(\sqrt[\uparrow]{u})$ trees pointed to by V.cluster$[0..\sqrt[\uparrow]{u} - 1]$. Note that when a vEB tree contains two or more elements, we treat min and max differently: the element stored in min does not appear in any of the clusters, but the element stored in max does.However, I was not sure why that was true. Usually I put more details in my question but I don't understand why the min wouldn't appear recursively too just like in proto one. Does anyone understand the justification for that paragraph? Why do we treat the min and max differently? What is special about 2 or more? | Why do we not store the min in any of the recursive clusters in a Van Emde Boas tree? | data structures;trees;recursion;search trees | null |
_webmaster.53905 | I searched the cache details of the URL http://property.example.com/pune-properties but Google Cache is showing details for property.example.com. I don't know why it's showing this, and not only for http://property.example.com/pune-properties but also for all the Indian city-related URLs like http://property.example.com/chennai-properties , http://property.example.com/mumbai-properties , http://property.example.com/kolkata-properties etc. I don't even find these URLs in the Google search results. If I search Chennai properties in Google, I find property.example.com and not http://property.example.com/chennai-properties. Why is this happening? | Google Cache showing wrong URL | url;google cache | If your site can be accessed both by domain URL and by IP address then you will have such issues. That IP corresponds to sulekha.com, obviously another one of your domains for the same business. You have to 301 redirect accesses from the wrong IP-based URL to the correct domain-based URL, at the server level. Since you are hosted on an IIS server, this is something you may need to take up with your hoster if you don't know where to do it. It may be done through the IIS console or through scripting. Using fully qualified URLs in navigation would help diminish the incidence of finding more IP-based URLs during navigation.
_unix.349875 | I'm renewing the certificates for my VPN configuration. When I'm checking the validity:openssl verify -CAfile keys/ca.crt -verbose keys/example.org.crtC = XX, ST = XX, L = City, O = Example, OU = Manager, CN = example.org, name = EasyRSA, emailAddress = somemailerror 13 at 0 depth lookup: format error in certificate's notBefore fielderror keys/example.org.crt: verification failedBut checking with x509 shows a valid not before:openssl x509 -in keys/example.org.crt -text Certificate: Data: Version: 3 (0x2) Serial Number: 6 (0x6) Signature Algorithm: sha512WithRSAEncryption Validity Not Before: Mar 4 00:00:00 2017 Not After : Apr 1 00:00:00 2018I issued the certificated following tldp guide:openssl ca -config openssl-1.0.0.cnf -extensions server -days 375 -notext -md sha512 -in keys/example.org.csr -out keys/example.org.crt -startdate 20170304000000 -enddate 20180401000000 | format error in certificate's notBefore field but x509 -text shows a valid Not Before | openssl | When you establish the start/end date, you must set the time zone too! Here's a valid certificate:Certificate: Data: Version: 3 (0x2) [...] Not Before: Mar 5 03:01:35 2016 GMT Not After : Mar 5 03:01:35 2017 GMTThe -start/enddate options should be formatted YYMMDDHHMMSSZ, yours lack the final Z. |
_unix.375201 | I was just given a server and need to configure some Vhosts files.They have no idea where are they anymore.How do I locate them? | Locate VHost files in CentOS | vhost | Assuming it's an Apache webserver, take a look in /etc/httpd/:grep -r VirtualHost /etc/httpd/* |
_webmaster.58400 | We are organizing most of our work inside a self-hosted MediaWiki installation. Now we want to show an overview page for a category, where the visitor can see the newest pages/changes from that category. For example: Category: PHP; New page: Added Documentation; Changes to PHP 5. Is there any built-in function or a working plugin available? I know about the possibility of showing recent changes on all kinds of content. | MediaWiki: Show newest pages from a category | mediawiki | Yes. You should use DynamicPageList: https://www.mediawiki.org/wiki/Extension:DynamicPageList Wikinews from Wikimedia uses it and it's well documented and tested.
_reverseengineering.10650 | I'm not trying to sound like my question is more important than others just because I'm asking it; it's purely because the outcome of my work will involve electrical impulses sent directly into people's faces, and I want to make sure I do this right. I've been looking into hex editing and there seems to be no rhyme or reason to what I'm editing. I've programmed before and I can wrap my head around this stuff; I just don't know where to begin. The ANSI pane is full of random numbers and letters. Is there any way to find out which hex values relate to the number of impulses sent out by this machine? Or at the very least, how can I approach the company that made the machine and ask them intelligible enough questions about how to find the hex code? I want to be as efficient with my time, and theirs, as I can be. Thanks. | How do I find specific sets of data when hex editing? (Important) | hex | null
_cs.29381 | Given a biased $N$-sided die, how can a random number in the range $[1,N]$ be generated uniformly? The probability distribution of the die faces is not known, all that is known is that each face has a nonzero probability and that the probability distribution is the same on all throws (in particular, the throws are independent). This is the obvious generalization of Fair results with unfair die.Putting this in computer science terms, we have an oracle representing the die rolls: $D : \mathbb{N} \to [1,N]$ such that $p_i = P(D(k)=i)$ is nonzero and independent of $k$. We're looking for a deterministic algorithm $A$ which is parametrized by $D$ (i.e. $A$ may make calls to $D$) such that $P(A()=i) = 1/N$. The algorithm must terminate with probability 1, i.e. the probability that $A$ makes more than $n$ calls to $D$ must converge to $0$ as $n\to\infty$.For $N=2$ (simulate a fair coin from coin flips with a biased coin), there is a well-known algorithm:Repeat flip twice until the two throws come up with distinct outcomes ((heads, tails) or (tails, heads)). In other words, loop for $k = 0..\infty$ until $D(2k+1) \ne D(2k)$Return 0 if the last pair of flips was (heads, tails) and 1 if it was (tails, heads). In other words, return $D(2k)$ where $k$ is the index at which the loop was terminated.A simplistic way to make an unbiased die from a biased one is to use the coin flip unbiasing method to build a fair coin, and build a fair die with rejection sampling, as in Unbiasing of sequences. But is this optimal (for generic values of the probability distribution)?Specifically, my question is: what is an algorithm that requires the smallest expected number of calls to the oracle? If the set of reachable expected values is open, what is the lower bound and what is a class of algorithms that converges towards this lower bound?In case different families of algorithms are optimal for different probability distributions, let's focus on almost-fair dice: I'm looking for an algorithm or a family of algorithms that's optimal for distributions such that $\forall i, \bigl|p_i - 1/N\bigr| \lt \epsilon$ for some $\epsilon \gt 0$. | Simulate a fair die with a biased die | probability theory;randomized algorithms;random number generator | null |
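A small TypeScript sketch of the N = 2 algorithm quoted in the question (flip the biased source in pairs until the two outcomes differ, then return the first of the pair); the biased coin below is only an illustrative stand-in, and this does not address the optimality question being asked.
// Sketch of the N = 2 case described above: call the biased source in pairs
// until the two outcomes differ, then return the first outcome of that pair.
function biasedCoin(pHeads: number): () => number {
  return () => (Math.random() < pHeads ? 1 : 0);
}

function fairFromBiased(flip: () => number): number {
  while (true) {
    const a = flip();
    const b = flip();
    if (a !== b) {
      return a; // (1,0) and (0,1) are equally likely, so the result is unbiased
    }
  }
}

// Rough empirical check with a heavily biased coin (p = 0.8 for heads):
const flip = biasedCoin(0.8);
let ones = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) {
  ones += fairFromBiased(flip);
}
console.log(ones / trials); // should be close to 0.5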
_unix.291995 | I am a beginner, so I wanted to do a test of the job control commands. I ran a cat command and then made it a background job using the bg command after stopping it with Ctrl+Z. Now I wanted to terminate that background process first, so I used the command %kill-2%2 as the process ID was [2], but it gave me an error saying No such job. I tried it with %kill-9%2 but got the same error. I checked with the fg command and that job was still running and came to the foreground. Similarly, I wanted to suspend a background job, so I used the command %kill-19%2 but it gave me the error No such Job. I want to know my fault or error. | Can't terminate / suspend a background job | kill;cat;background process;job control | The command should be kill -2 %2 with proper spacing. The % sign at the beginning of your line is probably just the prompt they are using (PS1).
_webmaster.82110 | I have a website with universal analytics in the head section and recently I setup Google tag manager to check if I would be able to resolve the issue with no success. My Tracking code looks like (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-XXXXXX-X', 'auto'); ga('send', 'pageview');I created an AJAX based event pushing the following onclick event functions in 3 different formsga('send', 'event', 'Category [Form]', 'Action Name [1,2,3]', 'Label Tag [1,2,3]',1);and in google analytics I created 3 Goals using the same Category/Action/Label for each of the 3 different Action-LabelsInside my Google Analytics Behaviour->Event->Overview Report I can see all 3 events being traked, but in the Convertion->Goals->Overview Report all 3 have cero records.Any ideas why this is happening? should I set up the goals again (after event tracking implementation)? | Events are being tracked but no corresponding Goals (Goal conversions are equals to cero) | google analytics;conversions;goal tracking;event tracking;analytics events | I found the solution to the problem myself and I would like to post the answer in here just in case someone goes through the same struggle. Basically the Goal value did not match the value in the function. The Goals configuration page looked like this I noticed the message under the Use the Event value as the Goal value for the conversion (Answered YES)If you dont have a value defined in the condition above that matches your Event tracking code, nothing will appear as the Goal Value.You can either:1) Try to match the function parameter with the same value as in the goal configuration or2) Set The Answer to NO, delete the value in both the function and the Goal configuration and assign the value for the conversion in $USD or to whatever your currency is (My Choice) |
_codereview.18223 | In ruby, is there a more concise way of expressing multiple AND conditions in an if statement?For example this is the code I have: if first_name.blank? and last_name.blank? and email.blank? and phone.blank? #do something endIs there a better way to express this in Ruby? 1.9.2 or 1.9.3 | Expressing multiple AND conditions in ruby | ruby | if [first_name, last_name, email, phone].all?(&:blank?) #do somethingendCaveat: While and/&& short-circuit the expression (that's it, only the needed operands are evaluated), an array evaluates all its items in advance. In your case it does not seem something to worry about (they seem cheap attributes to get), but if you ever need lazy evaluation and still want to use this approach, it's possible using procs, just slightly more verbose:p = procif [p{first_name}, p{last_name}, p{email}, p{phone}].all? { |p| p.call.blank? } #do somethingend |
_softwareengineering.58144 | I have been in computer business for 15 years in various roles (sysadmin, developer, researcher), and I have never encountered someone using excel for something more advanced than for formatting tables, or as an ad-hoc database that could have been maintained in a text-file.I had to do heavy data-processing and plotting and for that I used some perl scripts + gnuplot, got tiredof it, and went over to R eventually. 2D spreadsheet just didn't seem well-suited for doing statistical analyses over 5-dimensional datasets (not to mention that it produces UGLY plots).I attempted to use spreadsheet for time-tracking, and found out that I would have better been served by a relational database, so I gave up on using excel for that too. For example, it's important to consistently name tasks, and I needed to find out unique task names in a given column across several sheets (I had one timesheet for each month). How do you make such query in a program that essentially evaluates independent cells and has little notion of relations between them?So, what are spreadsheets useful for? Why do they have a bunch of mathematical stuff built into them when, AFAICT, people use them mostly as table formatters or bad substitutes for databases? | What is spreadsheet useful for? | spreadsheet;excel | In almost any industry, Excel is a fantastic tool for rapid prototyping and automation.Even in organizations that have a proper research and development team, nothing beats the ability to work alongside a business user with a spreadsheet to capture and apply important business knowledge, work-flow, and algorithms in real time.In spreadsheet software, especially one like Excel which has the back end programming support, developers have the opportunity to quickly produce tools with a familiar interface and easily identifiable logic to help users identify their needs and preferences.Once a fully functional prototype is completed, you have one of the best possible requirements packages, which can either be handed to an IT group so that a permanent, standalone system can be developed to replace it, or run through QA and released as the final solution to the business user's problem.So after years of providing useful prototypes and in-house solutions in next to no time, bare-minimum cost, and with no additional 3rd party involvement, perhaps a more interesting question would be:What isn't a spreadsheet useful for? |
_softwareengineering.211413 | I am working on a warehouse management system (WMS) that needs to support having stock in multiple locations. Could be in a different building, could be stored in n* places in a building (quick example would be stock, overflow, or fast moving slots all containing a qty of the same item).Where I am, they have always used one SKU assigned to one bin with paper notes pointing to overflow. Obviously they have grown beyond this and it's causing some serious issues. No one here can help me when I try to get what the best practice should be.I am having trouble conceptualizing how to setup the structure. I am thinking ...Warehouse (virtual or physical) Warehouses can belong to warehouses.Location - Anything really could be a pallet, box, or just a taped off area on the floor. Locations belong to warehouses.Rows - can be in locationsShelves can be in rowsslots(traditionally called a bin here) the smallest unit that can be a location.Then a SKU could be in any one or many of these locations (except the virtual warehouse - used only from grouping physical locations but allowing a sales order to process them internally as if it shipping from one space).Locations (are their children) could be given priority. So if a SKU is in 2 locations, the systems knows which slot(bin) to empty first before routing to the next.I code it so that the structures above can be handled by the warehouse since i don't care physically about them only that the process makes sense and then give them a tool to mark a sort order so that they can determine the best routes through the warehouse with batch picking an order.I guess I just wanted to get this out of my head and get someone elses eyes on it before I wrote a single line of code. Does this structure make sense? Is it a best practice? If not what is and if that question is outside the realm of the site could someone point me to it? | Inventory / Stock in multiple locations | data structures;planning | Encapsulation is going to be the key for making sure you start with the right structure.I would have a primary entity for the salable good (widget), of which the SKU is an identity type property. The salable goods or widgets are your base objects as that's what's being sold. SKUs can change although it's somewhat rare.Each widget can have zero or more locations, so I would have a collection of locations within the widget.Each location is going to have a relative weight representing the availability or location cost for that set of widgets. I would recommend making the location cost a first class object, and not just a value. Location cost is a relative term based upon where the caller is located. If I'm at one site then I want the back-of-store widgets instead of the widgets at another physical location. At a minimum, you need to hide the implementation of location cost to external callers so you can more easily adapt it in the future.To support referential integrity, I would make the widget.count() method iterate over all of the locations and count those. Otherwise you end up doubling the amount of stock keeping you have to do - once for the widget and again for the location. |
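A loose TypeScript sketch of the shape the answer suggests: a salable Widget owning a collection of stock locations, each with a relative cost, and count() derived by iterating the locations so stock is never double-tracked. All names and fields here are illustrative, not an actual WMS schema.
// Illustrative sketch only: Widget owns its stock locations, and quantity
// is always derived from them, so the two can never drift apart.
interface StockLocation {
  name: string;          // e.g. a bin, pallet or taped-off floor area
  quantity: number;
  relativeCost: number;  // lower value = preferred pick location for the caller
}

class Widget {
  constructor(public sku: string, private locations: StockLocation[] = []) {}

  addLocation(location: StockLocation): void {
    this.locations.push(location);
  }

  // Total stock is derived, not stored twice.
  count(): number {
    return this.locations.reduce((sum, loc) => sum + loc.quantity, 0);
  }

  // Pick order: cheapest (closest/preferred) locations first.
  pickOrder(): StockLocation[] {
    return [...this.locations].sort((a, b) => a.relativeCost - b.relativeCost);
  }
}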
_webapps.97509 | As described in this question.Both the Vacation Responder and using a filter to send a Canned Response send the response the Return-Path field of the email, not the From field. Please don't tell me this isn't the standard, I know that. I need a workaround!I am trying to set up an SMS auto-responder from Google Voice. So I set Google Voice to forward texts to Gmail. But in Gmail there is no way to auto-reply to texts in the same messageit sends a new separate message using the Return-Path address. So, Google Voice does not receive the message because the Return-Path address is a bounce address. So Google Voice does not receive and forward the text back properly, so the auto-reply does not work.Can anyone help please? It's very frustrating that there is no way to do this. | How do I use Gmail's Vacation Responder or Canned Responses feature to respond to the From instead of the Return-Path address? | gmail;google voice;automation | null |
_unix.116695 | Assume I have the following pipe:a | b | c | dHow can I wait for the completion of c (or b) in sh or bash? This means that script d can start any time (and does not need to be waited for) but requires complete output from c to work correctly.The use case is a difftool for git that compares images. It is called by git and needs to process its input (the a | b | c part) and display the results of the comparison (the d part). The caller will delete input that is required for a and b. This means that before returning from the script, process c (or b) must terminate. On the other hand, I cannot wait for d because this means I'm waiting for user input.I know I can write the results of c to a temporary file, or perhaps use a FIFO in bash. (Not sure if the FIFO will help, though.) Is it possible to achieve this without temporary files in sh?EDITPerhaps it would be sufficient if I could find out the process ID of the c (or b) process in a reliable fashion. Then the whole pipe could be started asynchronously, and I could wait for a process ID. Something along the lines ofwait $(a | b | { c & [print PID of c] ; } | d)EDIT^2I have found a solution, comments (or still better solutions) are welcome. | Semi-asynchronous pipe | bash;shell;pipe;fifo | null |
_unix.158816 | I'm running CentOS 6.4 on vagrant and then doing a vagrant SSH into the box. I've been trying to get backspace to work correctly for a while now (as chronicled here: Centos Terminal Configuring Backspace and Ctrl-h Correctly) As a part of this, I'm trying to use loadkeys to modify the actions in the keymap - but that doesn't seem to work very well. So, as root, I did the following (as specified here):[root@localhost vagrant]# dumpkeys -f | grep -iE string...string F9 = \033[20~string F10 = \033[21~...[root@localhost vagrant]# echo 'string F10 = foo ' | loadkeys # to make F10 print foo[root@localhost vagrant]# dumpkeys -f | grep -iE string # verify that keymap is changed...string F9 = \033[20~string F10 = foo...Now type F10. This keeps giving me the ~ character instead of printing foo. This is the original behavior before loadkeys were invoked - so it looks like loadkeys has no effect at all? | loadkeys has (almost) no effect | centos;console;key mapping | loadkeys re-programs the terminal emulator that is built in to the kernel, via ioctl() requests through a kernel virtual terminal device. You aren't using that terminal emulator when you connect to the machine via ssh. Indeed, you aren't involving any terminal emulator, kernel or user space, on that machine at all.The terminal emulator on your local machine is what is mapping function key presses into control sequences. Of course loadkeys isn't reprogramming the terminal emulator that is running on a completely different machine at the local end of your ssh connection.If you hadn't run loadkeys as the superuser, you'd have received the useful error message that when run from the ssh login session loadkeys couldn't find a kernel virtual terminal to talk to, because one wasn't involved in that login session. |
_softwareengineering.302187 | I'm designing an application with DDD. I'm moving from flat POCO objects to strong domain models, so my question is: would I have to call my basic CRUD operations (located in my repository layer) from controllers directly, without passing through the domain layer? I can't see any added value in doing that, but I'm not sure whether making that direct call is within DDD practices. | CRUD operations in DDD | design patterns;mvc;domain driven design;asp.net mvc;patterns and practices | The typical entry point for this in DDD is an Application Service. Application services orchestrate calls to repositories and domain objects. They also know about the current execution state and often control the overarching business transaction through a unit of work that is committed at the end of the service method. For example: create a new domain object, add it to the Repository, commit the UoW; or get a domain object from the Repository, modify it, commit the UoW; etc. The application service can be called from a Controller. In some implementations it is the controller, when people don't want to bother with an additional abstraction layer, but that can lead to a Fat Controller. As for my basic CRUD operations (located in my repository layer): while C, R and D are part of a Repository interface, U doesn't have to be if you have a Unit of Work. Updating all changed domain entities in the UoW will be done automatically on UoW.Save().
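A bare-bones TypeScript sketch of the application-service flow the answer lists (get or create the domain object, let it do the work, commit the unit of work); the Order, OrderRepository and UnitOfWork types are hypothetical placeholders, not a prescribed API.
// Hypothetical types, just to show the call flow the answer describes.
interface Order { id: string; addLine(productId: string, qty: number): void; }
interface OrderRepository { getById(id: string): Order; add(order: Order): void; }
interface UnitOfWork { commit(): void; }

// Application service: orchestrates repository + domain object + unit of work.
// Controllers call this; they never talk to the repository directly.
class OrderApplicationService {
  constructor(private orders: OrderRepository, private uow: UnitOfWork) {}

  addLineToOrder(orderId: string, productId: string, qty: number): void {
    const order = this.orders.getById(orderId); // get the domain object
    order.addLine(productId, qty);              // let the domain model do the work
    this.uow.commit();                          // commit the business transaction
  }
}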
_softwareengineering.266974 | With the below piece of thread related code, I see that author of Thread class is hiding the details about the working of start() method. What a user of Thread class need to know is, class Thread would expect a piece of his own code which will be an instance of Runnable's anonymous concrete sub-class that gets passed./* TestThread.java */public class TestThread{ public static void main(String[] args){ Thread threadObject = new Thread(new Runnable(){ public void run(){ /* your own code that has to run*/ System.out.println(a new thread); } }); threadObject.start(); System.out.println(Main thread); }}But in this below program, It looks like we partially hide the implementation details of listFiles(filter) method. Because if author has to change the logic of listFiles(filter) in future without affecting the users, he should make sure that method signature boolean accept(File dir, String name); should not get affected./* ListDirectoryWithFilter.java */import java.io.File;import java.io.FilenameFilter;public class ListDirectoryWithFilter{ public static void main(String[] args){ String path = System.getProperty(user.home) + File.separator + workspace + File.separator + JavaCode + File.separator + src; File dir = new File(path); if(dir.isDirectory()){ File[] files = dir.listFiles(new FilenameFilter(){ public boolean accept(File dir, String file){ return file.endsWith(.java); } }); for(File file : files){ System.out.println(file.getName()); } } }}Do you think, it would have been better that listFiles(filter) method just expect a regular expression(*.java) from the user and rest is the result that user expects?It looked a bit unnecessary approach for me, to implement a two argument accept(,) method and then pass it to listFiles() method.So, Are such implementations a good practice? If yes, when do we think of such implementations? | Query on hiding implementation details in java | java;object oriented design;encapsulation | null |
_codereview.85624 | Disclaimer: This question is very much like the one posted here. I gathered some opinions about my options from the answer there. Here, I just want validation about the choices I'm deciding to stick to and see what people think about the decisions specifically. Also, gather suggestions about other parts of the code I'm not specifically asking about, since I'm new to Python OOP.Scenario: I'm writing a program that will send emails. For an email, the to, from, text and subject fields will be required and other fields like cc and bcc will be optional. Also, there will be a bunch of classes that will implement the core mail functionality, so they will derive from a base class (Mailer).Following is my incomplete code snippet:class Mailer(object): __metaclass__ == abc.ABCMeta def __init__(self,key): self.key = key @abc.abstractmethod def send_email(self, mailReq): passclass MailGunMailer(Mailer): def __init__(self,key): super(MailGunMailer, self).__init__(key) def send_email(self, mailReq): from = mailReq.from to = mailReq.to subject= mailReq.subject text = mailReq.text options = getattr(mailReq,'options',None) if(options != None): if MailRequestOptions.BCC in options: #use this property pass if MailRequestOptions.CC in options: #use this property passclass MailRequest(): def __init__(self,from,to,subject,text): self.from = from self.to = to self.subject = subject self.text = text def set_options(self,options): self.options = optionsclass MailRequestOptions(): BCC = bcc CC = ccI've made the following decisions about code designs. What do you think about them?The send_email() method will take four required parameters - to, from, subject and text, and bunch of other optional parameters like cc, bcc etc. So I decided to create the MailRequest wrapper, which will take the 4 required fields in the constructor, and the other optional parameters will go in the options dict.Is this an acceptable way of doing this? Why not use **kwargs?Because if you define something like:def foo(**kwargs): passThen to call it, you can't do something like:options = []options[one] = 1options[two] = 2foo(options)Right?You'd have to do something like:foo(one=1,two=2)Now if I have 15 parameters that foo accepts, then **kwargs isn't a good way to do this right?I've created the MailRequestOptions class to contain static strings. The reason behind the existence of this class is, even if the user knows he has to pass some options in the options dict of the MailRequest object, how would he know which options can he set. This class could probably help the user know about what options can be set. This will also be helpful if the user has auto complete in an IDE or something. Do you think I'm thinking right? Or is this a somewhat unusual way of doing things? | Program for sending emails | python;object oriented;email | First thing first - PEP 8 is the de-facto style guide for Python. 
It's a good idea to just stick to it, and for the most part everyone else will too.This would look likeclass Mailer(object): __metaclass__ == abc.ABCMeta def __init__(self, key): self.key = key @abc.abstractmethod def send_email(self, mail_req): passclass MailGunMailer(Mailer): def __init__(self, key): super(MailGunMailer, self).__init__(key) def send_email(self, mail_req): from_ = mail_req.from_ to = mail_req.to subject= mail_req.subject text = mail_req.text options = getattr(mail_req, 'options', None) if options is not None: if MailRequestOptions.bcc in options: # use this property pass if MailRequestOptions.cc in options: # use this property passclass MailRequest: def __init__(self, from_, to, subject, text): self.from_ = from_ self.to = to self.subject = subject self.text = text def set_options(self,options): self.options = optionsclass MailRequestOptions: bcc = bcc cc = ccThere are a couple of logic changes (eg. is not None instead of != None), but nothing significant.Next thing I notice isoptions = getattr(mail_req, 'options', None)This is an absolute red flag to me. Attributes should not exist sometimes and not other times. Only in odd cases is such a thing appropriate. This is easy enough to fix withself.options = None # = {}?in MailRequest.__init__.Your set_options method is pointless - MailRequest.options is public. Since you intend to use it as a struct-like container, keep it that way.I'm not sure why you even have the options parameter though; your options have a static set of accepted attributes so just inline it. Again, this removes the use of existence as a variable. Attribute existence is a terrible way of hiding state.class MailRequest: def __init__(self, from_, to, subject, text): self.from_ = from_ self.to = to self.subject = subject self.text = text self.bcc = None self.cc = NoneThis also removes the very strange MailRequestOptions class.Then I look at the Mailer ABC. Remember that Python is duck-typed. The reason for an ABC is to standardize common interfaces that arise in the code - unless you have at least a few implementations, duck-typing is more appropriate.Which givesclass MailGunMailer: def __init__(self, key): self.key = key def send_email(self, mail_req): from_ = mail_req.from_ to = mail_req.to subject= mail_req.subject text = mail_req.text if mail_req.bcc is not None: # use this property pass if mail_req.cc is not None: # use this property passclass MailRequest: def __init__(self, from_, to, subject, text): self.from_ = from_ self.to = to self.subject = subject self.text = text self.bcc = None self.cc = NoneFundamentally this is also inappropriate. Your first question gives reasons that normal arguments might not be appropriate, but Nizam Mohamed points out some flaws in your understanding. In fact, having a lot of parameters in a signature is fine. Objects to hold options exist for several reasons:Some languages have little flexibility in argument passing and little ability to name argumentsSome languages don't let you pack up arguments and pass them around easilyThere are cases where the arguments form an object that gets reusedThe first two points are not good reasons in Python. See subprocess for an example of something that just offers a large number of arguments. The caller does not have to specify them all since most are default arguments.The last option does happen, but it is more akin to typical OO abstractions. 
And as usual with abstractions, premature abstraction is bad.In this case (in isolation), just offer the arguments directly.class MailGunMailer: def __init__(self, key): self.key = key def send_email(self, from_, to, subject, text, bcc=None, cc=None): if bcc is not None: # use this property pass if cc is not None: # use this property passIn response to your second question, you are adding dynamicism where naturally there is none. This doesn't help. If you want to document the options, do it the one obvious way: in the options. (This is what inlining options achieved.)Simple, see? >>> import thisThe Zen of Python, by Tim PetersBeautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.Special cases aren't special enough to break the rules.Although practicality beats purity.Errors should never pass silently.Unless explicitly silenced.In the face of ambiguity, refuse the temptation to guess.There should be one-- and preferably only one --obvious way to do it.Although that way may not be obvious at first unless you're Dutch.Now is better than never.Although never is often better than *right* now.If the implementation is hard to explain, it's a bad idea.If the implementation is easy to explain, it may be a good idea.Namespaces are one honking great idea -- let's do more of those! |
_unix.19513 | If I do this: iptables -nvL > output.txt then output.txt ends up empty. If I do: iptables -nvL >> output.txt it works fine. Appending is working, but overwriting is not. Why? | Why can I append to a file but not overwrite it? | io redirection | null
_cs.35834 | Given Borůvka's algorithm:
MST T <- empty tree
Begin with each vertex v as a component
While number of components > 1
    For each component c
        let e = minimum edge out of component c
        if e is not in T
            add e to T // merging the two components connected by e
In each phase I'd like to reduce the graph's size by noting that after each phase there is actually no need to remember edges that lie within a component (because some were already inserted into the MST T and the others are not needed). So instead of each component I'd like to keep only a single vertex. The only problem comes when I try to construct my edges: an edge between two new vertices (which were two components before) is the one with the smallest weight among all the edges between a vertex in the first component and a vertex in the second. I wanted to implement this in linear time, but I don't see how I can reduce the edges as well, all in linear time. | Borůvka cleanup in linear time? | algorithms;graphs;spanning trees | null
_softwareengineering.72621 | I joined the company I currently work at as a fresher. Due to the limited number of people skilled in GIS software development, and since I was one of them, I was directly recruited as a Project Manager.

I was quite conversant with Java and GIS, and I had done self-motivated research on location-based services, but not with project management or structured software development. It was one year after my graduation as a Geology specialist, and during the previous year I had been working as an academic at a university.

Thanks to the interest I took in my work, an opportunity showed up, and eventually I was made responsible for the Business Intelligence department of the company as well. The company believed in me. I studied data warehousing and BI concepts on my own and was successful in combining GIS with BI. I am currently working with two developers on our BI tool in C# WPF, where I also play the role of a developer at times (which I like).

I tried extremely hard to adopt good software development methodologies with agile project management, but it was not very successful. Also, though I believe in well-designed code as far as the product is concerned, due to the lack of technical knowledge of my CEO (who is directly above me), I normally do not get the amount of time needed to do it. The time taken is also greatly increased by our lack of expertise in the specific technology (for instance WPF as opposed to Java). Also, there is no version control system in place.

I am extremely fed up with the way things are going, as nothing is structured, and I spend most of my time thinking rather than working on how to get things structured. I hope you guys with solid professional experience will be able to help me overcome this situation. | How can I overcome a badly structured software development model? | project management;development methodologies | We had a similar problem (without the technical details, of course) at the company I work for, about two years ago. You just need to do it one step at a time. Don't try to adopt agile software development in a rush; there is a lot of stuff to learn and apply. Don't let the lack of expertise bring you down either. Build slowly (but as fast as you can :P), steadily and surely.

I would recommend the following steps (to do this, you might switch from management to development for a while, but that should be fine):

Learn a good version control system, and learn it well. Personally I would recommend git or mercurial. There is a lot of documentation on both.
Build a solid core of practices and patterns. Read books, read blogs, watch screencasts with the team members. This will give a new air to the development.
Learn TDD/BDD and try to apply it in new code, as well as in the old code that you touch when doing a new feature.
Do pair programming. Two heads think better than one, and four eyes are better than two :).
Find out about the latest and most commonly used tools in the community of the language you are currently developing in. Learn about them and try to include some of them in the project. See how they were built and learn.
Use scrum. Iterations, stories, story points and impediments are all concepts you should get familiar with. For me, scrum has proven to be the best workflow for software development and management. Apply it and learn from each day's experience.
Teach by example. Most beginner developers are eager to learn new stuff, but some of them are also very lazy. Anyway, show them the new stuff you've been learning and applying, and hopefully that will tickle their brains.

Also, if possible, hire a consultant just so that they can check out the process and give better advice.

Don't get lazy or discouraged. Just learn from your mistakes and try different approaches. This is just the beginning!

Edit: Here are some of the links and books that I've been reading/using lately...

Learning git: Pro Git

These are some of the blogs I would recommend (most of them are .NET oriented): Java Blogs, Karl Seguin's Blog and his Foundations of Programming series, CodeBetter.com, Code Thinked, Los Techies, Clean Coder, and also this link to a Stack Overflow question.

For books, you can see the Building A Solid Programming Core list on Amazon. I would also recommend these: Clean Code; Agile Software Development, Principles, Patterns and Practices; and almost any book from The Pragmatic Bookshelf.
_opensource.1724 | I have just written a program and would like to release it under a dual license (probably AGPL plus a proprietary license, and I would also like to retain the ability to release it under other licenses in the future).

I want to welcome patches from the future community, but merging them should not compromise the goal above, so if I understand correctly I need some kind of contributor agreement.

The Harmony Agreement Selector looks like a tool designed for me, but I am stuck at the first question, which offers two options:

Copyright License (CLA)
Copyright Assignment (CAA)

What is the difference between the two? | Difference between Copyright License (CLA) and Copyright Assignment (CAA) | copyright;contributor agreements | null
_cogsci.262 | Although adult brains are malleable and even undergo limited neurogenesis, the extent of the neuroplasticity is much lower than in children. This is most obvious in language acquisition and in recovery from brain trauma.

Are there formal models (computational or mathematical) that explain why our brains so drastically reduce in plasticity with age?

If a highly malleable brain is supposed to help us adapt to and deal with a constantly changing environment, then naively one would expect it to be advantageous to maintain a malleable brain for your whole life.

Background from the critical period in language acquisition

This is an example of formal models that I am already familiar with that answer a related question (the critical period in language acquisition). I am interested in answers in this spirit, but ones that can address not just language acquisition but the general decrease in neuroplasticity.

In the case of the critical period for language acquisition there are evolutionary models by Hurford (1991) and Komarova & Nowak (2001). However, neither model generalizes easily to the case of neural plasticity. Hurford's model uses neutral drift to explain the upper bound on the critical period of language acquisition, because the need for second-language acquisition later in life is largely absent. However, the need to adapt to your environment persists throughout life, so plasticity should not be under neutral drift. In the case of Komarova & Nowak, the upper bound is due to a trade-off between the cost of learning (driving the critical period down) and the importance of learning a language accurately (driving the critical period up). This balances out and allows for an ESS (evolutionarily stable strategy) due to diminishing returns: once you've learned a language pretty well, it becomes more costly to invest in learning further than the returns from better learning are worth. However, adapting to a constantly changing environment is not a single static task, and thus it is not clear why your returns would diminish. Further, it is not clear how keeping high plasticity is more costly than maintaining lower plasticity.

Notes

This is a question of why, not how. Although it is very interesting to know how the plasticity of adult brains decreases, in this question I am interested in why this is the case, as opposed to the hypothetical "keep as malleable as a baby" alternative.

Both Hurford (1991) and Komarova & Nowak (2001) provide formal evolutionary models that I do not describe in detail. I am interested in formal models like these, although they need not be evolutionary. An answer on the level of rhetoric (especially evolutionary rhetoric) is not nearly as interesting to me as a formal model.

Hurford (1991) and Komarova & Nowak (2001) are meant as examples of work that answers the potentially easier question of the critical period of language acquisition. I am interested in the more general question of the decrease in neuroplasticity.

References

Hurford, J. R. (1991). The evolution of critical period for language acquisition. Cognition, 40, 159-201. FREE PDF
Komarova, N. L. & Nowak, M. A. (2001). Natural selection of the critical period for language acquisition. Proc. R. Soc. London B, 268(1472), 1189-1196. FREE PDF | Why does neuroplasticity decrease in adults? | neurobiology;developmental psychology;computational modeling;evolution;plasticity | null
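A stylized version of the trade-off just described, given only to make the verbal argument concrete; the functional forms and symbols below are illustrative assumptions, not the actual formulation from either paper.

% Illustrative only; not the actual Komarova & Nowak model.
% q(t): benefit of what has been learned by age t, with q'(t) > 0 and q''(t) < 0
%       (diminishing returns); c: per-unit-time cost of staying plastic;
%       b: weight placed on accurate learning.
\[
  F(\tau) \;=\; b\,q(\tau) \;-\; c\,\tau,
  \qquad
  F'(\tau^{*}) = 0
  \;\Longrightarrow\;
  b\,q'(\tau^{*}) = c .
\]
% A finite optimal plasticity window \tau^{*} exists only because q'' < 0 and c > 0;
% the question above asks precisely why either assumption should hold for plasticity in general.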
_unix.159167 | I have uninstalled acroread 9.5.5-1precise1 in the Software Center, but the files under /opt/Adobe/Reader9 seem intact:

/opt/Adobe/Reader9$ ls *
bin:
acroread

Browser:
HowTo  install_browser_plugin  intellinux

Reader:
AcroVersion  Cert  GlobalPrefs  help  IDTemplates  intellinux  JavaScripts  Legal  PDFSigQFormalRep.pdf  pmd.cer  Tracker

Resource:
CMap  Font  Icons  Linguistics  Shell  Support  TypeSupport

I don't remember how I installed Adobe Acrobat Reader (through the Software Center, or from some deb package?).

Can I remove the files in /opt/Adobe/Reader9 safely? How do you uninstall software installed under /opt/ in general? Thanks.

$ dpkg -S acroread
oxygen-icon-theme: /usr/share/icons/oxygen/64x64/apps/acroread.png
acroread-bin: /usr/share/man/man1/acroread.1.gz
gnome-orca: /usr/lib/python2.7/dist-packages/orca/scripts/apps/acroread/script.py
acroread-bin: /usr/bin/acroread
oxygen-icon-theme: /usr/share/icons/oxygen/32x32/apps/acroread.png
gnome-orca: /usr/share/pyshared/orca/scripts/apps/acroread/__init__.py
oxygen-icon-theme: /usr/share/icons/oxygen/48x48/apps/acroread.png
oxygen-icon-theme: /usr/share/icons/oxygen/16x16/apps/acroread.png
acroread-bin: /opt/Adobe/Reader9/Resource/Shell/acroread.1.gz
acroread-bin: /usr/share/applications/acroread.desktop
gnome-orca: /usr/lib/python2.7/dist-packages/orca/scripts/apps/acroread/__init__.py
zsh: /usr/share/zsh/functions/Completion/X/_acroread
acroread-bin: /opt/Adobe/Reader9/Resource/Shell/acroread_tab
acroread-bin: /opt/Adobe/Reader9/Reader/intellinux/bin/acroread
gnome-orca: /usr/lib/python2.7/dist-packages/orca/scripts/apps/acroread
gnome-orca: /usr/share/pyshared/orca/scripts/apps/acroread
acroread-bin: /usr/share/doc/acroread-bin/copyright
acroread-bin: /usr/share/doc/acroread-bin
acroread-bin: /usr/share/doc/acroread-bin/changelog.Debian.gz
acroread-bin: /usr/share/lintian/overrides/acroread-bin
acroread-bin: /opt/Adobe/Reader9/bin/acroread
gnome-orca: /usr/share/pyshared/orca/scripts/apps/acroread/script.py
oxygen-icon-theme: /usr/share/icons/oxygen/128x128/apps/acroread.png
oxygen-icon-theme: /usr/share/icons/oxygen/22x22/apps/acroread.png | Uninstall application under /opt? | ubuntu;software installation | null
_unix.99123 | I have a text file, i want to search for tags such as the following:<category=SpecificDisease>Type II human complement C2 deficiency</category><category=Modifier>Huntington disease</category><category=CompositeMention>hereditary breast and ovarian cancer</category><category=DiseaseClass>myopathy</category>and produce the following and write them to a new text file.Type II human complement C2 deficiencyHuntington diseasehereditary breast and ovarian cancermyopathy | Extract information from a text file | text processing | null |
_codereview.169519 | I recently took a class on Angular 2+ at Code School.I then rebuilt my homepage using Angular 4. I would greatly appreciate if any developer experienced with Angular 2+ can check out my code on git and offer constructive feedback.This site features loading content from JSON services into dynamic templates, unit tests, and advanced CSS.Full code here:https://github.com/garyv/garyvProjects page here: http://garyvonschilling.com/work<!-- projects.component.html --><div class='big-margin-bottom'> <h3>Technology used:</h3> <ul class='tags row'> <li *ngFor=let tag of projectTags.tags class=col tag [class.active]=projectTags.isActive(tag) (click)=toggleTag(tag)> {{tag}} <i [class.fa]=true [class.fa-check]=projectTags.isActive(tag) [class.fa-minus-circle]=!projectTags.isActive(tag)></i> </li> </ul></div><div class='row' [class.fade-in]=!skipFade> <div *ngIf=!projectTags.activeProjects.length> <h4> Click a tag above to see projects <i class='fa fa-level-up'></i> </h4> </div> <div *ngFor=let project of projectTags.activeProjects class='col project grid-4 half big-margin-bottom medium-padding' [class.active]=project.active [class.hidden]=!project.active> <h4> <a [routerLink]=[project.friendlyId]>{{project.title}}</a> </h4> <a *ngIf=project.image?.src [routerLink]=[project.friendlyId]> <img alt='' [src]=project.image.src [srcset]=project.image.srcset sizes=(min-width: 37.5em) 30vw, 48vw /> </a> <div class='tags'> Tags: <span [innerHTML]=projectTags.activeProjectTags(project)></span> </div> </div></div>// projects.component.tsimport { Component, Input, OnInit } from '@angular/core';import { Project } from './project/project.model';import { ProjectTags } from './project-tags.model';import { ProjectsService } from './projects.service';import { StateService } from '../state/state.service';@Component({ selector: 'app-projects', templateUrl: './projects.component.html', styleUrls: ['./projects.component.css']})export class ProjectsComponent implements OnInit { projects: Project[]; projectTags: ProjectTags; @Input() skipFade: boolean; constructor(private projectsService: ProjectsService) { } ngOnInit() { this.projectTags = new ProjectTags(); this.projectsService.getProjects() .subscribe( (projects) => { this.projects = projects; this.projectTags.populateTags(projects); }); } toggleTag(tag) { this.projectTags.toggleTag(tag, this.projects); }}// projects.component.spec.tsimport { async, ComponentFixture, TestBed } from '@angular/core/testing';import { By } from '@angular/platform-browser';import { DebugElement } from '@angular/core';import { HttpModule } from '@angular/http';import { Observable } from 'rxjs/Observable';import 'rxjs/add/observable/of';import { ProjectsComponent } from './projects.component';import { Project } from './project/project.model';import { ProjectComponent } from './project/project.component';import { ProjectTags } from './project-tags.model';import { ProjectsService } from './projects.service';import { StateService } from '../state/state.service';import { Router, RouterModule } from '@angular/router';import { RouterTestingModule } from '@angular/router/testing';describe('ProjectsComponent', () => { let component: ProjectsComponent; let fixture: ComponentFixture<ProjectsComponent>; let debugElement: DebugElement; let projectsService: ProjectsService; let mockProjects: Project[] = [ { friendlyId: 'example-title', image: { src: 'https://www.fillmurray.com/400/300/', }, link: { address: '//example.com', text: 'example text' }, text: '<p>This is an example 
project.</p>', title: 'Example Title', tags: ['Example Tag', 'Same'] }, { friendlyId: 'lorem-ipsum-title', image: { src: 'https://www.fillmurray.com/400/300/g', }, link: { address: '//lorempixel.com', text: 'lorem ipsum' }, text: '<p>Lorem ipsum dolor sit amet.</p>', title: 'Lorem Ipsum Title', tags: ['Lorem', 'Same'] } ]; beforeEach(async(() => { TestBed.configureTestingModule({ imports: [ RouterModule, HttpModule, RouterTestingModule.withRoutes( [{path: 'work/:friendly-id', component: ProjectComponent}] ) ], declarations: [ ProjectsComponent, ProjectComponent ], providers: [ ProjectsService, StateService ] }) .compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(ProjectsComponent); component = fixture.componentInstance; debugElement = fixture.debugElement; projectsService = fixture.debugElement.injector.get(ProjectsService); spyOn(projectsService, 'getProjects') .and.returnValue(Observable.of(mockProjects)); StateService.set('tags', '[Same]'); fixture.detectChanges(); }); it('should be created', () => { expect(component).toBeTruthy(); }); it('should list tags', () => { let tagsElement = debugElement.query(By.css('.tags li')); expect(tagsElement.nativeElement.textContent).toContain('Example Tag'); }); it('should list projects', () => { let projectElements = debugElement.query(By.css('.project')); expect(projectElements.nativeElement.textContent).toContain('Example Title'); }); it('should link to individual project page', () => { let projectLink = debugElement.query(By.css('a[href$=example-title]')); expect(projectLink).toBeTruthy(); }); it('should hide inactive projects', () => { component.projectTags.activeProjects.splice(0, 1); fixture.detectChanges(); let projectElements = debugElement.query(By.css('.project')); expect(projectElements.nativeElement.textContent).toContain('Lorem Ipsum'); });});// projects.service.tsimport { Injectable } from '@angular/core';import { Http } from '@angular/http';import 'rxjs/add/operator/map';import { Project } from './project/project.model';@Injectable()export class ProjectsService { constructor(private http:Http) {} getProjects() { return this.http.get('app/projects/projects.json') .map( (response) => { let projects = <Project[]>response.json().projects; for (let project of projects) { if (project.title && !project.friendlyId) { project.friendlyId = ProjectsService.getFriendlyId(project.title); } } return projects; }); } static getFriendlyId(title: string): string { return title.toLowerCase() .replace(/\W+/g, '-') .replace(/^-|-$/g, ''); }}// projects.service.spec.tsimport { TestBed, async, inject } from '@angular/core/testing';import { HttpModule, Http, Response, ResponseOptions, XHRBackend} from '@angular/http';import { MockBackend } from '@angular/http/testing';import { ProjectsService } from './projects.service';import { Project } from './project/project.model';describe('ProjectsService', () => { let projectsService: ProjectsService; beforeEach(() => { TestBed.configureTestingModule({ imports: [HttpModule], providers: [ ProjectsService, { provide: XHRBackend, useClass: MockBackend } ] }); }); describe('getProjects()', () => { it('should return an Observable<Project[]>', inject([ProjectsService, XHRBackend], (projectsService, mockBackend) => { const mockResponse = { projects: [ { //friendlyId: 'example-title', image: { src: 'https://www.fillmurray.com/400/300/', }, link: { address: '//example.com', text: 'example text' }, text: '<p>This is an example project.</p>', title: 'Example Title', tags: ['Example Tag', 'Same'] }, { 
//friendlyId: 'lorem-ipsum', image: { src: 'http://lorempixel.com/400/300/', }, link: { address: '//lipsum.com', text: 'lorem ipsum' }, text: '<p>Lorem ipsum dolor sit amet.</p>', title: 'Lorem Ipsum', tags: ['Lorem', 'Same'] } ] }; mockBackend.connections.subscribe( (connection) => { let response = new ResponseOptions({body: JSON.stringify(mockResponse)}); connection.mockRespond(new Response(response)); }); projectsService.getProjects().subscribe( (projects) => { expect(projects.length).toEqual(2); expect(projects[0].title).toEqual('Example Title'); expect(projects[1].title).toEqual('Lorem Ipsum'); expect(projects[1].friendlyId).toEqual('lorem-ipsum'); expect(projects[1].image).toEqual({src: 'http://lorempixel.com/400/300/'}); expect(projects[1].link).toEqual({address: '//lipsum.com', text: 'lorem ipsum'}); expect(projects[1].text).toEqual('<p>Lorem ipsum dolor sit amet.</p>'); expect(projects[1].tags).toEqual(['Lorem', 'Same']); }); }) ); describe('getFriendlyId()', () => { it('should make titles url friendly', () => { let friendlyId = ProjectsService.getFriendlyId( HellO World # !! ); expect(friendlyId).toEqual('hello-world'); }); }); });});// projects-tags.model.tsimport { Input, OnInit } from '@angular/core';import { Project } from './project/project.model';import { StateService } from '../state/state.service';const defaultTags:string[] = ['JavaScript', 'Ruby on Rails'];const storageKey:string = 'tags';export class ProjectTags { tags: string[] = []; activeTags: string[] = []; activeProjects: Project[] = []; @Input() highlightTag: string; constructor() { this.activeTags = JSON.parse(StateService.get(storageKey)); if (!this.activeTags || !this.activeTags.length) { this.activeTags = defaultTags; } } isActive(tag:string):boolean { return this.activeTags.indexOf(tag) != -1; } populateTags(projects:Project[]):void { this.activeProjects = []; for (let project of projects) { project.active = false; for (let tag of project.tags) { if (this.tags.indexOf(tag) == -1) { this.tags.push(tag); } if (!project.active && this.activeTags.indexOf(tag) != -1) { project.active = true; } } if (project.active) { this.activeProjects.push(project); } } this.tags = this.tags.sort(); StateService.set(storageKey, JSON.stringify(this.activeTags)); } toggleTag(tag:string, projects:Project[]):void { let index = this.activeTags.indexOf(tag); if (index == -1) { this.highlightTag = tag; this.activeTags.push(tag); } else { this.highlightTag = null; this.activeTags.splice(index, 1); } this.populateTags(projects); } activeProjectTags(project:Project):string { let tags:string[] = []; for (let tag of project.tags) { if (this.isActive(tag)) { if (tag == this.highlightTag) { tags.push(`<span class='highlight'>${tag}</span>`); } else { tags.push(`<span>${tag}</span>`); } } } return tags.join(', '); }} | Web developer profile page in Angular 2+ | typescript;angular 2+ | null |
_ai.1731 | What regulations are already in place regarding Artificial General Intelligences? What reports or recommendations prepared by official government authorities have already been published?

So far I know of Sir David King's report done for the UK government. | AGI official government reports or regulations already in place | agi;legal | null
_softwareengineering.191419 | I want to create a configuration file (a text file is preferred) that can be modified by the user. A Windows service will read from that file and behave accordingly. Can you recommend a file format for this kind of process? What I have in mind is a config file like game config files, but I cannot figure out how to read from it.

My question is very similar to "INI files or Registry or personal files?", but it is different because I need to consider user editing. | Configuration file that can be modified by user in C# | c#;configuration | null
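As a concrete, purely illustrative example of the kind of format being asked about, a flat key=value text file in the style of game config files could look like the snippet below; the setting names are invented, and in practice it could be read with a few lines of line-splitting code or handed to any existing INI/JSON parser rather than a hand-rolled one.

; service.cfg -- hypothetical settings, edited by the user, read by the service
PollIntervalSeconds = 30
WatchFolder = C:\Data\Incoming
EnableLogging = true
LogLevel = warning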
_softwareengineering.336701 | Where is the figurative line drawn for using static services in a project? I am a co-op student working on and learning how to write .NET MVC projects. I've been developing while trying to stick to TDD. In my project I'm using Ninject for dependency injection. I have written an abstraction and a class implementing it for making calls to our internal API. There is no class library for these APIs yet, so I'm writing my own. In this case I find myself wondering why I am using non-static classes for this API service. There is no state information stored for API calls; they are IP-validated. All the addresses for the call locations are stored in the config file.

I can't see how using a static class here would make testability harder. You can unit test the static class to see that it works. If you find yourself needing mock data, you could simply mock the object the API is supposed to return. If that is impossible because of your method design, it is highly likely the method you are testing is trying to do too much. In this particular case, I feel that almost all situations where the service being static would affect testing arise when manipulating the returned objects, which should be abstracted out into an extension or utility method that can itself be tested, so the static class should not affect testing.

Is there a good reason not to go static that I am missing, or something about unit testing and injection that I am missing? I would like to be able to say confidently that using static is a sound idea in this case, but I feel I am lacking the knowledge to do so. Another question, 'Is static universally evil?', was linked; that is not my question. I know static isn't universally evil and has its place. My question is where it is prudent to use static, and under what circumstances an abstract service might be better written as a static one without causing testability issues. | Static services and testability | object oriented;unit testing;dependency injection | Static methods are problematic because they prevent dependency injection for users of that static method.

First, let me be clear that this is not always an issue. If a function is pure (maintains no state across calls, and does not perform any I/O), then you can just test it directly once, and then use it in other code knowing that it will work. This usually only applies to small utility methods, or when you're doing strict functional programming.

If the service provided by that static method is more complex, using dependency injection techniques becomes desirable:

If a method maintains state, you do not want to keep that state between different tests; that would be a kind of coupling that could obscure bugs. At the very least, you will want a mechanism to (at least temporarily) reset the state for each test. The easiest way to do that is to create a new instance of the enclosing class, which is not possible with static methods and static classes.

If the service interacts with the outside world or performs any I/O (aside from logging), you don't want to do that during a unit test. Let's consider a service that sends email notifications. The system under test should send emails under certain conditions. How do you test this? Set up a test email account and verify that you got the expected message? Yes, but that would be an integration test. For unit tests, we would just want to make sure that the email service was invoked with the correct configuration. If you can intercept the call through some mechanism other than replacing the service through dependency injection, that's probably OK as well.

If the service contains substantial business logic that might change frequently, you do not want unrelated tests to fail because of changes to that logic. In the scope of a test, replacing unrelated business logic with stubs will make your tests more robust, assuming that logic is of course tested elsewhere.

Static classes do have their uses, but they should not be your default choice. Only when it's clearly not desirable to have an instance of that class should you make it static, which pretty much only happens when it's less of a class and more of a namespace for static methods.
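A minimal sketch of the email-notification point, shown in TypeScript since the question's C#/Ninject setup is analogous; the interface, class and test names are all invented for illustration, and the test uses a hand-rolled fake rather than any particular mocking library.

// notifier.ts (illustrative sketch)
interface EmailService {
  send(to: string, subject: string, body: string): void;
}

// The real implementation would do I/O; here it is injected, never called statically.
class OrderNotifier {
  constructor(private email: EmailService) {}

  notifyShipped(orderId: string, customer: string): void {
    this.email.send(customer, `Order ${orderId} shipped`, 'Your order is on its way.');
  }
}

// notifier.spec.ts -- unit test: replace the service with a fake, assert the call.
class FakeEmailService implements EmailService {
  sent: { to: string; subject: string; body: string }[] = [];
  send(to: string, subject: string, body: string): void {
    this.sent.push({ to, subject, body });
  }
}

const fake = new FakeEmailService();
new OrderNotifier(fake).notifyShipped('42', 'alice@example.com');
console.assert(fake.sent.length === 1 && fake.sent[0].subject === 'Order 42 shipped');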