id               stringlengths 5-27
question         stringlengths 19-69.9k
title            stringlengths 1-150
tags             stringlengths 1-118
accepted_answer  stringlengths 4-29.9k
_webapps.50235
I receive lots of emails in my Gmail account that are sent to a group [email protected] and look like this:

    Author: ...
    Date: ...
    New Revision: ...
    Added (or Modified or Removed): ...
    Log: ...

So I've created a simple filter like this:

    To: [email protected]
    Includes the words: Author, Date, New Revision, Added | Modified | Removed, Log

I've checked and it's working fine with emails where the subject field is up to about 120 characters, but sometimes the emails' subjects have about 900 characters, and in that case the filter is just not working! Does anyone have an idea?
Gmail filter not working when subject field is too long
gmail;gmail filters
null
_codereview.8929
It compiles, works well, solves my problem of zipping old or big log files and removing them from a hard drive. However, following this answer, I would like to know what was wrong with the code.

    Dir.foreach(FileUtils.pwd()) do |f|
      if f.end_with?('log')
        File.open(f) do |file|
          if File.size(f) > MAX_FILE_SIZE
            puts f
            puts file.ctime
            puts file.mtime
            # zipping the file
            orig = f
            Zlib::GzipWriter.open('arch_log.gz') do |gz|
              gz.mtime = File.mtime(orig)
              gz.orig_name = orig
              gz.write IO.binread(orig)
              puts "File has been archived"
            end
            # deleting the file
            begin
              File.delete(f)
              puts "File has been deleted"
            rescue Exception => e
              puts "File #{f} can not be deleted"
              puts "Error #{e.message}"
              puts "======= Please remove file manually =========="
            end
          end
        end
      end
    end
GZip archiver for log files
ruby;file system;file;logging;compression
My proposal; please see also the comments in the code.

    require 'zlib'

    MAX_FILE_SIZE = 1024 # One KB

    # Use a glob to get all log files
    Dir["#{Dir.pwd}/*.log"].each do |f|
      # skip file, if file is small
      next unless File.size(f) > MAX_FILE_SIZE
      # Make a one-line info
      puts "#{f}: #{File.size(f)/1024}KB, created #{File.ctime(f)} modified #{File.mtime(f)}"
      # zipping the file - each log in a file
      Zlib::GzipWriter.open("arch_#{File.basename(f)}.gz") do |gz|
        gz.mtime = File.mtime(f)
        gz.orig_name = File.basename(f) # filename without path
        # File.read should be fine for log files. Or is there a reason for IO.binread?
        gz.write File.read(f)
        puts "File #{f} has been archived"
      end
      # deleting the file
      begin
        #~ File.delete(f)
        puts "File #{f} has been deleted"
      rescue Exception => e
        puts "File #{f} can not be deleted"
        puts "Error #{e.message}"
        puts "======= Please remove file manually =========="
      end
    end

Remark: This solution creates one gz-file per log file. Your old solution created one arch_log.gz; if there were two log files, the 2nd would overwrite the 1st.
_cs.66891
I have a GA code that takes a very long time to converge (like 3-4 days), and I want to know what the best parameter tuning method for it is. I simply want to tune the population size, crossover rate and mutation rate.

Can I use classical parameter tuning methods such as DOE, EVOP, RSM etc., but on a small number of generations, find the best levels of the parameters, and then run the algorithm with those levels till convergence? Is that a reasonable method?

Also, can someone please tell me about the 'Racing' method?

Thank you in advance for any help.
How is parameter tuning done in real-world problems with very long run times?
genetic algorithms
null
_webmaster.108741
I am just wondering how much my website would sell for. The website has been live for over 4 years, it is PR3 and it has over 280,000 backlinks. Daily traffic is around 14,000 visitors a day, and revenue is around $150,000 per year.

I have contacted a website broker and he told me they can sell the business for $350,000. What do you think about this offer?
How much is my website worth?
service;online
null
_unix.227941
How do I join two files vertically without any separator? I tried to use paste -d"" a b, but this just gives me a.

Sample file:

    000 0 0 00001000200030004 10 20 30 40 2000 4000 .123 12.11234234534564567
paste files without delimiter
text processing;files;paste
paste uses \0 for a null delimiter, as defined by POSIX:

    paste -d'\0' file1 file2

Using -d"" a b is the same as -d a b: the paste program sees three arguments -d, a and b, which makes a the delimiter and b the name of the sole file to paste.

If you're on a GNU system (non-embedded Linux, Cygwin, ...), you can use:

    paste -d "" file1 file2

The form -d "" is unspecified by POSIX and can produce errors on other platforms. At least BSD and heirloom paste will report a "no delimiters" error.
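As a quick sanity check of the null-delimiter form, here is a small throw-away demonstration (the file names and contents are made up purely for illustration):

    $ printf '%s\n' foo bar > left
    $ printf '%s\n' 1 2 > right
    $ paste -d'\0' left right
    foo1
    bar2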
_datascience.2448
We have a Ruby on Rails platform (with a PostgreSQL db) for people to upload various products to trade. Of course, many of these products listed are the same, while they are described differently by the consumer (through spelling, case, etc.), so there are lots of duplicates.

For the purposes of analytics and a better UX, we're aiming to create an evolving master product list, or whitelist, if you will, that will have users select from an existing list of products they are uploading, OR request to add a new one. We also plan to enrich each product entry with additional information from the web, which would be tied to the master product.

Here are some methods we're proposing to solve this problem:

A) Take all the items listed on the website (~90,000) and de-dupe as much as possible by running select distinct queries (while maintaining a key-map back to the original data by generating an array of item keys from each distinct listing in a group-by). THEN

A1) Run this data through Mechanical Turk, and ask each Turk user to list the data in a uniform format. OR

A2) Run each product entry through the Amazon Products API and ask the user to identify a match. OR

A3) A better method?
Method to create master product database to validate entries, and enrich data set
data cleaning;sql
null
_cstheory.6937
I would like to have a bound on the cardinality of the set of unit disk graphs with $N$ vertices. It is known that checking whether a graph is a member of this set is NP-hard. Does this lead to any lower bound on the cardinality, assuming P $\neq$ NP?

For example, suppose there is an ordering on all graphs with $N$ vertices. Would NP-hardness then imply the cardinality exceeds $2^N$, in that otherwise you could test for membership in polynomial time by doing a binary search through the set? I think this would assume that you have somehow stored the set in memory... Is this allowed?

Definition: A graph is a unit disk graph if each vertex can be associated with a unit disk in the plane, such that vertices are connected whenever their disks intersect.

Here is a reference on NP-hardness of membership testing for unit disk graphs: http://disco.ethz.ch/members/pascal/refs/pos_1998_breu.pdf
Can I bound the cardinality of a set if testing for membership in it is known to be NP-complete?
cc.complexity theory;graph theory
I'm not sure if you're asking this question for the technique or for the answer, but there is a recent paper by McDiarmid and Mueller where they show the number of (labeled) unit-disk graphs on $n$ vertices is $2^{(2 + o(1))n}$; see http://homepages.cwi.nl/~mueller/Papers/countingDGs.pdf .
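As a small side note tying this back to the $2^N$ threshold raised in the question, the McDiarmid and Mueller count is far above it, since
$$2^{(2+o(1))N} \;=\; 4^{(1+o(1))N} \;\gg\; 2^N .$$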
_codereview.62516
The following question was taken from Absolute Java, 5th ed., by Walter Savitch:

Write a program that starts with a line of text and then outputs that line of text with the first occurrence of "hate" changed to "love". For example, a possible sample output might be:

    The line of text to be changed is: I hate you.
    I have rephrased that line to read: I love you.

You can assume that the word "hate" occurs in the input. If the word "hate" occurs more than once in the line, your program will replace only the first occurrence of "hate". Use a defined constant for the string to be changed. To make your program work for another string, you should only need to change the definition of this defined constant.

This is the code that I have written:

    public class Question5 {

        private static final String STRING_TO_BE_CHANGED = "hate";

        public static void main(String[] args) {
            System.out.println("The line to be changed is:");
            System.out.println(originalLine());
            System.out.println("I have rephrased that line to read:");
            System.out.println(newLine());
        }

        private static String originalLine() {
            return "I hate you";
        }

        private static String newLine() {
            return originalLine().replaceFirst(STRING_TO_BE_CHANGED, "love");
        }
    }
Changing the first occurrence of a text in a sentence
java;beginner
null
_webmaster.27362
I used cPanel on my hosted site to set up a password protected directory to allow downloads of specific files. I send people a link to the file by email and include their username and password so they can authenticate and download the file.

When people use IE there is a warning message:

    Warning: this server is requesting your username and password be sent in an insecure manner...

This server is Apache. How can I stop this message appearing? Will SSL stop it? I would prefer to not have to use SSL.
How to avoid basic authentication warning when using protected directory?
apache;https;authentication
Yes, SSL is the answer. Without it their login and password are sent in plain text which is insecure. SSL encrypts their login information so it is secure from eavesdroppers.
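If SSL is enabled, a common companion step is to force the protected directory onto HTTPS so the Basic Auth credentials never travel in the clear. A minimal sketch using mod_rewrite in that directory's .htaccess (assuming mod_rewrite is enabled and a certificate is already configured; nothing here is specific to the asker's host):

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]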
_cogsci.15339
What is the prevalence in the general population?
What is the prevalence of lifelong, early onset, chronic depression?
depression
null
_scicomp.19050
Let $n_1,n_2 \in \mathbb{N}$, $n=n_1 n_2$ and $b\in \mathbb{R}^n$. I have an SPD matrix $A=(a_{i,j})\in \mathbb{R}^{n \times n}$ with $a_{i,j}=0$ if $|i-j| \notin \{0,1,n_1\}$. Can we solve the system $Ax=b$ in time $\mathcal{O}(n)$? I work with MATLAB; the backslash operator does not seem to scale linearly in $n$. What about if $A=(a_{i,j})\in \left(\mathbb{R}^{d \times d}\right)^{n \times n}$ where $d\in \mathbb{N}$ is a fixed integer?

The problem arises from an image analysis problem with an image of size $n_1 \times n_2$ resp. $n_1 \times n_2 \times d$. The Hessian of a certain functional has nonzero entries only for neighboring pixels; that's why my matrix has this special structure.
Sparse linear system of certain type
linear algebra;matlab;sparse
You have the equivalent of the 5-point stencil in finite differences. The general answer is that you cannot solve this in $O(n)$ without making use of other properties of the matrix and/or its entries. Using the sparsity structure, by itself, is not enough. For example, if the matrix arises from the Laplace or another symmetric and elliptic equation, then you can use multigrid solvers to obtain $O(n)$ complexity. On the other hand, you get the same sparsity pattern if you discretize the high-frequency Helmholtz equation (with the bad sign) using the five-point stencil, and for that equation multigrid does not help.
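For the elliptic case, a rough sketch of what an $O(n)$-ish multigrid solve can look like, using the pyamg library on a model 5-point Poisson matrix rather than the asker's actual Hessian (MATLAB has no built-in AMG, so this is shown in Python with SciPy/pyamg purely as an illustrative assumption):

    import numpy as np
    import pyamg

    n1, n2 = 512, 512
    A = pyamg.gallery.poisson((n1, n2), format='csr')  # SPD matrix with the same 5-point sparsity
    b = np.random.rand(A.shape[0])

    ml = pyamg.smoothed_aggregation_solver(A)  # build the multigrid hierarchy once
    x = ml.solve(b, tol=1e-10)                 # each solve costs roughly O(n) work
    print(np.linalg.norm(b - A @ x))           # residual check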
_unix.72873
My PC has two processors, and I know that each one runs at 1.86 GHz. I have just read about clock cycles and I only have a (very) rough understanding, so I apologize beforehand if my question is stupid.

As I said, I want to measure the clock pulse of my PC processor(s) manually, and my idea is just to compute the quotient between the number of assembler lines a program has and the time my computer spends executing it, so that I get the number of assembly instructions per unit of time processed by the CPU (this is what I understood a 'clock cycle' is). I thought to do it in the following way:

I write a C program and convert it into assembly code. I do:

    $ gcc -S my_program.c

which tells the gcc compiler to do the whole compiling process except the last step: transforming my_program.c into a binary object. Thus, I have a file named my_program.s that contains the source of my C program translated into assembler code.

I count the lines my program has (let's call this number N). I did:

    $ nl -l my_program.s | tail -n 1

and I obtained the following:

    1000015 .section .note.GNU-stack,,@progbits

That is to say, the program has a million lines of code.

I do `$ gcc my_program.c` so that I can execute it. Then I do `$ time ./a.out` (a.out is the name of the binary object of my_program.c) to obtain the time (let's call it T) spent running the program, and I obtain:

    real 0m0.059s
    user 0m0.000s
    sys  0m0.004s

It is supposed that the time T I'm searching for is the first one in the list, the real one, because the other ones refer to other resources that are running in my system at the very moment I execute ./a.out.

So I have N=1000015 lines and T=0.059 seconds. If I perform the N/T division I obtain that the frequency is near 17 MHz, which is obviously not correct. Then I thought that maybe the fact that there are other programs running on my computer and consuming hardware resources (without going any further, the operating system itself) makes the processor split its processing power, which makes the clock pulse appear slower, but I'm not sure. But I thought that if this is right, I should also find the percentage of CPU resources (or memory) my program consumes, because then I could really aspire to obtain a well-approximated result for my real CPU speed. And this leads me to the issue of how to find out that 'resource consumption value' of my program. I thought about the `$ top` command, but it's immediately discarded due to the short time my program takes to execute (0.059 seconds); it's not possible to distinguish by simple sight any peak in memory usage during this little time.

So what do you think about this? What do you recommend me to do? I'm just learning and as you can see I'm very far from being an expert on these issues. And I know there are programs that do this work I'm trying to do, but I prefer to do it using raw bash because I'm interested in doing it in the most universal way possible (it seems more reliable).
How to measure the clock pulse of my computer manually?
kernel;memory;cpu;time;cpu frequency
That won't work. The number of clock cycles each instruction takes to execute (they take quite a few, not just one) depends heavily on the exact mix of instructions that surround it, and varies by exact CPU model. You also have interrupts coming in, and the kernel and other tasks having their instructions executed mixed in with yours. On top of that, the frequency changes dynamically in response to load and temperature.

Modern CPUs have model-specific registers that count the exact number of clock cycles. You can read this register and, using a high-resolution timer, read it again a fixed period later, then compare the two to find out what the (average) frequency was over that period.
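As an illustration of the register-based approach, here is a rough C sketch (x86-only, GCC/Clang) that samples the time-stamp counter across a one-second sleep and reports the average rate. On recent CPUs the TSC ticks at a constant nominal rate, so this approximates the base frequency rather than the instantaneous core clock; treat it as a sketch, not a benchmarking tool.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <x86intrin.h>   /* __rdtsc() */

    int main(void)
    {
        struct timespec t0, t1, pause = {1, 0};

        clock_gettime(CLOCK_MONOTONIC, &t0);
        uint64_t c0 = __rdtsc();

        nanosleep(&pause, NULL);              /* wait roughly one second */

        uint64_t c1 = __rdtsc();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("average TSC rate: %.1f MHz\n", (c1 - c0) / secs / 1e6);
        return 0;
    }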
_unix.367624
I have samba set up on an Ubuntu server, and I am trying to share folders inside sharedfolders at the root directory. However, I get the error 0x80070043 in Windows.

    root@ubuntu:~# ls -lh sharedfolders/
    total 12K
    drwxrwxrwx 2 root root 4.0K May 26 16:10 f1
    drwxrwxrwx 2 root root 4.0K May 26 16:10 f2
    drwxrwxrwx 2 root root 4.0K May 26 16:11 f3

and here is my samba config file:

    [global]
    workgroup = KIWI
    server string = %h server (Samba, Ubuntu)
    wins support = yes
    dns proxy = no
    name resolve order = lmhosts host wins bcast
    log file = /var/log/samba/log.%m
    max log size = 1000
    syslog = 0
    panic action = /usr/share/samba/panic-action %d
    security = user
    server role = standalone server
    passdb backend = tdbsam
    obey pam restrictions = yes
    unix password sync = yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    pam password change = yes
    map to guest = bad user
    usershare allow guests = yes

    [printers]
       comment = All Printers
       browseable = no
       path = /var/spool/samba
       printable = yes
       guest ok = no
       read only = yes
       create mask = 0700

    [print$]
       comment = Printer Drivers
       path = /var/lib/samba/printers
       browseable = yes
       read only = yes
       guest ok = no

    [Shared]
       comment = Shared Files
       path = sharedfolders/f1
       browseable = yes
       read only = no

    [Home Files]
       comment = Home Files
       path = sharedfolders/f2
       browseable = yes
       read only = no

    [Work Files]
       comment = Work Files
       path = sharedfolders/f3
       browseable = yes
       read only = no

When I try to connect to one of the above folders (f1, f2, f3), I provide the user/pass defined in samba, but I get this strange error. Could somebody help me solve this problem?

And here is my samba status:

    root@ubuntu:~# smbstatus
    Samba version 4.3.11-Ubuntu
    PID Username Group Machine Protocol Version
    ------------------------------------------------------------------------------

    Service pid machine Connected at
    -------------------------------------------------------

    No locked files

Update: here are the permissions for the root folder where my three shared folders are:

    root@ubuntu:~# ls -lh
    total 4.0K
    drwxrwxrwx 5 root root 4.0K May 26 16:11 sharedfolders

Image annex: screenshots from Windows and Ubuntu (images not included in this dump).
samba file sharing gives 0x80070043 in windows
linux;ubuntu;networking;samba;file server
null
_webmaster.7168
Possible Duplicate: Should I use a file extension or not?

I know this question has been asked before on Stack Overflow, but what I have not been able to find in the posts I've read are concrete references as to WHY one is better than the other (something I can take to my boss).

So I'm working on an MVC 3 application that is basically a rewrite of the existing production application (web forms) using MVC. The current site uses a URL rewriter to rewrite friendly URLs with HTML extensions to their ASPX counterpart, i.e. http://www.site.com/products/18554-widget.html gets rewritten to http://www.site.com/products.aspx?id=18554

We're moving away from this with the MVC site, but the powers that be still want the HTML extension on the URLs. As a developer, that just feels wrong on an MVC site. I've written a quick and dirty HttpModule that will perform a 301 redirect from the .html URL to the same URL without the .html extension, and it works fine, but I need to convince management that removing the .html extension is not going to hurt SEO. I'd prefer to have this sort of friendly URL: http://www.site.com/products/18554-widget

Can anyone provide information to back up my position, or am I actually trying to do something that WOULD hurt SEO, in which case can you provide references on that?
.html extension or no for SEO purposes
seo
I'm no SEO expert, so don't take my word for it, but: SEO is about coupling search terms to your pages so that your pages will show up as high as possible in the search results. By repeating a term in the URL, the title, an h1 tag, the body text and possibly in meta tags, your page is supposed to score higher than one that only has the term in one of these places. Now how would having .html in the URL change this? Unless, of course, "html" is the search term.

And what about those sites that use another extension such as .php, .aspx etc.? If pages were scored differently based on the extension, I would think it would have been quite a big deal (I mean, wouldn't that potentially favor one platform/framework over another?). And take a look at some other web sites. What does Stack Overflow do? What does Google themselves do? I don't see any .html extensions there. To me, that says it all.

Just my 5 cents, anyway.
_codereview.30273
I wondered if there was any better way of doing so. I had a hard time figuring it out (it seemed nobody else had already tried to do so).

    import re
    import string

    class BetterFormat(string.Formatter):
        """Introduce better formatting"""

        def parse(self, format_string):
            """Receive the string to be transformed and split its elements to facilitate the actual replacing"""
            # Automatically counts how many tabs are before the {} then automatically adds the new rule (defined in format_field)
            return [(before,
                     identifiant,
                     str(len(re.search('\t*$', before).group(0))) + '\t' + (param if param is not None else ''),
                     modif)
                    for before, identifiant, param, modif in super().parse(format_string)]

        def format_field(self, v, pattern):
            """Receive the string to be transformed and the pattern according to which it is supposed to be modified"""
            # Hacky way to remove a dynamic sequence of characters ([0-9]+\t) and returning it
            sharedData = {'numberOfTabs': 0}

            def extractTabs(pattern):
                sharedData['numberOfTabs'] = int(pattern.group(0)[:-1])  # Save the data that will be erased
                return ''  # This will erase it

            pattern = re.sub('[0-9]+\t', extractTabs, pattern)
            if sharedData['numberOfTabs']:  # If there are tabs to be added
                v = (sharedData['numberOfTabs'] * '\t').join(v.splitlines(True))
            return super().format_field(v, pattern)

    css = """/* Some multi-lines CSS
    All automatically indented */"""

    print(BetterFormat().format("""\
    <html>
    	<head>
    		<style type="text/css">
    			{css}
    		</style>
    	</head>
    	<body>
    		<!-- Whatever -->
    	</body>
    </html>""", css=css))

    # Output
    <html>
    	<head>
    		<style type="text/css">
    			/* Some multi-line CSS
    			All automatically indented */
    		</style>
    	</head>
    	<body>
    		<!-- Whatever -->
    	</body>
    </html>

Otherwise, previously I had to do this:

    # []
    print("[] {css}[]".format(css='\t\t\t'.join(css.splitlines(True))))

And just so you better understand, I first only implemented BetterFormat.format_field; then I was able to do that, but I was still required to count the number of tabulations myself:

    # []
    print("[] {css:3\t}[]".format(css=css))
Automatically indent a whole block of text into str.format
python;strings;python 3.x
null
_softwareengineering.142966
So, the title is a little awkward. I'll give some background, and then ask my question.

Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be that you have a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want data sent to the browser where it can be rendered with SVG (or canvas, or WebGL) so that it can be interactive.

The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to visually losslessly reduce a data set for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
How do graphics programmers deal with rendering vertices that don't change the image?
web development;algorithms;graphics;geometry
What you are looking for is 2D level-of-detail algorithms. There is much documented about this on Google if you search for those highlighted terms. This question on Stack Overflow has the information you are looking for on 2D Level of Detail rendering.
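As a concrete starting point for the simplification side, here is a hedged Python sketch using Shapely's built-in simplifier. Note that preserve_topology=True only keeps each polygon valid on its own; it does not keep shared borders between adjacent polygons consistent, which is the same limitation the question ran into with Douglas-Peucker (topology-aware tools such as TopoJSON/mapshaper address that case). The coordinates and tolerance below are invented for illustration.

    from shapely.geometry import Polygon

    # A polygon with a few nearly collinear vertices
    poly = Polygon([(0, 0), (0.1, 0.02), (0.2, 0.01), (5, 0), (5, 5), (0, 5)])

    tolerance = 0.5  # roughly one pixel in map units at the target scale
    simplified = poly.simplify(tolerance, preserve_topology=True)

    print(len(poly.exterior.coords), "->", len(simplified.exterior.coords))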
_softwareengineering.275156
If there's work that needs frontend and backend developers to work together, should they:

1. Pair program all the time? When it's backend work, the frontend developer is the navigator and the backend developer is the driver, and vice versa. This would be the most time-consuming option and I'm not really sure it has enough benefit for all the time spent.

2. Pair program only on some parts, like the API part they use to exchange data with each other? The frontend developer may need to know some of the database structure to be able to pair with the backend developer too.

3. Just write the API documentation together and, after that, do their own work based on the API documentation? (Fastest way, but I don't know if there will be any drawbacks.)

P.S. They work at the same place and can ask each other anytime they do not understand something.
Does pair programming work well when a backend developer and a frontend developer work together?
pair programming
null
_cs.14502
I am looking for a library which provides a serializer ADT for synchronization (http://courses.cs.vt.edu/~cs5204/fall99/Summaries/Concurrency/serializers2.html). Googling leads me nowhere; unlike monitors, semaphores and other constructs, serializers are hardly discussed in depth on the web. Any other pointers are welcome. :)
Good serializer ADT implementations for Synchronization
operating systems;synchronization
null
_codereview.104991
Here is an implementation of a red black tree that I made. I am pretty sure it's working fine though I may have overlooked something. How can I improve it in any way possible?

Interface:

    #include <functional>
    #include <utility>
    #include <iostream>
    #include <stack>

    template <typename K, typename D, typename Func = std::less<K>>
    class RBTree {
    public:
        RBTree();
        RBTree(Func);
        ~RBTree();
        bool add(const K&, const D&);
        bool add(K&&, const D&);
        bool add(const K&, D&&);
        bool add(K&&, D&&);
        bool remove(const K& key, D&);
        std::pair<K,D> get_min() const;
        std::pair<K,D> get_max() const;
        bool get(const K&, D&) const;
        template<typename Accion>
        void in_order_walk(Accion) const;
        unsigned cardinality() const;
        bool empty() const;

        //Test
        bool isRedBlackTree() const {
            bool isRedBlack = true;
            blackH(root, isRedBlack);
            return assertPropery3(root) && isRedBlack && !root->isRed;
        }

    private:
        Func cmp;
        unsigned size;
        struct Node {
            D data;
            K key;
            bool isRed;
            Node *left;
            Node *rigth;
            Node *p;
        };
        Node *root;
        Node *nil;

        Node* find_node(const K&) const;
        bool is_left_child(Node* x) const { return x == x->p->left; }
        bool is_rigth_child(Node* x) const { return x == x->p->rigth; }
        void insert_move_up(Node *&x, Node* uncle);
        template <typename KK, typename DD>
        Node* BST_add(KK&& key, DD&& data);
        template <typename KK, typename DD>
        Node* BST_add_recursive(KK&& key, DD&& data, Node* p, Node*& node);
        void fixed_add(Node* x);
        void fixed_remove(Node* x);
        template <typename KK, typename DD>
        Node* create_node(KK&& key, DD&& data);
        void delete_node(Node*);
        void delete_node_v2(Node*);
        Node* minimun(Node* x) const;
        Node* maximun(Node* x) const;
        void transplant(Node * x, Node * y);
        template <typename ChildA, typename ChildB >
        void generic_fixed_delete(Node*&, ChildA, ChildB);
        template <typename ChildA, typename ChildB>
        void generic_fixed_add(Node*&, ChildA, ChildB);
        template <typename ChildA, typename ChildB >
        Node* generic_rotate(Node*, ChildA, ChildB);
        template<typename Accion>
        void recur_in_order_walk(Node*, Accion) const;
        template<typename Accion>
        void iter_in_order_walk(Node*, Accion) const;
        template<typename Accion>
        void stack_in_order_walk(Node*, Accion) const;
        template <typename KK, typename DD>
        bool generic_add(KK&&, DD&&);
        void destroy_tree(Node*);
        static Node*& left(Node* x) { return x->left; }
        static Node*& rigth(Node* x) { return x->rigth; }

        //Test
        int blackH(Node*, bool& isRedBlack) const;
        bool assertPropery3(Node*) const;
    };

Implementation:

    template<typename K, typename D, typename Func>
    bool RBTree<K,D,Func>::empty() const
    {
        return root == nil;
    }

    template<typename K, typename D, typename Func>
    RBTree<K,D,Func>::~RBTree()
    {
        destroy_tree(root);
        delete nil;
    }

    template<typename K, typename D, typename Func>
    void RBTree<K,D,Func>::destroy_tree(Node* node)
    {
        if(node == nil)
            return;
        destroy_tree(node->left);
        destroy_tree(node->rigth);
        delete node;
    }

    template<typename K, typename D, typename Func>
    RBTree<K,D,Func>::RBTree(Func pcmp):
        cmp(pcmp),
        nil(new Node{D(), K(), false, nullptr, nullptr, nullptr})
    {
        root = nil;
    }

    template<typename K, typename D, typename Func>
    RBTree<K,D,Func>::RBTree(): RBTree(Func())
    {}

    template<typename K, typename D, typename Func>
    unsigned RBTree<K,D,Func>::cardinality() const
    {
        return size;
    }

    template<typename K, typename D, typename Func>
    template <typename ChildA, typename ChildB >
    typename RBTree<K,D,Func>::Node* RBTree<K,D,Func>::generic_rotate(Node* x, ChildA childA, ChildB childB)
    {
        Node *y = childB(x);
        childB(x) = childA(y);
        if(childA(y) != nil)
            childA(y)->p = x;
        if(x->p == nil)
            root = y;
        else if(x == childA(x->p))
            childA(x->p) = y;
        else
            childB(x->p) = y;
        y->p = x->p;
        childA(y) = x;
        x->p = y;
        return y;
    }

    template <typename K, typename D, typename Func>
    template<typename KK, typename DD>
    bool RBTree<K,D, Func>::generic_add(KK&& key, DD&& data)
    {
        Node *newN = BST_add_recursive(std::forward<KK>(key), std::forward<DD>(data), nil, root);
        bool isAdded = newN != nullptr;
        if(isAdded)
            fixed_add(newN);
        return isAdded;
    }

    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::add(const K& key, const D& data)
    {
        return generic_add(const_cast<K&>(key), const_cast<K&>(data));
    }

    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::add(K&& key, const D& data)
    {
        return generic_add(std::move(key), const_cast<K&>(data));
    }

    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::add(const K& key, D&& data)
    {
        return generic_add(const_cast<K&>(key), std::move(data));
    }

    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::add(K&& key, D&& data)
    {
        return generic_add(std::move(key), std::move(data));
    }

    template <typename K, typename D, typename Func>
    std::pair<K,D> RBTree<K,D, Func>::get_min() const
    {
        if(empty())
            throw std::underflow_error("underflow");
        auto min = minimun(root);
        return std::pair<K,D>(min->key, min->data);
    }

    template <typename K, typename D, typename Func>
    std::pair<K,D> RBTree<K,D, Func>::get_max() const
    {
        if(empty())
            throw std::underflow_error("underflow");
        auto min = maximun(root);
        return std::pair<K,D>(min->key, min->data);
    }

    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::get(const K& key, D& result) const
    {
        Node* resultN = find_node(key);
        bool found = resultN != nullptr;
        if(found)
            result = resultN->data;
        return found;
    }

    template <typename K, typename D, typename Func>
    template <typename KK, typename DD>
    typename RBTree<K,D, Func>::Node* RBTree<K,D, Func>::BST_add_recursive(KK&& key, DD&& data, Node* p, Node*& node)
    {
        if(node == nil)
        {
            node = create_node(std::forward<KK>(key), std::forward<DD>(data));
            node->p = p;
            ++size;
            return node;
        }
        if(cmp(key, node->key))
            return BST_add_recursive(key, data, node, node->left);
        else if(cmp(node->key, key))
            return BST_add_recursive(key, data, node, node->rigth);
        return nullptr;
    }

    template <typename K, typename D, typename Func>
    template <typename KK, typename DD>
    typename RBTree<K,D, Func>::Node* RBTree<K,D, Func>::BST_add(KK&& key, DD&& data)
    {
        Node *node = root, *p = nil;
        while(node != nil)
        {
            p = node;
            if(cmp(node->key, key))
                node = node->rigth;
            else if(cmp(key, node->key))
                node = node->left;
            else
                return nullptr;
        }
        auto newN = create_node(std::forward<KK>(key), std::forward<DD>(data));
        if(root == nil)
            root = newN;
        else if(cmp(p->key, key))
            p->rigth = newN;
        else
            p->left = newN;
        newN->p = p;
        ++size;
        return newN;
    }

    template <typename K, typename D, typename Func>
    void RBTree<K,D, Func>::fixed_add(Node* x)
    {
        while(x->p->isRed)
        {
            if(is_left_child(x->p))
                generic_fixed_add(x, left, rigth);
            else
                generic_fixed_add(x, rigth, left);
        }
        root->isRed = false;
    }

    template <typename K, typename D, typename Func>
    template <typename KK, typename DD>
    typename RBTree<K,D, Func>::Node* RBTree<K,D, Func>::create_node(KK&& key, DD&& data)
    {
        return new Node{ std::forward<KK>(data), std::forward<DD>(key), true, nil, nil, nil};
    }

    template <typename K, typename D, typename Func>
    void RBTree<K,D, Func>::insert_move_up(Node *&x, Node* uncle)
    {
        x->p->p->isRed = true;
        uncle->isRed = false;
        x->p->isRed = false;
        x = x->p->p;
    }

    template <typename K, typename D, typename Func>
    typename RBTree<K,D, Func>::Node* RBTree<K,D, Func>::minimun(Node* x) const
    {
        while(x->left != nil)
            x = x->left;
        return x;
    }

    template <typename K, typename D, typename Func>
    typename RBTree<K,D, Func>::Node* RBTree<K,D, Func>::maximun(Node* x) const
    {
        while(x->rigth != nil)
            x = x->rigth;
        return x;
    }

    template <typename K, typename D, typename Func>
    template<typename Accion>
    void RBTree<K,D, Func>::in_order_walk(Accion accion) const
    {
        iter_in_order_walk(root, accion);
    }

    template <typename K, typename D, typename Func>
    template<typename Accion>
    void RBTree<K,D, Func>::recur_in_order_walk(Node* node, Accion accion) const
    {
        if(node == nil)
            return;
        recur_in_order_walk(node->left, accion);
        accion(node->key, node->data);
        recur_in_order_walk(node->rigth, accion);
    }

    template <typename K, typename D, typename Func>
    template<typename Accion>
    void RBTree<K,D, Func>::iter_in_order_walk(Node* node, Accion accion) const
    {
        Node* min = minimun(node);
        while(min != node->p)
        {
            accion(min->key, min->data);
            if(min->rigth != nil)
                min = minimun(min->rigth);
            else
            {
                while(min->p && is_rigth_child(min))
                    min = min->p;
                min = min->p;
            }
        }
    }

    template <typename K, typename D, typename Func>
    template<typename Accion>
    void RBTree<K,D, Func>::stack_in_order_walk(Node* n, Accion accion) const
    {
        std::stack<Node*> s;
        s.push(nullptr);
        s.push(nullptr);
        while(!s.empty())
        {
            while(n != nil)
            {
                s.push(n->rigth);
                s.push(n);
                n = n->left;
            }
            if((s.top() != nullptr))
                accion(s.top()->key, s.top()->data);
            s.pop();
            n = s.top();
            s.pop();
        }
    }

    template <typename K, typename D, typename Func>
    void RBTree<K,D, Func>::delete_node_v2(Node* z)
    {
        auto x = z->rigth, y = z;
        if(z->left == nil)
            transplant(z, x);
        else if(z->rigth == nil)
            transplant(z, x = z->left);
        else
        {
            y = minimun(z->rigth);
            x = y->rigth;
            z->data = std::move(y->data);
            z->key = std::move(y->key);
            transplant(y, x);
        }
        if(!y->isRed)
            fixed_remove(x);
        --size;
        delete y;
    }

    template <typename K, typename D, typename Func>
    void RBTree<K,D, Func>::delete_node(Node* z)
    {
        auto x = z->rigth, y = z;
        bool originalColor = z->isRed;
        if(z->left == nil)
            transplant(z, x);
        else if(z->rigth == nil)
            transplant(z, x = z->left);
        else
        {
            y = minimun(z->rigth);
            originalColor = y->isRed;
            x = y->rigth;
            if(y->p == z)
                x->p = y;
            else
            {
                transplant(y, y->rigth);
                y->rigth = z->rigth;
                y->rigth->p = y;
            }
            transplant(z, y);
            y->left = z->left;
            y->left->p = y;
            y->isRed = z->isRed;
        }
        if(!originalColor)
            fixed_remove(x);
        --size;
        delete z;
    }

    template <typename K, typename D, typename Func>
    void RBTree<K,D, Func>::transplant(Node * x, Node * y)
    {
        if(x->p == nil)
            root = y;
        else if(is_left_child(x))
            x->p->left = y;
        else
            x->p->rigth = y;
        y->p = x->p;
    }

    template <typename K, typename D, typename Func>
    void RBTree<K,D, Func>::fixed_remove(Node* x)
    {
        while(x != root && !x->isRed)
        {
            if(is_left_child(x))
                generic_fixed_delete(x, left, rigth);
            else
                generic_fixed_delete(x, rigth, left);
        }
        x->isRed = false;
    }

    template <typename K, typename D, typename Func>
    template <typename ChildA, typename ChildB >
    void RBTree<K,D, Func>::generic_fixed_delete(Node*& x, ChildA childA, ChildB childB)
    {
        Node *w = childB(x->p);
        if(w->isRed)
        {
            std::swap(w->isRed, x->p->isRed);
            generic_rotate(x->p, childA, childB);
            w = childB(x->p);
        }
        if(!w->left->isRed && !w->rigth->isRed)
        {
            w->isRed = true;
            x = x->p;
        }
        else
        {
            if (!childB(w)->isRed)
            {
                std::swap(w->isRed, childA(w)->isRed);
                w = generic_rotate(w, childB, childA);
            }
            w->isRed = x->p->isRed;
            x->p->isRed = false;
            childB(w)->isRed = false;
            generic_rotate(x->p, childA, childB);
            x = root;
        }
    }

    template <typename K, typename D, typename Func>
    typename RBTree<K,D, Func>::Node* RBTree<K,D, Func>::find_node(const K& key) const
    {
        Node *node = root;
        while(node != nil)
        {
            if(cmp(key, node->key))
                node = node->left;
            else if (cmp(node->key, key))
                node = node->rigth;
            else
                return node;
        }
        return node;
    }

    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::remove(const K& key, D& data)
    {
        auto node = find_node(key);
        bool exist = node != nil;
        if(exist)
        {
            data = node->data;
            delete_node_v2(node);
        }
        return exist;
    }

    template <typename K, typename D, typename Func>
    template <typename ChildA, typename ChildB >
    void RBTree<K,D, Func>::generic_fixed_add(Node*& x, ChildA childA, ChildB childB)
    {
        Node* uncle = childB(x->p->p);
        if(uncle->isRed)
            insert_move_up(x, uncle);
        else
        {
            if(x == childB(x->p))
                generic_rotate(x = x->p, childA, childB);
            x->p->p->isRed = true;
            generic_rotate(x->p->p, childB, childA)->isRed = false;
        }
    }

    //test
    template <typename K, typename D, typename Func>
    bool RBTree<K,D, Func>::assertPropery3(Node *node) const
    {
        if(node == nil)
            return true;
        return !node->isRed || (node->left->isRed && node->rigth->isRed)
            && assertPropery3(node->left) && assertPropery3(node->rigth);
    }

    //test
    template<typename K, typename D, typename Func>
    int RBTree<K,D, Func>::blackH(Node* node, bool &isRedBlack) const
    {
        if(node == nil)
            return 1;
        auto left = blackH(node->left, isRedBlack);
        auto rigth = blackH(node->rigth, isRedBlack);
        isRedBlack = isRedBlack && left == rigth;
        return (node->isRed ? 0 : 1) + std::max(left, rigth);
    }
Red-Black Tree (2-3-4 node tree) map implementation
c++;performance;c++11;tree
null
_softwareengineering.93681
I wondered if anyone could advise on the best way of storing a users acceptance of the Terms and Conditions in the database. I am in the UK if this changes anything.I have had the T&Cs drawn up by a UK Lawyer so don't need any advice on that part!At the moment I am thinking at the time of signup having a checkbox saying I agree to the [linked] terms and conditions and making sure this is checked to sign them up. In the database I will have a boolean saying True and also a Timestamp along with the email address they used to signup.Is this enough, if a user ever decided they wanted to challenge their acceptance is this recognised as proof? I have been able to find very little, if no, information about this on the web.
Storing T&C acceptance in database - best practice
legal;terms of service
null
_unix.361373
I am trying to use the xsel clipboard to pipe a search term to grep to search in a folder full of txt files. Can anybody suggest a method to do it?
Piping search term from clipboard (not filename) to grep to search a folder
grep;clipboard;xsel
With grep implementations that support the -r option for recursive grep:

    grep -rFe "$(xsel -b -o)" /path/to/your/folder

For other grep implementations, use find to look up the files:

    find /path/to/your/folder -type f -exec \
      grep -Fe "$(xsel -b -o)" /dev/null {} +

The /dev/null is to make sure at least 2 file names are passed to grep so grep always prints the name of the files the strings are found in.

Note that if the CLIPBOARD selection contains more than one line, each line will be searched separately. For instance, if the selection contains a<newline>b, it will report lines that contain a or b (or both).

To match on a<newline>b instead, you could use pcregrep with its multiline mode:

    pcregrep -rM "\Q$(xsel -b -o | sed 's/\\E/&\\&\\Q/g'; printf '\\E')" /path/to/folder
_webapps.103044
I have 6 columns for recording scores of clients.Column N: IntakeColumn O: 3 MonthsColumn P: 6 MonthsColumn Q: >6 MonthsColumn R: Discharge/DropColumn S: Comparison Score*There will always be an Intake score. I seek help with a formula to make the value in Column S default to the latest available score. I'm using that column along with the Intake column score to calculate the percent change in another column.For example, lets say I have a new client and only an Intake score. I then want the Intake score value in Column N to also populate in that rows Column S. In another example, lets say I have a rating for all columns including discharge/drop, then I want Column R's value to populate in Column S. Could someone help me figure out a formula to do this? Currently if a score is not available, that cell is blank.
Google Sheets - Nested If/Not Blank to Fill a Value
google spreadsheets;formulas
null
_datascience.19348
I have been tasked with comparing the capabilities of different startups offering AI-assisted data preprocessing. Due to legal reasons I cannot offer company data for the benchmarking, not even simulated data using the same structures and patterns, which means that I either create a dataset of simulated, non-company-related data myself or find one on the internet.

To clarify things, by data I mean time series. Some simple examples of what I would expect: centering/aligning the time series using e.g. cross-correlation, denoising them, finding correlations, outlier detection, etc.

Are there any (free) datasets designed to be hard to preprocess? I could not find any adequate ones. If not, what is the best way of constructing my own challenging, simulated dataset for this purpose?

The purpose of the preprocessing step is to clean the time series data before it is fed into a neural network in order to be classified and tagged.
Creating a dataset for benchmarking of timeseries preprocessing capabilities
dataset;time series;data cleaning;correlation;preprocessing
null
_unix.113626
Is there a maximum password length on unix systems? If so, what is that limit and is it distribution dependent?
Maximum password length
security;password
Depends on which particular crypt() algorithm one is using:

- modified DES: 8 ASCII characters
- MD5: Unlimited length
- Blowfish: 56 bytes
- NT Hash: Please don't use this one
- SHA256/512: Unlimited length

More information: http://en.wikipedia.org/wiki/Crypt_%28C%29
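To see the DES limit in practice, here is a small Python sketch using the standard crypt module (Unix-only, and removed in Python 3.13, so treat it as an illustration on an older interpreter). With a traditional two-character salt, everything past the eighth password character is ignored:

    import crypt

    # Traditional DES crypt: only the first 8 characters count
    print(crypt.crypt("password", "ab") == crypt.crypt("password123", "ab"))   # True

    # SHA-512 crypt: the extra characters change the hash
    salt = crypt.mksalt(crypt.METHOD_SHA512)
    print(crypt.crypt("password", salt) == crypt.crypt("password123", salt))   # False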
_webmaster.35020
I think I understand the basic case for using rel=canonical: to tell Google which is the preferred URI when the same page/content may be accessed via more than one URI. This helps you avoid duplicate content penalties. But what else does it do? Does it also affect search ranking, i.e. will the page I specify in the canonical be ranked higher than the others (if all else is equal)?

And in the case of PDF documents, I understand that you can now specify rel=canonical for them too, using HTTP headers (i.e. in .htaccess). Again, this would obviously help avoid duplicate content penalties if the PDF content is the same as the HTML page or if it can be accessed in more than one place. But does it affect ranking, or are there any other benefits to doing this?
rel=Canonical: Ranking Benefits ? & specifying for PDF?
seo;google;duplicate content;canonical url;pdf
null
_unix.53238
When installing OpenBSD 5.1, I got the question: "Do you expect to run the X Windows System?"

What change does the installer make to my system if I say yes? I know what X Windows is, but I don't know why the installer wants to know if I plan to use it. Does it enable/disable X somehow based on my answer?
What does "Do you expect to run the X Windows System?" do when installing OpenBSD?
security;x11;openbsd
Random832's answer is the correct one, but I'll give you an easier answer.

The only part of an OS with direct access to the hardware is the kernel. In traditional unix systems, the X server (XFree86/Xorg) needs direct access to the graphics hardware, i.e. a userland process needs to bypass the kernel. This is a big security problem, so OpenBSD asks you for confirmation.

If you answer yes, the installer changes the sysctl entry (a kernel configuration parameter that can be set at runtime) machdep.allowaperture=0 to machdep.allowaperture=2.

The new graphics stack of Xorg (KMS) will fix this problem, but it's necessary to port KMS to OpenBSD.
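For reference, a rough sketch of how you could inspect and persist that setting yourself on an installed system (hedged: machdep.allowaperture is normally set via /etc/sysctl.conf, and the new value only takes effect after a reboot, since it cannot be raised on a running system once the securelevel has gone up):

    # check the current value
    sysctl machdep.allowaperture

    # persist the X-friendly value; applied at the next boot
    echo 'machdep.allowaperture=2' >> /etc/sysctl.conf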
_unix.122524
I want to know the commands that a specific Debian package offers me. For example, let's say I installed a package called x.deb. This package surely contains some commands that I can use. How do I list these commands?

I know I can use the compgen bash command to generate a list of all available commands on my system, but what I need is just for a specific package.

I tried the solutions:

    dpkg -L postgresql-9.3 | egrep '(bin|games)/'
    /usr/lib/postgresql/9.3/bin/pg_upgrade
    /usr/lib/postgresql/9.3/bin/pg_ctl
    /usr/lib/postgresql/9.3/bin/pg_resetxlog
    /usr/lib/postgresql/9.3/bin/postgres
    /usr/lib/postgresql/9.3/bin/pg_xlogdump
    /usr/lib/postgresql/9.3/bin/initdb
    /usr/lib/postgresql/9.3/bin/pg_controldata
    /usr/lib/postgresql/9.3/bin/postmaster

I tried the command postgres:

    user@userPc:~$ postgres
    No command 'postgres' found, did you mean:
     Command 'postgrey' from package 'postgrey' (universe)
    postgres: command not found
List all commands of a specific Debian package
debian;ubuntu;command line
Use dpkg -L pkgname and pipe it to a grep command searching for bin/ and games/:

    $ dpkg -L bash | grep -E '(bin|games)/'
    /bin/bash
    /usr/bin/bashbug
    /usr/bin/clear_console
    /bin/rbash

If you want to check for all binaries, regardless of whether they are in your $PATH, try this bash function:

    find_binaries ()
    {
        dpkg -L "$@" | while read; do
            [ -f "$REPLY" -a -x "$REPLY" ] && echo "$REPLY"
        done
    }

Invoke like so:

    $ find_binaries postfix
    ...SNIP...
    /usr/lib/postfix/postfix-files
    /usr/lib/postfix/pipe
    /usr/lib/postfix/proxymap
    /usr/lib/postfix/local
    /usr/lib/postfix/discard
    ...SNIP...
_unix.343525
Subversion 1.6.11 using AD authentication.

The admin of a subversion repository has left the company. His AD login has been disabled. How can I move the subversion admin privileges from his account to me?
Change SVN admin user when SVN is using AD authentication
administration;subversion
null
_softwareengineering.247309
What are your preferences for referencing official documentation (like a white paper) for an algorithm from comments in the code that implements it? I'm trying to decide between two methods, as illustrated by the examples below, but am open to other suggestions.

RFC 1321, for instance, describes the MD5 hash algorithm. If I were implementing MD5 in Python using the paper as a guide, and documented my code as I went, which of the following implementations of an example function pad_string(str) would you recommend, if not something entirely different?

option 1

    def pad_string(str):
        """Pads the string to be hashed to an appropriate length. See RFC 1321 3.1."""

where the relevant section of the RFC is:

    3.1 Step 1. Append Padding Bits

    The message is "padded" (extended) so that its length (in bits) is
    congruent to 448, modulo 512. That is, the message is extended so that
    it is just 64 bits shy of being a multiple of 512 bits long. Padding
    is always performed, even if the length of the message is already
    congruent to 448, modulo 512.

or

option 2

    def pad_string(str):
        """Pads the argument string such that its length is congruent to 448, modulo 512."""

Here, option 1 simply references relevant parts of the official MD5 documentation by section number, whereas option 2 doesn't make any mention of the paper and basically reiterates its contents where necessary. The advantage of the first is that the documentation adheres somewhat to DRY (Don't Repeat Yourself, in this case with respect to the RFC), but its disadvantage is that it relies on a third-party resource: if it's an obscure technical specification hosted in one place and the link provided inside the code documentation dies, then you're out of luck. The advantage of the second approach is that your source is self-contained: you don't need any other information when reading through it, and our example MD5 script would become as much a description of the algorithm as its implementation; this, of course, requires more writing on the code author's part, can bloat documentation, and, more importantly, may end up with mis-copied information and other errors.

Is there any consensus on which is better?
Referencing official documentation in source-code documentation
documentation
It depends on the type of documentation. If it is a publicly facing function, you probably should provide all the information necessary to use the API. For example, if I want to use an SMTP library to send an email I don't want to read RFC 5321, RFC 2045, RFC 2046, RFC 2047, RFC 4288, RFC 4289, RFC 2049 and others, as most parts are only vaguely relevant to what I want to achieve - in most cases I just want to send a simple message, filling the To/From/Subject fields and the message body, maybe adding an attachment if I feel adventurous. However, you might want to reference the RFC if it is simple and the protocol is not well known, to provide additional information (or a webpage describing the protocol/format if one exists).

For internal documentation, the more important thing is why rather than how (that part is covered by the code) or what. Therefore referencing the specification is useful. It will also allow the person modifying the code to see what (s)he can change, or at least what part of the spec (s)he needs to consider.

For example, in your example it is probably clear from the code that pad_string pads the string so the result length is congruent to 448, modulo 512. However, the MD5 spec provides the reference for why it is done.

Another method is to nearly copy and paste the paper/RFC into the code if the flow is non-trivial and you cannot avoid it by making the code nicer in one way or another. For example, let's say the pseudocode is:

    Ri <- head from stack
    Merge Ri and Rj
    if Ri' is not well balanced then
        balance(Ri')
    push Ri' on stack

You can do something like:

    // Ri <- head from stack
    Node *Ri = stack_pop(stack);
    // Merge Ri and Rj
    Node *Rj = NULL;
    while (!empty(Ri) && !empty(Rj)) {
        Node *next;
        if (foo(peek(Ri), peek(Rj))) {
            next = get(Ri);
        } else {
            next = get(Rj);
        }
        add(Rj, next);
    }
    // if Ri' is not well balanced then
    //     balance(Ri')
    ...

Also, of course, in such a case you need to check the copyright law first (this is not legal advice that it is permitted, especially as copyright law varies from jurisdiction to jurisdiction).
_codereview.70003
This is my RubyWarrior solution. I think RubyWarrior is well known to Ruby coders. I am starting to learn Ruby and I would appreciate some help from you to improve the organization or fix the mistakes that I could have. I don't need another example, as the Internet is full of them, but what I need is some advice that I can use as a beginner.

    class Player
      attr_accessor :health, :direction, :n, :view, :warrior

      def initialize
        @health, @direction, @warrior = health, direction, warrior
        @direction = [:forward, :backward]
        @health, @view = [], []
      end

      def play_turn(warrior)
        @health << warrior.health
        @view = warrior.look
        @backview = warrior.look :backward
        senses = warrior.feel @direction[@n]
        if (senses.wall? && next_empty?(warrior)) || senses.wall?
          warrior.pivot!
        elsif @backview[1].to_s.include?("Captive")
          warrior.walk! :backward
        elsif @backview[0].to_s.include?("Captive")
          warrior.rescue! :backward
        elsif next_empty?(warrior) && senses.stairs? || safe_step?
          warrior.walk!
          puts "Hero :D Finally I made it!"
        elsif senses.captive?
          warrior.rescue!
          puts "Hero ;) feel free to face your destiny"
        elsif seek_and_shoot?
          warrior.shoot!
          puts "Hero >< BAAANNG!! Die already ranged scumbag"
        elsif warrior.health < 15 && safe?(warrior)
          warrior.rest!
          puts "Hero :( I need to heal my wounds"
        elsif next_empty?(warrior)
          warrior.walk!
          puts "Hero :| go go go!"
        else
          warrior.attack!
          puts "Hero :x I will kill you evil #{warrior.feel.to_s.upcase}!"
        end
        life_bar_on_screen
        puts @view
        puts @backview
      end

      def loosing_life? warrior
        @health[-1] < @health[-2] ? true : false
      end

      def next_empty? warrior
        warrior.feel.empty?
      end

      def safe? warrior
        next_empty?(warrior) && !loosing_life?(warrior)
      end

      def life_bar_on_screen
        puts "_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ LIFE #{@health.last}"
        (1..@health.last).each { |x| print }
        puts
      end

      def seek_and_shoot?
        foe = ["Wizard", "Archer", "Sludge"]
        o, oo, ooo = @view[0].to_s, @view[1].to_s, @view[2].to_s
        case
        when o == "nothing" && foe.include?(oo) then true
        #when o == "nothing" && ranged_foe.include?(oo) then true
        #when o == "nothing" && oo == friend then false
        when o == "nothing" && oo == "nothing" && foe.include?(ooo) then true
        else false
        end
      end

      def safe_step?
        friend = "Captive"
        o, oo, ooo = @view[0].to_s, @view[1].to_s, @view[2].to_s
        case
        when o == "nothing" && oo == friend then true
        when o == "nothing" && oo == "nothing" && ooo == friend then true
        else false
        end
      end
    end

Shell output:

    CONGRATULATIONS! You have climbed to the top of the tower and rescued the fair maiden Ruby.
    Level Score: 74
    Time Bonus: 6
    Clear Bonus: 16
    Level Grade: A
    Total Score: 503 + 96 = 599
    Your average grade for this tower is: A
      Level 1: S
      Level 2: A
      Level 3: S
      Level 4: A
      Level 5: A
      Level 6: A
      Level 7: A
      Level 8: S
      Level 9: A
RubyWarrior beginner epic solution
beginner;ruby;adventure game
null
_codereview.10887
I am very new to Objective-C as well as object-oriented programming, and in a book I am studying from there is an exercise in which I was supposed to create a class called Rational that has hidden data members called numerator and denominator, and methods to add, multiply, subtract and divide the Rational objects (the objects are just fractions) together. For some reason, when I run the program it becomes extremely slow when calculating. I am using ARC in Xcode, and I am wondering if it has to do with memory management issues.

Here is the .m file of the class:

    #import "Rational.h"

    @interface Rational (privateMethods)
    -(int) gcd:(int) a: (int) b;
    -(Rational*) simplifyFraction:(Rational*)fraction;
    @end

    @implementation Rational

    @synthesize numerator, denominator;

    -(Rational*) multiplyFraction:(Rational *)fraction1 :(Rational *)fraction2
    {
        fraction1.numerator = fraction1.numerator * fraction2.numerator;
        fraction1.denominator = fraction1.denominator * fraction2.denominator;
        fraction1 = [self simplifyFraction:fraction1];
        return fraction1;
    }

    -(Rational*) addFraction:(Rational *)fraction1 :(Rational *)fraction2
    {
        Rational * returnFraction = [[Rational alloc] init];
        fraction1.numerator = fraction1.numerator*fraction2.denominator;
        fraction2.numerator = fraction2.numerator *fraction1.denominator;
        fraction1.denominator = fraction1.denominator*fraction2.denominator;
        fraction2.denominator = fraction1.denominator;
        returnFraction.numerator = fraction1.numerator + fraction2.numerator;
        returnFraction.denominator = fraction1.denominator;
        returnFraction = [self simplifyFraction:returnFraction];
        return returnFraction;
    }

    -(Rational*) subtractFraction:(Rational *)fraction1 :(Rational *)fraction2
    {
        Rational * returnFraction = [[Rational alloc] init];
        fraction1.numerator = fraction1.numerator*fraction2.denominator;
        fraction2.numerator = fraction2.numerator *fraction1.denominator;
        fraction1.denominator = fraction1.denominator*fraction2.denominator;
        fraction2.denominator = fraction1.denominator;
        returnFraction.numerator = fraction1.numerator - fraction2.numerator;
        returnFraction.denominator = fraction1.denominator;
        returnFraction = [self simplifyFraction:returnFraction];
        return returnFraction;
    }

    -(Rational*) divideFraction:(Rational *)fraction1 :(Rational *)fraction2
    {
        Rational * returnFraction = [[Rational alloc] init];
        const int temp = fraction2.denominator;
        fraction2.denominator = fraction2.numerator;
        fraction2.numerator = temp;
        returnFraction.numerator = fraction1.numerator * fraction2.numerator;
        returnFraction.denominator = fraction2.denominator * fraction1.denominator;
        returnFraction = [self simplifyFraction:returnFraction];
        return returnFraction;
    }

    -(void) printObject:(Rational *)fraction
    {
        printf("%i/%i", fraction.numerator, fraction.denominator);
        printf("\n");
    }

    -(void) printRoundedFloat:(Rational *)fraction
    {
        float number = (float)fraction.numerator/fraction.denominator;
        printf("%f", number);
        printf("\n");
    }

    -(int)gcd:(int)a :(int)b
    {
        if (b==0) {
            return a;
        }
        else
            return [self gcd:b :a%b];
    }

    -(Rational*) simplifyFraction:(Rational *)fraction
    {
        if (fraction.denominator == 0) {
            NSLog(@"ERROR: YOU CAN NOT HAVE ZERO IN THE DENOMINATOR");
        }
        else {
            int i = fraction.numerator > fraction.denominator ? fraction.numerator : fraction.denominator;
            while (i>1) {
                if (fraction.numerator % i == 0 && fraction.denominator%i==0) {
                    fraction.numerator/=i;
                    fraction.denominator/=i;
                }
                --i;
            }
        }
        return fraction;
    }

    -(void) dealloc
    {
    }

    @end

Here is the Main.m file of the program:

    #import <Foundation/Foundation.h>
    #import "Rational.h"

    int main (int argc, const char * argv[])
    {
        @autoreleasepool {
            Rational * newFraction = [[Rational alloc] init];
            Rational * otherFraction = [[Rational alloc] init];

            newFraction.numerator = 1;
            newFraction.denominator = 25;
            printf("Fraction 1 is: ");
            [newFraction printObject:newFraction];

            otherFraction.numerator = 1;
            otherFraction.denominator = 5;
            printf("Fraction 2 is: ");
            [otherFraction printObject:otherFraction];

            printf("\nThe Fractions Added togeher are: ");
            id number = [newFraction addFraction:newFraction :otherFraction];
            [number printObject:number];
            printf("Rounded: ");
            [number printRoundedFloat:number];

            printf("\nThe Fractions subtracted are: ");
            number = [number subtractFraction:newFraction :otherFraction];
            [number printObject:number];
            printf("Rounded: ");
            [number printRoundedFloat:number];

            printf("\nThe Fractions multiplied are: ");
            number = [number multiplyFraction:newFraction :otherFraction];
            [number printObject:number];
            printf("Rounded: ");
            [number printRoundedFloat:number];

            printf("\nThe Fractions divided are: ");
            number = [number divideFraction:newFraction :otherFraction];
            [number printObject:number];
            printf("Rounded: ");
            [number printRoundedFloat:number];
        }
        return 0;
    }
Rational class to handle fractions
performance;beginner;object oriented;objective c;rational numbers
null
_unix.364240
Basically, I have removed python and now the OS is unusable. I would like to reinstall Gnome, but I am worried that all my documents on the boot partition will be lost. I don't care about the programs, just about the data that I stored.
Would a reinstall keep my /home with all documents on the Desktop
debian;gnome;home
It's probably easier to repair the existing install, at least if the damage was caused by apt-get remove python or similar. But if you want to reinstall:

FIRST: You really ought to take a backup. The easiest way (since you can't boot the system) is probably a Debian Live DVD/USB stick/etc. Copy all your important files to, e.g., a USB hard disk. The Live disc gives you a normal desktop environment, so you can do that with the familiar file manager interface.

Do not proceed without a backup. It's far too easy to accidentally destroy your files.

If you have /home on a separate partition and make sure not to reformat /home when reinstalling, then your files will be preserved. Whether to format or not is an option in the installer.

Note that if you're running packages that manage their own data (for example, a database like MySQL or PostgreSQL, a mail server, a web or FTP server, etc.), that data may be stored in /var or /srv. In addition, things like cron store your user crontab in /var. If everything is on one partition, then it's possible to tell the installer not to format it, but the install will fail unless you've already cleaned up (e.g., via rm -Rf) all the system files. That'd basically be everything other than /home, and the exceptions mentioned above.
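For the backup step, a minimal sketch of the copy from the Live session; the mount points and the user name below are placeholders, so adjust them to wherever the installed system's disk and your USB drive are actually mounted:

    # preserve permissions, ownership and timestamps while copying
    rsync -avh /path/to/mounted/system/home/youruser/ /path/to/usb-disk/home-backup/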
_softwareengineering.206354
For as long as I can remember programming, I was taught not to compare floating-point numbers for equality. Now, while reading Programming in Lua about the Lua number type, I found the following:

The number type represents real (double-precision floating-point) numbers. Lua has no integer type, as it does not need it. There is a widespread misconception about floating-point arithmetic errors and some people fear that even a simple increment can go weird with floating-point numbers. The fact is that, when you use a double to represent an integer, there is no rounding error at all (unless the number is greater than 100,000,000,000,000). Specifically, a Lua number can represent any long integer without rounding problems. Moreover, most modern CPUs do floating-point arithmetic as fast as (or even faster than) integer arithmetic.

Is that true for all languages? Basically, as long as we stay within the range of integers that a double can represent exactly, is integer arithmetic safe? Or, to be more in line with the question title, is there anything special that Lua does with its number type so that it works fine as both an integer and a floating-point type?
How does Lua handle both integer and float numbers?
lua;floating point
Lua claims that floating point numbers can represent integer numbers just as exactly as integer types can, and I'm inclined to agree. There's no imprecise representation of a fractional numeric part to deal with. Whether you store an integer in an integer type, or store it in the mantissa of a floating point number, the result is the same: that integer can be represented exactly, as long as you don't exceed the number of bits in the mantissa, + 1 bit in the exponent.Of course, if you try to store an actual floating-point number (e.g. 12.345) in a floating point representation, all bets are off, so your program has to be clear that the number is really a genuine integer that doesn't overflow the mantissa, in order to treat it like an actual integer (i.e. with respect to comparing equality).If you need more integer precision than that, you can always employ an arbitrary-precision library.Further ReadingWhat is the maximum value of a number in Lua?
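The claim is easy to check empirically. Python's float is the same IEEE-754 double that the Lua described here (pre-5.3) uses for its number type, so this small demo (my own illustration, not part of the original answer) shows exactly where the exact integer range ends:

```python
# Doubles have a 53-bit significand, so every integer up to 2**53 is exact.
exact_limit = 2 ** 53

assert float(exact_limit) == exact_limit            # still represented exactly
assert float(exact_limit - 1) + 1 == exact_limit    # increments behave as expected

# One past the limit, the odd integer can no longer be represented:
print(float(exact_limit + 1) == float(exact_limit))  # True -> precision lost here
```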
_vi.5612
When I click on a link in thunderbird and firefox is already running, it doesn't open a new window, just a new tab.I want the same thing for vim: no matter where I am and how I send a bunch of files to it, I want them opened in a single session.To do this, I've added the following code in my ~/.bashrc and ~/.zshrc.function nv {vim --serverlist | grep -q VIMif [ $? -eq 0 ]; then if [ $# -eq 0 ]; then vim else vim --remote $@ fielse vim --servername vim $@fi}It defines the function nv which can do 3 things :if no VIM server is running, nv launches oneif a VIM server isrunning and one or more arguments were passed to nv, it sends themto the serverif a VIM server is running but no argument was passedto nv, it launches a simple vim session (so that I can still launch a separate vim session by using the same function / alias)I've recently read that you could redirect the output of a shell command as the quickfix list to vim. For example :vim -q <(grep -Rn foo *)It works with vim but not with my function nv.I would like to use the same syntax so that the output of grep is not opened by a new vim session, but by the VIM server.When I use nv -q <(grep -Rn foo *), the VIM server doesn't receive the output of grep but a file called -q and another one : /proc/<pid>/fd/11.I know why it doesn't work, the function was not written with that case in mind.But then, I tried something simpler : vim --remote -q <(grep -Rn foo *)And the result is the same, it doesn't work, the server still receives two files : -q and /proc/<pid>/fd/11.I would like to know if it's possible to edit the code of my nv function so that it works when I use it with the -q switch to remotely populate the quickfix list of an already running vim server, and if so receive some advice on how to do it.If it's not, at least, I would like to know how to use the -q and --remote switch simultaneously.Edit : I may be wrong but I don't think -q and --remote can be used simultaneously.For the moment, I've come up with the following command :vim --remote-send :grep -Rn foo *<cr><cr>Now I need to edit the nv function to integrate it, but I don't know how to do it.Edit bis: I don't think it's worth the trouble, I'll stick with nv and the last command when needed.
How to redirect the output of a command as the quickfix list to a vim server / function?
linux;invocation;quickfix;clientserver
You cannot use --remote with -q, any arguments after --remote are treated as filenames:--remote Connect to a Vim server and make it edit the files given in the rest of the arguments. If no server is found a warning is given and the files are edited in the current Vim.That said, you cannot use the result of process substitution (<(cmd)) with a program running elsewhere. If you notice, the file name from process substitution uses /proc/self:$ echo <(date)/proc/self/fd/11The shell sets up the file descriptors of the executed command so that one of them points the substituted process. Naturally, this fd cannot be easily used by a separate process - you'll at least need to translate /proc/self to /proc/<PID>.Therefore, it would be easier if we could run the command in Vim itself. The cexpr command should help us there:quickfix.txt For Vim version 7.4. Last change: 2015 Sep 08 :cex :cexpr E777:cex[pr][!] {expr} Create a quickfix list using the result of {expr} and jump to the first error. If {expr} is a String, then each new-line terminated line in the String is processed using the global value of 'errorformat' and the result is added to the quickfix list. If {expr} is a List, then each String item in the list is processed and added to the quickfix list. Non String items in the List are ignored. See :cc for [!]. Examples: :cexpr system('grep -n xyz *') :cexpr getline(1, '$')Huh, one of the examples uses grep much the same way you do.Using cexpr, the following function could work:function nv ( if vim --serverlist | grep -q VIM; then if [ $# -eq 0 ]; then vim elif [[ $1 == -q ]]; then shift IFS=' ' vim --remote-send :cexpr system('$*')<cr><cr> else vim --remote $@ fi else vim --servername VIM $@ fi)Use it thus:nv -q grep -Rn foo *That's to say, you pass the command as you would write it to nv -q. The function checks if the first argument is -q, and then uses vim --remote-send to call cexpr and system() on the rest of the arguments.This bit might need explaining:shiftIFS=' '... system('$*') ...Since the first argument is -q and it's no longer needed, and using $* is more convenient, I simply discard -q.Now, there are two quick ways of combining the arguments: $@ and $*. A quoted $@ is usually preferred, if you want separate words (we don't). So, we use $*, which combines the arguments using the first character of IFS. What the value of IFS is depends on the shell, so I set it to a space to get the needed effect.The best effect is if you send a quoted command:nv -q 'grep -Rn foo *'Then * won't be expanded by the shell you called it from, thus preventing problems when called by system().NoteMy earlier version of the function used {} to group the command. This version uses (), to create a subshell. This makes it easier to set variables like IFS locally without disturbing the shell.
_webapps.20158
Is there a way to browse public boards on Trello? I would be interested in how other users set up their boards and what they use it for.
Can I browse all the public Trello boards available?
trello
There is not currently a way to browse all public boards.
_codereview.10956
I'm an experienced developer myself but for a current project I am working on I have decided to hire some developers to develop a mobile app which requires some supporting web services (developed in PHP).I know myself that the code I have pasted below is worse than what I would expect a 5 year old to produce after spending 5 minutes reading Dummy's Guide to Programming PHP Badly. However, this is meant to be a professional software development company!After a quick perusal I can see that it is wide open to basic SQL injection attacks, conforms to no web services standard I know of, barely uses any sound principles of software design or architecture and quite frankly I think it must be some kind of practical joke.I was wondering if anyone else could help me out by pointing out the problems in this code and/or just generally tearing it apart and having a good laugh at it. I might then show this page to our developers in the hope that they can take on some of this feedback and hopefully end up producing some code that I would dare to put into a production environment.Note: The code I have pasted below is not doctored. It includes all the useful comments and descriptions that the developers have kindly left for us to make it easy to maintain.This is the 'Front Controller':<?phpinclude includes/dbconnect.php;include user_class.php;$userval = new userinfo();switch($_POST['action']){ //****** User profile Class *******//case login: $user=$userval->emailsign($_POST);break;case emailsignstp2: $user=$userval->emailsignstp2($_POST);break;case usersignIN: $user=$userval->usersignIN($_POST);break;/*case register: $user=$userval->user_registration($_REQUEST, $_FILES);break;case logout: $user=$userval->logout($_REQUEST);break;*/}echo json_encode($user);//print_r($user);?>And the API class itself:<?phpini_set(display_errors, 1);if($_SERVER['HTTP_HOST'] == localhost/whatittext/) {//define('DOMAIN', http://localhost/nglcc/profile_img/);} else {define('DOMAIN', http://myapi.com/);}class userinfo { function emailsign($email){ $whatittext['emaillogin'] = array(); $uemail = $email['uemail']; if (!preg_match(/^[^@]{1,64}@[^@]{1,255}$/, $uemail)) { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Inalid email; }else{ $sql=mysql_query(select count(*) AS `cct`, t1.* from `users` AS t1 where t1.`uemail`='.$uemail.' AND t1.`status`= '1') or die(mysql_error()); $row = mysql_fetch_array($sql); if($row['cct'] > 0) { $whatittex[emaillogin][result]=true; $whatittex[emaillogin][uname]=$row['uname']; $whatittex[emaillogin][uemail]=$row['uemail']; $whatittex[emaillogin][create_dt]=$row['create_dt']; $whatittex[emaillogin][ucountry]=$row['ucountry']; } else { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Email address dose not exist.; } } return $whatittex[emaillogin];}function emailsignstp2($emailpass){ $whatittext['emaillogin'] = array(); $uemail = $emailpass['uemail']; $upass = $emailpass['upass']; if (!preg_match(/^[^@]{1,64}@[^@]{1,255}$/, $uemail)) { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Inalid Email; }else{ $sql=mysql_query(select count(*) AS `cct`, t1.* from `users` AS t1 where t1.`uemail`='.$uemail.' AND t1.`upassword`='.$upass.' 
AND t1.`status`= '1') or die(mysql_error()); $row = mysql_fetch_array($sql); if($row['cct'] > 0) { $whatittex[emaillogin][result]=true; $whatittex[emaillogin][uname]=$row['uname']; $whatittex[emaillogin][uemail]=$row['uemail']; $whatittex[emaillogin][create_dt]=$row['create_dt']; $whatittex[emaillogin][ucountry]=$row['ucountry']; } else { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Invalid email or password.; } } return $whatittex[emaillogin];}function usersignIN($signin){ $whatittext['emaillogin'] = array(); $uemail = $signin['uemail']; $upass = $signin['upass']; $uname = $signin['uname']; $ucountry = $signin['ucountry']; if (!preg_match(/^[^@]{1,64}@[^@]{1,255}$/, $uemail)) { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Inalid email; }else{ $sql=mysql_query(select count(*) AS `cct` from `users` where `uemail`='.$uemail.') or die(mysql_error()); $row = mysql_fetch_array($sql); if($row['cct'] > 0) { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Email address already exist.; } else { if($uemail=='' || $upass=='' || $uname=='' || $ucountry=='') { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=Please fill all of the following option properly.; } else { $sql2=mysql_query(select count(*) AS `cct2` from `users` where `uname`='.$uname.') or die(mysql_error()); $row2 = mysql_fetch_array($sql2); if($row2['cct2'] > 0) { $whatittex[emaillogin][result]=false; $whatittex[emaillogin][error]=User name already exist.; } else { $sql=mysql_query(insert into `users` set `uemail`='.$uemail.', `uname`='.$uname.', `upassword`='.$upass.', `ucountry`='.$ucountry.'); $whatittex[emaillogin][result]=true; $whatittex[emaillogin][uname]=$uname; $whatittex[emaillogin][uemail]=$uemail; $whatittex[emaillogin][ucountry]=$ucountry; } } } } return $whatittex[emaillogin];}/*function user_login($arr) //LOGIN//{ $login[whatittext] = array(); $email=$arr['email']; $password=$arr['pass']; if (!eregi(^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,3})$, $email)) { $login[nglcc][result]=false; $login[nglcc][error]=Inalid Email; } else if(trim($password)!=) { $sql=mysql_query(select * from `user` where `email`='.$email.' AND password='.$password.' AND `status`= '1'); $rows=@mysql_num_rows($sql); if($rows == 1) { $row = mysql_fetch_assoc($sql); $r[0] = $row; mysql_query(update `user` set `login_status`='1' where uid=.$row['uid']); $login[nglcc][result]=true; $login[nglcc][login_info] = $r; } else { $login[nglcc][result]=false; $login[nglcc][error] = Invalid email/password; } } else { $login[nglcc][result]=false; $login[nglcc][error]=Email/password should not blank; } return $login;}*/}?>
User login and signup system
php5;authentication
In order of badness:

1. SQL injection (as you pointed out).
2. Plaintext passwords, hold the salt. Obviously no investigation was done on how to deal with passwords.
3. mysql_* was softly deprecated, and is now deprecated.
4. Horrible double line-spacing (choose every second line and start again; I'd choose the blanks).
5. Item-by-item setting of an array. Build the whole thing in a single statement.
6. Lowercase is not a valid choice for naming methods and variables (e.g. $whatitext, ->emailsignstp2); use Pascal or camel case.
7. Horrible variable name $whatitex (probably misspelt), also $whatitext.
8. The User class is completely pointless; it is a group of functions. Fake OO sucks. Seriously, just use a namespace and normal functions.
9. Validation of the email is woeful. See: filter_var
10. No useful comments.
11. Pointless abbreviation in the method name emailsignstp2.
12. Randomly commented-out code shouldn't exist when using a revision control system.

I'm sure I have missed a few things. This was unprofessional work. They will need to learn very quickly. I wouldn't want them doing any serious work (especially if they were trying to write it with classes in an OO style).
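For the two worst items the cure is the same in any language: bind parameters instead of concatenating SQL, and store salted hashes instead of passwords. Here is a rough sketch of both ideas in Python, purely as an illustration (sqlite3 and hashlib stand in for the PHP equivalents, which would be PDO prepared statements and password_hash(); the table and column names are made up):

```python
import hashlib
import os
import sqlite3

conn = sqlite3.connect("users.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (uemail TEXT UNIQUE, salt TEXT, pwhash TEXT)")

def register(email: str, password: str) -> None:
    # Salted, slow hash: never store the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Placeholders (?) keep user input out of the SQL text entirely.
    conn.execute(
        "INSERT INTO users (uemail, salt, pwhash) VALUES (?, ?, ?)",
        (email, salt.hex(), digest.hex()),
    )
    conn.commit()

def check_login(email: str, password: str) -> bool:
    row = conn.execute(
        "SELECT salt, pwhash FROM users WHERE uemail = ?", (email,)
    ).fetchone()
    if row is None:
        return False
    salt, stored = bytes.fromhex(row[0]), row[1]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return digest.hex() == stored
```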
_webapps.1577
I installed the Disqus plugin for my WordPress blog. How can I import my old comments into the plugin?
How do I import my WordPress comments into Disqus?
wordpress;disqus
null
_cstheory.38255
I'd like to do automated inference, say solving word problems or reducing to normal form, in an equational theory of the typed lambda calculus (with product and unit types). Equivalently, in category-theoretic language: I'd like to do inference in a cartesian closed category presented by a finite set of generators and relations. I would be interested in any references on this topic.I understand that inference in this setting is generally undecidable. However, I wonder whether there is any literature that either gives useful conditions under which exact inference is possible or proposes heuristic algorithms for approximate inference.
Inference in typed lambda calculus theories
reference request;lambda calculus;typed lambda calculus
At least the problem of whether 2 terms are equal modulo the theory of Cartesian Closed Categories (or $\beta\eta$ conversion) is decidable, because (in part) of the normalization property.Another, more categorical way to see this is by extracting a conversion algorithm through normalization by evaluation which gives decision procedures for equality in categories which can naturally be embedded in the presheaf or sheaf category over sets.See e.g. Altenkirch, Dybjer, Hoffmann & Scott, Normalization by Evaluation for Typed Lambda Calculus with Coproductsfor the version with products and coproducts (the version for just CCCs without coproducts can be found in the references).An account which gives examples of implementations can be found here.Solving more complex questions than conversion, or solving conversion questions in the presence of datatypes like the natural numbers becomes undecidable rather quickly. I'm not aware of much work on algorithms for such systems, which I would be interested in as well. One could try simply encoding the equational theory into a first-order logic prover like Vampire, but I don't know how well that would work. It would be an interesting experiment!
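To make the normalise, then compare idea concrete, here is a tiny normalization-by-evaluation sketch in Python. It is deliberately much weaker than the cited constructions (untyped terms, beta only, no products, coproducts or eta), but it shows the evaluate-into-a-semantic-domain-then-read-back shape that those decision procedures are built on:

```python
import itertools

# Terms:  ('var', x) | ('lam', x, body) | ('app', f, a)
# Values: Python closures for functions, or neutral terms ('nvar', x) / ('napp', f, a)

fresh = itertools.count()

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    _, f, a = term                         # application
    fv, av = evaluate(f, env), evaluate(a, env)
    return fv(av) if callable(fv) else ('napp', fv, av)

def reify(value):
    if callable(value):                    # read a semantic function back as a lambda
        x = f"x{next(fresh)}"
        return ('lam', x, reify(value(('nvar', x))))
    if value[0] == 'nvar':
        return ('var', value[1])
    return ('app', reify(value[1]), reify(value[2]))

def beta_normal_form(term):
    return reify(evaluate(term, {}))

# (lambda x. x) (lambda y. y) normalises to a single identity lambda:
ident = ('lam', 'y', ('var', 'y'))
print(beta_normal_form(('app', ('lam', 'x', ('var', 'x')), ident)))
```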
_scicomp.21988
I have the system of equations\begin{align}&A \frac{\partial u_1}{\partial t} = 1 - u_1 B \frac{\partial u_2}{\partial y}\\&\frac{\partial u_2}{\partial t} = \frac{\partial}{\partial y}\left[ e^{u_1} \frac{\partial u_2}{\partial y}\right] \enspace .\end{align}The initial condition is $u_1(y, 0) = 0$, and $u_2(y<1, 0) = 0$, $u_2(1, 0) = 1$. And the boundary conditions are $u_1(1, t)=1$, $u_2(0, t)=0$, and $u_1(1, t)=0$.Here, $A$ and $B$ are constants. The value of $A$ is 0.04 and the value of $B$ is 0.9. How can I solve these two PDEs simultaneously in COMSOL? If it is not possible in COMSOL please suggest me another software.
Solving coupled PDE in COMSOL
pde;nonlinear equations;comsol
null
_codereview.104704
I need to write 5 crores records with 72 columns into text file, the file size is growing as 9.7gb .I need to check each and every columm length need to format as according to the length as defined in XML file.Reading records from oracle one by one and checking the format and writing into text file.To write 5 crores records it is taking more than 24 hours. How can I increase the performance in the below code?Dim valString As String = Nothing Dim valName As String = Nothing Dim valLength As String = Nothing Dim valDataType As String = Nothing Dim validationsArray As ArrayList = GetValidations(Directory.GetCurrentDirectory() + \ReportFormat.xml) Console.WriteLine(passed xml) Dim k As Integer = 1 Try Console.WriteLine(System.DateTime.Now()) Dim selectSql As String = select * from table where record_date >= To_Date('01-01-2014','DD-MM-YYYY') and record_date <= To_Date('31-12-2014','DD-MM-YYYY') Dim dataTable As New DataTable Dim oracleAccess As New OracleConnection(System.Configuration.ConfigurationManager.AppSettings(OracleConnection)) Dim cmd As New OracleCommand() cmd.Connection = oracleAccess cmd.CommandType = CommandType.Text cmd.CommandText = selectSql oracleAccess.Open() Dim Tablecolumns As New DataTable() Using oracleAccess Using writer = New StreamWriter(Directory.GetCurrentDirectory() + \FileName.txt) Using odr As OracleDataReader = cmd.ExecuteReader() Dim sbHeaderData As New StringBuilder For i As Integer = 0 To odr.FieldCount - 1 sbHeaderData.Append(odr.GetName(i)) sbHeaderData.Append(|) Next writer.WriteLine(sbHeaderData) While odr.Read() Dim sbColumnData As New StringBuilder Dim values(odr.FieldCount - 1) As Object Dim fieldCount As Integer = odr.GetValues(values) For i As Integer = 0 To fieldCount - 1 Dim vals As Array = validationsArray(i).ToString.ToUpper.Split(|) valName = vals(0).trim valDataType = vals(1).trim valLength = vals(2).trim Select Case valDataType Case VARCHAR2 If values(i).ToString().Length = valLength Then sbColumnData.Append(values(i).ToString()) 'sbColumnData.Append(|) ElseIf values(i).ToString().Length > valLength Then sbColumnData.Append(values(i).ToString().Substring(0, valLength)) 'sbColumnData.Append(|) Else sbColumnData.Append(values(i).ToString().PadRight(valLength)) 'sbColumnData.Append(|) End If Case NUMERIC valLength = valLength.Substring(0, valLength.IndexOf(,)) If values(i).ToString().Length = valLength Then sbColumnData.Append(values(i).ToString()) 'sbColumnData.Append(|) Else sbColumnData.Append(values(i).ToString().PadLeft(valLength, 0c)) 'sbColumnData.Append(|) End If 'sbColumnData.Append((values(i).ToString())) End Select Next writer.WriteLine(sbColumnData) k = k + 1 Console.WriteLine(k) End While End Using writer.WriteLine(System.DateTime.Now()) End Using End Using Console.WriteLine(System.DateTime.Now()) 'Dim Adpt As New OracleDataAdapter(selectSql, oracleAccess) 'Adpt.Fill(dataTable) Return Tablecolumns Catch ex As Exception Console.WriteLine(System.DateTime.Now()) Console.WriteLine(Error: & ex.Message) Console.ReadLine() Return Nothing End Try
Performance improvement on vb.net code
performance;memory management;vb.net
So, what is happening here Select Case valDataType Case VARCHAR2 If values(i).ToString().Length = valLength Then sbColumnData.Append(values(i).ToString()) 'sbColumnData.Append(|) ElseIf values(i).ToString().Length > valLength Then sbColumnData.Append(values(i).ToString().Substring(0, valLength)) 'sbColumnData.Append(|) Else sbColumnData.Append(values(i).ToString().PadRight(valLength)) 'sbColumnData.Append(|) End If Case NUMERIC valLength = valLength.Substring(0, valLength.IndexOf(,)) If values(i).ToString().Length = valLength Then sbColumnData.Append(values(i).ToString()) 'sbColumnData.Append(|) Else sbColumnData.Append(values(i).ToString().PadLeft(valLength, 0c)) 'sbColumnData.Append(|) End If 'sbColumnData.Append((values(i).ToString()))End Selectif values(i).ToString().Length is < valLength ? You are calling 3 times .ToString() on the object. A much faster and better way would be to just do it once like so Dim currentValue As String = values(i).ToString()Select Case valDataType Case VARCHAR2 If vcurrentValue.Length = valLength Then sbColumnData.Append(currentValue) ElseIf currentValue.Length > valLength Then sbColumnData.Append(currentValue.Substring(0, valLength)) Else sbColumnData.Append(currentValue.PadRight(valLength)) End If Case NUMERIC valLength = valLength.Substring(0, valLength.IndexOf(,)) If currentValue.Length = valLength Then sbColumnData.Append(currentValue) Else sbColumnData.Append(currentValue.PadLeft(valLength, 0c)) End IfEnd Select I don't like this Dim vals As Array = validationsArray(i).ToString.ToUpper.Split(|)valName = vals(0).trimvalDataType = vals(1).trimvalLength = vals(2).trim for a couple of reasons. First you are creating a string array out of this ArrayList (however this is created) and then store it in an Array so casting it to an object to cast it again to a String on the next lines. But then you use the valLength String's to compare with Length which is an Integer. The valName variable is never used, so you can just remove it along with the commented code. Code which isn't used or is commented is just dead code which should be removed to increase readability.Another thing bothering me is the use of abbreviations for variables names. You won't gain any performancy increase by doing so, but you loose a lot of readability. Dim validationValues As String() = validationsArray(i).ToString.ToUpper.Split(|)Dim valueDataType As String = validationValues(1).Trim()Dim valueLength As String = validationValues(1).Trim() Dim currentValue As String = values(i).ToString()Select Case valDataType Case VARCHAR2 Dim length As Integer = Convert.ToInt32(valueLength) If vcurrentValue.Length = length Then sbColumnData.Append(currentValue) ElseIf currentValue.Length > length Then sbColumnData.Append(currentValue.Substring(0, length)) Else sbColumnData.Append(currentValue.PadRight(length)) End If Case NUMERIC Dim length As Integer = Convert.ToInt32(valueLength.Substring(0, valueLength.IndexOf(,))) If currentValue.Length = length Then sbColumnData.Append(currentValue) Else sbColumnData.Append(currentValue.PadLeft(length, 0c)) End IfEnd Select Another thing which is slowing down the performance is that you do this Dim vals As Array = validationsArray(i).ToString.ToUpper.Split(|)valName = vals(0).trimvalDataType = vals(1).trimvalLength = vals(2).trim for each datarow and each column. You should store these values once an reuse the values for each other datarow. k = k + 1Console.WriteLine(k) This doesn't serve any real purpose but will slow down your code. Get rid of it. 
You are using using statements to enclose objects which implement IDisposable, which is good, but why not be consistent? The OracleCommand also implements IDisposable, yet there is no using for it. Dim Tablecolumns As New DataTable() isn't used anywhere except as the return value of the method. You can simply change the method to a Sub and remove this variable. Doing a select * from table where... does nothing to help performance either. You only need the columns that are either VARCHAR2 or NUMERIC, so only those columns should appear in your select query. There is, for instance, no need to retrieve a date field like record_date; you don't have to select a column in order to restrict the query on its value.
_unix.209812
For some reason, copying files from my phone over the MTP fuse interface sometimes results in corrupt files, missing their last few bytes. I want to remove each file on successful transfer, but not remove them if there was a problem. The mv command doesn't have a --verify option. I could write a short script which copies, checks, and removes, but I'm wondering if there's a more elegant existing command which can handle this?As a bonus, it'd be nice to specify both checksum match and success from an external verification command, in this case jpeginfo -c. I think the short reads are random occurrences, but I haven't really tested that the bad file isn't actually cached that way (or otherwise would be read incorrectly in the same way twice). So, something like mv --verify --verifywith='jpeginfo -c' would be ideal (where jpeginfo -c is a command that tests JPEG files for correctness and which I know will return an error on these particular truncated files).
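For concreteness, the short script I have in mind would look roughly like this Python sketch (the checksum re-read, the jpeginfo -c pass and the paths are all just my assumptions about what verify should mean, not an existing tool):

```python
import hashlib
import shutil
import subprocess
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def move_verified(src: Path, dst_dir: Path) -> None:
    dst = dst_dir / src.name
    shutil.copy2(src, dst)                 # copy first, keep the original
    if sha256(src) != sha256(dst):         # re-read both sides and compare
        dst.unlink()
        sys.exit(f"checksum mismatch, keeping {src}")
    # The truncated JPEGs make jpeginfo -c exit non-zero on my files.
    if subprocess.run(["jpeginfo", "-c", str(dst)]).returncode != 0:
        dst.unlink()
        sys.exit(f"jpeginfo rejected {dst}, keeping {src}")
    src.unlink()                           # only now remove the source
```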
Move only on verify?
mv;file transfer
null
_codereview.151018
This is my first Angular2 component, which makes two api calls: the first call provide the input for the the second one.The following is my attempt at this code. @Component({ selector: 'result-panel', templateUrl: 'panel.component.html', styleUrls: ['panel.component.css'], providers:[FindIpZoneDataService,ApiService] }) export class PanelComponent { networkDetail:Array<Network>; query = ; datafetchservice:any; searching:boolean = false; rowCount:boolean = false; iperror :boolean = false; nwerror :boolean = false; userinputerror:boolean = false; constructor(private dataservice :FindIpZoneDataService){ this.datafetchservice = dataservice; } getNetworkDetail(ip:string) { this.networkDetail = []; this.dataservice.getNetworkDetail(ip) .map((response) => { let res: Array<any> = []; let extattrs: Array<any> = []; res = response.json(); res.forEach((detail) => { extattrs.push(detail.extattrs); }); return extattrs; }) .map((network:Array<any>) => { console.log(network); let result: Array<Network> = []; if (network) { network.forEach((detail) => { result.push(new Network(detail['SITE Name'].value, detail['InfoSec Security Zone'].value, detail['Network Security Zone'].value, detail['Data Classification'].value)); }); return result; } }) .subscribe(details => { this.networkDetail = details; this.rowCount = true; this.searching = false; }, err => { //Valid Network not found this.handleNwServiceError(err); }); } getIPNetwork(){ this.datafetchservice.getNetwork(this.query) .map(response => response.json()) .subscribe(result => { this.getNetworkDetail(result[0].network); }, err => { //InValid IP this.handleIpServiceError(err); }); } private handleNwServiceError(error: any) { //Observable.throw(error.json().error || 'Invalid Network - Server error'); console.log(error); this.nwerror = true; this.rowCount = false; } private handleIpServiceError(error: any) { //Observable.throw(error.json().error || 'Invalid IP - Server error'); console.log(error); this.iperror = true; this.rowCount = false; console.log('ip error'); }Please let me know how I can improve it.
Call multiple services from angular2 component
javascript;angular.js;typescript;rxjs
null
_unix.277393
I installed the tcsh package on Ubuntu 15.10 and the prompt differs depending on how I invoke tcsh.First off, on Ubuntu 15.10, tcsh and csh really are the same executable:$ cmp /bin/tcsh /bin/csh && echo 'same' || echo 'different'same$ /bin/tcshhostname:/tmp>$ /bin/tcsh -f>Whereas with /bin/csh it ends with a percent sign$ /bin/cshhostname:/tmp%$ /bin/csh -f%I haven't set a .cshrc or a .tcshrc file. Is tcsh inspecting the first element of argv to determine what to use as the final character of the prompt?The /etc/csh.cshrc file does contain logic for changing the prompt if the program is invoked as tcsh, but it is not clear why setting the prompt in this way would change the last character from > to %. This file is also ignored if the -f flag is provided.# /etc/csh.cshrc: system-wide .cshrc file for csh(1) and tcsh(1)if ($?tcsh && $?prompt) then bindkey \e[1~ beginning-of-line # Home bindkey \e[7~ beginning-of-line # Home rxvt bindkey \e[2~ overwrite-mode # Ins bindkey \e[3~ delete-char # Delete bindkey \e[4~ end-of-line # End bindkey \e[8~ end-of-line # End rxvt set autoexpand set autolist set prompt = %U%m%u:%B%~%b%# endif
Why does the prompt differ for `tcsh` depending on whether it's invoked as `tcsh` or `csh`
tcsh
I don't see it in the manpage, but the source code checks if the program is invoked as tcsh or not. If it is, the code sets the prompt as noted in the question:HIST = '!';HISTSUB = '^';PRCH = tcsh ? '>' : '%'; /* to replace %# in $prompt for normal users */PRCHROOT = '#'; /* likewise for root */word_chars = STR_WORD_CHARS;bslash_quote = 0; /* PWP: do tcsh-style backslash quoting? */The program logic is fairly easy to read: { char *t; t = strrchr(argv[0], '/');#ifdef WINNT_NATIVE { char *s = strrchr(argv[0], '\\'); if (s) t = s; }#endif /* WINNT_NATIVE */ t = t ? t + 1 : argv[0]; if (*t == '-') t++; progname = strsave((t && *t) ? t : tcshstr); /* never want a null */ tcsh = strncmp(progname, tcshstr, sizeof(tcshstr) - 1) == 0; }andstatic const char tcshstr[] = tcsh;So it would not pass the test if it were named tcsh10, for instance.
_softwareengineering.333903
Take C++ constructors for example, we know they don't return values. Why did Bjarne Stroustrup in the beginning decide not to allow constructor returning 'false' to indicate it fails, so that the run time system can destruct its constructed members just like throwing an exception? What are the concerns that make OO language designers decide not to do so? The new operator can therefore return nullptr if it see constructor returns false. For static objects (in .data/.bss area or on .stack) the compiler generated construction code can still detect and signal, abort or exit accordingly.Consider the following two coding paradigms, using dynamic allocation as an example:objp = new object; // constructor returns 'false', 'new' returns 'nullptr'.if (objp != nullptr) { do_something(objp);} else { error_handling();}compare to:try { objp = new object; // object throw exception when construction failed do_something(objp);} catch (const typedException &e) { error_handling();}If we need to encode the reason of construction failure, the latter using exception is of course more helpful as we can't use 'objp' to pass any information once construction failed. But if the reason is simple (say 'out of memory'), or when skipping error handling does no harm (do_something() might just do decorating), do we really need to involve exception handling in such simple cases? How about allowing both paradigms exist in C++?(Another example is, for small embedded C++ compilers if they don't support exceptions, they can still support construction fail handling in this way.)Well, maybe my comparison is misleading, I am not against structural exception handling, on the contrary, I LOVE exceptions especially for big systems. But when it comes to small embedded systems where both code and data space are scarce, I think twice.The exception handling frames are things similar to setjmp() and longjmp() which are quite expensive in both execution time and memory use; while the (objp==nullptr) comparison takes only a comparison and a jump. Not to mention some of the compilers don't even support exception handling. In those cases, construction fail can only be dealt with other methods. That reminds me in the old days Turbo Pascal 5.5 OOP can call Fail on construction fail, and the newly allocated object will be null. What about other languages? Does all OOP languages use exception on construction fail cases?Actually, at the time I learnt C++ there is no exception handling available at all (Turbo C++ 3.0/Borland C++ 3.1). Before that I learnt Turbo Pascal 5.5 which supported Fail on dynamic construction fail. But when I move to C++ at that time this make me a bit upset since there is no way to test construction fail without defining an Init() function that actually do the inits. Since then I wondered, why can't constructors return values? Now I think maybe returning a value from constructor will make the language inconsistent. If we return a value from a constructor and the runtime system test it, this kind of behavior is a lot different from a normal function call since our code can test it. Maybe this kind of inconsistency is not a good idea when designing a programming language? What do you think?In the early days of C++ there was no exception handling, yet Bjarne Stroustrup still didn't let constructors to return error conditions. Same did the other OO languages at that time (correct me if I am wrong). Therefore, using exception handling was not their original intention to take care of constructor fails. Then why? 
I think I found the answer, please refer to my own answer here. Thanks.
Why don't constructors return a bool to indicate success or failure without having to throw an exception?
java;object oriented;c++;objective c
I think I found the answer for my own question.So far the public voted answer from @Erik was biased toward another direction debating the value of exception handling which I never doubted about. That doesn't answer this question which is about the philosophical reasoning why designers of OOP languages don't let constructor return error conditions in the first place?In the beginning, there was no exception handlingIn the early days of C++ compilers there was no SEH support. The problem of construction fail was not taken cared of at that time, except for Turbo Pascal 5.5 which at least provide a basic Fail procedure call to notify the system that object construction failed. The TP5.5 run time system then gracefully reverse the construction sequence, destructing any newly constructed objects within that scope. There was no similar design in C++ at that time. However, there are other ways to handle construction fail, @Robert Harvey, @Daniel Earwicker and others posted feasible solutions to deal with it without using exceptions.Nowadays we can say the exception handling approach is so far the best, but it's a result of C++ language evolution over the years. Thus this current causal result cannot be used to infer the past reason: why, in the beginning, don't they allow constructors to return error codes for the construction code to check? Is there any philosophical considerations? AnswerSo far looks like the conceptual inconsistency between function return value and constructor return value, that I mentioned in the end of my question, and the consequential language changes, to be the answer. ReasoningIf we allow constructors to return error codes only for object construction code (e.g. the new operator in C++, or any construction code that calls constructor) to read, it will easily be confused with function return values which is for our code to read. Especially when objects are constructed statically not using 'new' operator. @Jerry Coffin's answer enlighten me the consequence of this kind of confusion, his answer treated constructor return value as function return value.Let's prove it by contradiction, see what will happen to the C++ language by assuming constructor do return error condition.First, let's allow constructor to return bool to indicate its success or failure:MyInteger a=5; // so does 'MyInteger b{2}' or 'MyInteger c(3)'If things go well, MyInteger constructor will return 'true' which is integer '1' to let the construction code know it works. This is not to say we are letting 'a' to be '1', 'a' is still '5'. We return 'true' just to let the construction code know that the constructor MyInteger(int v) did not fail. From assembly's point of view, the compiler generated initialization code for 'a' will call the constructor MyInteger(5), and test its return value (bool true, i.e. '1'). This value '1' has nothing to do with the actual value '5' assigned to 'a'. It's obvious in assembly, but from high level language's point of view it's confusing, especially for beginners. It's not like normal function return values. This conceptual inconsistency also means it's error-prone. Next, let's make it more complicated by allowing error codes to be returned by a constructor. Imagine a pseudo constructor for MyInteger:int MyInteger::MyInteger(int v) { if (memory allocation fail) return 1; else if (failure_reason_one) return 2; ... 
else // everything works return 0;}Consider a simple declaration:MyInteger a = 1;Once 'a' is successfully assigned to '1', this means 'a' is successfully constructed and the constructor call MyInteger(1) returns '0'. But if we're running out of memory when constructing 'a', the constructor call MyInteger(1) will return '1' instead; wouldn't programmers tend to do so:if (a == 1) error( Out of memory );else { ...}Wrong! Constructor return value is for the construction code only! In assembly we can see it calls the constructor MyInteger() and test its return value, but this is invisible to our program so we can't use that value this way. The language might therefore in turn be forced to support C-like error code, a per-thread static class member variable, like the following:if (a.errcode == 1) // errcode is a static class member variable error( Out of memory );else { // Now we really have 'a' successfully assigned to 1 assert( a == 1 );}This kind of conceptual inconsistency of return seems not a good design for a programming language. We might again in turn need to add a new keyword like failreturn or errorcode for constructor returns or other, but, this still lead to some degrees of confusion. The language itself will have to be changed a lot due to constructor return value, which is not worth it.Conclusion, using exception handling mechanism can prevent all possible confusions of this kind. Even for languages without exception handling, returning error conditions from constructor is not a good idea.
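The contrast with the exception route is easy to see even outside C++. A quick Python sketch (an illustration only; Python's __init__ is not a C++ constructor, but the failure-signalling question is the same): construction either yields a usable object or it raises, and there is no separate status value for calling code to confuse with the object's value.

```python
class Rational:
    def __init__(self, numerator: int, denominator: int) -> None:
        if denominator == 0:
            # Failure is signalled out-of-band; there is no return code to
            # mix up with the object's value, and no half-built object escapes.
            raise ValueError("denominator must be non-zero")
        self.numerator = numerator
        self.denominator = denominator

try:
    r = Rational(1, 0)
except ValueError as err:
    print("construction failed:", err)
```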
_unix.176337
I have data that I need to change before running it through the pandas library in python. It is currently in a format that stores midnight values as 2400 and that should be changed to 0000. The format also does not pad either hour nor minute, and I think I need to do so in order to convert the 2 parameters into a desired zero padded 2400hr (0000-2359) format.Now that's the easy part!The hard part is that each time it rolls over from 2359 to 0000 it should also change the date (which is in 'dayoftheyear' format, which actually makes it easier I assume, i.e.: %j +1, aside from Dec 31).So here is a sample of my data (last 2 columns are non-date values stored) at a day's rollover (it is a csv file, but I am showing it with a single space to delimit for visual clarity):1,2014,361,2340,0,01,2014,361,2341,0,01,2014,361,2342,0,01,2014,361,2343,0,01,2014,361,2344,0,01,2014,361,2345,0,01,2014,361,2346,0,01,2014,361,2347,0,01,2014,361,2348,0,01,2014,361,2349,0,01,2014,361,2350,0,01,2014,361,2351,0,01,2014,361,2352,0,01,2014,361,2353,0,01,2014,361,2354,0,01,2014,361,2355,0,01,2014,361,2356,0,01,2014,361,2357,0,01,2014,361,2358,0,01,2014,361,2359,0,01,2014,361,2400,0,024,2014,361,2400,12.341,2014,365,2359,0,91,2014,365,2400,089.343,31,2015,1,1,234,4561,2015,1,2,090,991,2015,365,2359,0,01,2015,365,2400,xx,xxx1,2016,1,1,0,01,2016,1,2,0,01,2016,1,3,0,0I assume the solution is a bunch of sed/awk nested in a for loop, but i'll leave that up to you code ninjas. Thanks in advance.Ok, here is the same question but extended to include the what if once the new year rolls around. So I assume that the $2 column will get incremented come 365 to 366, and this is obviously not desirable. How do I then extend the same incrementing/formatting to include a roll over come 366 to increment the year by 1?I am going to take a blind stab at it:#!/bin/bashfilename=${1/.dat/_prepped.dat}awk '/^1/{print $0}' $1 |cut -d , -f2,3,4,5,6 |awk 'BEGIN{FS=OFS=,}$3 == 2400 {$2 = $2 + 1; $3 = 0}$2 == 366 {$1 = $1 + 1; $2 = 1}{ $3 = sprintf(%04i, $3) }1' >$filenameI took a stab at integrating everything into a script that i feed the raw data (ex: home.dat) into in order to output the file (ex: home_prepped.dat).Results of above data running through above script:2014,361,2340,0,02014,361,2341,0,02014,361,2342,0,02014,361,2343,0,02014,361,2344,0,02014,361,2345,0,02014,361,2346,0,02014,361,2347,0,02014,361,2348,0,02014,361,2349,0,02014,361,2350,0,02014,361,2351,0,02014,361,2352,0,02014,361,2353,0,02014,361,2354,0,02014,361,2355,0,02014,361,2356,0,02014,361,2357,0,02014,361,2358,0,02014,361,2359,0,02014,362,0000,0,02014,365,2359,0,92015,1,0000,089.343,32015,1,0001,234,4562015,1,0002,090,992015,365,2359,0,02016,1,0000,xx,xxx2016,1,0001,0,02016,1,0002,0,02016,1,0003,0,0
change non-zero padded hour and minute into 24hr time
date;timestamps;csv
awk does all of this by itself. sprintf does the formatting, ordinary patterns and assignments do the rest.$3 == 2400 {$2 = $2 + 1; $3 = 0}{ $3 = sprintf(%04i, $3) }1If you put that into dates.awk and then run your sample data through:$ awk -F, -vOFS=, -f dates.awk < datathen you will get:...2014,344,2359,0,02014,345,0000,0,02014,345,0001,0,0...The first line of the script checks whether the third field is 2400 using an expression pattern and zeros and increments appropriately. The second pads the field to four digits with sprintf. The last ensures the line is printed.You can squash that all into a single line to give a script to awk on the command line, and also put the field separators into the body by prepending {FS=OFS=,}.You can deal with year rollover yourself; you should be able to pattern it on the above easily, but making the effort yourself will do you good.
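Since the question says the data is headed for pandas anyway, the same rollover logic can also be applied after loading. A rough pandas sketch of the equivalent steps (it assumes clean six-column rows, ignores the leap-year wrinkle just as the bash attempt does, and the file names are placeholders):

```python
import pandas as pd

cols = ["flag", "year", "doy", "hhmm", "v1", "v2"]
df = pd.read_csv("home.dat", header=None, names=cols)

# 2400 rolls over to 0000 on the next day
midnight = df["hhmm"] == 2400
df.loc[midnight, "hhmm"] = 0
df.loc[midnight, "doy"] += 1

# day 366 rolls over to day 1 of the next year (non-leap years only)
wrap = df["doy"] == 366
df.loc[wrap, "doy"] = 1
df.loc[wrap, "year"] += 1

# zero-pad the time to four digits
df["hhmm"] = df["hhmm"].map(lambda v: f"{int(v):04d}")

df.to_csv("home_prepped.dat", header=False, index=False)
```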
_unix.162327
I'm trying to run badblocks on a drive with a single partition. The drive contains a FreeBSD file system on it.I boot up using a Linux live USB drive. The drive is unmounted. The output of fdisk -l is: Device Boot Start End Id System/dev/sda1 * 63 976773167+ a5 FreeBSDSo I run:# badblocks -v /dev/sda1And it says:badblocks: invalid last block - /dev/sda1I can't find any useful information about this. Am I using the badblocks utility correctly here? Or is this an indication that something is wrong with the drive?
badblocks utility keeps reporting invalid last block
linux;badblocks
No, this isn't an indication something is wrong with the drive. You are getting this error because badblocks is accepting /dev/sda1 as the last-block argument instead of accepting it as the device. The syntax in your question looks correct to me. Try specifying the last-block argument after the device:badblocks -v /dev/sda1 976773167 If that doesn't work, try adding the first-block to that as well:badblocks -v /dev/sda1 976773167 63 Just to assure you that this does not indicate something is wrong with your drive, here is the output when I add an invalid last-block argument nope: sudo badblocks -v /dev/sdb1 nope badblocks: invalid last block - nope Here is an example from my bash history of the last time I used badblocks (sudo access is required to access these drives on my system):sudo badblocks -v /dev/sdb1 Output:Checking blocks 0 to 976751967 Checking for bad blocks (read-only test):If I cancel the process after awhile with Ctrl+C the output is:Interrupted at block 7470720Here is the syntax to resume the process (see man badblocks):badblocks -v device [ last-block ] [ first-block ] The last-block is the last block to be read on the device and first-block is where it should start reading. Example:sudo badblocks -v /dev/sdb1 976751967 7470720 Output:Checking blocks 7470720 to 976751967 Checking for bad blocks (read-only test):
_unix.317720
I use oh-my-zsh for all of my console goodness. Depending on what I'm working on, there are certain environment variables that I'm often overwriting either manually or through scripts to make my work easier for the next hour. For example: setting default ssh identify files, changing the AWS_PROFILE env variable, clearing or resetting other custom environment variables.I'd like to be able to switch to a different environment within a shell session based on some context, similar to how tools like RubyEnv and PyEnv work. Is there an easier way to do this in ZSH, either through a plugin or feature?
Is there a plugin or tool for multiple profiles in ZSH?
zsh;environment variables;oh my zsh
null
_datascience.10288
I have a data set that's a dictionary of tuples. Each key represents an ID number and each tuple is (yesvotes, totalvotes). Example:

{17: (6, 10), 18: (1, 1), 21: (0, 2), 26: (1, 1), 27: (3, 4), 13: (2, 2)}

I need to find the max key of the set. I want to assign weights so that, for instance, key 17 is ranked higher than key 18: even though its ratio is much smaller, it has ten times the total votes. Is there an optimal way to do this? My best guess is to simply calculate new ratios as (yesvotes/totalvotes)*(totalvotes+1), but that doesn't seem right... Is there some kind of standardized field of study concerning fair voting?
Sort by average votes/ratings
python;data cleaning;weighted data;data wrangling
Yes, this is a well-studied problem: rank aggregation. Here is a solution with code.The problem is that the quantity you are trying to estimate, the score of the item, is subject to noise. The fewer votes you have the greater the noise. Therefore you want to consider the variance of your estimates when ranking them.
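A concrete, commonly used recipe for exactly this yes/total setting is to rank by the lower bound of the Wilson score confidence interval, which penalises small sample sizes the way the question wants; I am assuming that is the sort of thing the linked solution does. A sketch using the question's own data:

```python
from math import sqrt

def wilson_lower_bound(yes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for the yes-ratio."""
    if total == 0:
        return 0.0
    p = yes / total
    centre = p + z * z / (2 * total)
    spread = z * sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - spread) / (1 + z * z / total)

votes = {17: (6, 10), 18: (1, 1), 21: (0, 2), 26: (1, 1), 27: (3, 4), 13: (2, 2)}
ranking = sorted(votes, key=lambda k: wilson_lower_bound(*votes[k]), reverse=True)
print(ranking)   # key 17 ends up ahead of key 18, as desired
```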
_codereview.172767
Learning how to code here, and I've spent a good long day coding what appears to be inefficient code on my part.Here is a summary of the assignment I tackled (this is all for self-study, but C is something I've wanted to learn for a long time):Using only switch/if selection statements, program a C calculator that handles simple expressions involving three values and two operators e.g. a op1 b op2 c. The calculator must perform most operations provided by C language. Should handle values of type int or float (as appropriate), and to handle the following operators: <, >, <=, >=, ==, !=, !, &&, ||, %After about ~2900 lines, I think I've got it (or at least most of it). It's too long for my own happiness. Here is a link to my code.Question: Is there a more efficient way to program this C calculator using only switch/if selection statements? I was told that I can use precedence to my advantage, but can someone give a concrete example (maybe even a few lines of code) with an explanation?printf(Program 5.9: \C\ Calculator for Three Numbers (e.g x op1 y op2 z)\n);printf(=================================================================\n\n);printf(Enter your first number: );scanf(%f, &a);printf(Enter your second number: );scanf(%f, &b);printf(Enter your third number: );scanf(%f, &c);printf(\nChoice of Arithmetic or Logical Operators between First and Second Number:\n);printf(==========================================================================\n\n);printf(Addition: (1)\n);printf(Subtraction: (2)\n);printf(Multiplication: (3)\n);printf(Division: (4)\n);printf(Less than: (5)\n);printf(Greater than: (6)\n);printf(Less than or equal to: (7)\n);printf(Greater than or equal to: (8)\n);printf(Equals: (9)\n);printf(Does not equal (x != y): (10)\n);printf(Logical NOT (x !y): (11)\n);printf(Logical AND (x && y): (12)\n);printf(Logical OR (x || y): (13)\n);printf(Remainder: (14)\n);printf(\nPlease enter your choice: );scanf(%d, &choice);if ((choice < 1) || (choice > 14)) { printf(Incorrect Choice!\n); return 1;}if (choice == 11){logi_not = TRUE;printf(\nHow do you want to analyze %.2f and !%.2f?: , a, b);printf(\n\n%.2f && !%.2f: (15)\n, a, b);printf(%.2f || !%.2f: (16)\n, a, b);printf(\nPlease enter your choice: );scanf(%d, &choice_3);switch (choice_3){ case 15: logic_and_log_not = TRUE; break; case 16: logic_or_log_not = TRUE; break;}}printf(\nChoice of Arithmetic or Logical Operators between Second and Third Number:\n);printf(==========================================================================\n\n);printf(Addition: (1)\n);printf(Subtraction: (2)\n);printf(Multiplication: (3)\n);printf(Division: (4)\n);printf(Less than: (5)\n);printf(Greater than: (6)\n);printf(Less than or equal to: (7)\n);printf(Greater than or equal to: (8)\n);printf(Equals: (9)\n);printf(Does not equal (y != z): (10)\n);printf(Logical NOT (y !z): (11)\n);printf(Logical AND (y && z): (12)\n);printf(Logical OR (y || z): (13)\n);printf(Remainder: (14)\n);printf(\nPlease enter your choice: );scanf(%d, &choice_2);if ((choice_2 < 1) || (choice_2 > 14)) { printf(Incorrect Choice!\n); return 1;}if (choice_2 == 11){ logi_not_a = TRUE; printf(\nHow do you want to analyze ); switch (choice) { case 1: add = TRUE; printf(%.2f + %.2f , a, b); break; case 2: subtract = TRUE; printf(%.2f - %.2f , a, b); break; case 3: multiply = TRUE; printf(%.2f * %.2f , a, b); break; case 4: divide = TRUE; printf(%.2f / %.2f , a, b); break; case 5: less_than = TRUE; printf(%.2f < %.2f , a, b); break; case 6: greater_than = TRUE; printf(%.2f > %.2f , a, b); 
break; case 7: less_than_equal = TRUE; printf(%.2f <= %.2f , a, b); break; case 8: greater_than_equal = TRUE; printf(%.2f >= %.2f , a, b); break; case 9: equals = TRUE; printf(%.2f = %.2f , a, b); break; case 10: does_not_equal = TRUE; printf(%.2f != %.2f , a, b); break; case 11: logi_not = TRUE;{ if (logic_and_log_not) printf(%.2f && !%.2f , a, b); else if (logic_or_log_not) printf(%.2f || !%.2f , a, b); break; } case 12: logi_and = TRUE; printf(%.2f && %.2f , a, b); break; case 13: logi_or = TRUE; printf(%.2f || %.2f , a, b); break; case 14: remain_mod = TRUE; printf(%.0f %% %.0f , a, b); break; } printf(with !%.2f?:\n\n, c); switch (choice) { case 1: add = TRUE; printf(%.2f + %.2f , a, b); break; case 2: subtract = TRUE; printf(%.2f - %.2f , a, b); break; case 3: multiply = TRUE; printf(%.2f * %.2f , a, b); break; case 4: divide = TRUE; printf(%.2f / %.2f , a, b); break; case 5: less_than = TRUE; printf(%.2f < %.2f , a, b); break; case 6: greater_than = TRUE; printf(%.2f > %.2f , a, b); break; case 7: less_than_equal = TRUE; printf(%.2f <= %.2f , a, b); break; case 8: greater_than_equal = TRUE; printf(%.2f >= %.2f , a, b); break; case 9: equals = TRUE; printf(%.2f = %.2f , a, b); break; case 10: does_not_equal = TRUE; printf(%.2f != %.2f , a, b); break; case 11: logi_not = TRUE;{ if (logic_and_log_not) printf(%.2f && !%.2f , a, b); else if (logic_or_log_not) printf(%.2f || !%.2f , a, b); break; } case 12: logi_and = TRUE; printf(%.2f && %.2f , a, b); break; case 13: logi_or = TRUE; printf(%.2f || %.2f , a, b); break; case 14: remain_mod = TRUE; printf(%.0f %% %.0f , a, b); break; } printf( && !%.2f: (15)\n, c); switch (choice) { case 1: add = TRUE; printf(%.2f + %.2f , a, b); break; case 2: subtract = TRUE; printf(%.2f - %.2f , a, b); break; case 3: multiply = TRUE; printf(%.2f * %.2f , a, b); break; case 4: divide = TRUE; printf(%.2f / %.2f , a, b); break; case 5: less_than = TRUE; printf(%.2f < %.2f , a, b); break; case 6: greater_than = TRUE; printf(%.2f > %.2f , a, b); break; case 7: less_than_equal = TRUE; printf(%.2f <= %.2f , a, b); break; case 8: greater_than_equal = TRUE; printf(%.2f >= %.2f , a, b); break; case 9: equals = TRUE; printf(%.2f = %.2f , a, b); break; case 10: does_not_equal = TRUE; printf(%.2f != %.2f , a, b); break; case 11: logi_not = TRUE;{ if (logic_and_log_not) printf(%.2f && !%.2f , a, b); else if (logic_or_log_not) printf(%.2f || !%.2f , a, b); break; } case 12: logi_and = TRUE; printf(%.2f && %.2f , a, b); break; case 13: logi_or = TRUE; printf(%.2f || %.2f , a, b); break; case 14: remain_mod = TRUE; printf(%.0f %% %.0f , a, b); break; } printf( || !%.2f: (16)\n, c); printf(\nPlease enter your choice: ); scanf(%d, &choice_4); switch (choice_4){ case 15: logic_and_log_not_a = TRUE; break; case 16: logic_or_log_not_a = TRUE; break; }}/*******************************************************//* Printing which choice was made between var 1 and 2. 
*//*******************************************************/printf(\nYou have chosen );switch (choice) { case 1: add = TRUE; printf(%.2f + %.2f , a, b); break; case 2: subtract = TRUE; printf(%.2f - %.2f , a, b); break; case 3: multiply = TRUE; printf(%.2f * %.2f , a, b); break; case 4: divide = TRUE; printf(%.2f / %.2f , a, b); break; case 5: less_than = TRUE; printf(%.2f < %.2f , a, b); break; case 6: greater_than = TRUE; printf(%.2f > %.2f , a, b); break; case 7: less_than_equal = TRUE; printf(%.2f <= %.2f , a, b); break; case 8: greater_than_equal = TRUE; printf(%.2f >= %.2f , a, b); break; case 9: equals = TRUE; printf(%.2f = %.2f , a, b); break; case 10: does_not_equal = TRUE; printf(%.2f != %.2f , a, b); break; case 11: logi_not = TRUE;{ if (logic_and_log_not) printf(%.2f && !%.2f , a, b); else if (logic_or_log_not) printf(%.2f || !%.2f , a, b); break; } case 12: logi_and = TRUE; printf(%.2f && %.2f , a, b); break; case 13: logi_or = TRUE; printf(%.2f || %.2f , a, b); break; case 14: remain_mod = TRUE; printf(%.0f %% %.0f , a, b); break;}/*******************************************************//* Printing which choice was made between var 2 and 3. *//*******************************************************/switch (choice_2) { case 1: add_a = TRUE; printf(+ %.2f.\n, c); break; case 2: subtract_a = TRUE; printf(- %.2f.\n, c); break; case 3: multiply_a = TRUE; printf(* %.2f.\n, c); break; case 4: divide_a = TRUE; printf(/ %.2f.\n, c); break; case 5: less_than_a = TRUE; printf(< %.2f.\n, c); break; case 6: greater_than_a = TRUE; printf(> %.2f.\n, c); break; case 7: less_than_equal_a = TRUE; printf(<= %.2f.\n, c); break; case 8: greater_than_equal_a = TRUE; printf(>= %.2f.\n, c); break; case 9: equals_a = TRUE; printf(= %.2f.\n, c); break; case 10: does_not_equal_a = TRUE; printf(!= %.2f.\n, c); break; case 11: logi_not_a = TRUE;{ if (logic_and_log_not_a) printf(&& !%.2f , c); else if (logic_or_log_not_a) printf(|| !%.2f , c); break; } case 12: logi_and_a = TRUE; printf(&& %.2f.\n, c); break; case 13: logi_or_a = TRUE; printf(|| %.2f.\n, c); break; case 14: remain_mod_a = TRUE; printf(%% %.0f.\n, c); break;}/*******************************************************//* Calculation for ADD. 
*//*******************************************************/if(add && add_a){ x = a + b + c; printf(\nAnswer is %.2f\n\n, x); return 1;}if(add && subtract_a){ x = a + b - c; printf(\nAnswer is %.2f\n\n, x); return 1;}if(add && multiply_a){ x = a + b * c; printf(\nAnswer is %.2f\n\n, x); return 1;}if(add && divide_a){ if (c == 0){ printf(\nThe solution does not exist!\n\n); return 5; } x = a + b / c; printf(\nAnswer is %.2f\n\n, x); return 1;}if(add && less_than_a){ x = a + b < c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && greater_than_a){ x = a + b > c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && less_than_equal_a){ x = a + b <= c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && greater_than_equal_a){ x = a + b >= c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && equals_a){ x = a + b == c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && does_not_equal_a){ x = a + b != c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && logi_not_a && logic_and_log_not_a){ x = a + b && !c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && logi_not_a && logic_or_log_not_a){ x = a + b || !c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 8;}if(add && logi_and_a){ x = a + b && c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 9;}if(add && logi_or_a){ x = a + b || c; if (x == 0) printf(\nThe statement is FALSE.\n\n); else printf(\nThe statement is TRUE.\n\n); return 9;}if(add && remain_mod_a){ y = a, z = b, d = c; w = y + z % d; printf(\nThe answer is %d\n\n, w); return 4;}
Handling simple expressions involving three values and two operators
beginner;c;calculator
Your code is very straightforward and easy to understand. Given the limited types of statements you're allowed to use, this is a reasonable answer. You say you're unhappy with the length, and that says to me that you know there's a better way, but just can't quite figure out what it is. You're right! Here's how I would go about it.

Simplify

You've been asked to handle 16 different operations for 2 operators. That gives you 16 * 16 = 256 different possible combinations. But that doesn't mean that you have to write cases for all 256. You should be able to do it with just 16 + 16 operations.

To demonstrate, let's reduce the number of operators to just 4 - addition, subtraction, multiplication, and division. With 4 operators you have 16 possible combinations - add and add, add and subtract, add and multiply, add and divide, subtract and add, subtract and subtract, etc.

What you can do is calculate the operation of the first operator and store the result in an intermediate variable. Then use it in the second operation. I'll assume you've already gotten the user's input here. The numbers are in a, b, and c, and the operators are in variables I'll call operator1 and operator2. First, I'd make an enum that describes the possible operators:

    enum {
        OP_ADD = 0,
        OP_SUBTRACT,
        OP_MULTIPLY,
        OP_DIVIDE
    };

Then once you have the above inputs, you can do something like this:

    float intermediate = 0.0;
    switch (operator1) {
        case OP_ADD:      intermediate = a + b; break;
        case OP_SUBTRACT: intermediate = a - b; break;
        case OP_MULTIPLY: intermediate = a * b; break;
        case OP_DIVIDE:   intermediate = a / b; break;
    };

    float result = 0.0;
    switch (operator2) {
        case OP_ADD:      result = intermediate + c; break;
        case OP_SUBTRACT: result = intermediate - c; break;
        case OP_MULTIPLY: result = intermediate * c; break;
        case OP_DIVIDE:   result = intermediate / c; break;
    };

So now we've done the operations with 4 + 4 cases instead of 4 * 4 cases.

Operator Precedence

Now you might have noticed that this doesn't properly handle operator precedence. For example, when you see a + b * c, it should be handled as a + (b * c). This can be handled by making an array that holds the precedence of each operator, and looking it up before doing the actual operations. It could work something like this:

    const int operator_precedence[] = {
        0, // OP_ADD
        0, // OP_SUBTRACT
        1, // OP_MULTIPLY
        1, // OP_DIVIDE
    };

    int op1_prec = operator_precedence [ operator1 ];
    int op2_prec = operator_precedence [ operator2 ];

If we find that operator 2 has higher precedence, we need to switch the order of operations. We can do that like this:

    bool swap_vars = FALSE;
    if (op1_prec < op2_prec)
    {
        // Swap the operators
        int tempOp = operator1;
        operator1 = operator2;
        operator2 = tempOp;
        // Perform (b op2 c) before working with a
        float temp_var = a;
        a = b;
        b = c;
        c = temp_var;
        // we'll need to swap variables later
        swap_vars = TRUE;
    }

Now this will give us b op2 c op1 a which is still not quite right. (Imagine if op1 is either subtract or divide.) So once we've calculated the intermediate variable we'll need to swap the intermediate and c so we get the order right.

    if (swap_vars)
    {
        float temp_var = intermediate;
        intermediate = c;
        c = temp_var;
    }
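To see the two ideas (an intermediate result plus a precedence lookup) working together end to end, here is a small Python sketch; Python is used only to keep the sketch short, and the names OPS, PRECEDENCE and evaluate are illustrative assumptions rather than anything from the original program:

    import operator

    # One table entry per operator instead of one case per operator pair.
    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
    PRECEDENCE = {"+": 0, "-": 0, "*": 1, "/": 1}

    def evaluate(a, op1, b, op2, c):
        """Evaluate a op1 b op2 c, giving * and / priority over + and -."""
        if PRECEDENCE[op2] > PRECEDENCE[op1]:
            intermediate = OPS[op2](b, c)      # do b op2 c first
            return OPS[op1](a, intermediate)
        intermediate = OPS[op1](a, b)          # otherwise evaluate left to right
        return OPS[op2](intermediate, c)

    print(evaluate(2, "+", 3, "*", 4))   # 14, i.e. 2 + (3 * 4)
    print(evaluate(2, "*", 3, "+", 4))   # 10, i.e. (2 * 3) + 4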
_cs.77330
I have the following problem:

Given an unsorted array A of size n, print the first k elements in A larger than its median.

Here's my approach to the problem:

 1. Create a minHeap and a maxHeap
 2. Iterate over elements in A                              // O(n)
    - if maxHeap.count < minHeap.count
          insert current element to maxHeap                 // O(log(n))
      else:
          insert current element to minHeap                 // O(log(n))
 3. if maxHeap.count < minHeap.count: median = minHeap.extractMin()
 4. output k elements from minHeap                          // O(k log(n))

This maintains a maxHeap of elements less than the median and a minHeap of elements greater than or equal to the median. But from my analysis, this seems to take O(n log(n) + k log(n)), which is no better than sorting A first and grabbing A[n/2 : n/2+k] in just O(n log(n) + k).

Now I have 2 questions:

 1. Is my analysis tight? I am doubtful since the heaps have at most i elements in the ith iteration, not n.
 2. Is there a better algorithm? Maybe something like O(n + k log(k))?
Algorithm to find k elements following the median in sorted order
algorithms;sorting;arrays;heaps
Answering your second question:

 1. Find the median $m$, and partition the set into $L_1 = \{x | x \in S \wedge x < m\}$ and $L_2 = \{x | x \in S \wedge x > m\}$.
 2. Find the $k$th smallest element $x_k$ in $L_2$, and partition the set into $L_2' = \{x | x \in L_2 \wedge x \leq x_k\}$ and $L_3 = \{x | x \in L_2 \wedge x > x_k\}$.
 3. Sort $L_2'$ and return it.

Steps 1 and 2 take $O(n)$. Step 3 takes $O(k \log k)$. Total time $O(n + k \log k)$.

Note this assumes distinct values. Repeated values involve some edge-case handling.
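A small Python sketch of these three steps may help; the quickselect helper is an assumed stand-in for any expected-linear-time selection routine (using sorting instead would change the stated bounds), and like the answer it assumes distinct values:

    import random

    def quickselect(items, i):
        """Return the i-th smallest element (1-based) of items; expected O(n)."""
        pivot = random.choice(items)
        lows = [x for x in items if x < pivot]
        highs = [x for x in items if x > pivot]
        pivots = [x for x in items if x == pivot]
        if i <= len(lows):
            return quickselect(lows, i)
        if i <= len(lows) + len(pivots):
            return pivot
        return quickselect(highs, i - len(lows) - len(pivots))

    def k_after_median(S, k):
        m = quickselect(S, (len(S) + 1) // 2)      # step 1: the median
        L2 = [x for x in S if x > m]               # elements above the median
        x_k = quickselect(L2, min(k, len(L2)))     # step 2: k-th smallest of L2
        L2_prime = [x for x in L2 if x <= x_k]     # the k closest to the median
        return sorted(L2_prime)                    # step 3: O(k log k)

    print(k_after_median([9, 1, 8, 2, 7, 3, 6, 4, 5], 3))   # [6, 7, 8]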
_codereview.146080
Basically, it's a matter of code organization, trying to delegate as most responsibility as possible to the server and keeping JavaScript use to a minimum, mostly as glue code between the view and the server.Simple example:Say you have a select field (states) which based on user selection it populates another select field (cities).I currently define the route in a data-* attribute inside the select tag like this:<select id=state name=state data-route={{ route('get_cities')></select><div id=cities-container></div>...<script>$('#state').on('change', function() { $('#cities-container').load($(this).attr('data-route'), {id_state: $(this).val()});});</script>Then on the method called server-side I just return a view with the select already populated, formatted and with all needed processing done server-side instead of concatenating strings and values with jQuery append, etc.Is a good practice to do it like this? I'm not really fond of using 20 libs to do something so simple, just plain old school PHP and jQuery. Is there a better way to tackle this? I found this avoids mixing a lot of html code with javascript and prevent using extra javascript functions for formatting (for example numbers, text, etc), since everything is already done by PHP and JavaScript is just the glue, simply request something and show it, do nothing else; complex logic server-side.Also, defining the route in a data-* attribute lets me use the templating engine (Blade, Twig, etc) and PHP variables, without needing to define a superglobal base_url for JavaScript in order to do a route request based on its alias, instead of its URL.For example:$('#cities-container').load(base_url + '/get-cities')Because, what happens if your route URL changes? Finding every route on the code and replacing it.Or using some other request class to communicate with PHP and use route aliases, etc. Symfony's FOSJsRouting Bundle for example.
Handling routing with jQuery + Laravel
php;jquery;mvc;laravel
null
_unix.155971
I am interested in acquiring a small amount of intermediate knowledge of Linux. I would like to learn to write scripts for Linux, and I am preparing for Red Hat certification sometime next year. How does it differ from other web-scripting languages such as PHP and Ruby? Furthermore, can you direct me to some links on how to write Linux scripts?
How does Linux differ from other scripting languages?
linux
Linux is a whole operating system, just like Windows. You usually use a shell like bash to run any commands on it. If you want to connect from a Windows box to a remote Linux box you can use PuTTY.

But those are very different things. You run PuTTY on your local Windows box. It will connect to the remote Linux box and present you a shell which allows you to issue commands on that box.

The scripting language you are referring to is probably that bash shell. But bash is not Linux specific. You can have bash for Windows, too. For example, if you install git it also gives you a really nice bash called git-bash, which allows you exactly the same type of scripting as on Linux.

Also be aware that there are no real web-scripting languages. Bash is never used for serving web sites. Ruby is not the same as Ruby on Rails, and even for PHP it is perfectly possible to write applications that do nothing web related at all.
_scicomp.27610
I'm prototyping a system that finds the 3D pose of a object in a video sequence. For this I minimize a error function involving the rotation and translation of the object as parameters and two sets of points (polluted with gaussian noise but no outliers) as data.I have tried to parameterize the rotation matrix both using 3 euler angles, and a rotation vector. Then I test my system with a synthetic data sequence and I see I obtain substantially less error with the 3 euler angles parameterization that with the rotation vector. I was expecting the opposite (notice the sequence I use doesn't result in any gimbal locks for the euler angles). This makes me wonder if I'm using the rotation vector parameterization correctly. The steps I follow are, first I convert the rotation matrix to rotation vector (using the Rodrigues formula available in OpenCV) the resulting 3-vector is part of the parameters of my error function. Inside the error function I convert the rotation vector to rotation matrix and compute the residuals. I'm using python and scipy.optimize.leastsq (which internally uses MINPACK.lmdif) to minimize this error function. leastsq will compute the jacobian needed for the minimization numerically from the error function. After reading a bit of theory about the lie group SO(3) and the lie algebra so(3), I understand that the so(3) (for me the rotation vector) corresponds to a tangent plane at the identity of SO(3). So my rotation vector is a local approximation to the corresponding rotation matrix.What I thought that maybe I'm missing in my minimization is to convert the rotation vector back to a rotation matrix after each minimization step, and then back to a rotation vector (to take a new local approximation to the rotation matrix) for the next step. I can't test this possibility as I'm using a standard minimizer, which doesn't allow to do this. Now I think that the Jacobian calculated at each minimization step should be enough to direct the rotation vector parameters in the correct direction without the conversions I mention in the above paragraph. However, I'm not fully sure and as I get worse performance with the rotation vector parameterization wrt. the euler angle parameterization, I assume I'm doing something wrong. Would someone clarify/confirm the correct way of using the rotation vector in a minimization problem?
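Not an answer to the accuracy question, but for readers trying to picture the setup, a minimal sketch of a rotation-vector parameterization inside a least-squares fit might look like the following; the synthetic points, the noise level and the use of scipy.spatial.transform.Rotation are assumptions for illustration (the question's actual code uses OpenCV's Rodrigues conversion and scipy.optimize.leastsq):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, src, dst):
        """params = [rx, ry, rz, tx, ty, tz]; rotation given as a rotation vector."""
        rotvec, t = params[:3], params[3:]
        R = Rotation.from_rotvec(rotvec).as_matrix()   # rotation vector -> matrix
        return ((src @ R.T) + t - dst).ravel()         # per-point 3D residuals

    rng = np.random.default_rng(0)
    src = rng.normal(size=(50, 3))                     # synthetic 3D points
    true_R = Rotation.from_rotvec([0.2, -0.1, 0.3])
    dst = src @ true_R.as_matrix().T + np.array([1.0, 2.0, 3.0])
    dst += 0.01 * rng.normal(size=dst.shape)           # Gaussian noise

    fit = least_squares(residuals, x0=np.zeros(6), args=(src, dst))
    print("estimated rotation vector:", fit.x[:3])
    print("estimated translation:   ", fit.x[3:])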
Optimizing an error function involving rotation vectors in Python
optimization;python
null
_webmaster.76386
I want to register a domain called webmasters.com. Can I enter [email protected] as the contact email when filing for the domain name? The foreseeable problem is that the email address has not yet been set up at the time of registration, but could be configured shortly thereafter.

I do not have an alternate permanent email I want to use for this purpose. This may seem like a simple question, but I have never registered a domain name and do not know how the process works from start to finish... or whether that email address will be used immediately for verification purposes.
How Can I Use An Email Address From A Domain To Register The Same Domain
email;domain registration;domain registrar
I suggest that you use your existing email while registering a domain name. When you complete the domain registration process, most domain providers send the domain info, account activation links, and invoice & billing info to the email address which you put in during registration. Once you create the account and register the domain, you can easily create the new email address and change it for that account as well.
_codereview.151635
I have a C# application where I have implemented the MVP pattern. On one of my forms I have two ComboBoxes with values through which I loop with a button press. The first ComboBox contains different scenarios and the second one contains different (threat)actors. I show each scenario, and for each scenario I show each actor.The button press event is triggered in my AnalysisScreenView class, to which my AnalysisScreenPresenter responds.The code in the Presenter that loops through the values seems a little clunky to me. I hope someone could give me some improvements in terms of speed or style.AnalysisScreenView - class where the button press is registered and the event is fired.public partial class AnalysisScreenView : Form, IAnalysisScreenView{ // Private members. private AnalysisScreenPresenter _presenter; // Public members. public List<string> Scenarios { get { var scenarios = new List<string>(); foreach(var item in scenarioInput.Items) { scenarios.Add(item.ToString()); } return scenarios; } set { foreach(var scen in value) { this.scenarioInput.Items.Add(scen); } } } public List<string> ThreatActors { get { var actors = new List<string>(); foreach (var item in actorInput.Items) { actors.Add(item.ToString()); } return actors; } set { foreach(var actor in value) { this.actorInput.Items.Add(actor); } } } public string SelectedScenario { get { string scenarios = null; if (scenarioInput.SelectedItem != null) { scenarios = this.scenarioInput.SelectedItem.ToString(); } return scenarios; } set { this.scenarioInput.SelectedItem = value; } } public string SelectedThreatActor { get { string actors = null; if (actorInput.SelectedItem != null) { actors = this.actorInput.SelectedItem.ToString(); } return actors; } set { this.actorInput.SelectedItem = value; } } // Public events. public event EventHandler SettingNext; // Initialize zone screen. Create an instance of the presenter with a reference the view itsself. public AnalysisScreenView(IRiskAnalysisModel model) { InitializeComponent(); _presenter = new AnalysisScreenPresenter(this, model); Show(); } // Selects the next actor or scenario. private void nextButton_Click(object sender, EventArgs e) { SettingNext?.Invoke(this, EventArgs.Empty); }}AnalysisScreenPresenter - class where the next actor or scenario is set.public class AnalysisScreenPresenter{ // Private members. private readonly IAnalysisScreenView _view; private IRiskAnalysisModel _model; // Initialize the analysis screen presenter with an IAnalysisScreenView and a new risk analysis assessments model. Subsribe to necessary events. public AnalysisScreenPresenter(IAnalysisScreenView view, IRiskAnalysisModel model) { _view = view; _model = model; _view.SettingNext += SetNext; } // Sets the next actor or scenario. public void SetNext(object sender, EventArgs e) { var actors = _view.ThreatActors; int actorIndex = actors.IndexOf(_view.SelectedThreatActor); if(actorIndex >= 0 && actorIndex < actors.Count - 1) { _view.SelectedThreatActor = actors[actorIndex + 1]; return; } _view.SelectedThreatActor = actors[0]; var scenarios = _view.Scenarios; int scenarioIndex = scenarios.IndexOf(_view.SelectedScenario); if (scenarioIndex >= 0 && scenarioIndex < scenarios.Count - 1) { _view.SelectedScenario = scenarios[scenarioIndex + 1]; return; } _view.SelectedScenario = scenarios[0]; }}
Looping through the values of two ComboBoxes by pressing a button
c#;performance;winforms;mvp
null
_webmaster.67563
We have updated VPSs and apparently the latest version of PHP(?) does not allow you to use the servers ip /~accountname as a way to view a site that does not yet have a domain name associated to it, eg:122.221.10.23/~newsiteorourhostingserver.com/~newsiteWe can view it from our computers by altering the hosts file, adding something like:122.221.10.23 www.newsite.com.au122.221.10.23 newsite.com.aubut its not very practical to ask our not-tech-savvy clients to go altering their host files.Does anyone know of a way we can show our clients these sites using some kind of temporary domain name or similar?
How to show client a domainless site without altering hosts file or using 'serverip/~accountname'?
domains;web hosting;vps
null
_unix.186695
I am running Debian Wheezy on a BananaPi.I just compiled the kernel and the keyboard was working in terminal(if it's called that way).However, after I installed a dekstop interface using apt-get install task-lxde-desktop my keyboard did not respond anymore.This is my dmesg and lsusb... Notice how weird the lsusb looksroot@bananapi:~# lsusbBus 003 Device 003: ID 04ca:002f Lite-On Technology Corp.Bus 004 Device 003: ID 2808:81c9Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubroot@bananapi:~# lsusbBus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubroot@bananapi:~# dmesg (last part of it)[ 301.255226] usb 4-1: new full-speed USB device number 3 using ohci-platform[ 301.470215] usb 4-1: No LPM exit latency info found, disabling LPM.[ 301.494962] hid-generic 0003:2808:81C9.0005: hiddev0: USB HID v1.10 Device [Focaltech Systems FT5926 MultiTouch] on usb-1c1c400.usb-1/input0[ 301.504628] input: Focaltech Systems FT5926 MultiTouch as /devices/platform/soc@01c00000/1c1c400.usb/usb4/4-1/4-1:1.1/0003:2808:81C9.0006/input/input3[ 301.504694] hid-multitouch 0003:2808:81C9.0006: input: USB HID v1.10 Device [Focaltech Systems FT5926 MultiTouch] on usb-1c1c400.usb-1/input1[ 381.565209] usb 3-1: new low-speed USB device number 3 using ohci-platform[ 381.799992] input: Lite-On Technology Corp. USB Multimedia Keyboard as /devices/platform/soc@01c00000/1c14400.usb/usb3/3-1/3-1:1.0/0003:04CA:002F.0007/input/input4[ 381.855227] hid-generic 0003:04CA:002F.0007: input: USB HID v1.10 Keyboard [Lite-On Technology Corp. USB Multimedia Keyboard] on usb-1c14400.usb-1/input0[ 381.867586] input: Lite-On Technology Corp. USB Multimedia Keyboard as /devices/platform/soc@01c00000/1c14400.usb/usb3/3-1/3-1:1.1/0003:04CA:002F.0008/input/input5[ 381.925522] hid-generic 0003:04CA:002F.0008: input,hiddev0: USB HID v1.10 Device [Lite-On Technology Corp. USB Multimedia Keyboard] on usb-1c14400.usb-1/input1[ 403.888550] usb 4-1: USB disconnect, device number 3[ 405.680680] usb 3-1: USB disconnect, device number 3
Debian Wheezy, 3.19: keyboard and mouse problem in desktop environment
debian;kernel;keyboard;hid
null
_webapps.99716
Why is Twitter not accepting my short URL that does not contain .co? The URL in question is vote4thame.uk. It works in search engines but doesn't display as a link on Twitter.
Why is Twitter not accepting my short url that does not contain .co?
twitter
null
_unix.149370
I have a device (CCTV decoder) which was rebooting itself, so can't login using LAN cable. It has RS232 port only, no VGA and keyboard/USB port (Embedded PC).I connected RS232 cross cable and used Teraterm, I entered to setup mode. Where I can see all files and directories. I can't edit any file/folder being root user. But I can edit through telnet remotely for a normal device. I do not know the reason behind.Here below is the log, please help me to sort out the problem.~ # ls -ldrwxrwxrwx 2 root 0 0 Jan 1 1970 bindrwxrwxrwx 2 root 0 0 Nov 12 2003 bootdrwxr-xr-x 2 root 0 0 Feb 7 2007 configdrwxrwxrwx 5 root 0 0 Jan 1 1970 devdrwxrwxrwx 5 root 0 0 Feb 7 2007 etc-rw-r--r-- 1 root 0 1157024 Jan 1 1970 h264dec_encryptdrwxrwxrwx 4 root 0 0 Nov 16 2007 liblrwxrwxrwx 1 root 0 11 Nov 16 2007 linuxrc -> bin/busybox-rw-r--r-- 1 root 0 52707 Jan 1 1970 logdrwxrwxrwx 2 root 0 0 Nov 24 2003 procdrwxrwxrwx 2 root 0 0 Nov 16 2007 sbindrwxr-xr-x 3 root 0 0 Nov 16 2007 snmp-rw-r--r-- 1 root 0 23 Jan 1 1970 snmpd.logdrwxrwxrwx 2 root 0 0 Dec 16 2003 tmp-rw-r--r-- 1 root 0 21 Jan 1 00:00 userinfodrwxrwxrwx 6 root 0 0 Jan 1 1970 usrdrwxrwxr-x 2 root 0 0 Mar 12 2004 var-rw-r--r-- 1 root 0 1 Nov 16 2007 verdrwxr-xr-x 4 root 0 0 Nov 16 2007 web~ # U-Boot 1.0.0 (Jul 13 2009 - 10:43:14)CPU: IBM PowerPC 405EP Rev. B at 199.999 MHz (PLB=99, OPB=49, EBC=33 MHz) IIC Boot EEPROM disabled PCI async ext clock used, internal PCI arbiter enabled 16 kB I-Cache 16 kB D-CacheBoard: ### No HW ID - assuming WMIA405EPI2C: readyDRAM: 16 MBMemory ok!Boot from FlashTop of RAM usable for U-Boot at: 01000000Reserving 194k for U-Boot at: 00fcf000Reserving 132k for malloc() at: 00fae000Reserving 112 Bytes for Board Info at: 00fadf90Reserving 48 Bytes for Global Data at: 00fadf60Setting up new stack spaceStack Pointer at: 00fadf48New Stack Pointer is: 00fadf48Watch dog resetCopy Global data struct Relocate the codeNow running in RAM - U-Boot at: 00fcf000FLASH: Manufacture id 0## Unknown FLASH on Bank 0 - Size = 0x00000000 = 0 MB2 banks configBank 1 size 800000Manufacture id 0Bank 0 size 0Resize bank 1 to 800000 8 MBenv_relocate[217] offset = 0x100f000env_relocate[235] malloced ENV at 00fae008default=00:11:22:12:67:20, curr=00:0F:D0:80:01:62.PCI Autoconfig: Memory region: [80000000-9fffffff]PCI Autoconfig: Memory region: [a0000000-bfffffff]PCI Autoconfig: I/O region: [800000-3ffffff]PCI Scan: Found Bus 0, Device 10, Function 0 Vendor 12d5PCI Autoconfig: BAR 0, Mem, size=0x8000000, address=0x80000000PCI Autoconfig: BAR 1, Mem, size=0x1000000, address=0xa0000000In: serialOut: serialErr: serialKGDB: kgdb readyreadyU-Boot relocated to 00fcf000### main_loop entered: bootdelay=1### main_loop: bootcmd=bootm 0xF0000000Hit any key to stop autoboot: 1 0 ## Booting image at f0000000 ... Image Name: WMIA Created: 2009-07-13 2:44:13 UTC Image Type: PowerPC Linux Kernel Image (gzip compressed) Data Size: 748316 Bytes = 730.8 kB Load Address: 00000000 Entry Point: 00000000 Verifying Checksum ... OK Uncompressing Kernel Image ... OK## Current stack ends at 0x00FAD578 => set upper limit to 0x00800000## cmdline at 0x007FFC00 ... 
0x007FFCF2bd address = 0x00FADF90memstart = 0x00000000memsize = 0x01000000flashstart = 0xFFFC0000flashsize = 0x00800000flashoffset = 0x0002AF00sramstart = 0x00000000sramsize = 0x00000000bootflags = 0x0000A000procfreq = 199.999 MHzplb_busfreq = 99.999 MHzpci_busfreq = 24.999 MHzethaddr = 00:0F:D0:80:01:62IP addr = 192.168.1.3baudrate = 115200 bpsNo initrd## Transferring control to Linux (at address 00000000) ...Linux version 2.4.20_mvl31-405ep_eval ([email protected]) (gcc version 3.2.1 20020930 (MontaVista)) #9 Mon Jul 13 10:43:58 CST 2009ioremap PCLIO_BASE = 0xe7ffd000PCI bridge regs before fixup pmm0ma 0xe0000001 pmm0ma 0x80000000 pmm0ma 0x80000000 pmm0ma 0x0 pmm1ma 0xe0000001 pmm1ma 0xa0000000 pmm1ma 0xa0000000 pmm1ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 ptm1ms 0x80000001 ptm1la 0x0 ptm2ms 0x0 ptm2la 0x0BUS 0, device 0, Function 0 bar 0x00000014 is 0x00000008BUS 0, device 0, Function 0 bar 0x00000018 is 0x00000008BUS 0, device 0, Function 0 bar 0x00000018 is 0xc0000008PCI bridge regs after fixup pmm0ma 0xc0000001 pmm0ma 0x80000000 pmm0ma 0x80000000 pmm0ma 0x0 pmm1ma 0x0 pmm1ma 0x0 pmm1ma 0x0 pmm1ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 ptm1ms 0x1 ptm1la 0x0 ptm2ms 0xfff00001 ptm2la 0xf0800000Setup DMA channel: 0Transfer width: 2^0 bytesSetup DMA channel: 1Transfer width: 2^0 bytesSetup DMA channel: 2Transfer width: 2^0 bytesSetup DMA channel: 3Transfer width: 2^0 bytesASTRI WMIA405EP Evaluation Board port Version 3On node 0 totalpages: 4096zone(0): 4096 pages.zone(1): 0 pages.zone(2): 0 pages.Kernel command line: root=/dev/mtdblock1 ip=192.168.1.3:192.168.1.100:192.168.1.1:255.255.255.0:WMIA405EP_1:eth0: console=ttyS1,115200 mtd_part=kernel,0x00000000,0x000C0000,rfs,0x000C0000,0x00700000,bootloader,0x007C0000,0x00040000 boardname=wmia405epV2 decoder=1Parsing partition parameter from command line...3 partitions in total.Console: colour dummy device 80x25Calibrating delay loop... 199.47 BogoMIPSMemory: 14392k available (1252k kernel code, 412k data, 96k init, 0k highmem)Dentry cache hash table entries: 2048 (order: 2, 16384 bytes)Inode cache hash table entries: 1024 (order: 1, 8192 bytes)Mount-cache hash table entries: 512 (order: 0, 4096 bytes)Buffer-cache hash table entries: 1024 (order: 0, 4096 bytes)Page-cache hash table entries: 4096 (order: 2, 16384 bytes)POSIX conformance testing by UNIFIXPCI: Probing PCI hardwareidsel 10 pin 1Linux NET4.0 for Linux 2.4Based upon Swansea University Computer Society NET3.039Initializing RT netlink socketOCP uart ver 1.6.2 init completeLSP Revision 1Starting kswapdDisabling the Out Of Memory KillerJFFS2 version 2.1. 
(C) 2001, 2002 Red Hat, Inc., designed by Axis Communications AB.i2c-core.o: i2c core module version 2.6.2 (20011118)i2c-dev.o: i2c /dev entries driver module version 2.6.2 (20011118)i2c-proc.o version 2.6.2 (20011118)pty: 256 Unix98 ptys configuredSerial driver version 5.05c (2001-07-08) with MANY_PORTS SHARE_IRQ SERIAL_PCI enabledttyS00 at 0xef600300 (irq = 0) is a 16550AttyS01 at 0xef600400 (irq = 1) is a 16550APPC 405 watchdog driver v0.6IBM gpio driver version 07.25.02GPIO #0 at 0xc2070700loop: loaded (max 8 devices)Reset ethernet interfacesReset ethernet interfaceseth1: Cannot open interface without LinkIBM IIC driver v2.0ibm-iic0: using fast (400 kHz) modephysmap flash device: 800000 at f0000000NO QRY responsecfi_cmdset_0001: Erase suspend on write enabledUsing buffer write methodCreating 3 MTD partitions on phys_mapped_flash:0x00000000-0x000c0000 : kernel0x000c0000-0x007c0000 : rfs0x007c0000-0x00800000 : bootloaderNET4: Linux TCP/IP 1.0 for NET4.0IP Protocols: ICMP, UDP, TCP, IGMPIP: routing cache hash table of 512 buckets, 4KbytesTCP: Hash tables configured (established 1024 bind 2048)eth0: IBM EMAC: link up, 10 Mbps Half Duplex.eth0: IBM EMAC: MAC 00:0f:d0:80:01:62.After display statuseth0:$$$$$$$$$$$$$will config mac addresseth0:$$$$$$$$$$$$$config mac address oketh0:$$$$$$$$$$$$$will request irq$$$$$$$$$$$$$request irq oketh0:will init ringswill enable mal chaneth0: IBM EMAC: open completedIP-Config: Complete: device=eth0, addr=192.168.1.3, mask=255.255.255.0, gw=192.168.1.1, host=WMIA405EP_1, domain=, nis-domain=(none), bootserver=192.168.1.100, rootserver=192.168.1.100, rootpath=NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.VFS: Mounted root (jffs2 filesystem) readonly.Freeing unused kernel memory: 96k initinit started: BusyBox v1.00-pre8 (2007.11.16-10:05+0000) multi-call binaryBusyBox v1.00-pre8 (2007.11.16-10:05+0000) Built-in shell (ash)Enter 'help' for a list of built-in commands.Processing /etc/profile... Press CTRL-C to enter shell within 3 secKERNEL,wdt_period=15The timeout was set to 15000000 secondsThe timeout was is 0 seconds(reboot)U-Boot 1.0.0 (Jul 13 2009 - 10:43:14)CPU: IBM PowerPC 405EP Rev. 
B at 199.999 MHz (PLB=99, OPB=49, EBC=33 MHz) IIC Boot EEPROM disabled PCI async ext clock used, internal PCI arbiter enabled 16 kB I-Cache 16 kB D-CacheBoard: ### No HW ID - assuming WMIA405EPI2C: readyDRAM: 16 MBMemory ok!Boot from FlashTop of RAM usable for U-Boot at: 01000000Reserving 194k for U-Boot at: 00fcf000Reserving 132k for malloc() at: 00fae000Reserving 112 Bytes for Board Info at: 00fadf90Reserving 48 Bytes for Global Data at: 00fadf60Setting up new stack spaceStack Pointer at: 00fadf48New Stack Pointer is: 00fadf48Watch dog resetCopy Global data struct Relocate the codeNow running in RAM - U-Boot at: 00fcf000FLASH: Manufacture id 0## Unknown FLASH on Bank 0 - Size = 0x00000000 = 0 MB2 banks configBank 1 size 800000Manufacture id 0Bank 0 size 0Resize bank 1 to 800000 8 MBenv_relocate[217] offset = 0x100f000env_relocate[235] malloced ENV at 00fae008default=00:11:22:12:67:20, curr=00:0F:D0:80:01:62.PCI Autoconfig: Memory region: [80000000-9fffffff]PCI Autoconfig: Memory region: [a0000000-bfffffff]PCI Autoconfig: I/O region: [800000-3ffffff]PCI Scan: Found Bus 0, Device 10, Function 0 Vendor 12d5PCI Autoconfig: BAR 0, Mem, size=0x8000000, address=0x80000000PCI Autoconfig: BAR 1, Mem, size=0x1000000, address=0xa0000000In: serialOut: serialErr: serialKGDB: kgdb readyreadyU-Boot relocated to 00fcf000### main_loop entered: bootdelay=1### main_loop: bootcmd=bootm 0xF0000000Hit any key to stop autoboot: 1 0 => <INTERRUPT>=> <INTERRUPT>=> <INTERRUPT>=> <INTERRUPT>=> (reboot)U-Boot 1.0.0 (Jul 13 2009 - 10:43:14)CPU: IBM PowerPC 405EP Rev. B at 199.999 MHz (PLB=99, OPB=49, EBC=33 MHz) IIC Boot EEPROM disabled PCI async ext clock used, internal PCI arbiter enabled 16 kB I-Cache 16 kB D-CacheBoard: ### No HW ID - assuming WMIA405EPI2C: readyDRAM: 16 MBMemory ok!Boot from FlashTop of RAM usable for U-Boot at: 01000000Reserving 194k for U-Boot at: 00fcf000Reserving 132k for malloc() at: 00fae000Reserving 112 Bytes for Board Info at: 00fadf90Reserving 48 Bytes for Global Data at: 00fadf60Setting up new stack spaceStack Pointer at: 00fadf48New Stack Pointer is: 00fadf48Watch dog resetCopy Global data struct Relocate the codeNow running in RAM - U-Boot at: 00fcf000FLASH: Manufacture id 0## Unknown FLASH on Bank 0 - Size = 0x00000000 = 0 MB2 banks configBank 1 size 800000Manufacture id 0Bank 0 size 0Resize bank 1 to 800000 8 MBenv_relocate[217] offset = 0x100f000env_relocate[235] malloced ENV at 00fae008default=00:11:22:12:67:20, curr=00:0F:D0:80:01:62.PCI Autoconfig: Memory region: [80000000-9fffffff]PCI Autoconfig: Memory region: [a0000000-bfffffff]PCI Autoconfig: I/O region: [800000-3ffffff]PCI Scan: Found Bus 0, Device 10, Function 0 Vendor 12d5PCI Autoconfig: BAR 0, Mem, size=0x8000000, address=0x80000000PCI Autoconfig: BAR 1, Mem, size=0x1000000, address=0xa0000000In: serialOut: serialErr: serialKGDB: kgdb readyreadyU-Boot relocated to 00fcf000### main_loop entered: bootdelay=1### main_loop: bootcmd=bootm 0xF0000000Hit any key to stop autoboot: 1 0 ## Booting image at f0000000 ... Image Name: WMIA Created: 2009-07-13 2:44:13 UTC Image Type: PowerPC Linux Kernel Image (gzip compressed) Data Size: 748316 Bytes = 730.8 kB Load Address: 00000000 Entry Point: 00000000 Verifying Checksum ... OK Uncompressing Kernel Image ... OK## Current stack ends at 0x00FAD578 => set upper limit to 0x00800000## cmdline at 0x007FFC00 ... 
0x007FFCF2bd address = 0x00FADF90memstart = 0x00000000memsize = 0x01000000flashstart = 0xFFFC0000flashsize = 0x00800000flashoffset = 0x0002AF00sramstart = 0x00000000sramsize = 0x00000000bootflags = 0x0000A000procfreq = 199.999 MHzplb_busfreq = 99.999 MHzpci_busfreq = 24.999 MHzethaddr = 00:0F:D0:80:01:62IP addr = 192.168.1.3baudrate = 115200 bpsNo initrd## Transferring control to Linux (at address 00000000) ...Linux version 2.4.20_mvl31-405ep_eval ([email protected]) (gcc version 3.2.1 20020930 (MontaVista)) #9 Mon Jul 13 10:43:58 CST 2009ioremap PCLIO_BASE = 0xe7ffd000PCI bridge regs before fixup pmm0ma 0xe0000001 pmm0ma 0x80000000 pmm0ma 0x80000000 pmm0ma 0x0 pmm1ma 0xe0000001 pmm1ma 0xa0000000 pmm1ma 0xa0000000 pmm1ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 ptm1ms 0x80000001 ptm1la 0x0 ptm2ms 0x0 ptm2la 0x0BUS 0, device 0, Function 0 bar 0x00000014 is 0x00000008BUS 0, device 0, Function 0 bar 0x00000018 is 0x00000008BUS 0, device 0, Function 0 bar 0x00000018 is 0xc0000008PCI bridge regs after fixup pmm0ma 0xc0000001 pmm0ma 0x80000000 pmm0ma 0x80000000 pmm0ma 0x0 pmm1ma 0x0 pmm1ma 0x0 pmm1ma 0x0 pmm1ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 pmm2ma 0x0 ptm1ms 0x1 ptm1la 0x0 ptm2ms 0xfff00001 ptm2la 0xf0800000Setup DMA channel: 0Transfer width: 2^0 bytesSetup DMA channel: 1Transfer width: 2^0 bytesSetup DMA channel: 2Transfer width: 2^0 bytesSetup DMA channel: 3Transfer width: 2^0 bytesASTRI WMIA405EP Evaluation Board port Version 3On node 0 totalpages: 4096zone(0): 4096 pages.zone(1): 0 pages.zone(2): 0 pages.Kernel command line: root=/dev/mtdblock1 ip=192.168.1.3:192.168.1.100:192.168.1.1:255.255.255.0:WMIA405EP_1:eth0: console=ttyS1,115200 mtd_part=kernel,0x00000000,0x000C0000,rfs,0x000C0000,0x00700000,bootloader,0x007C0000,0x00040000 boardname=wmia405epV2 decoder=1Parsing partition parameter from command line...3 partitions in total.Console: colour dummy device 80x25Calibrating delay loop... 199.47 BogoMIPSMemory: 14392k available (1252k kernel code, 412k data, 96k init, 0k highmem)Dentry cache hash table entries: 2048 (order: 2, 16384 bytes)Inode cache hash table entries: 1024 (order: 1, 8192 bytes)Mount-cache hash table entries: 512 (order: 0, 4096 bytes)Buffer-cache hash table entries: 1024 (order: 0, 4096 bytes)Page-cache hash table entries: 4096 (order: 2, 16384 bytes)POSIX conformance testing by UNIFIXPCI: Probing PCI hardwareidsel 10 pin 1Linux NET4.0 for Linux 2.4Based upon Swansea University Computer Society NET3.039Initializing RT netlink socketOCP uart ver 1.6.2 init completeLSP Revision 1Starting kswapdDisabling the Out Of Memory KillerJFFS2 version 2.1. 
(C) 2001, 2002 Red Hat, Inc., designed by Axis Communications AB.i2c-core.o: i2c core module version 2.6.2 (20011118)i2c-dev.o: i2c /dev entries driver module version 2.6.2 (20011118)i2c-proc.o version 2.6.2 (20011118)pty: 256 Unix98 ptys configuredSerial driver version 5.05c (2001-07-08) with MANY_PORTS SHARE_IRQ SERIAL_PCI enabledttyS00 at 0xef600300 (irq = 0) is a 16550AttyS01 at 0xef600400 (irq = 1) is a 16550APPC 405 watchdog driver v0.6IBM gpio driver version 07.25.02GPIO #0 at 0xc2070700loop: loaded (max 8 devices)Reset ethernet interfacesReset ethernet interfaceseth1: Cannot open interface without LinkIBM IIC driver v2.0ibm-iic0: using fast (400 kHz) modephysmap flash device: 800000 at f0000000NO QRY responsecfi_cmdset_0001: Erase suspend on write enabledUsing buffer write methodCreating 3 MTD partitions on phys_mapped_flash:0x00000000-0x000c0000 : kernel0x000c0000-0x007c0000 : rfs0x007c0000-0x00800000 : bootloaderNET4: Linux TCP/IP 1.0 for NET4.0IP Protocols: ICMP, UDP, TCP, IGMPIP: routing cache hash table of 512 buckets, 4KbytesTCP: Hash tables configured (established 1024 bind 2048)eth0: IBM EMAC: link up, 10 Mbps Half Duplex.eth0: IBM EMAC: MAC 00:0f:d0:80:01:62.After display statuseth0:$$$$$$$$$$$$$will config mac addresseth0:$$$$$$$$$$$$$config mac address oketh0:$$$$$$$$$$$$$will request irq$$$$$$$$$$$$$request irq oketh0:will init ringswill enable mal chaneth0: IBM EMAC: open completedIP-Config: Complete: device=eth0, addr=192.168.1.3, mask=255.255.255.0, gw=192.168.1.1, host=WMIA405EP_1, domain=, nis-domain=(none), bootserver=192.168.1.100, rootserver=192.168.1.100, rootpath=NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.VFS: Mounted root (jffs2 filesystem) readonly.Freeing unused kernel memory: 96k initinit started: BusyBox v1.00-pre8 (2007.11.16-10:05+0000) multi-call binaryBusyBox v1.00-pre8 (2007.11.16-10:05+0000) Built-in shell (ash)Enter 'help' for a list of built-in commands.chmod: /etc/profile: Read-only file systemProcessing /etc/profile... KERNEL,wdt_period=15The timeout was set to 15000000 secondsThe timeout was is 0 secondsPress CTRL-C to enter shell within 3 secrm: unable to remove `/dev/rts_wtd': Read-only file systemmkfifo failed!Using /lib/modules/proc_jiffies.oUsing /lib/modules/etimap.oLoading etimap modulespacketintf:[1]Using packet interface9600/etc/profile: 55: cannot create /proc/sys/net/core/rmem_default: Directory nonexistent/etc/profile: 55: cannot create /proc/sys/net/core/wmem_default: Directory nonexistent/etc/profile: 55: cannot create /proc/sys/net/core/rmem_max: Directory nonexistent/etc/profile: 55: cannot create /proc/sys/net/core/wmem_max: Directory nonexistentBuild date Nov 16 2007 18:05:41RTC enableUse codec ./h264dec_encryptCPU clock speed: 378MEM clock speed: 126/var/etidaemon.pid: Read-only file systemETimap think device 0 has etimap c2891000 1332etimap_ioctl called one timeInit bsp CORE speed: 378etimap_ioctl called one timeInit bsp MEM speed: 126etimap_ioctl called one timeeti_board(0): brd_open(): CPU: Equator Technologies, Inc. BSP-15.eti_board(0): brd_open(): board: WMIA405EP Version 3/4 (0xf). Use StringRay settings.No ACK from slave @ addr 0xa4 in iicTxeti_board(0): WARNING: IIC Send failed while attempting to read EEPROMNo ACK from slave @ addr 0x70 in iicTxeti_board(0): WARNING: IIC Write to IIC Expander failed. 
VCXO may be active.No ACK from slave @ addr 0xa0 in iicTxsizeof(bsp_argv[0]) = 32Segmentation fault/etc/profile: 55: cannot create /proc/etivideo0: Read-only file systemnanowm: opening frontpanel device /dev/ttys0/etc/last_ch is accessibleInit channel:1listening :use setting file: /etc/rts_player.conf'main: setting_file /etc/rts_player.confOpenning config file /etc/rts_player.confOpenLogFile: log_file((null))WmiaPlayerOpenstartPlayer(): get configstartPlayer(): set config: watchdog=1, colorsystem=1WmiaPlayerSetConfigWmiaPlayerSetConfig: Set Protocol to 1WmiaPlayerSetConfig: display DateTime = 0WmiaPlayerSetConfig: display Framerate = 0WmiaPlayerSetConfig: color system = 1WmiaPlayerSetConfig: display Camera Id = 0WmiaPlayerSetConfig: camera id info X=50, Y=400, color=BLACKWmiaPlayerSetConfig: datetime info X=450, Y=400, color=BLACKWmiaPlayerSetConfig: frame rate info X=450, Y=40, color=BLACKWmiaPlayerStartWmiaPlayerStart: TsPlayerOpenconfig : 0x100a6390version: 1profileIndication: 0levelIndication: 1compatibleProfiles: 0lengthSize: 4nrOfSps: 0nrOfPps: 0/dev/etivideo: No such deviceG722DecWrapper: AudioDecOpenrm: unable to remove `/dev/event': Read-only file systemTsDemuxThread pid = 61TsDemuxReceiveThread pid = 62/dev/dsp: No such deviceAudio Disabled: No such deviceCannot bind to named socketOpening /dev/fifoCommand : urlParameter : ts://239.255.3.13:31012==================================================73 74 61 72 74 00 aa bb cc dd 00 00 00 70 00 0000 17 74 73 3a 2f 2f 32 33 39 2e 32 35 35 2e 332e 31 33 3a 33 31 30 31 32 65 6e 64 00 =================================================45 bytes writtennxclient: retry connect attempt 1Donechmod: /etc/profile: Read-only file systemPassword: nxclient: retry connect attempt 2nxclient: retry connect attempt 3nxclient: retry connect attempt 4nxclient: retry connect attempt 5Test Program -> Checking test mode jumper...Not test mode.VideoDecSetColorSystemDirect: 1player_watchdog pid = 71WmiaPlayerCommandRun pid = 72nxclient: retry connect attempt 6nxclient: retry connect attempt 7nxclient: retry connect attempt 8nxclient: retry connect attempt 9nxclient: retry connect attempt 10Couldn't connect to Nano-X server!Password: ~ # ls -ldrwxrwxrwx 2 root 0 0 Jan 1 1970 bindrwxrwxrwx 2 root 0 0 Nov 12 2003 bootdrwxr-xr-x 2 root 0 0 Feb 7 2007 configdrwxrwxrwx 5 root 0 0 Jan 1 1970 devdrwxrwxrwx 5 root 0 0 Feb 7 2007 etc-rw-r--r-- 1 root 0 1157024 Jan 1 1970 h264dec_encryptdrwxrwxrwx 4 root 0 0 Nov 16 2007 liblrwxrwxrwx 1 root 0 11 Nov 16 2007 linuxrc -> bin/busybox-rw-r--r-- 1 root 0 52754 Jan 1 1970 logdrwxrwxrwx 2 root 0 0 Nov 24 2003 procdrwxrwxrwx 2 root 0 0 Nov 16 2007 sbindrwxr-xr-x 3 root 0 0 Nov 16 2007 snmp-rw-r--r-- 1 root 0 23 Jan 1 1970 snmpd.logdrwxrwxrwx 2 root 0 0 Dec 16 2003 tmp-rw-r--r-- 1 root 0 21 Jan 1 00:00 userinfodrwxrwxrwx 6 root 0 0 Jan 1 1970 usrdrwxrwxr-x 2 root 0 0 Mar 12 2004 var-rw-r--r-- 1 root 0 1 Nov 16 2007 verdrwxr-xr-x 4 root 0 0 Nov 16 2007 webifconfig~ # ifconfigifconfig: Warning: cannot open /proc/net/dev. 
Limited output.: No such file or directoryeth0 Link encap:Ethernet HWaddr 00:0F:D0:80:01:62 inet addr:192.168.159.81 Bcast:192.168.159.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:15 df~ # df Filesystem 1k-blocks Used Available Use% Mounted ondf: /proc/mounts: No such file or directory~ # fdisk -lfdisk: /proc/partitions: No such file or directorySegmentation faultpp ss~ # pp ss-sh: pss: not foundrmdir~ # rmdir /etc/profilermdir: `/etc/profile': Read-only file system~ # ?-sh: ?: not found~ # I have tried to see built in commands- below are the system commands1stRun find mkfs.minix tail 1stRunEnc free mknod tar \[ freeramdisk mkswap telnet add_log fsck.minix modprobe telnetd ash ftpget more test awk ftpput mount tftp basename getty mv time blockdev grep nano-X top bunzip2 gunzip nanowm touch busybox gzip netstat traceroute bzcat halt nslookup true cal head passwd tty cat hexdump patch udhcpc chgrp hostid pidof umount chinit.sh hostname ping uname chmod httpd player_ap.ppc uncompress chown hwclock printf uniq chroot id ps unix2dos clear ifconfig pwd unzip cmp ifdown reboot up_stage1.sh cp ifup renice up_stage2.sh cut inetd reset uptime date init rm usleep dd insmod rmdir uudecode df kill rmmod uuencode dirname killall route vi dmesg klogd sed watch dos2unix ln sh wc du logger sleep wget echo login snmpd which egrep losetup sort who env ls stty whoami etidaemon lsmod su wmia_ctrl.ppc event_handler.ppc mc_wmia_ctrl.ppc sulogin wtd expr md5sum swapoff xargs false mesg swapon yes fdisk mkdev.sh sync zcat fgrep mkdir syslogd
Unable to delete files as a root user using RS232 serial cross cable
files;embedded;startup;readonly
null
_unix.354082
Following this answer, I tried to use the command line tool openvt on my Debian Jessie install. However, it seems to not be available.

Some googling suggests that the openvt command is included in the console-tools package. However, trying to install console-tools just results in the following error:

$ sudo apt-get install console-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package console-tools is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'console-tools' has no installation candidate

It seems like console-tools is no longer available? It's also not in the all packages list.

So my questions are:

 - is there another package that can be used to install openvt?
 - is there an alternative to openvt?
 - how do I find out why the package is no longer available?
How do I install `openvt` in Debian Jessie?
debian
openvt is in the kbd package (you can find that out using apt-file search openvt). There isn't really an alternative to openvt, although you can manage VTs with systemd.

To find out why the package is no longer available, look for it in the package tracker; this will point you to the removal bug, which usually has the relevant details.
_computerscience.5362
On the main page for Allegorithmic's Bitmap2Material, it mentions that the software uses a Slope Based approach over a Luminance Based approach. What exactly does this mean?
Slope Based Texturing
texture;algorithm;image processing
null
_cs.11836
I'm wondering if there is a standard way of measuring the sortedness of an array? Would an array which has the median number of possible inversions be considered maximally unsorted? By that I mean it's basically as far as possible from being either sorted or reverse sorted.
How to measure sortedness
algorithms;algorithm analysis;sorting;arrays
No, it depends on your application. The measures of sortedness are often referred to as measures of disorder, which are functions from $N^{<N}$ to $\mathbb{R}$, where $N^{<N}$ is the collection of all finite sequences of distinct nonnegative integers. The survey by Estivill-Castro and Wood [1] lists and discusses 11 different measures of disorder in the context of adaptive sorting algorithms.

The number of inversions might work for some cases, but is sometimes insufficient. An example given in [1] is the sequence $$\langle \lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots, n, 1, \ldots, \lfloor n/2 \rfloor \rangle$$ that has a quadratic number of inversions, but only consists of two ascending runs. It is nearly sorted, but this is not captured by inversions.

[1] Estivill-Castro, Vladimir, and Derick Wood. A survey of adaptive sorting algorithms. ACM Computing Surveys (CSUR) 24.4 (1992): 441-476.
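To make the contrast between the two measures concrete, here is a small Python sketch (the deliberately naive O(n^2) inversion count and the run counter are only illustrations of the two measures on the example sequence, not an endorsement of either):

    def inversions(seq):
        """Number of pairs (i, j) with i < j and seq[i] > seq[j]; naive O(n^2)."""
        return sum(1 for i in range(len(seq))
                     for j in range(i + 1, len(seq))
                     if seq[i] > seq[j])

    def ascending_runs(seq):
        """Number of maximal ascending runs in the sequence."""
        return 1 + sum(1 for a, b in zip(seq, seq[1:]) if a > b)

    n = 10
    example = list(range(n // 2 + 1, n + 1)) + list(range(1, n // 2 + 1))
    print(example)                  # [6, 7, 8, 9, 10, 1, 2, 3, 4, 5]
    print(inversions(example))      # 25, i.e. (n/2)^2 -- quadratic in n
    print(ascending_runs(example))  # 2  -- "nearly sorted" by the runs measure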
_unix.338967
My laptop screen is a 13-inch 2560x1440 panel and I'm connecting to a desktop monitor that is 23-inch 1920x1080.

I've tried using KDE and the scaling factor was nice, but it could not be set for each individual monitor, so it would only look right on one monitor or the other.

Is there a different desktop environment that does this better, or maybe some sort of add-on for KDE?
What is the best desktop environment for using 2 different monitors with drastically different DPI?
window manager;desktop environment;dpi
null
_unix.90616
I have a shell script that generates reports. Using crontab I run the scrip everyday at 1000 hrs.I want to mail that report to my gmail id as an attachment.I have tried using mutt but it doesn't work for me. Please could you guide me through it?After installing sendEmail, when I am attempting to send the mail I am getting the following informationSep 14 15:15:37 debal sendEmail[3671]: DEBUG => Connecting to smtp.gmail.com:587Sep 14 15:15:38 debal sendEmail[3671]: DEBUG => My IP address is: 192.168.2.103Sep 14 15:15:38 debal sendEmail[3671]: SUCCESS => Received: 220 mx.google.com ESMTP uw6sm17314211pbc.8 - gsmtpSep 14 15:15:38 debal sendEmail[3671]: INFO => Sending: EHLO debalSep 14 15:15:38 debal sendEmail[3671]: SUCCESS => Received: 250-mx.google.com at your service, [180.151.208.181], 250-SIZE 35882577, 250-8BITMIME, 250-STARTTLS, 250-ENHANCEDSTATUSCODES, 250 CHUNKINGSep 14 15:15:38 debal sendEmail[3671]: INFO => Sending: STARTTLSSep 14 15:15:38 debal sendEmail[3671]: SUCCESS => Received: 220 2.0.0 Ready to start TLS******************************************************************* Using the default of SSL_verify_mode of SSL_VERIFY_NONE for client is deprecated! Please set SSL_verify_mode to SSL_VERIFY_PEER together with SSL_ca_file|SSL_ca_path for verification. If you really don't want to verify the certificate and keep the connection open to Man-In-The-Middle attacks please set SSL_verify_mode explicitly to SSL_VERIFY_NONE in your application.******************************************************************* at /usr/local/bin/sendEmail line 1906.invalid SSL_version specified at /usr/share/perl5/vendor_perl/IO/Socket/SSL.pm line 414.I did a yum install on perl-Net-SSLeay perl-Net-SMTP-SSL and this is the result. [root@debal ~]# yum install perl-Net-SSLeay perl-Net-SMTP-SSLLoaded plugins: langpacks, refresh-packagekitPackage perl-Net-SSLeay-1.54-1.fc19.x86_64 already installed and latest versionPackage perl-Net-SMTP-SSL-1.01-13.fc19.noarch already installed and latest versionNothing to doThe problem still persists and I am using Fedora 19.
Send attachment to gmail id from shell script
linux;shell script;email
null
_webapps.78739
In my table, column A has row names and other columns have values:

+------+---+
| M100 | D |
| M130 | B |
| M340 |   |
| P304 | F |
| P400 |   |
| P499 | C |
+------+---+

I'd like to join nonempty values, preceding them with row names and separating by commas. So, the desired output is:

M100A,M130B,P304A,P499C

This is somewhat similar to Concatenating only filled cells except I also have row names. If the query function allowed concatenation of string values, a solution could be something like select concat(A,B) where B!='', subsequently joined. At present this is not supported, however. I post my solution as an answer, but I'd like to see other approaches, as the double-filter formula looks a bit repetitive.
Concatenating two columns using only filled cells in one of them
google spreadsheets;formulas
Short answer

A formula that does the required in the question:

=JOIN(",", QUERY( {ArrayFormula({A1:A6}&{B1:B6}), B1:B6}, "Select Col1 Where Col2<>''" ) )

Explanation

The above formula has three nested functions and uses the matrix handling feature of Google Sheets and the concatenate operator.

 - ArrayFormula({A1:A6}&{B1:B6}) : Concatenates the cell values of each row.
 - {ArrayFormula({A1:A6}&{B1:B6}), B1:B6} : Creates a range with two columns. The second column will be used for filtering.
 - QUERY is used to do the filtering.
 - JOIN is used to create the string, separating the elements with a comma.
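For readers who find code easier to follow than spreadsheet formulas, the same concatenate → filter → join logic looks like this in Python (purely an illustration with the question's sample rows typed in by hand; it is not something you can paste into Sheets):

    rows = [("M100", "D"), ("M130", "B"), ("M340", ""),
            ("P304", "F"), ("P400", ""), ("P499", "C")]

    # Concatenate name and value per row, keep only rows with a non-empty value,
    # then join everything with commas -- mirroring ArrayFormula / QUERY / JOIN.
    result = ",".join(name + value for name, value in rows if value != "")
    print(result)   # M100D,M130B,P304F,P499C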
_cs.71738
I have an algorithm that goes something like this $$\begin{align}&\mathbf{ALGORITHM}\operatorname{BruteForceMedian}(A[0..n-1])\\&\mbox{// Returns the median value in a given array $A$ of $n$ numbers. This is}\\&\mbox{// the $k$th element, where $k=|n/2|$, if the array was sorted.}\\&k\leftarrow|n/2|\\&\mathbf{for}\ i\ \mathbf{in}\ 0\ \mathbf{to}\ n-1\ \mathbf{do}\\&\quad\text{numsmaller}\leftarrow0\quad\hbox{// How many elements are smaller than $A[i]$}\\&\quad\text{numeral}\leftarrow0\qquad\hbox{// How many elements are equal to $A[i]$}\\&\quad\mathbf{for}\ j\ \mathbf{in}\ 0\ \mathbf{to}\ n-1\ \mathbf{do}\\&\qquad\mathbf{if}\ A[j]<A[i]\ \mathbf{then}\\&\qquad\quad\text{numsmaller}\leftarrow\text{numsmaller}+1\\&\qquad\mathbf{else}\\&\qquad\quad\mathbf{if}\ A[j]=A[i]\ \mathbf{then}\\&\qquad\qquad\text{numequal}\leftarrow\text{numequal}+1\\&\quad\mathbf{if}\ \text{numsmaller}<k\ \mathbf{and}\ k\le(\text{numsmaller}+\text{numequal})\ \mathbf {then}\\&\qquad\mathbf{return}\ A[i]\end{align}$$I figure as this is a nested loop algorithm its efficiency class would be $O(n^2)$. I am just confused as to what the algorithm's Basic Operation should be. Is it the comparison operator down the bottom that (when correct) returns the median value of the array?
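For reference, here is a direct Python transcription of the pseudocode, which may make the nested-loop structure easier to see; it reads the pseudocode's numeral as the numequal counter used later and |n/2| as floor division (both assumptions about the intended notation):

    def brute_force_median(A):
        """Direct transcription of the pseudocode; O(n^2) element comparisons."""
        n = len(A)
        k = n // 2
        for i in range(n):
            numsmaller = 0              # elements smaller than A[i]
            numequal = 0                # elements equal to A[i]
            for j in range(n):
                if A[j] < A[i]:         # element comparison on every inner pass
                    numsmaller += 1
                elif A[j] == A[i]:
                    numequal += 1
            if numsmaller < k <= numsmaller + numequal:
                return A[i]

    print(brute_force_median([4, 1, 3, 2]))   # 2, with n = 4 and k = n // 2 = 2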
What's the basic operation of a brute force median algorithm?
algorithms;complexity theory
null
_cs.51655
Let $t$ and $s$ be words. We will say that two words are completely different if for all $1\leqslant i\leqslant |t|$ the $i$-th letter in $t$ is different from the $i$-th letter in $s$.

Prove that the language $\mathcal{L}=\{ts|t,s\in \{0,1\}^*,|t|=|s|,t,s \text{ completely different} \}$ is not a context-free language.

Attempt: Applying the pumping lemma for context-free languages:

Suppose that $\mathcal L$ is context-free, so there exists a word $z=uxvyw$ of length at least $n$ such that:

$(1)\,\,\,|xvy|\leqslant n$
$(2)\,\,\,|xy|\geqslant 1$
$(3)\,\,\,ux^ivy^iw \in \mathcal L\,\,\,\,\,\,\,\,\,i\geqslant 0$

Now, let's choose the word $\color{blue}{z=0^n1^n}$; it is obvious that $|z|\geqslant n$, so we can use $(1)-(3)$:

$z=0^{\alpha}0^{\beta}0^{\gamma}0^{\lambda}1^n$

So $\alpha+\beta+\gamma+\lambda=n$. I am stuck here.

EDIT: After using @Renato's answer:

Consider $z=0^p1^p0^p1^p0^p1^p\in \mathcal{L}$. Since $|z|>p$, there are $u,v,w,x,y$ such that $z=uvwxy$, $|vwx|\leqslant p$, $|vx|>0$ and $uv^iwx^iy\in \mathcal{L}$.

$vwx$ must straddle the midpoint of $z$; there are four possibilities:

 - $vwx$ is in the $0^p$ part.
 - $vwx$ is in the $1^p$ part.
 - $vwx$ is in the $1^p0^p$ part.
 - $vwx$ is in the $0^p1^p$ part.

Thus, it is not of the form that we want: for $i=2$, $z\notin \mathcal{L}$.
Showing that $\mathscr{L}$ is not a context-free language
formal languages;context free;pumping lemma
You are wrong because you can pump in the middle of the word. A guy committed the same mistake as you yesterday. Check this answer: https://cs.stackexchange.com/a/51613/31129
_unix.368408
I am trying to compile Ceres solver on a yocto build (for Intel Aero drone). I am currently a newbie at using bitbake and yocto.I earlier tried pulling using the git repository at https://ceres-solver.googlesource.com/ceres-solver/ but could not get it to fetch. So I downloaded the tar.gz file myself, uploaded it on a public domain I own and tried to compile. Here is how my bb file looks as of now.LICENSE = CLOSEDLIC_FILES_CHKSUM = DEPENDS = glog gcc libeigenSRC_URI = http://sidj.in/wp-content/uploads/2017/06/ceres-solver-41455566ac633e55f222bce7c4d2cb4cc33d5c72.tar.gzSRC_URI[md5sum] = 6f24d5639bbe738e6f8ca5d7a129400eSRC_URI[sha256sum] = 005ed7405350767f22164d9fff93b3613207eeef9cbb56afbd02335542360b16PV = 1.0S = ${WORKDIR}/ceres-cmakeinherit cmake pkgconfig# Specify any options you want to pass to cmake using EXTRA_OECMAKE:EXTRA_OECMAKE = Now I see the download getting completed, but upon running the bitbake ceres command, I get an error saying that CMakelists.txt could not be found, while it is clearly in the tar.gz file. I looked at other recipes that seem to work, and feel that the do_unpack step is not the one failing. The output of the console is:Loading cache...done.Loaded 2790 entries from dependency cache.Parsing recipes...done.Parsing of 2225 .bb files complete (2219 cached, 6 parsed). 2794 targets, 109 skipped, 0 masked, 0 errors.NOTE: Resolving any missing task queue dependenciesBuild Configuration:BB_VERSION = 1.30.0BUILD_SYS = x86_64-linuxNATIVELSBSTRING = universalTARGET_SYS = x86_64-poky-linuxMACHINE = intel-aeroDISTRO = poky-aeroDISTRO_VERSION = 1.4.0-devTUNE_FEATURES = m64 corei7TARGET_FPU = meta meta-poky meta-yocto-bsp = HEAD:cca8dd15c8096626052f6d8d25ff1e9a606104a3meta-qt4 = HEAD:fc9b050569e94b5176bed28b69ef28514e4e4553meta-qt5 = HEAD:9aa870eecf6dc7a87678393bd55b97e21033ab48meta-uav = HEAD:0f9395139b6a3c3f0f2c18a6a87f4048d0ca1a4fmeta-ros = HEAD:4258013ec33f5ed2b0c9be12fb5902fe918fe98bmeta-intel-realsense = HEAD:82e9dbfd8783292f42f4a6fcc7bd5b8a6b1c567ameta-intel-aero = HEAD:1d7e341ff35aa903c37491f94677bdacc9427f6emeta-oe meta-python meta-networking = HEAD:55c8a76da5dc099a7bc3838495c672140cedb78emeta-cmu-rasl = master:39e39bf41e915323bf7cb70cad50e67cb8b1b90emeta-dense-visual-tracking = HEAD:cca8dd15c8096626052f6d8d25ff1e9a606104a3meta-intel = HEAD:1f8dd1b00ce9c72d73583c405ec392690d9b08b7NOTE: Preparing RunQueueNOTE: Executing SetScene TasksNOTE: Running setscene task 184 of 202 (/home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_populate_lic_setscene)NOTE: recipe ceres-1.0-r0: task do_populate_lic_setscene: StartedNOTE: recipe ceres-1.0-r0: task do_populate_lic_setscene: SucceededNOTE: Executing RunQueue TasksNOTE: Running task 711 of 722 (ID: 4, /home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_fetch)NOTE: recipe ceres-1.0-r0: task do_fetch: StartedNOTE: recipe ceres-1.0-r0: task do_fetch: SucceededNOTE: Running task 712 of 722 (ID: 0, /home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_unpack)NOTE: recipe ceres-1.0-r0: task do_unpack: StartedNOTE: recipe ceres-1.0-r0: task do_unpack: SucceededNOTE: Running task 713 of 722 (ID: 1, /home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_patch)NOTE: recipe ceres-1.0-r0: task do_patch: StartedNOTE: recipe ceres-1.0-r0: task do_patch: SucceededNOTE: Running task 714 of 722 (ID: 5, 
/home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_generate_toolchain_file)NOTE: recipe ceres-1.0-r0: task do_generate_toolchain_file: StartedNOTE: recipe ceres-1.0-r0: task do_generate_toolchain_file: SucceededNOTE: Running task 716 of 722 (ID: 6, /home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_configure)NOTE: recipe ceres-1.0-r0: task do_configure: StartedLog data follows:| DEBUG: Executing python function sysroot_cleansstate| DEBUG: Python function sysroot_cleansstate finished| DEBUG: Executing shell function do_configure| CMake Error: The source directory /home/thesidjway/rasl_ws/src/intel-aero/poky/build/tmp/work/x86_64-linux/ceres/1.0-r0/ceres-cmake does not appear to contain CMakeLists.txt.| Specify --help for usage, or press the help button on the CMake GUI.| WARNING: exit code 1 from a shell command.| ERROR: Function failed: do_configure (log file is located at /home/thesidjway/rasl_ws/src/intel-aero/poky/build/tmp/work/x86_64-linux/ceres/1.0-r0/temp/log.do_configure.9002)NOTE: recipe ceres-1.0-r0: task do_configure: FailedNOTE: Tasks Summary: Attempted 716 tasks of which 711 didn't need to be rerun and 1 failed.Summary: 1 task failed: /home/thesidjway/rasl_ws/src/intel-aero/poky/meta-dense-visual-tracking/recipes-dry/ceres/ceres.bb, do_configureSummary: There was 1 ERROR message shown, returning a non-zero exit code.Am I doing something stupid or wrong? Any sort of help would be highly appreciated.
ERROR: Bitbake couldn't find CMakelists.txt after extraction (Yocto)
cmake;yocto
null
_cs.68593
I am trying to understand how pure Monte Carlo tree search (https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) handles games where random playout will likely result in a loss easily avoided by other types of agents.For example, imagine a hypothetical game with a single player. At game start, the player can choose between move A or move B.Move A results in a fair coin flip that determines win/loss.Move B results in a second set of 10 possible moves for player A, one of which is a win (move C, let's call it), and the other 9 a loss.Random playout will have move A winning 50% of the time, and Move B winning 10% of the time, even though the ability to always pick the winning move B->C is within the player's power.Can MCTS handle games of this nature well? How does it do so?Obviously this toy example can trivially be solved by other search methods, but the real game I'm interested in has a prohibitive branching factor for things like depth- or breadth-first search, and a proper evaluation function is difficult.
How does MCTS handle games with large numbers of poor moves?
board games;monte carlo
Issues like these are essentially solved in the MCTS itself. The principle of the algorithm is that it tries to make the best pick, rather than guaranteeing it (if done in this randomized fashion rather than a 'pure MCTS' fashion). The trees are built up out of large amounts of collected data, aggregating success rates in the nodes. Nodes that end up being successful more often will have a larger likelihood of being picked. This process of positive reinforcement means that nodes that always lead to a victory will, over the course of many playthroughs, approach a 100% pick rate.

In your example, node A will have a roughly 50/50 success/failure ratio throughout any number of playthroughs (due to the coin flip). Node B will have a 10/90 ratio of success/failure at first but will approach a 100/0 ratio as the number of playthroughs nears infinity (node C will end up always being picked due to the 100% success rate, and this propagates to node B).

The result is that node B will gradually end up being picked more often than node A, reinforcing the effect and causing it to be picked even more often.

The image on the Wikipedia page illustrates it very well; the nodes with a high chance of success gradually make up a larger fraction of the total 'node weight' on a level, thus increasing the odds that they are picked with every playthrough.
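A tiny UCT-style simulation of the toy game makes this reinforcement effect visible. Everything below (the Node class, the exploration constant 1.4, the 0/1 payouts) is an illustrative assumption rather than any particular library's API; with enough iterations, move B should absorb most of the visits and its estimated value should climb well above move A's 0.5:

    import math, random

    # Toy game from the question: move A is a fair coin flip, move B leads to
    # ten follow-up moves of which only one ("C") wins.  Win = 1, loss = 0.
    class Node:
        def __init__(self, children=None, payout=None):
            self.children = children or []
            self.payout = payout          # callable for terminal nodes
            self.visits = 0
            self.wins = 0

    def rollout(node):
        while node.children:              # random playout down to a terminal node
            node = random.choice(node.children)
        return node.payout()

    def uct_child(node, c=1.4):           # UCB1 over already-visited children
        return max(node.children, key=lambda ch:
                   ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

    def iterate(root):
        path, node = [root], root
        while node.children and all(ch.visits for ch in node.children):
            node = uct_child(node)                                  # selection
            path.append(node)
        if node.children:                                           # expansion
            node = next(ch for ch in node.children if ch.visits == 0)
            path.append(node)
        result = rollout(node)                                      # simulation
        for n in path:                                              # backpropagation
            n.visits += 1
            n.wins += result

    move_a = Node(payout=lambda: random.randint(0, 1))
    move_b = Node(children=[Node(payout=lambda: 0) for _ in range(9)]
                           + [Node(payout=lambda: 1)])
    root = Node(children=[move_a, move_b])

    for _ in range(20000):
        iterate(root)

    for name, n in (("A", move_a), ("B", move_b)):
        print(name, "visits:", n.visits, " estimated value:", round(n.wins / n.visits, 2))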
_unix.89282
I'm currently working on a script that updates a file. The file has several versions. The important ones have static links. These get redirected to the actual file, which then gets downloaded using wget. I have figured out that wget has a flag which prints the headers received. There is a list of Locations. The last location mentioned in the header is the actual URL. I need to get that!

My idea was to use wget -S to get the header (I need another flag that prevents the file from downloading and creating). Then use a pipe to parse the lines and catch the last line containing location. I guess this could be realized by using grep -l -i location: | tail -l. Then I should be left with a single line that can be easily parsed.

So the command would look something like this:

# The -??? flag is the one that prevents the file from downloading. (I don't know it)
Location=$(wget -S -??? $URL | grep -l -i location: | tail -l)

My question is: what flag do I have to use to not download the file with wget, or is there another way/command to accomplish this?
How to figure out where a link gets redirected
shell script;wget;curl;http
What you want is a HEAD request, but wget does not support it; curl does. Your distribution most probably has curl in repositories.

curl -s -I $URL -L | awk '/Location: (.*)/ {print $2}' | tail -n 1

$ URL=http://unix.stackexchange.com/questions/89282/
$ curl -s -I $URL | awk '/Location: (.*)/ {print $2}' | tail -n 1
/questions/89282/how-to-figure-out-where-a-link-gets-redirected
$ _

Here:

-s prevents curl from showing a progress bar;
-I makes curl issue a HEAD request;
-L makes curl follow redirects (thanks @brianstone), you may want or not want to include this, depending on which redirect headers you want to track;
the awk script prints the matched expression in parens, just the local part of the URI.
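If you would rather resolve the final URL from a script, the same check can be done with the Python requests library; this is an extra illustration on top of the answer above, assuming requests is installed and that the server responds to HEAD requests normally.

import requests

def final_url(url):
    # Send a HEAD request and let requests follow the redirect chain.
    response = requests.head(url, allow_redirects=True, timeout=10)
    # Each hop that carried a Location header is kept in response.history.
    for hop in response.history:
        print("redirected via:", hop.headers.get("Location"))
    return response.url

print(final_url("http://unix.stackexchange.com/questions/89282/"))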
_unix.382294
I have an awk file that loads other awk files. Rather than calling the loading code every time I run the main function of the file, I'm trying to load everything in a BEGIN statement first, but if I do that the function itself never gets run. Is there any way to have a BEGIN statement, and functions called from outside of the script?

My awk script:

#! /usr/bin/awk -f
function include(includeFile) {
    INCLUDE_FILES[includeFile]
}

function sourceIncludes(){
    if(!l) {
        getline t < "/proc/self/cmdline";
        split(t,T, "\0")
        scriptname=T[3]
        for (i = 1; i < ARGC; i++)
            args=args ARGV[i]
        for(iFile in INCLUDE_FILES )
            inc = inc " -f " iFile
        cmd=sprintf("%s %s -v l=1 -- %s\n",scriptname,inc,args)
        system(cmd);
        exit
    }
}

function pkginfo(pkg){
    { print pkg }
}

BEGIN {
    include("wrap.awk")
    sourceIncludes()
}

wrap.awk contents:

#! /usr/bin/awk -f
function wrap(text, q, y, z){
    while(text) {
        q = match(text, / |$/)
        y += q
        if(y >= 80) {
            z = z RS sprintf("%c", 0x2502) #chr(2502)#\\u2502
            for(i = 0; i < 20; i++)
                z = z FS
            y = q - 1
        }
        else if(z)
            z = z FS
        z = z substr(text, 1, q - 1)
        text = substr(text, q + 1)
    }
    return z
}

This is how I call everything from bash / zsh:

awk -f ~/.ZSH_CUSTOM/awkscripts/pkginfo.awk -e '{ pkginfo("test") }'
awk function not getting called if I have a begin statement in the awk file
awk
null
_unix.222499
I'm trying to replace a character in a file at a random position. My file looks something like:

aab babab abab 

I'm trying to replace a random character for 'c'. So the output might look like:

aab bcbab abab 

I have tried removing all line breaks and saving in a file new_string.txt and then using sed but it isn't working. This is the code I have tried:

rand1=$(shuf -i 0-$tot_len -n 1)
sed s/^\(.\{${rand1}\}\)./\1G/ new_string.txt

I keep getting the error:

sed: -e expression #1, char 25: Invalid content of \{\}
Replacing a character at a random position using sed?
sed
No need for the curly brackets in your variable, and the variable should be quoted as well. Use:

sed "s/^\(.\{$rand1\}\)./\1G/" new_string.txt

UPDATE: as stated below in comments:

The original code is fine, however the integer for $rand1 is too large for sed. I found that the maximum value can be 32767 for GNU sed, i.e. sed still takes 16bit integers only.

You can obtain that limit for the system's regular expression library (though GNU sed generally uses a builtin version) with:

$ getconf RE_DUP_MAX
32767

POSIX requires that limit to be at least _POSIX_RE_DUP_MAX (255), and that's the maximum you can expect portably (some systems like Solaris or OS/X have it as low as that).
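If $rand1 can exceed that limit, one workaround (my own addition, not from the answer above) is to do the single-character replacement outside sed, for example with a short Python snippet; the file name and replacement character below simply mirror the question.

import random
from pathlib import Path

path = Path("new_string.txt")
text = path.read_text()

i = random.randrange(len(text))          # random 0-based position in the file
path.write_text(text[:i] + "G" + text[i + 1:])
print("replaced character at offset", i)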
_codereview.72033
I've been working on a scheduling application and I have the middle tier completed at this point. It's not changed in a few days, so I feel it's ready for review. I have just this one routine that feels dirty. It's definitely verging on arrow code, but without short-circuiting, I'm not sure how it can be improved any farther.My Schedule class wraps a collection of ScheduleEntries and provides methods to add entries, remove entries, and cascade changes (as well as a way to listen for changes to the underlying collection). When CascadeChanges is called, the collection of entries is searched for dirty records. Those records are then cascaded to the corresponding records in future Cycles. A number of conditions must be met in order to ensure changes are being cascaded to the correct future entries. Currently, I have sacrificed an amount of performance for cleaner, more readable code. How can this method be improved?Public Sub CascadeChanges() Dim innerEntries As SmartScheduleEntries Set innerEntries = Me.Entries '??? use group id to cascade changes? Dim entry As SmartScheduleEntry For Each entry In Me.Entries If entry.IsDirty Then Dim innerEntry As SmartScheduleEntry For Each innerEntry In innerEntries If innerEntry.Store = entry.Store Then If (innerEntry.Cycle.Year = entry.Cycle.Year _ And innerEntry.Cycle.Number > entry.Cycle.Number) _ Or innerEntry.Cycle.Year > entry.Cycle.Year Then With innerEntry If .WeekDay = mOldWeekDay And .Week = mOldWeek And .Team = mOldTeam Then .Team = entry.Team .Week = entry.Week .WeekDay = entry.WeekDay End If End With End If End If Next innerEntry End If Next entry RaiseEvent OnCascadeChangesEnd SubHere are my two test cases. (I've been using Rubberduck to unit test all of this.)'@TestMethodPublic Sub CascadeShouldUpdateFuture() On Error GoTo TestFailArrange: Dim mock As SmartSchedule Set mock = Mocks.MockFullSchedule Dim originalDay As VbDayOfWeek originalDay = mock.Entries(1).WeekDay Dim shouldBeChanged As New SmartScheduleEntries Dim entry As SmartScheduleEntry For Each entry In mock.Entries If entry.WeekDay = originalDay And entry.Store = 6003 Then shouldBeChanged.Add entry, entry.ID End If NextAct: mock.Entries(1).WeekDay = vbFriday ' make a change to first record mock.CascadeChangesAssert: For Each entry In shouldBeChanged Assert.AreEqual vbFriday, entry.WeekDay, Cycle: & entry.Cycle.ToString NextTestExit: Exit SubTestFail: If Err.Number <> 0 Then Assert.Fail Test raised an error: # & Err.Number & - & Err.Description Else Resume TestExit End IfEnd Sub'@TestMethodPublic Sub CascadeShouldNotUpdatePast() On Error GoTo TestFailArrange: Dim mock As SmartSchedule Set mock = Mocks.MockFullSchedule Dim originalDay As VbDayOfWeek originalDay = mock.Entries(1).WeekDay Dim shouldNotBeChanged As New SmartScheduleEntries Dim entry As SmartScheduleEntry For Each entry In mock.Entries If entry.WeekDay <> originalDay And entry.Store <> 6003 Then shouldNotBeChanged.Add entry, entry.ID End If NextAct: mock.Entries(1).WeekDay = vbFriday ' make a change to first record mock.CascadeChangesAssert: For Each entry In shouldNotBeChanged Assert.AreNotEqual vbFriday, entry.WeekDay, Cycle: & entry.Cycle.ToString & ; Store: & entry.ToString NextTestExit: Exit SubTestFail: If Err.Number <> 0 Then Assert.Fail Test raised an error: # & Err.Number & - & Err.Description Else Resume TestExit End IfEnd SubFor context, below you will find the relevant classes. 
I'm happy to receive criticism on these, but I'm pretty happy with them as they are.Schedule:Option ExplicitPrivate WithEvents mEntries As SmartScheduleEntriesPublic Event OnAddEntry(ByRef entry As SmartScheduleEntry)Public Event OnRemoveEntry(ByRef entry As SmartScheduleEntry)Public Event OnCascadeChanges()Private mOldWeek As CycleWeekPrivate mOldWeekDay As VbDayOfWeekPrivate mOldTeam As StringPublic Property Get Entries() As SmartScheduleEntries Set Entries = mEntriesEnd PropertyPublic Property Set Entries(ByVal value As SmartScheduleEntries) Set mEntries = valueEnd PropertyPublic Sub AddEntry(ByVal entry As SmartScheduleEntry) mEntries.Add entry, entry.ID RaiseEvent OnAddEntry(entry)End SubPublic Sub RemoveEntry(ByVal entry As SmartScheduleEntry) mEntries.Remove entry RaiseEvent OnRemoveEntry(entry)End SubPublic Sub Validate() 'todo: implement Validate() RaiseNotImplementedError ValidateEnd SubPublic Sub CascadeChanges() Dim innerEntries As SmartScheduleEntries Set innerEntries = Me.Entries '??? use group id to cascade changes? Dim entry As SmartScheduleEntry For Each entry In Me.Entries If entry.IsDirty Then Dim innerEntry As SmartScheduleEntry For Each innerEntry In innerEntries If innerEntry.Store = entry.Store Then If (innerEntry.Cycle.Year = entry.Cycle.Year _ And innerEntry.Cycle.Number > entry.Cycle.Number) _ Or innerEntry.Cycle.Year > entry.Cycle.Year Then With innerEntry If .WeekDay = mOldWeekDay And .Week = mOldWeek And .Team = mOldTeam Then .Team = entry.Team .Week = entry.Week .WeekDay = entry.WeekDay End If End With End If End If Next innerEntry End If Next entry RaiseEvent OnCascadeChangesEnd SubPublic Sub CleanEntries() Dim entry As SmartScheduleEntry For Each entry In mEntries entry.IsDirty = False NextEnd SubPrivate Sub Class_Initialize() Set mEntries = New SmartScheduleEntriesEnd SubPrivate Sub mEntries_Add(ByRef entry As SmartScheduleEntry) ' ReRaises event RaiseEvent OnAddEntry(entry)End SubPrivate Sub mEntries_ItemChanged(ByRef outWeek As CycleWeek, ByRef outWeekDay As VbDayOfWeek, ByRef outTeam As String) mOldWeekDay = outWeekDay mOldWeek = outWeek mOldTeam = outTeamEnd SubPrivate Sub mEntries_Remove(ByRef entry As SmartScheduleEntry) ' ReRaises Event RaiseEvent OnRemoveEntry(entry)End SubPrivate Sub RaiseNotImplementedError(ByVal procName As String) Err.Raise vbObjectError + 1, TypeName(Me) & . 
& procName, Not implemented yet.End SubEntry:Option ExplicitPublic Enum ScheduleEntryError ReadOnlyPropertyError = vbObjectError + 3333End EnumPublic Enum CycleWeek weekOne = 1 WeekTwoEnd EnumPrivate Type TScheduleEntry ID As Long GroupID As Long Cycle As Cycle Team As String Store As Integer WeekDay As VbDayOfWeek Week As CycleWeek IsDirty As BooleanEnd TypePrivate this As TScheduleEntryPublic Event OnWeekDayChange(ByRef outDay As VbDayOfWeek)Public Event OnWeekChange(ByRef outWeek As CycleWeek)Public Event OnTeamChange(ByRef outTeam As String)Public Property Get ID() As Long ID = this.IDEnd PropertyPublic Property Let ID(ByVal value As Long) If this.ID = 0 Then this.ID = value Else RaiseReadOnlyError ID End IfEnd PropertyPublic Property Get GroupID() As Long GroupID = this.GroupIDEnd PropertyPublic Property Let GroupID(ByVal value As Long) If this.GroupID = 0 Then this.GroupID = value Else RaiseReadOnlyError GroupID End IfEnd PropertyPublic Property Get IsDirty() As Boolean IsDirty = this.IsDirtyEnd PropertyPublic Property Let IsDirty(ByVal value As Boolean) this.IsDirty = valueEnd PropertyPublic Property Get Team() As String Team = this.TeamEnd PropertyPublic Property Let Team(ByVal value As String) Dim old As String old = this.Team this.Team = value this.IsDirty = True RaiseEvent OnTeamChange(old)End PropertyPublic Property Get Store() As Integer Store = this.StoreEnd PropertyPublic Property Let Store(ByVal value As Integer) this.Store = value this.IsDirty = TrueEnd PropertyPublic Property Get Cycle() As Cycle Set Cycle = this.CycleEnd PropertyPublic Property Set Cycle(ByVal value As Cycle) Set this.Cycle = value this.IsDirty = TrueEnd PropertyPublic Property Get Week() As CycleWeek Week = this.WeekEnd PropertyPublic Property Let Week(ByVal value As CycleWeek) Dim old As CycleWeek old = this.Week this.Week = value this.IsDirty = True RaiseEvent OnWeekChange(old)End PropertyPublic Property Get WeekDay() As VbDayOfWeek WeekDay = this.WeekDayEnd PropertyPublic Property Let WeekDay(ByVal value As VbDayOfWeek) Dim old As VbDayOfWeek old = this.WeekDay this.WeekDay = value this.IsDirty = True RaiseEvent OnWeekDayChange(old)End Property'read-only propertyPublic Property Get SetDate() As Date Dim result As Date ' vbMonday == 2, and our week starts on Monday. ' If DayOfWeek == vbMonday, it is the startdate, we should add zero days. ' In other words, Add (2 - 2) to startdate if it's Monday. If this.Week = weekOne Then result = DateAdd(d, this.WeekDay - 2, this.Cycle.StartDate) Else result = DateAdd(d, this.WeekDay - 2 + 7, this.Cycle.StartDate) End If SetDate = resultEnd PropertyPublic Function ToString() As String ToString = this.Cycle.ToString & , & this.Team & , & this.Store & , & this.Week & , & this.WeekDay & , & this.IsDirtyEnd FunctionPrivate Sub RaiseReadOnlyError(ByVal procName As String) Err.Raise ScheduleEntryError.ReadOnlyPropertyError, TypeName(Me) & . 
& procName, Property Is ReadOnly.End SubEntries Collection:VERSION 1.0 CLASSBEGIN MultiUse = -1 'TrueENDAttribute VB_Name = SmartScheduleEntriesAttribute VB_GlobalNameSpace = FalseAttribute VB_Creatable = FalseAttribute VB_PredeclaredId = FalseAttribute VB_Exposed = FalseOption ExplicitPrivate mCollection As CollectionPrivate WithEvents mEntryListener As SmartScheduleEntryAttribute mEntryListener.VB_VarHelpID = -1Public Event Added(ByRef entry As SmartScheduleEntry)Public Event Removed(ByRef entry As SmartScheduleEntry)Public Event ItemChanged(ByRef outWeek As CycleWeek, ByRef outWeekDay As VbDayOfWeek, ByRef outTeam As String)Public Function Add(ByRef entry As SmartScheduleEntry, ByVal Key As Long) mCollection.Add entry, CStr(Key) RaiseEvent Added(entry)End FunctionPublic Function Remove(ByVal entry As SmartScheduleEntry) mCollection.Remove IndexOf(entry) RaiseEvent Removed(entry)End FunctionPublic Function Item(ByVal index As Variant) As SmartScheduleEntryAttribute Item.VB_UserMemId = 0 Set mEntryListener = mCollection(index) Set Item = mEntryListenerEnd FunctionPublic Function Count() As Long Count = mCollection.CountEnd Function' returns index of item if found, returns 0 if not foundPublic Function IndexOf(ByVal entry As SmartScheduleEntry) As Long Dim i As Long For i = 1 To mCollection.Count If mCollection(i).ID = entry.ID Then IndexOf = i Exit Function End If NextEnd FunctionPublic Function NewEnum() As IUnknownAttribute NewEnum.VB_UserMemId = -4 Set NewEnum = mCollection.[_NewEnum]End FunctionPrivate Sub Class_Initialize() Set mCollection = New CollectionEnd SubPrivate Sub Class_Terminate() Set mCollection = NothingEnd SubPrivate Sub mEntryListener_OnTeamChange(ByRef outTeam As String) RaiseEvent ItemChanged(mEntryListener.Week, mEntryListener.WeekDay, outTeam)End SubPrivate Sub mEntryListener_OnWeekChange(ByRef outWeek As CycleWeek) RaiseEvent ItemChanged(outWeek, mEntryListener.WeekDay, mEntryListener.Team)End SubPrivate Sub mEntryListener_OnWeekDayChange(ByRef outDay As VbDayOfWeek) RaiseEvent ItemChanged(mEntryListener.Week, outDay, mEntryListener.Team)End SubCycle:Option ExplicitPrivate Type TCycle StartDate As Date EndDate As Date Year As Integer Number As IntegerEnd TypePrivate this As TCyclePublic Property Get Year() As Integer Year = this.YearEnd PropertyPublic Property Let Year(ByVal value As Integer) this.Year = valueEnd PropertyPublic Property Get Number() As Integer Number = this.NumberEnd PropertyPublic Property Let Number(ByVal value As Integer) this.Number = valueEnd PropertyPublic Property Get StartDate() As Date StartDate = DateValue(this.StartDate)End PropertyPublic Property Let StartDate(ByVal value As Date) this.StartDate = valueEnd PropertyPublic Property Get EndDate() As Date EndDate = DateValue(this.EndDate)End PropertyPublic Property Let EndDate(ByVal value As Date) this.EndDate = valueEnd PropertyPublic Function ToString() As String ToString = this.Year & -P & Format(this.Number, 00)End FunctionPublic Sub SetFromString(ByVal value As String) Dim arr As Variant arr = Split(value, -P, 2) Me.Year = arr(0) Me.Number = arr(1)End Sub
Cascading Changes to Future Entries in a Schedule
vba
Arrow AntiPatternYes your arrow code is dirty it can be broken down into other methods. They maybe only used in one method now but as your code expands you will find it convenient that these methods are already defined. I find keeping every method to one or two control structures helps. Please better names should be used than what I used as I don't fully understand your product.Public Sub CascadeChanges() Dim entries As SmartScheduleEntries Set entries = Me.Entries Dim entry As SmartScheduleEntry For Each entry in entries If entry.IsDirty Then CascadeEntry entry, entries Next entry RaiseEvent OnCascadeChangesEnd SubPrivate Sub CascadeEntry(ByVal inputEntry As SmartScheduleEntry, _ ByVal entries As SmartScheduleEntries) Dim entry As SmartScheduleEntry For Each entry In entries If OughtCascade(inputEntry, entry) And IsOutDated(entry) Then DoCascade inputEntry, entry End If Next entry End SubPrivate Function IsOutDated(ByVal entry As SmartScheduleEntry) As Boolean IsOutDated = (entry.WeekDay = mOldWeekDay And _ entry.Week = mOldWeek And _ entry.Team = mOldTeam)End FunctionYou might want to abstract various comparisons of OughtCascade out but I do know which are relevant enough to abstract. All of the comparisons are just simple properties, so the lack of short circuit evaluation has marginal cost. Looking back into your Scheduler class, not all of these methods belong in that class. The following two could be ported to your SmartScheduleEntry class.Private Function OughtCascade(ByVal entryFrom SmartScheduleEntry, _ ByVal entryTo SmartScheduleEntry) As Boolean OughtCascade = (entryFrom.Store = entryTo.Store) And _ ((entryFrom.Cycle.Year = entryTo.Cycle.Year) And _ (entryFrom.Cycle.Number < entryTo.Cycle.Number) Or _ (entryFrom.Cycle.Year < entryTo.Cycle.Year))End FunctionPublic Sub DoCascade(ByRef entryFrom As SmartScheduleEntry, _ ByRef entryTo As SmartScheduleEntry With entryTo .Team = entryFrom.Team .Week = entryFrom.Week .WeekDay = entryFrom.WeekDay End WithEnd SubIsDirty is DirtyLooking further at your code I am becoming suspicious of the IsDirty member. I believe the property should evaluate whether the entry is dirty or not and not read from a member. It appears to be causing boiler plate code in the Let properties of other members.Public Property Let WeekDay(ByVal value As VbDayOfWeek) Dim old As VbDayOfWeek old = this.WeekDay this.WeekDay = value this.IsDirty = True RaiseEvent OnWeekDayChange(old)End PropertyThe issue is Get IsDirty is dependent on the code of Let WeekDay and other properties. Get IsDirty should be independent on any methods it does not specifically reference. Isolating IsDirty may require completely redesigning your structure. Seeing as IsDirty seems to be synonymous with HasMutated, consider making your SmartScheduleEntry class immutable.
_softwareengineering.215826
Recently I started programming in Groovy for an integration testing framework, for a Java project. I use IntelliJ IDEA with the Groovy plug-in, and I am surprised to see a warning for all the methods that are non-static and do not depend on any instance fields. In Java, however, this is not an issue (at least from the IDE's point of view).

Should all methods that do not depend on any instance fields be transformed into static functions? If true, is this specific to Groovy or is it available for OOP in general? And why?
Make methods that do not depend on instance fields, static?
java;object oriented;groovy;static;instance
Note that IDEA has this inspection for Java as well, it is called Method may be 'static',This inspection reports any methods which may safely be made static. A method may be static if it doesn't reference any of its class' non static methods and non static fields and isn't overridden in a sub class...Thing is though that for Java code, this inspection is turned off by default (programmer can turn it on at their discretion). The reason for this is most likely that validity / usefulness of such an inspection could be challenged, based on a couple of quite authoritative sources.To start with, official Java tutorial is rather restrictive on when methods should be static:A common use for static methods is to access static fields.Given above, one could argue that turning on by default mentioned inspection doesn't comply with recommended use of static modifier in Java.Besides, there is a couple other sources that go as far as suggesting a judicious approach on using ideas that lie behind this inspection or even discouraging it.See for example Java World article - Mr. Happy Object teaches static methods:Any method that is independent of instance state is a candidate for being declared as static.Note that I say candidate for being declared as static. Even in the previous example nothing forces you to declare instances() as static. Declaring it as static just makes it more convenient to call since you do not need an instance to call the method. Sometimes you will have methods that don't seem to rely on instance state. You might not want to make these methods static. In fact you'll probably only want to declare them as static if you need to access them without an instance.Moreover, even though you can declare such a method as static, you might not want to because of the inheritance issues that it interjects into your design. Take a look at Effective Object-Oriented Design to see some of the issues that you will face...An article at Google testing blog even goes as far as claiming Static Methods are Death to Testability:Lets do a mental exercise. Suppose your application has nothing but static methods. (Yes, code like that is possible to write, it is called procedural programming.) Now imagine the call graph of that application. If you try to execute a leaf method, you will have no issue setting up its state, and asserting all of the corner cases. The reason is that a leaf method makes no further calls. As you move further away from the leaves and closer to the root main() method it will be harder and harder to set up the state in your test and harder to assert things. Many things will become impossible to assert. Your tests will get progressively larger. Once you reach the main() method you no longer have a unit-test (as your unit is the whole application) you now have a scenario test. Imagine that the application you are trying to test is a word processor. There is not much you can assert from the main method...Sometimes a static methods is a factory for other objects. This further exuberates the testing problem. In tests we rely on the fact that we can wire objects differently replacing important dependencies with mocks. Once a new operator is called we can not override the method with a sub-class. A caller of such a static factory is permanently bound to the concrete classes which the static factory method produced. In other words the damage of the static method is far beyond the static method itself. 
Butting object graph wiring and construction code into static method is extra bad, since object graph wiring is how we isolate things for testing...You see, given above it looks only natural that mentioned inspection is turned off by default for Java.IDE developers would have a really hard time explaining why they think it is so important as to set it on by default, against widely recognized recommendations and best practices.For Groovy, things are quite different. None of arguments listed above apply, particularly the one about testability, as explained eg in Mocking Static Methods in Groovy article at Javalobby:If the Groovy class you're testing makes calls a static method on another Groovy class, then you could use the ExpandoMetaClass which allows you to dynamically add methods, constructors, properties and static methods...This difference is likely why default setting for mentioned inspection is opposite in Groovy. While in Java default on would be source of users confusion, in Groovy, an opposite setting could confuse IDE users.Hey the method doesn't use instance fields, why didn't you warn me about it? That question would be easy to answer for Java (as explained above), but for Groovy, there is just no compelling explanation.
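To see the testability argument in one self-contained example, here is a language-neutral sketch in Python (not tied to IDEA, Java or Groovy; the class and method names are invented for illustration):

class TaxService:
    """Stand-in for a collaborator that would normally hit a real backend."""
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def rate_for(self, region):
        return 0.08          # imagine a slow or network-bound lookup here


class Checkout:
    # Reaches its collaborator through the static accessor: a test cannot
    # swap TaxService out without patching the class itself.
    def total(self, prices, region):
        return sum(prices) * (1 + TaxService.get_instance().rate_for(region))


class CheckoutWithInjection:
    # The collaborator is passed in, so a test can hand in a fake.
    def __init__(self, tax_service):
        self._tax_service = tax_service

    def total(self, prices, region):
        return sum(prices) * (1 + self._tax_service.rate_for(region))


class FakeTaxService:
    def rate_for(self, region):
        return 0.0


assert CheckoutWithInjection(FakeTaxService()).total([10, 20], "NY") == 30
print("injected version tested without touching TaxService")

Dynamic languages such as Groovy (via ExpandoMetaClass, as quoted above) or Python (via monkey-patching) can substitute even the static path at test time, which is the difference the answer draws between the two default settings.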
_unix.206313
I am running Debian 8 as the only operating system on my Toshiba Satellite laptop, though I previously had other operating systems installed.Running the command sudo fdisk -l produces the following output:Disk /dev/sda: 698.7 GiB, 750156374016 bytes, 1465149168 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: dosDisk identifier: 0x33d70f34Device Boot Start End Sectors Size Id Type/dev/sda1 * 2048 1432047615 1432045568 682.9G 83 Linux/dev/sda2 1432049662 1465147391 33097730 15.8G 5 Extended/dev/sda5 1432049664 1465147391 33097728 15.8G 82 Linux swap / SolarisPartition 3 does not start on physical sector boundary.1) Why are sda2 and sda5 treated as different partitions when they almost exactly overlap?2) What happened to sda3 and 4?3) Why doesn't sda1 start at 0?
Odd output from fdisk -l: overlapping and missing partitions?
partition;fdisk
all information about partitions should be saved in some place. this place is a reserved portion of HDD at his beginning also called MBR (Master Boot Record).Depending on type of partition table chosen there can be used different partition layouts and in you case it's Disklabel type: dosdos partition table. Using this type of partition table you can create only four partitions, but what happen if you need five partitions? If you need more that 4 partition you need to create a special partition called extended partition which can then be organized in logical partitions. In this way, using this DOS partition table you can create 3 primary partitions and one extended ... the last one can be subdivided in logical partitions. In this way you can create more that 4 partitions on your HDD.Extended partition can be seen as a closure for logical drivers where all logical partition will reside. In you situation you have only one logical partition (sda5) which will take all space inside extended partition... so they seems to overlapping but the truth is that sda2 includes sda5why there aren't sda3 and sda4 ? ... it may depends on actions done at partition creation time or it make sens to reserve sda1, sda2, sda3 and sda4 for primary partitions and sda5 and so forth for logical drivers (partitions)
_softwareengineering.177311
Is there any specific language which is designed for mechatronics programming? I know about LabView, which is a data flow language, but not sure about its main platform.Could you recommend to me, some languages apart from c/c++? Any language which is used in the topic of mechatronics( robotics, sensor programming, etc ).
What programming languages exist for mechatronics purposes?
programming languages;robotics
Before mechatronics was called that, in the industry they just called it automation. The field is dominated in North America initially by Ladder logic, which everyone on this website (except me) would absolutely hate if they saw it. It has its purposes however.In the last 10 years you see a lot more standardization of automation languages, specifically the IEC-61131-3 standard, which includes the following languages:LD (Ladder Diagram) - a.k.a. ladder logicFBD (Function Block Diagram) - similar to your labviewSFC (Sequential Function Chart) - a fancy state machine or state diagramST (Structured Text) - a normal computer language with a syntax similar to BASIC/PascalIL (Instruction List) - kind of like assembly, but not reallyIn real life (I do this for a living) I see a lot of LD, and then secondarily a bunch of FBD and ST. I myself have used SFC in several projects over the last 5 years, and I like some of its features, but it has some problems (notably gentle recovery from faults is usually harder in SFC than in LD).Note that IEC-61131-3 is only a standard that nails down the data types and the features of the languages, but the syntax of each language typically differs greatly from vendor to vendor. You can't just export code from one vendor's IDE and import it into another. They're not compatible.There was one other proprietary automation language I used, called Steeplechase. It's a flowchart based language, similar to SFC, but simpler. I believe it was purchased by Entivity, which was then purchased by Phoenix Contact, so I think they still sell it. I remember it also had a ladder logic engine as well.Edit:For an example of ladder logic (and a bit of SFC), here's an introductory tutorial I wrote on how to get going with Rockwell Software's RSLogix 5000 ladder logic programming software for Allen-Bradley's popular line of ControlLogix PLC's: RSLogix 5000 Tutorial. It'll give you a good idea of how it works, even though the example is a bit contrived.
_codereview.119004
I am totally new to PHP. I just wrote a PHP script for google oauth to pull the data and insert into my database. I don't know if my code is vulnerable to SQL injection. Should I have used prepared statements and should I rewrite the code?index.php <?php ini_set('display_errors', 1); error_reporting(E_ALL ^ E_NOTICE); ?> <?php include_once(config.php); include_once(includes/functions.php); //print_r($_GET);die; if(isset($_REQUEST['code'])){ $gClient->authenticate(); $_SESSION['token'] = $gClient->getAccessToken(); header('Location: ' . filter_var($redirect_url, FILTER_SANITIZE_URL)); } if (isset($_SESSION['token'])) { $gClient->setAccessToken($_SESSION['token']); } if ($gClient->getAccessToken()) { $userProfile = $google_oauthV2->userinfo->get(); //DB Insert //$gUser->setApprovalPrompt (auto); $gUser = new Users(); // As of PHP 5.3.0 $gUser->checkUser('google',$userProfile['id'],$userProfile['given_name'],$userProfile['family_name'],$userProfile['email'],$userProfile['gender'],$userProfile['locale'],$userProfile['link'],$userProfile['picture'],$username); $_SESSION['google_data'] = $userProfile; // Storing Google User Data in Session header(location: feed.php); $_SESSION['token'] = $gClient->getAccessToken(); } else { $authUrl = $gClient->createAuthUrl(); } $email = $_SESSION['google_data']['email']; $user = strstr($email, '@', true); $username = $user; ?>functions.php <?php ini_set('display_errors', 1);error_reporting(E_ALL ^ E_NOTICE); ?><?php session_start();class Users { public $tableName = 'users'; function __construct(){ //database configuration $dbServer = 'localhost'; //Define database server host $dbUsername = 'root'; //Define database username $dbPassword = ''; //Define database password $dbName = 'livelor'; //Define database name //connect databse $con = mysqli_connect($dbServer,$dbUsername,$dbPassword,$dbName); if(mysqli_connect_errno()){ die(Failed to connect with MySQL: .mysqli_connect_error()); }else{ $this->connect = $con; } } function checkUser($oauth_provider,$oauth_uid,$fname,$lname,$email,$gender,$locale,$link,$picture,$username){ $prevQuery = mysqli_query($this->connect,SELECT * FROM $this->tableName WHERE oauth_provider = '.$oauth_provider.' AND oauth_uid = '.$oauth_uid.') or die(mysqli_error($this->connect)); if(mysqli_num_rows($prevQuery) > 0){ $update = mysqli_query($this->connect,UPDATE $this->tableName SET oauth_provider = '.$oauth_provider.', oauth_uid = '.$oauth_uid.' ,fname = '.$fname.', lname = '.$lname.', email = '.$email.', gender = '.$gender.', locale = '.$locale.', picture = '.$picture.', gpluslink = '.$link.', modified = '.date(Y-m-d H:i:s).' WHERE oauth_provider = '.$oauth_provider.' AND oauth_uid = '.$oauth_uid.') or die(mysqli_error($this->connect)); }else{ $insert = mysqli_query($this->connect,INSERT INTO $this->tableName SET oauth_provider = '.$oauth_provider.', oauth_uid = '.$oauth_uid.', fname = '.$fname.', lname = '.$lname.', email = '.$email.', gender = '.$gender.', locale = '.$locale.', picture = '.$picture.', gpluslink = '.$link.', created = '.date(Y-m-d H:i:s).', modified = '.date(Y-m-d H:i:s).' , username='.$username.' ) or die(mysqli_error($this->connect)); } $query = mysqli_query($this->connect,SELECT * FROM $this->tableName WHERE oauth_provider = '.$oauth_provider.' AND oauth_uid = '.$oauth_uid.') or die(mysqli_error($this->connect)); $result = mysqli_fetch_array($query); return $result; }}
Inserting OAuth data into a database
php;beginner;mysqli;sql injection;oauth
I don't know that my code is vulnerable to SQL injection.Yes, it is. You should never put any variables directly into SQL statements. Even if you think that the variables may possibly be safe, it's just really bad practice, and you will mess it up sooner or later. In your case, an attacker could use the profile fields, which would very likely lead to SQL injection (this depends a bit on the input filter for the profile fields, but I would be surprised if it caught all injections, and you should definitely not rely on it).Should I have to use prepared statements and rewrite the code. Yes. Prepared statements are the only reliable way to prevent SQL injection, and you should never write any SQL statements without prepared statements. It is bad style and very insecure.MiscYou really don't want to display errors in production. You have too much vertical whitespace.Some may disagree, but 2 spaces for indentation makes code harder to read. If you need this because your code is too nested, remove some levels of nesting instead of removing indentation. functions.php is quite a generic name. It also doesn't fit it's content at all. As it contains a class, it should have the same name as the class (possibly with .class added).You should create your database connection only once, and then pass it to the functions needing it, instead of creating a new connection for each object needing it.don't shorten variable names, just write them out.don't die in functions, it makes it hard for the calling code to handle.
_unix.74520
Can I redirect output to a log file and background a process at the same time?In other words, can I do something like this?nohup java -jar myProgram.jar 2>&1 > output.log &Or, is that not a legal command? Or, do I need to manually move it to the background, like so:java -jar myProgram.jar 2>$1 > output.logjobs[CTRL-Z]bg 1
Can I redirect output to a log file and background a process at the same time?
bash;shell;shell script
One problem with your first command is that you redirect stderr to where stdout is (if you changed the $ to a & as suggested in the comment) and then, you redirected stdout to some log file, but that does not pull along the redirected stderr. You must do it in the other order, first send stdout to where you want it to go, and then send stderr to the address stdout is atsome_cmd > some_file 2>&1 &and then you could throw the & on to send it to the background. Jobs can be accessed with the jobs command. jobs will show you the running jobs, and number them. You could then talk about the jobs using a % followed by the number like kill %1 or so. Also, without the & on the end you can suspend the command with Ctrlz, use the bg command to put it in the background and fg to bring it back to the foreground. In combination with the jobs command, this is powerful.to clarify the above part about the order you write the commands. Suppose stderr is address 1002, stdout is address 1001, and the file is 1008. The command reads left to right, so the first thing it sees in yours is 2>&1 which moves stderr to the address 1001, it then sees > file which moves stdout to 1008, but keeps stderr at 1001. It does not pull everything pointing at 1001 and move it to 1008, but simply references stdout and moves it to the file.The other way around, it moves stdout to 1008, and then moves stderr to the point that stdout is pointing to, 1008 as well. This way both can point to the single file.
_webapps.84574
I'm trying to delete a mobile phone associated to my Gmail account. The number on the mobile phone used to be issued to me by my former company. How can I do that?
How can I remove a device from my Gmail account
gmail;account management
null
_softwareengineering.354400
I'm currently looking into setting up a publish subscribe messaging infra structure for our microservice based platform. The new setup is meant to replace our current kafka based one, with something that's a little easier to maintain. One reason that we initially went with kafka, was the ability to have consumer groups. We have multiple instances of each consuming service running and by putting all instances of the same service into the same consumer group, we insured they were spread across partitions, and so, only one of the instances would receive the message(as I understand it). So, basically, if we have service a, with instances a1 and a2, and service b with b1 and b2, all listening on the same topic, is there any way, with something like rabbitmq or redis, to gaurantee that only one instance of service a and one instance of service receives a message?
pubsub with multiple instances of each consumer
pubsub
Yes, you need two Worker Queues A and B https://www.rabbitmq.com/tutorials/tutorial-two-dotnet.htmland one Fan Out routing which sends messages to both queueshttps://www.rabbitmq.com/tutorials/tutorial-three-dotnet.htmlSo in rabbitmq you publish to a fan out Exchange and read from persistent queue A or B both of which get a copy of the message from the Exchange.Now not all messaging systems are equal, and things can get weird if you want to guarentee a message is only read once as most of them have some sort of 'resend if it crashed' mechanic.
_softwareengineering.250635
I have been taught that shifting in binary is much more efficient than multiplying by 2^k. So I wanted to experiment, and I used the following code to test this out:#include <time.h>#include <stdio.h>int main() { clock_t launch = clock(); int test = 0x01; int runs; //simple loop that oscillates between int 1 and int 2 for (runs = 0; runs < 100000000; runs++) { // I first compiled + ran it a few times with this: test *= 2; // then I recompiled + ran it a few times with: test <<= 1; // set back to 1 each time test >>= 1; } clock_t done = clock(); double diff = (done - launch); printf(%f\n,diff);}For both versions, the print out was approximately 440000, give or take 10000. There was no (visually, at least) significant difference between the two versions' outputs. So my question is, is there something wrong with my methodology? Should there even be a visual difference? Does this have something to do with the architecture of my computer, the compiler, or something else?
When I test out the difference in time between shifting and multiplying in C, there is no difference. Why?
c;efficiency;bitwise operators
null
_unix.236293
The command which throws the error:$ find /mydir/tmp/*20151014* -print | xargs grep -l 'filesTransmitted=1'bash: /usr/bin/find: The parameter or environment lists are too long.Is there any optimal command for doing the same?
Find, xargs and grep combination throws an error
grep;aix;xargs
null
_codereview.3158
I've written the following SQL to count the number of times the name 'Cthulhu' turns up for each tag on Stack Overflow (original here):select t.TagName, count (*) 'Tainted'from Posts p, Tags t, PostTags ptwhere p.Body like '%cthulhu%'and (pt.PostId = p.Id and t.Id = pt.TagId)group by t.TagNameorder by Tainted DESC, t.TagName ASCIt works, but I'm not used to cutting SQL manually; I'm more accustomed to using ORMs. I tried using CONTAINS instead of LIKE, but apparently Body isn't set up for full-text search.Could you please provide me some feedback - in particular, are there any best practices I'm missing, and whether there are standard formatting rules for SQL that would make it a bit easier on the eye?
Finding the use of the word 'Cthulhu' in tags on Stack Overflow
sql;stackexchange
select t.TagName, count (*) 'Tainted' from Posts p inner join PostTags pt on (pt.PostId == p.Id) inner join Tags t on (t.Id == pt.TagId) where lower(p.Body) like '%cthulhu%' group by t.TagName order by Tainted desc, t.TagName ascNotice the lower on the body, because like is (should?) be case sensitive.The joins are also easier to read IMHO than the conditions in the where clause.The formatting is based on right-aligning keywords and left-aligning clauses.
_unix.23518
With SLES9 SP4 we managed to set up a XEN PV DomU.We are using SLES10 SP4 as Dom0 and DRBD 8 as disc-backed: a drbd-device corresponds to xvda in the DomU. The DomU uses the xenblk and xennet drivers, so everything seems ok.We applied the last patches (EoL of SLES9 SP4 was on 31th of August). After live-migration to another Dom0 the DomU seems to crash. No reaction to SYSRQ, nothing on the console. DRBD switches from one side (primary) to the other, so the disk-backend does not seem to be a problem here. With CentOS 4/5, SLES 10/11 DomUs we never had an issue with live-migration.Even W2K3 works.After destroy/create of the DomU it comes up without problems, last-log shows a crash entry.Any hints are welcome...
Live-migration of a SLES9 DomU?
xen
It seems that is simply not possible.
_softwareengineering.347597
Let's say we have 5 functional software requirements (R1-R5). Our software design results in 6 modules (M1-M6) or classes to be implemented. I am assuming that the design is more or less optimal, each module having a well-defined purpose and the dependencies between the modules have been minimized.In one case the relation between requirements and modules will almost be one-to-one, as shown on the left side of the figure below. In another case (different project) it might be many-to-many as on the right side:The left version will probably be much easier to handle from a testing and project management point of view.My questions:Are there commonly used names for these different types of relations between requirements and modules?Can we say that in the highly interconnected case, the requirements are flawed and should be restructured somehow, or is it just an indication of a more complex system?
Mapping between functional requirements and software modules
design;project management;requirements
Relationship between requirements and modulesThe relation between requirements and modules is in general not obvious, so that you'll always end up with many-to-many:it is very rare that one module implements one single requirement. Usually a module implements several related requirements.there are often some general requirements (e.g. non-functional security requirements, or user-interface requirements such as for example the availability of a help button) that are supposed to be implemented in many if not all of the modules.Therefore, having a highly interconnected schema doesn't necessarily mean that the requirements are flawed. It can also suggest that you're dealing with a complex system. Or that you have identified many general requirements. How to simplify the picture ?A first step for mastering the complexity is to decide what you want to represent: do you try to represent how the requirements are implemented in the system ? In this case, you can reduce the links you'll show in the diagram, by limiting the links to is implemented in relations. So if a general requirement is implemented in a utility module that provides the functionality to the other module, you would have only one link.do you want to show how modules comply with requirements from the user perspective ? In this case you can't avoid links of type is offered to the user by, which will multiply the links.Another approach is to envisage categories of requirements. For example:general requirements non functional (expected in all modules)use case requirements (transactional requirements, related to specific business functions: these would be expected in one module)utility requirements (includes or extents, used in several different use cases: you'd expect these either in a utility module or in several modules)user-interface requirements (expected in modules interacting with user interface modules)How to use this this diagram to improve your system ?So in general, high interconnections doesn't mean flawed requirements. However, some fundamental theory on systems, like for example Herbert's Simon's Science of the artificial, suggest that in an optimal structured system, one would expect more interrelation within a subsystem than between subsystems. Therefore, if you draw only is implemented in links, and remove all the general requirements (or better, take only the use-case transactional requirements), then you should come more to a situation like the diagram to the left. With such a simplified/filtered diagram, a high interconnection might suggest that either the use-cases requirements are flawed (still too general, or too ambiguous). Or that the grouping into modules was not ideally thought.
_cs.43074
Let's imagine we have a satisfiable formula $F(A_0, A_1,...A_k,S_0,...,S_n)$. The problem to solve is: "Is there an assignment for variables $(S_0,...,S_n)$ which will make F unsatisfiable?"

One way of solving it is to find all solutions for F in terms of the variables $S_0,...,S_n$, and if the count is < $2^n$, the missing solution will be the answer; but the complexity of this algorithm is huge if the number of such assignments is small.

My questions are:

Is there a way to solve the problem with fewer SAT solver calls?

Is it a well-known problem in theory (what should I google to read about it)?
Assignment to make formula unsatisfiable
logic;satisfiability;sat solvers
Your problem is the canonical $\Sigma_2^P$-complete problem:$$\exists \vec{S} \forall \vec{A} \lnot F(\vec{A},\vec{S}).$$As such, it is thought to be more difficult than SAT (which is $\Sigma_1^P$). Solving it with a few SAT-oracle calls is akin to solving SAT itself efficiently (the P vs. NP question), though it could be that $\Sigma_2^P = \Sigma_1^P$ while $P \neq NP$, so in some sense there is more hope for your problem than for SAT itself.
_unix.336051
I have a series of files with names which are of the form

path/A_b#_c#_d#_e#.out

where # stands for float numbers. How can I extract all of these numbers from the filename, most likely with the help of sed?
Extracting float numbers from the filename
sed;filenames
Here's what I would do:

sed -E 's/[A-Za-z_]/ /g;s/. {1,}$//;s/^ {1,}([0-9])/\1/'

Example:

echo A_b0.5_c0.654_d0.157_e1.6.out | sed -E 's/[A-Za-z_]/ /g;s/. {1,}$//;s/^ {1,}([0-9])/\1/'
0.5 0.654 0.157 1.6

Someone with higher sed skills might produce a better one.
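For comparison (my own addition, not part of the answer above), the same extraction is a one-liner with a regular expression in Python, which sidesteps the trailing ".out" handling entirely:

import re

name = "A_b0.5_c0.654_d0.157_e1.6.out"
print(re.findall(r"\d+\.\d+", name))   # ['0.5', '0.654', '0.157', '1.6']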
_webmaster.14504
Is there a way (either client side or server side) to see who is viewing your website's source code that is generated in their browser?
Who's viewing website source code?
web development
No. This is not currently possible.
_cs.55043
I am learning about state machines from the Stanford online cs143 course. Here I have tried to build a nondeterministic finite automaton for the regular expression (1+0)*1, but when I attempt to convert it to a DFA I find that the epsilon closure of the initial state A ends up having two outputs for the value 1, which I imagine would be impossible for a deterministic automaton.

I understand that my NFA differs from the instructor's, but I'm unsure of whether or not it is valid. If it is valid, why am I having difficulty converting it to a DFA?
What happens when -clos(X) contains duplicate elements? Example: (1+0)*1
automata;finite automata
null
_webmaster.92337
We are two people that want to use FB Pixel for conversion measurements. The other person went through the FB setup process and installed the script on our website. But I want to be able to view Pixel statistics in Facebook. Is there a way the Pixel info can be accessed using the ID for our FB page, so that we can both access the statistics? Right now only the other person has access from his own private FB account.
Using a Facebook Page ID to see Facebook Pixel stats
facebook
null
_webmaster.74637
I just noticed that there was a new spec for microformats: microformats2. But I tried testing some HTML (the example from here) in the Rich Snippets testing tool in Google Webmaster Tools and it says "no data detected".

Google's help pages state they still support microformats, although schema.org microdata is preferred.

Has Google or any other search engine confirmed that they will or won't support microformats2?
Do search engines support microformats2?
google search console;microdata;rich snippets;google rich snippets tool
null
_scicomp.7467
I want to determine the big and small frequency separation from timeseries data for the Sun. An excerpt of the data (timeseries and power series) is plotted below.

The power series is calculated in MATLAB like this:

n = length(t_obs);
dt = diff(t_obs(1:2));
y = fft(d_obs, n);
P = y .* conj(y)/n;
f = (0:n/2)/(n*dt);
f = f(1:(n/2));
P = P(1:(n/2));
plot(f, P);

What I can't understand is this:

How can I get the big frequency separation $\Delta \nu$ and the small separation $\delta \nu$ for $l=0$ and $l=1$ from the power series without having to read it manually from the plot (there are many datasets)? If you should want to show an example of implementing this (although an explanation or a hint will suffice), I'm fluent in Python as well as MATLAB.

What is the unit of the y-axis on the power series?

Timeseries plot:

Power series plot:

For reference: $\delta\nu_l = \nu_{n,l}-\nu_{n-1,l+2}$

I hope you can help.
Calculate large and small frequency separation for the Sun
matlab;python;homework;numpy;signal processing
Frequency separations

The easiest way to estimate the large separation (without fitting the individual frequencies) is to take the autocorrelation of the power spectrum and find the maximum. That's a start. To find the small separation, you'd be looking for a second peak. Anyway, have a look at the autocorrelation of the power spectrum and see if you can see the separations.

You could try to find the maxima of the autocorrelation, but I suspect the numerical derivatives will make it hard to find the correct zero-point. And, if you actually fit the frequencies, then you can take the average of the differences directly.

Units of the power spectrum

You can follow the units through the calculation of the objects you're working with. Suppose your time-series has units $x$. The Fourier transform is a sum of products of the time-series with something like $e^{i\omega}$, which is dimensionless. So the elements of the Fourier transform have units $x$ too. Finally, the power series is like the transform squared, so it has units $x^2$.

If you end up with weird units this way, note that in asteroseismology, people usually divide their signal by the mean (or median) and subtract one, so that they have a dimensionless number. They usually express the power series in units of parts per million (ppm).
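Since you mentioned being fluent in Python, here is a minimal sketch of the autocorrelation approach with NumPy. It is only an illustration of the idea above, not a tested pipeline: f and P are assumed to be the frequency and power arrays from your FFT step, with uniform spacing, and the lower cut-off min_lag is an arbitrary choice to skip the zero-lag peak.

import numpy as np

def estimate_large_separation(f, P, min_lag=10e-6):
    """Return the lag (in the units of f) of the strongest autocorrelation peak."""
    P = P - P.mean()                                   # remove the offset before correlating
    ac = np.correlate(P, P, mode="full")[len(P) - 1:]  # one-sided autocorrelation
    ac /= ac[0]                                        # normalise so ac[0] == 1
    df = f[1] - f[0]
    lags = np.arange(ac.size) * df
    start = max(1, int(np.ceil(min_lag / df)))         # ignore lags below min_lag
    peak = start + np.argmax(ac[start:])
    return lags[peak], lags, ac

# usage: dnu, lags, ac = estimate_large_separation(f, P)

The small separation would show up as a secondary peak in ac, so in practice you may need to inspect the peak structure around the main one rather than take a single argmax.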
_cstheory.9333
UPDATE

Now community wiki. My new version of the question is: let's make a big list of classes of polygons. We may be able to produce the most comprehensive list on the web, or in the literature. If there is community interest, after Jan 1st I will organize the information from all the answers into a post on the community blog.

ORIGINAL QUESTION BELOW

Could you recommend a source, either in print or online, for a menagerie of polygons? An extensive/exhaustive list of classes of polygon? The Wikipedia article on polygons provides a partial classification, but I would like something more complete. Also, I am not concerned about how to name a 90-sided figure. Rather, I am trying to find a list of classes that contain infinitely many figures each (examples: star polygon, isothetic polygon). For example: actual words for "looks like a spider", "would look like a football if it were smoothed out", and so on.

I've already checked The Geometry Junkyard and several pages of a Google search, but perhaps I didn't know what to look for.

Thanks very much.
Menagerie of polygons
cg.comp geom;big list;polygon
null
_softwareengineering.322841
I am working on a project where you have controllers that expose an API and services that implement some logic. In this project, each service* checks for the user's permissions to use the service. When asked, I am told it makes life easier for the service's consumers because they don't have to bother themselves with permissions. The code in this design looks like so:

@Post
@Path('/addItem')
public Response addItem(@RequestBody Item item){
    itemHandler.add(item); // this call actually does the permissions verification
}

However, I am familiar with another design where controllers are responsible for permission checking before activating the service. This design keeps the services simple and unaware of any permissions mechanism, and the liability for security lies on the consumer.

@Post
@Path('/addItem')
public Response addItem(@RequestBody Item item, Request request){
    if (hasPermissions(request)){
        itemHandler.add(item); // is not concerned/aware of permissions.
    }
}

I guess there are pros and cons for each. I am trying to find more material on the matter to decide if the current status in the project requires change or not.

* service - meaning an object we are invoking.
Who should be reliable for permissions check in a web app?
permissions
null
_cstheory.17195
Let us define a class of functions over a set of $n$ bits. Fix two distributions $p, q$ that are reasonably different from each other (if you like, their variational distance is at least $\epsilon$, or something similar). Now each function $f$ in this class is defined by a collection of $k$ indices $S$, and is evaluated as follows: If the parity of the selected bits is 0, return a random sample from $p$, else return a random sample from $q$.Problem: Suppose I'm given oracle access to some $f$ from this class, and while I know $\epsilon$ (or some other measure of distance), I don't know $p$ and $q$. Are there any bounds on the number of calls I need to make to PAC-learn $f$ ? Presumably my answer will be in terms of $n, k$ and $\epsilon$. Note: I didn't specify the output domain. Again, I'm flexible, but for now let's say that $p$ and $q$ are defined over a finite domain $[1..M]$. In general, I'd also be interested in the case when they are defined over ${\mathbb R}$ (for example, if they're Gaussians)
A parity learning question
lg.learning
null
_vi.12255
I'm just new to vim script, and I want to make the gvim width auto adjust when I open/close a vimfiler or tagbar window.

Here's my not working vim script:

let g:window_raw_width = 90
let g:window_width = g:window_raw_width

function adjust_tagbar_width()
  if (g:window_width == g:window_raw_width)
    g:window_width = g:window_raw_width + 40
  else
    g:window_width = g:window_raw_width
  endif
  set lines=g:window_width
endfunction

nmap <F9> :TagbarToggle<cr>
autocmd VimEnter,BufNewFile,BufRead * nested :TagbarToggle<cr>

Please help me.
how to change gvim width when open/close vimfiler or tagbar?
plugin tagbar
null
_scicomp.20983
I am looking at approximating my function $f(x)$ using a Chebyshev and Legendre series and I ran into this question. Is interpolation using $n+1$ Chebyshev nodes the same as representing the function using the first $n+1$ Chebyshev coefficients in its Chebyshev series, i.e., is $$f(x) = \sum_{k=0}^n f(x_k) l_k(x)$$where $x_k$ are the Chebyshev nodes and $l_k(x)$ is the appropriate Lagrange polynomial associated with the $k^{th}$ node, the same as $$f(x) = \sum_{k=0}^n a_k T_k(x)$$where $T_k(x)$ is the $k^{th}$ Chebyshev polynomial and $a_k = \dfrac{\langle f, T_k\rangle}{\langle T_k, T_k\rangle}$?Here the inner product is the inner product using the appropriate weight function for the polynomials, i.e., $$\langle f, T_k \rangle = \int_{-1}^1 \dfrac{f(y)T_k(y)}{\sqrt{1-y^2}}dy$$Is the same true for Legendre interpolation and Legendre expansion as well, where the inner product is $$\langle f, P_k \rangle = \int_{-1}^1 f(y)P_k(y)dy$$? If so, could someone direct me to the proof of this?
Chebyshev and Legendre expansions
numerical analysis;polynomials
null
_unix.71804
I have a fleet of Ubuntu clients running msmtp-mta along with heirloom-mailx. I'd like to have the same /etc/msmtprc on all machines.

At present, when someone uses mailx or sendmail, the mail will be from [email protected] and the only clue to the client machine is the IP address in the mail header.

Is there a way to add the client host name to every mail sent? Like:

Prepend it to the subject, or
auto-attach a file to any mail, or
change mydomain.com to client.mydomain.com (remember, one file to rule them all, and changes in host name should be met automatically)

Note: I cannot configure the actual SMTP-Server, just the msmtp client.
msmtp-mta: Add $HOSTNAME to every mail
sendmail;mailx
null
_unix.62559
So... we know that we can test whether a port is open on the firewall with: telnet SERVERIP PORT. But AFAIK there are services that can't be tested with telnet, because e.g. telnet doesn't know the protocol that the service is using, and telnet will report that the port is closed while in reality the service is up & running.

Q: first: was I correct about telnet? second: what should I use for testing that a port is opened on a server (so it's not blocked by a firewall) - are there any unix tools for this?
What to use for firewall testing (port opened or not)
networking;firewall;ip;telnet
On the first question, maybe the service does not wait for interactive input. There could be other explanations, too.

On the second, nmap can be used to test the firewall. There are many options.

Scan the first 1,000 ports (default):

nmap -v -A -PN hostname.domainname.com

Or perhaps a specific range:

nmap -v -A -p 10000-11000 -PN hostname.domainname.com
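If you only need a quick yes/no for a single TCP port from a script, a plain socket connect also works; this is an extra illustration beyond the answer above, in Python, and the host/port below are placeholders.

import socket

def tcp_port_open(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_port_open("hostname.domainname.com", 443))

Note that a firewalled (filtered) port simply times out here and reports False, the same as a closed port, so this cannot distinguish closed from filtered the way nmap can.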
_unix.364064
So long story short, I tried to move from Ubuntu to Arch and broke my computer. It's stuck in grub with an error saying:

enter link description here

So unless someone has any idea how to solve that, I've come to terms with the fact that I'm willing to wipe my drive and start fresh. How do I go about doing that?

I've tried booting Arch, Ubuntu, and boot-repair from a USB but I can't get Grub to recognize any USB.

Thanks!
Starting Fresh because of Grub Issue
ubuntu;arch linux;grub2;dual boot;boot loader
null