Dataset schema: id (string, 5-27 chars), question (string, 19-69.9k chars), title (string, 1-150 chars), tags (string, 1-118 chars), accepted_answer (string, 4-29.9k chars).
_softwareengineering.164965
MSSQL as DB, Zend as PHP framework. I am using this way to call a stored procedure with input parameters and to get output parameters. It seems I am writing SQL code in PHP. Any other good approaches?

$str1 = "DECLARE @Msgvar varchar(100); DECLARE @last_id int;
         exec DispatchProduct_m_Ins $DispatchChallanId,'$FRUNo',$QTY,$Rate,$Amount,".$this->cmpId.",".$this->aspId.",".$this->usrId.",
         @Msg = @Msgvar OUTPUT, @LAST_ID = @last_id OUTPUT;
         SELECT @Msgvar AS N'@Msg', @last_id AS '@LAST_ID';";

// Calling SP
$stmt = $db->prepare($str1);
$stmt->execute();
$rsDispProd = $stmt->fetchAll();
$DispatchProductId = $rsDispProd[0]["@LAST_ID"]; // get last inserted ID as output parameter
Calling MSSQL stored procedure from Zend Controller? Any other approaches?
sql;zend framework;stored procedures;sql server
null
_webmaster.68931
Let's say you have a website that is akin to the YellowPages. It is a listing of lots of other companies. Should you place microdata about each company on each company listing page? Or should you just put microdata about the Yellow Pages?

For example: http://www.lots-o-companies.com/supplier/joes-florists

Do I put all the microdata on the page to be about Joe's Florists, or the parent website, Lots O' Companies? I'm not quite sure how Google wants this data: on a site level, or a page content level?
Adding Microdata to a website of company listings
seo;microdata
null
_cs.60161
$\text{ALL}$ is literally the class of ALL languages. Are there $\text{ALL}$-complete problems? That is, are there problems for which a solution would allow one to solve any problem whatsoever? Such problems could reasonably be considered the hardest problems, bar none. One such problem seems to be: given a problem and an input size, output its truth table.
Problem complete for the class of ALL languages
undecidability
You haven't specified your notion of reduction, so I will assume that you choose some countable class of functions $\cal F$ which can be used for reductions (any subset of the computable functions would work here). Let $\cal L$ be any class of languages over some fixed alphabet $\Sigma$, say $\Sigma = \{0,1\}$. A language $K$ is hard for $\cal L$ (with respect to $\cal F$) if for every $L \in \cal L$ there exists $f \in \cal F$ such that $x \in L$ iff $f(x) \in K$. If also $K \in \cal L$ then we say that $K$ is complete for $\cal L$.

I will now show that no language is hard for $\mathsf{ALL}$. Suppose that $K$ were $\mathsf{ALL}$-hard. Let $f_x : x \in \Sigma^*$ be an enumeration of the functions in $\cal F$ (such an enumeration exists since both $\Sigma^*$ and $\cal F$ are countable). Define a language $L$ by
$$L = \{x : f_x(x) \notin K \}.$$
Since $L \in \mathsf{ALL}$, there exists a function $f \in \cal F$ such that for all $x \in \Sigma^*$, $x \in L$ iff $f(x) \in K$. Since $f \in \cal F$, there exists $x \in \Sigma^*$ such that $f = f_x$. For this particular $x$, $x \in L$ iff $f_x(x) \in K$. However, by definition $x \in L$ iff $f_x(x) \notin K$. This is a contradiction, so no $\mathsf{ALL}$-hard (and hence no $\mathsf{ALL}$-complete) language exists.
_datascience.11911
For simplicity, suppose we're looking at Yelp reviews of restaurants, and are trying to classify the restaurant by cuisine type (e.g. Italian, Japanese, etc.). Let's also assume our data already has a cuisine type column that we can use for accuracy checking.

One way of approaching this problem would be a supervised Latent Dirichlet Allocation approach, where the restaurant type is the response. In this way, topics are trained to then be used in a multinomial logistic regression to guess the cuisine type.

Is the above approach far superior to instead running (unsupervised) LDA with plenty of topics, and then using something like XGBoost to predict cuisine type? In other words, we run unsupervised LDA, then project all reviews to vector distances to each topic, and then use these feature vectors to predict cuisine type?

I understand that sLDA will try to pick topics that better characterize each category type, but is the former really superior to the latter approach? The reason I ask is that I don't know of any fast sLDA implementations out there.
sLDA vs. LDA+Classifier
nlp;supervised learning;unsupervised learning
null
_codereview.99098
I'm currently working on this for my own practice. I get 4 digits and an operation. The input is x1 y1 op x2 y2 and the fractions are x1/y1 and x2/y2. If I get the input 1 3 + 1 2 then it's 1/3 + 1/2 and the answer should be given as a minimal fraction, so it's 5/6. I pass the test cases I get and I can't figure out what I'm doing wrong.

To summarize what I do:

- Read the input and check whether the operation is +, -, / or *. I generate a prime array to find the biggest common divisor.
- Send the input to a function depending on which operation it is.
- Compute the result from the given input with simple math.
- Then I find the biggest common divisor and divide both numerator and denominator by it.
- After that I print out the result.

Here is the main function and how I handle it if the operation is *. I handle the other operations the same way but with other math.

#include <iomanip>
#include <iostream>
#include <vector>
#include <string>
#include <cctype>
#include <iterator>
#include <array>
#include <stdio.h>
#include <string.h>
#include <cstddef>
#include <string>
#include <sstream>
#include <math.h>
#include <cmath>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <algorithm>

using namespace std;

long long nwd(long long a, long long b){
    long long c;
    while(b != 0){
        c = a % b;
        a = b;
        b = c;
    }
    return a;
}

void add(long long x1, long long y1, long long x2, long long y2){
    long long bottom = (y1) * (y2);
    long long top = ((x1) * (y2)) + ((x2) * (y1));
    //cout << bottom << " " << top << endl;
    long long frac;
    if(bottom != 0 || top != 0){
        frac = nwd(top, bottom);
    }else{
        frac = 1;
    }
    string sign = "";
    if(top * bottom < 0){
        sign = "-";
    }else{
        sign = "";
    }
    printf("%s%lld / %lld\n", sign.c_str(), abs(top/frac), abs(bottom/frac));
}

void sub(long long x1, long long y1, long long x2, long long y2){
    long long bottom = (y1) * (y2);
    long long top = ((x1) * (y2)) - ((x2) * (y1));
    long long frac;
    if(bottom != 0 || top != 0){
        frac = nwd(top, bottom);
    }else{
        frac = 1;
    }
    string sign = "";
    if(top * bottom < 0){
        sign = "-";
    }else{
        sign = "";
    }
    printf("%s%lld / %lld\n", sign.c_str(), abs(top/frac), abs(bottom/frac));
}

void divi(long long x1, long long y1, long long x2, long long y2){
    long long top = (x1) * (y2);
    long long bottom = (x2) * (y1);
    long long frac;
    if(bottom != 0 || top != 0){
        frac = nwd(top, bottom);
    }else{
        frac = 1;
    }
    string sign = "";
    if(top * bottom < 0){
        sign = "-";
    }else{
        sign = "";
    }
    printf("%s%lld / %lld\n", sign.c_str(), abs(top/frac), abs(bottom/frac));
}

void mult(long long x1, long long y1, long long x2, long long y2){
    long long top = (x1) * (x2);
    long long bottom = (y2) * (y1);
    long long frac;
    if(bottom != 0 || top != 0){
        frac = nwd(top, bottom);
    }else{
        frac = 1;
    }
    string sign = "";
    if(top * bottom < 0){
        sign = "-";
    }else{
        sign = "";
    }
    printf("%s%lld / %lld\n", sign.c_str(), abs(top/frac), abs(bottom/frac));
}

int main(){
    int numOp;
    scanf("%d", &numOp);
    while(numOp != 0){
        long long x1, x2, y1, y2;
        char op[2];
        scanf("%lld %lld %s %lld %lld", &x1, &y1, op, &x2, &y2);
        if(op[0] == '+'){
            add(x1, y1, x2, y2);
        }
        else if(op[0] == '-'){
            sub(x1, y1, x2, y2);
        }
        else if(op[0] == '/'){
            divi(x1, y1, x2, y2);
        }
        else{
            mult(x1, y1, x2, y2);
        }
        numOp--;
    }
    return 0;
}

Here is my code with the given test case and I get the correct result. I need some tips with either different test cases or any suggestions.
Rational arithmetic from given input
c++;rational numbers
I see a number of things that may help you improve your code.

Use only required #includes

The code has a great number of #includes that are not needed. Some files are even listed twice. This clutters the code and makes it more difficult to read and understand. Only include files that are actually needed.

Don't Repeat Yourself (DRY)

The mathematical operations all include large portions of repeated code. Instead of repeating code, it's generally better to make common code into a function.

Don't abuse using namespace std

Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid.

Use objects

Since you're writing in C++, what would make more sense to me would be to create a rational fraction object. Then each mathematical operation would be very naturally expressed as an operator on the object.

Use switch instead of nested if

The more appropriate control flow for your if statement in main is a switch statement. This makes it somewhat clearer to read and understand and often provides a small improvement in performance, especially if there are many cases.

Prefer iostream to stdio.h

Using iostream style I/O makes the code easier to adapt to change. For example, using scanf, the code has to have the appropriate matching format string for each argument, but using the >> operator, there is no such requirement and the compiler adapts automatically to whatever type is being read.

Omit return 0

When a C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no reason to put return 0; explicitly at the end of main.

Putting it all together

Here's an alternative implementation which uses all of these suggestions. Note that a rational fraction is only reduced when it is being printed. This saves some time over always maintaining that state.

#include <iostream>

class RationalFraction
{
public:
    RationalFraction(long long a=0, long long b=1) : num{a}, den{b} { }

    // unary negation
    RationalFraction& operator-() {
        num = -num;
        return *this;
    }

    // add the passed fraction to this one
    // The two-operand addition is built from this, and a
    // similar pattern is used for each of the other operators.
    RationalFraction& operator+=(const RationalFraction &rh) {
        num = num * rh.den + rh.num * den;
        den *= rh.den;
        return *this;
    }

    RationalFraction& operator-=(const RationalFraction &rh) {
        num = num * rh.den - rh.num * den;
        den *= rh.den;
        return *this;
    }

    RationalFraction& operator*=(const RationalFraction &rh) {
        num *= rh.num;
        den *= rh.den;
        return *this;
    }

    RationalFraction& operator/=(const RationalFraction &rh) {
        num *= rh.den;
        den *= rh.num;
        return *this;
    }

    friend std::ostream &operator<<(std::ostream& out, const RationalFraction& f) {
        RationalFraction fr = f;
        fr.reduce();
        return out << fr.num << "/" << fr.den;
    }

    friend std::istream &operator>>(std::istream& in, RationalFraction& f) {
        return in >> f.num >> f.den;
    }

private:
    void reduce() {
        if (num == 0) {
            den = 1;
            return;
        }
        long long mygcd = gcd(num, den);
        num /= mygcd;
        den /= mygcd;
        if (den < 0) {
            den = -den;
            num = -num;
        }
    }

    long long gcd(long long a, long long b) const {
        long long c;
        while (b != 0) {
            c = a % b;
            a = b;
            b = c;
        }
        return a;
    }

    // data members
    long long num, den;
};

RationalFraction operator+(RationalFraction lh, const RationalFraction &rh)
{
    lh += rh;
    return lh;
}

RationalFraction operator-(RationalFraction lh, const RationalFraction &rh)
{
    lh -= rh;
    return lh;
}

RationalFraction operator*(RationalFraction lh, const RationalFraction &rh)
{
    lh *= rh;
    return lh;
}

RationalFraction operator/(RationalFraction lh, const RationalFraction &rh)
{
    lh /= rh;
    return lh;
}

int main()
{
    int numOp;
    for (std::cin >> numOp; numOp; --numOp) {
        RationalFraction lhs, rhs;
        char op[2];
        std::cin >> lhs >> op >> rhs;
        switch (op[0]) {
        case '+':
            std::cout << lhs+rhs << '\n';
            break;
        case '-':
            std::cout << lhs-rhs << '\n';
            break;
        case '/':
            std::cout << lhs/rhs << '\n';
            break;
        default:
            std::cout << lhs*rhs << '\n';
        }
    }
}

A bit more detail

To understand this version of the code, it's useful to look at a few individual class member functions. First is the constructor:

    RationalFraction(long long a=0, long long b=1) : num{a}, den{b} { }

This allows for construction in a few different ways:

RationalFraction half{1,2};  // 1/2 = one half
RationalFraction three{3};   // 3/1 = 3
RationalFraction zero;       // 0/1 = 0

The basic four mathematical functions are all built in the same way. First, there is a member function such as +=:

    RationalFraction& operator+=(const RationalFraction &rh) {
        num = num * rh.den + rh.num * den;
        den *= rh.den;
        return *this;
    }

This version is a member function and can be used like this:

RationalFraction a{1,3};
RationalFraction b{1,2};
a += b;  // now a = 5/6 and b is unchanged

It's a common idiom to define those member functions first and then construct the two-operand versions:

RationalFraction operator+(RationalFraction lh, const RationalFraction &rh)
{
    lh += rh;
    return lh;
}

This is a bit subtle. Note that lh is passed by value, so that means a local copy is constructed using the compiler-generated default copy. By contrast rh is a const reference, so no additional copy is created. Now all we need to do is apply the member function version and return lh which, remember, is a copy of whatever was originally passed in for that parameter. It is a standalone function and not a member function so that either of these works:

RationalFraction f={-1,2};
std::cout << f+3 << std::endl;  // (-1/2) + 3 = 5/2
std::cout << 3+f << std::endl;  // 3 + (-1/2) = 5/2

In the expression 3+f first a temporary rational fraction is created from 3. Then that rational fraction is added to f and the result displayed using the << operator. That is, it's approximately equivalent to this:

RationalFraction temp1{3};           // = 3/1
RationalFraction temp2 = temp1 + f;  // do addition using non-member function
std::cout << temp2 << std::endl;     // show the result = 5/2

The only other slightly tricky bit of code is in the private member function reduce. Here I've added comments to make it easier to understand:

void reduce() {
    // if it's of the form 0/x, turn it into canonical 0/1
    if (num == 0) {
        den = 1;
        return;
    }
    // reduce by dividing by the greatest common divisor of
    // numerator and denominator
    long long mygcd = gcd(num, den);
    num /= mygcd;
    den /= mygcd;
    // Only allow the numerator to be negative to simplify printing.
    // Changing sign of both numerator and denominator has no
    // mathematical effect on the value.
    if (den < 0) {
        den = -den;
        num = -num;
    }
}
_codereview.115047
My code handles concurrent requests by waiting for the result of an already running operation. Requests for data may come in simultaneously with the same or different credentials (including empty credentials). For each unique set of credentials there can be at most one GetCurrentInternal call in progress, with the result from that one call returned to all queued waiters when it is ready.

private static readonly Credentials EmptyCredentials = new Credentials
{
    SqlCredentials = null,
    ExchangeCredentials = null,
};

public AgentMetadata GetCurrent(Credentials credentials)
{
    var agentMetadata = new Lazy<AgentMetadata>(() => GetCurrentInternal(credentials));
    var lazyMetadata = (Lazy<AgentMetadata>)MemoryCache.Default.AddOrGetExisting(
        (credentials ?? EmptyMetadataCredentials).ToString(),
        agentMetadata,
        DateTime.Now.AddMinutes(5));
    try
    {
        return (lazyMetadata ?? agentMetadata).Value;
    }
    catch (InvalidOperationException ex)
    {
        _logger.ErrorException(ex, "An error occurs during getting full metadata");
        throw;
    }
}

Is it thread safe? Do I have any bugs? How do I create a good key for AddOrGetExisting?

Credentials is a DataContract that contains other DataContracts like SqlCredentials and ExchangeCredentials, and they contain their own strings like user_name and password.
Caching data by using the result of first running operation
c#;lazy
Yes, it seems thread safe. I ran a test to prove it, which I give the details of at the bottom, but first some code review comments.

The code is missing the Credentials class, which I think is also important for the review. The cache items are added with keys credentials.ToString(), and it is important for this method to return the same value for Credentials objects having the same credential values, and distinct values for Credentials instances with different credential values. Is it?

Naming conventions. Class names should be PascalCase and the class AgentMetadata violates this rule. Should be AgentMetaData.

To achieve better encapsulation and separation of concerns (the creation of an empty credential object should be a concern for the Credentials class, which has control over its own internals), instead of having a static readonly Credentials EmptyCredentials field in this class, it is better to have a public static readonly Credentials Empty field (or property getter) in the Credentials class.

Instead of this:

Credentials empty = (credentials ?? EmptyMetadataCredentials);

Isn't it better to have this?

Credentials empty = (credentials ?? Credentials.Empty);

The variable names agentMetadata and lazyMetadata are not descriptive enough. This is somewhat confusing:

return (lazyMetadata ?? agentMetadata).Value;

Having names like the following might eliminate that confusion:

var newLazyInstance = new Lazy<AgentMetadata>(() => GetCurrentInternal(credentials));
var cachedLazyInstance = (Lazy<AgentMetadata>)MemoryCache.Default.AddOrGetExisting(
    (credentials ?? EmptyMetadataCredentials).ToString(),
    newLazyInstance,
    DateTime.Now.AddMinutes(5));
return (cachedLazyInstance ?? newLazyInstance).Value;

And here are the results of the test, which show the GetCurrentInternal method is invoked a total of 5 times for all the 100 threads. The test created 100 threads which access the GetCurrent method randomly within 500 ms. There is another thread for removing the item from the cache every 100 ms continuously (this is because the cache policy doesn't seem to effectively remove the item at the exact point of time of expiration). The GetCurrentInternal method lasts for a random time between 0 and 500 ms.

With the following results, it can be seen that the GetCurrentInternal method is invoked a total of 5 times (which is the number of times the cache did not contain a Lazy instance for the given credentials: the first one because the cache is empty, and 4 more times because it is removed from the cache). I can't help thinking about why the AddOrGetExisting of MemoryCache returns null when the item is added to the cache. I would have implemented this differently (returning the added instance, as ConcurrentDictionary does).

T:47 M:GetCurrent E:AddOrGetExisting.Added? R:False 145,68T:13 M:GetCurrent E:AddOrGetExisting.Added? R:False 146,44T:20 M:GetCurrent E:AddOrGetExisting.Added? R:False 146,54T:26 M:GetCurrent E:AddOrGetExisting.Added? R:False 147,45T:71 M:GetCurrent E:AddOrGetExisting.Added? R:False 148,46T:62 M:GetCurrent E:AddOrGetExisting.Added? R:False 148,55T:53 M:GetCurrent E:AddOrGetExisting.Added? R:False 149,37T:5 M:GetCurrent E:AddOrGetExisting.Added? R:False 146,68T:22 M:GetCurrent E:AddOrGetExisting.Added? R:True 150,66T:31 M:GetCurrent E:AddOrGetExisting.Added? R:False 150,18T:63 M:GetCurrent E:AddOrGetExisting.Added? R:False 154,58T:50 M:GetCurrent E:AddOrGetExisting.Added? R:False 157,15T:27 M:GetCurrent E:AddOrGetExisting.Added? R:False 182,13T:67 M:GetCurrent E:AddOrGetExisting.Added? 
R:False 182,33T:43 M:GetCurrent E:AddOrGetExisting.Added? R:False 182,18T:32 M:GetCurrent E:AddOrGetExisting.Added? R:False 182,13T:3 M:GetCurrent E:AddOrGetExisting.Added? R:False 187,24T:55 M:GetCurrent E:AddOrGetExisting.Added? R:False 194,8T:29 M:GetCurrent E:AddOrGetExisting.Added? R:False 197,13T:28 M:GetCurrent E:AddOrGetExisting.Added? R:False 214,17T:4 M:GetCurrent E:AddOrGetExisting.Added? R:False 221,11T:84 M:GetCurrent E:AddOrGetExisting.Added? R:False 228,12T:78 M:GetCurrent E:AddOrGetExisting.Added? R:False 232,09T:19 M:GetCurrent E:AddOrGetExisting.Added? R:False 235,1T:66 M:GetCurrent E:AddOrGetExisting.Added? R:False 238,1T:7 M:GetCurrent E:AddOrGetExisting.Added? R:False 242,14T:61 M:GetCurrent E:AddOrGetExisting.Added? R:False 245,11T:49 M:GetCurrent E:AddOrGetExisting.Added? R:False 246,15T:23 M:GetCurrent E:AddOrGetExisting.Added? R:False 250,08T:85 M:GetCurrent E:AddOrGetExisting.Added? R:False 251,1T:81 M:GetCurrent E:AddOrGetExisting.Added? R:False 254,1T:98 M:GetCurrent E:AddOrGetExisting.Added? R:False 254,12T:44 M:GetCurrent E:AddOrGetExisting.Added? R:False 254,13T:91 M:GetCurrent E:AddOrGetExisting.Added? R:False 255,25T:65 M:GetCurrent E:AddOrGetExisting.Added? R:False 257,2T:59 M:GetCurrent E:AddOrGetExisting.Added? R:False 259,33T:56 M:GetCurrent E:AddOrGetExisting.Added? R:False 269,1T:34 M:GetCurrent E:AddOrGetExisting.Added? R:False 271,11T:89 M:GetCurrent E:AddOrGetExisting.Added? R:False 271,12T:41 M:GetCurrent E:AddOrGetExisting.Added? R:False 279,12T:100 M:GetCurrent E:AddOrGetExisting.Added? R:False 280,12T:11 M:GetCurrent E:AddOrGetExisting.Added? R:False 290,11T:17 M:GetCurrent E:AddOrGetExisting.Added? R:False 292,08T:58 M:GetCurrent E:AddOrGetExisting.Added? R:False 293,1T:14 M:GetCurrent E:AddOrGetExisting.Added? R:False 297,09T:90 M:GetCurrent E:AddOrGetExisting.Added? R:True 303,12T:75 M:GetCurrent E:AddOrGetExisting.Added? R:False 312,15T:46 M:GetCurrent E:AddOrGetExisting.Added? R:False 318,17T:45 M:GetCurrent E:AddOrGetExisting.Added? R:False 321,11T:10 M:GetCurrent E:AddOrGetExisting.Added? R:False 322,1T:92 M:GetCurrent E:AddOrGetExisting.Added? R:False 328,1T:80 M:GetCurrent E:AddOrGetExisting.Added? R:False 339,12T:8 M:GetCurrent E:AddOrGetExisting.Added? R:False 341,11T:51 M:GetCurrent E:AddOrGetExisting.Added? R:False 342,11T:74 M:GetCurrent E:AddOrGetExisting.Added? R:False 343,11T:86 M:GetCurrent E:AddOrGetExisting.Added? R:False 351,12T:68 M:GetCurrent E:AddOrGetExisting.Added? R:False 352,11T:95 M:GetCurrent E:AddOrGetExisting.Added? R:False 354,1T:72 M:GetCurrent E:AddOrGetExisting.Added? R:False 359,11T:48 M:GetCurrent E:AddOrGetExisting.Added? R:False 360,11T:18 M:GetCurrent E:AddOrGetExisting.Added? R:False 364,1T:6 M:GetCurrent E:AddOrGetExisting.Added? R:False 374,12T:35 M:GetCurrent E:AddOrGetExisting.Added? R:False 377,09T:15 M:GetCurrent E:AddOrGetExisting.Added? R:False 377,11T:93 M:GetCurrent E:AddOrGetExisting.Added? R:False 378,1T:33 M:GetCurrent E:AddOrGetExisting.Added? R:False 379,1T:9 M:GetCurrent E:AddOrGetExisting.Added? R:False 390,12T:39 M:GetCurrent E:AddOrGetExisting.Added? R:False 391,09T:12 M:GetCurrent E:AddOrGetExisting.Added? R:False 395,11T:24 M:GetCurrent E:AddOrGetExisting.Added? R:True 401,11T:40 M:GetCurrent E:AddOrGetExisting.Added? R:False 402,11T:30 M:GetCurrent E:AddOrGetExisting.Added? R:False 404,09T:70 M:GetCurrent E:AddOrGetExisting.Added? R:False 411,11T:102 M:GetCurrent E:AddOrGetExisting.Added? R:False 412,1T:52 M:GetCurrent E:AddOrGetExisting.Added? 
R:False 413,11T:57 M:GetCurrent E:AddOrGetExisting.Added? R:False 417,1T:37 M:GetCurrent E:AddOrGetExisting.Added? R:False 426,13T:69 M:GetCurrent E:AddOrGetExisting.Added? R:False 428,1T:16 M:GetCurrent E:AddOrGetExisting.Added? R:False 432,11T:21 M:GetCurrent E:AddOrGetExisting.Added? R:False 438,13T:96 M:GetCurrent E:AddOrGetExisting.Added? R:False 441,11T:38 M:GetCurrent E:AddOrGetExisting.Added? R:False 442,09T:54 M:GetCurrent E:AddOrGetExisting.Added? R:False 456,18T:82 M:GetCurrent E:AddOrGetExisting.Added? R:False 465,12T:76 M:GetCurrent E:AddOrGetExisting.Added? R:False 466,11T:25 M:GetCurrent E:AddOrGetExisting.Added? R:False 470,11T:36 M:GetCurrent E:AddOrGetExisting.Added? R:False 477,09T:73 M:GetCurrent E:AddOrGetExisting.Added? R:False 483,12T:42 M:GetCurrent E:AddOrGetExisting.Added? R:False 483,13T:101 M:GetCurrent E:AddOrGetExisting.Added? R:False 491,13T:64 M:GetCurrent E:AddOrGetExisting.Added? R:False 494,12T:60 M:GetCurrent E:AddOrGetExisting.Added? R:True 502,11T:97 M:GetCurrent E:AddOrGetExisting.Added? R:False 528,2T:24 M:GetCurrentInternal E:Invoked R:AgentMetadata_ID=1 547,76T:24 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 548,3T:70 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 548,36T:40 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 548,32T:57 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 549,27T:21 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 561,53T:36 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 574,77T:102 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 548,4T:38 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 563,24T:101 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 575,21T:77 M:GetCurrent E:AddOrGetExisting.Added? R:False 579,16T:25 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 571,16T:54 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 567,6T:73 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 574,79T:42 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 574,8T:76 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 567,61T:37 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 554,64T:82 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 567,6T:52 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 548,38T:64 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 575,44T:16 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 557,74T:94 M:GetCurrent E:AddOrGetExisting.Added? R:False 587,92T:69 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 556,54T:96 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 562,78T:30 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=1 548,35T:88 M:GetCurrent E:AddOrGetExisting.Added? R:True 616,19T:103 M:GetCurrent E:AddOrGetExisting.Added? 
R:False 631,16T:90 M:GetCurrentInternal E:Invoked R:AgentMetadata_ID=2 642,16T:90 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 642,37T:80 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 644,54T:72 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 648,45T:47 M:GetCurrentInternal E:Invoked R:AgentMetadata_ID=3 645,16T:47 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 650,61T:26 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 652,63T:22 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 654,89T:9 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 654,43T:86 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 646,88T:18 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 648,75T:6 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 649,06T:15 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 650,84T:74 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 646,62T:53 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 658,94T:31 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 662,34T:46 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 642,4T:62 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 652,69T:95 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 646,88T:67 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 664,69T:71 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 660,52T:93 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 650,84T:45 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 642,4T:50 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 652,69T:23 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 677,68T:78 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 672,49T:66 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 672,5T:4 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 671,15T:10 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 642,44T:5 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 657,11T:65 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 678,46T:29 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 669,35T:81 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 679,46T:33 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 649,06T:92 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 642,46T:14 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 688,26T:13 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 652,62T:17 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 688,25T:28 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 671,12T:89 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 684,54T:11 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 686,67T:100 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 686,67T:58 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 688,25T:61 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 676,52T:20 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 660,51T:34 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 684,53T:68 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 646,88T:85 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 674,46T:44 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 677,66T:49 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 673,36T:32 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 667,5T:3 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 669,31T:41 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 686,66T:39 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 654,43T:7 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 672,5T:48 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 648,47T:12 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 655,34T:19 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 672,5T:83 M:GetCurrent E:AddOrGetExisting.Added? 
R:False 672,09T:98 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 680,64T:63 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 662,4T:91 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 677,96T:8 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 644,56T:75 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=2 642,38T:43 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 665,46T:27 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 662,37T:59 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 680,63T:55 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 669,34T:99 M:GetCurrent E:AddOrGetExisting.Added? R:False 695,73T:51 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 646,62T:56 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 684,53T:84 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 672,24T:87 M:GetCurrent E:AddOrGetExisting.Added? R:False 691,88T:35 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=3 650,56T:60 M:GetCurrentInternal E:Invoked R:AgentMetadata_ID=4 988,26T:60 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=4 988,88T:77 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=4 988,96T:94 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=4 989,03T:97 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=4 988,97T:88 M:GetCurrentInternal E:Invoked R:AgentMetadata_ID=5 1050,3T:88 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=5 1050,85T:83 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=5 1050,94T:87 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=5 1052,18T:103 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=5 1050,91T:99 M:GetCurrent E:Lazy.Value R:AgentMetadata_ID=5 1050,97
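A minimal sketch of the get-or-add-lazy pattern discussed above, assuming System.Runtime.Caching's MemoryCache; the helper name GetOrAddLazy and the generic wrapper are illustrative, not from the original post:

using System;
using System.Runtime.Caching;

static class LazyCache
{
    // Returns the cached value for key, or computes it once via factory.
    // Concurrent callers for the same key share a single Lazy<T> instance,
    // so factory runs at most once per cache entry lifetime.
    public static T GetOrAddLazy<T>(string key, Func<T> factory, TimeSpan ttl)
    {
        var fresh = new Lazy<T>(factory);
        var cached = (Lazy<T>)MemoryCache.Default.AddOrGetExisting(
            key, fresh, DateTimeOffset.Now.Add(ttl));
        // AddOrGetExisting returns null when the item was just added,
        // so fall back to the Lazy<T> we created ourselves.
        return (cached ?? fresh).Value;
    }
}

One caveat worth noting: a Lazy<T> constructed with the default thread-safety mode caches exceptions, so a failed factory call stays poisoned until the cache entry expires or is removed.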
_webapps.90832
It's a workout of 11 minutes with As Many Reps As Possible. I have no coding experience and very primitive Google functions knowledge.

Column B is the set 'number' of the exercise.
Column C is the 'running' time.
Column D is how long I actually took for the given exercise.

Both C & D are in hours, minutes, seconds and milliseconds format. I would prefer them only in mm:ss, but for some reason the values wouldn't hold to the given function when I type in data, despite me selecting duration and modifying it to mm:ss (it goes back to hr:mm:ss AM).

Column E is also where I am having trouble: I would like to know how many individual items/reps (i.e., pushups or burpees) I did per second. I would also like to compare against previous times.
Exercise breakdown in mm:ss and exercise/second
google spreadsheets
null
_webmaster.14997
I recently purchased a domain name from GoDaddy and am probably going to set up a dedicated server soon. Are there any articles that people know of which can help me set up my domain name on the dedicated server (Peer1, Rackspace, etc.)?

A sub-question would be: how do I link the GoDaddy domain name to the IP address that I get on my dedicated server from one of these hosting companies?

Thanks!
How do I use a domain name on a dedicated hosting server?
domains;godaddy;webserver
Setup of your domain on your dedicated server depends on a few things:

- What control panel comes with your server
- What webserver you'll be using (Apache, Nginx, etc.)

If you're going to use IIS, then this guide will walk you through the process of setting up more than one domain on your server. If you ever plan on adding a second domain, I suggest this route.

To link your domain name to your IP address, you'll need to edit your DNS settings. Again, there are some choices for you to make, such as who you will use for your nameserver. Once you know who your nameserver is, you'll need to point your GoDaddy domain to that nameserver. On that nameserver, you'll need to create an A record that links your domain name to the IP of your server.
_codereview.157299
This script implements a command line level trashcan system. It is designed to work around rm, doing the bits needed to implement a trashcan system at command line level. Any user flags hand the request over to rm; this means rm -f will still delete a file with no recovery. It prevents execution by root, as there are too many dependencies at superuser level. Deployment is done by adding the aliases rm=trash and rmoops=untrash. It is only implemented on files and does not cater for directories.

I appreciate any and all suggestions, from formatting and expansion of the idea to glaring issues. Looking for an experienced eye to pinpoint potential issues.

/usr/local/bin/trash

#!/bin/bash
#
# Trashcan script
# alias rm=trash
#
error() {
    echo "$1"
    exit 1
}
bypass() {
    exec rm $*
}
file=$1
trash=~/.trash
keep=5
#
# bypass scenarios
# root user
[ $(id -u) = 0 ] && error "Please do not run as root user."
# special cases
case $file in
    -*) bypass $* ;;
    $trash/* ) bypass $* ;;
esac
# only single filename supported
[ $# -gt 1 ] && bypass $*
# can we access the file?
[ -f $file ] || error "File not specified."
# move existing versions out of the way
ls $trash/$file.version.* 2>/dev/null | tac | while read name; do
    version=$((${name##*.}+1))
    # keep only the last number of versions
    if [ $version -gt $keep ]; then
        rm $name || error "Unable to remove file $name."
    else
        mv $name $trash/$file.version.$version || error "Unable to move version $version."
    fi
done
# move file into trash (volume friendly)
cp --preserve $file $trash/$file.version.1 || error "Unable to trash your file."
rm -f $file || error "Unable to cleanup the file."
exit 0

/usr/local/bin/untrash

#!/bin/bash
#
# Restore files from Trashcan
# alias rmoops=untrash
#
error() {
    echo "$1"
    exit 1
}
file=$1
trash=~/.trash
keep=5
#
# bypass scenarios
# root user
[ $(id -u) = 0 ] && error "Please do not run as root user."
# any flags so rm -rf actually does it
case $file in
    -*) error "Flags are not supported." ;;
    $trash/*) error "You can't restore from trash directly." ;;
esac
# only single filename supported
[ $# -ne 1 ] && error "File not specified."
version=0
# find latest version
for name in `ls $trash/$file.version.* 2>/dev/null | tac`
do
    version=${name##*.}
    break
done
[ $version -gt 0 ] || error "Sorry I cannot find your file in the trash."
echo "I have versions 1 to $version, 1 being the most recent."
echo -n "Which would you like to restore? [1]:"
read user
[ -n "$user" ] || user=1
# check we have the actual file
[ -f $trash/$file.version.$user ] || error "Sorry I cannot find that version."
# move existing file out of the way
[ -f $file ] && mv $file $file.old
# restore the file
cp $trash/$file.version.$user $file || error "Unable to restore file."
echo "Version $user restored."
exit 0
trash script to alias rm
bash;linux;sh
null
_unix.321527
To make sure that my prerequisite knowledge is even correct, I'll first explain what I'm doing (or think that I'm doing).

I'm going through a tutorial on running a Yocto Linux kernel inside the Qemu VM. In a directory, I have sourced an environment script (provided by said tutorial) and have a built kernel (based on a config file also provided by the tutorial, so I didn't have to choose any setup options; it was all automated once I executed make -j4 all).

In that directory I ran Qemu using a long command that includes the port for running gdb in another terminal, where I do target remote localhost:xxxx. Back in the original terminal, with Qemu now running after continue in the gdb instance, I can log in as root and play around with the environment.

How do I gracefully exit the Qemu VM/terminal? When I run shutdown now the terminal hangs on "Unmounting remote filesystem...".
How to gracefully exit Qemu VM with running kernel
virtual machine;qemu;gdb
null
_softwareengineering.273930
I am trying to develop my first major MVC application, and as such I am new to doing this on a large scale. I've read as much as I can online and am continuously striving to make my code as clean as possible and not introduce any bad practices. Right now, I'm starting to question if my setup is the way it would be done in the real world.

Right now, I have my main container class and sub classes that can appear inside. Let's pretend the view would look like this:

    -----------------------------------------------------
    |                                                   |
    | Main             |-----------------|              |
    | |------|         |D  |-----------| |              |
    | | A    |         |   | B  |  C   | |              |
    | |      |         |   |    |      | |              |
    | |------|         |   |-----------| |              |
    |                  |-----------------|              |
    -----------------------------------------------------

Where Main is your main viewing container, and inside are elements A, B and C. The inner views can be dynamic, so let's say we have our own MVCs for A, B and C as well.

1) Is it wrong for these classes to be connected to one another like so?

          /------ A
    Main <          /--- B
          \------ D
                    \--- C

By this, I mean both connections are two-way. To get to 'A' from B, I might call something like this.getD().getMain().getA() from object C. Does that indicate a design problem if it's a web where things can traverse back and forth? Is there any bad practice in passing it object A so the chain doesn't have to be done? (This can assume there are no nulls for the sake of this example.)

2) If the above is not okay, is it unreasonable to make a static singleton out of the Main object and have other classes access it through a static getter? I don't know if such a practice is bad or not if you know the main window is never meant to be instantiated twice (and you prevent it from doing so). An example would be to get A from C, something like Main.getA().

I apologize if some of this is vague. As I said, I'm new to this and would love nothing more than to cut out any bad habits. Is there anything wrong or right about what I've done? If there's anything wrong, how would you do it?
Proper MVC practice for a hierarchy of elements
mvc;static
Just some thoughts, continue to do your research on best practices. First, respect encapsulation and avoid tight coupling between views. Consider using dependency injection to inject inner views into outer views in your hierarchy - or decorating, etc. depending on the use-case. Is there a specific functional reason you want two-way chainable calling between the view components? If you want to know whether you are doing the right thing, ask yourself what the use-case is. Let's say you have a visible counter in A that increments/decrements based on data changing in C. Is there a reason for A (a counter control) to know anything about C (some kind of data entry control)? No. For communication in this case, you should use an event model where changing data in C fires an event that is picked up by the controller, which then updates the model with the new counter value. The model variable for the count should be data-bound to the control A. When the event fired from C updates the model, control A's value changes. C is informing the application of the data used by A, but A has no knowledge of C, its structure or its methods. Reading about patterns in OOP and MVC will help you. Specific questions like making a Singleton out of Main really depend on the specifics of your app.
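A minimal C# sketch of the event-based decoupling this answer describes; every type and member name here (CounterModel, DataEntryControl, and so on) is an illustrative assumption rather than something from the original question:

using System;

// C: a data-entry control that only raises an event; it knows nothing about A.
class DataEntryControl
{
    public event EventHandler<int> DataChanged;
    public void Submit(int value) => DataChanged?.Invoke(this, value);
}

// Model: holds the counter value that A's display is bound to.
class CounterModel
{
    public int Count { get; private set; }
    public void Increment(int by) => Count += by;
}

// A: a counter view that only reads the model when rendering.
class CounterView
{
    private readonly CounterModel model;
    public CounterView(CounterModel model) { this.model = model; }
    public void Render() => Console.WriteLine($"Count: {model.Count}");
}

// Controller: translates C's event into a model update.
class Controller
{
    public Controller(CounterModel model, DataEntryControl entry)
    {
        entry.DataChanged += (sender, value) => model.Increment(value);
    }
}

class Program
{
    static void Main()
    {
        var model = new CounterModel();
        var entry = new DataEntryControl();
        var view = new CounterView(model);
        var controller = new Controller(model, entry);

        entry.Submit(3);  // C raises an event; the controller updates the model
        view.Render();    // A renders "Count: 3" without ever referencing C
    }
}

The design point is that C only publishes an event, the controller translates it into a model update, and A only renders the model, so neither inner view needs a reference to the other.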
_codereview.62133
I have been using the $_SESSION, $_POST, $_GET, $_SERVER globals directly without ever knowing there was a concept/practice called "wrapping globals in classes" until yesterday. As I did, I was able to immediately understand its effective benefit/usefulness. But I searched for a good wrapper for $_SESSION, $_GET and $_POST and could not find anything simple/good. So, as a test, I made this for the session global.

<?php
/**
 * A SESSION Wrapper class.
 *
 * @category Session
 * @version 1.0.0
 * @Nile
 */

namespace Nile\Lib;

class Session
{
    protected static $sessionLife = 1200;

    public static function start()
    {
        if(!headers_sent() && !session_id()){
            if(session_start()){
                session_regenerate_id();
                return true;
            }
        }
        return false;
    }

    public static function set($Key, $value)
    {
        $_SESSION[$Key] = $value;
    }

    public static function has($Key)
    {
        return (bool)(isset($_SESSION[$Key])) ? $_SESSION[$Key] : false;
    }

    public static function get($Key)
    {
        return (isset($_SESSION[$Key])) ? $_SESSION[$Key] : false;
    }

    public static function del($Key)
    {
        if(isset($_SESSION[$Key])){
            unset($_SESSION[$Key]);
            return false;
        }
    }

    public static function destroy()
    {
        if(isset($_SESSION)){
            session_destroy();
        }
    }

    public static function dump()
    {
        if(isset($_SESSION)) {
            print_r($_SESSION);
            return;
        }
        throw new \Exception("Session is not initialized");
    }
}

And this is the simple session initialization.

Session::start();                  // does session_start();
Session::set('user', 'isLogedIn'); // does $_SESSION['user'] = 'isLoggedIn';

Considering this is my first wrapper, I would like a review and to know what I could add next. Something that is not very complicated, just easy to understand and a useful feature for this class.
A wrapper class for Sessions in PHP
php;object oriented;session;wrapper
I have been using PHP for over 10 years and never found a need for a session wrapper, but maybe you find it easier/better. There are a few improvements that could be made to your code.

If we are checking to see if the session has a key, a simpler test is:

public static function has($Key)
{
    // return (bool)(isset($_SESSION[$Key])) ? $_SESSION[$Key] : false;
    return array_key_exists($Key, $_SESSION);
}

The get function here would probably be more useful with a default value option than just returning false.

public static function get($Key, $default=false)
{
    return (self::has($Key)) ? $_SESSION[$Key] : $default;
}

The del function: I am unsure what you are trying to achieve by returning false, but returning nothing if it isn't set? Personally I wouldn't bother to check if it is set or not; just unset it, and return nothing. I would also call it delete, so it is painfully obvious to the user what it does.

public static function delete($Key)
{
    if(isset($_SESSION)){
        unset($_SESSION[$Key]);
        // return false;
    }
}

The dump function: why do you throw an exception if the session doesn't exist, but not anywhere else where the session doesn't exist? I would re-write it like a guard clause rather than having a return halfway through the function.

public static function dump()
{
    if(!isset($_SESSION)) {
        throw new \Exception("Session is not initialized");
    }
    print_r($_SESSION);
}

Another useful function you might add is get_once. I do something similar when I store an error message in the session, then redirect to a new page and display the error message. After that the error message is no longer relevant, so I remove it from the session.

public static function get_once($Key, $default=false)
{
    $value = self::get($Key, $default);
    self::delete($Key);
    return $value;
}

Another thing you could do is to manage different namespaces (maybe not the best word to describe it) within a session.

// keep in mind if you do this, you can't use static everywhere like you have
function __construct($namespace) {
    $this->namespace = $namespace;
}

function set($key, $value) {
    $_SESSION[$this->namespace][$key] = $value;
}

// Then you can use simple keys that don't overwrite each other
$user_session = new session('user');
$user_session->set('name', 'Donald Duck');

$page_session = new session('page');
$page_session->set('name', 'Home Page');
_softwareengineering.289500
I know this is a language-specific question and may not be suitable here, but I would like to create an array of functions that will be called based on their index, and I would like to know the difference between Delegates and Actions. I am aware that with delegates you can string many of them together into a single one and pass values between them, etc. But are there any other differences? Also, are there any differences regarding speed?
What are the differences regarding speed and functionality between using Actions vs Delegates?
c#
An Action is just a delegate that's predefined so you don't have to declare one yourself when using LINQ/TPL.

http://referencesource.microsoft.com/#mscorlib/system/action.cs,486d58da4553e12d
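To make the answer concrete, here is a small self-contained sketch (not from the original answer) showing that a hand-rolled delegate type and the built-in Action<T> behave the same way, including when stored in an array and invoked by index as the question describes:

using System;

class ActionVsDelegateDemo
{
    // A hand-rolled delegate type, equivalent in shape to Action<int>,
    // which the BCL declares roughly as: public delegate void Action<in T>(T obj);
    delegate void MyHandler(int value);

    static void Main()
    {
        MyHandler custom = v => Console.WriteLine($"custom delegate: {v}");
        Action<int> action = v => Console.WriteLine($"Action<int>:     {v}");

        // "array of functions called by index", as in the question
        Action<int>[] handlers =
        {
            v => Console.WriteLine(v + 1),
            v => Console.WriteLine(v * 2),
        };

        custom(21);       // both are invoked through the same delegate machinery,
        action(21);       // so there is no inherent speed difference between them
        handlers[1](21);  // prints 42
    }
}

Multicast behaviour (combining several targets with +=) also works identically for both, since Action<T> is itself a delegate type.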
_cs.74881
There is plenty of material about bidirectional search with non-negative edge weights; one example is this paper. I am looking for any improvements using a bidirectional approach for acyclic digraphs with possibly negative edge weights.

The main approach for mono-directional search with possibly negative edge weights is Bellman-Ford. However, I don't think there's any valid early termination criterion that makes bidirectional Bellman-Ford better than the mono-directional version.

Can someone provide ideas or references for a bidirectional search on acyclic digraphs with possibly negative edge weights? Thanks in advance.
Bidirectional Search on possible negative weight edges digraphs
algorithms;optimization;weighted graphs
null
_softwareengineering.261267
I'd like to create a simple class property which can contain multiple values set from the outside. (Values are of the same type.)

Example of the property name and contained items:

KnownHidScanners
\\?\ACPI#IDEA0100#4&b74a345&0#{884b96c3-56ef-11d1-bc8c-00a0c91405dd}
\\?\HID#VID_045E&PID_071D&MI_00#9&e6a1767&0&0000#{884b96c3-56ef-11d1-bc8c-00a0c91405dd}

Is there a standard approach in .NET for the data type of such a multi-value class property? When searching MSDN for "multi-value property", I can find the "Properties with Multiple Values" concept article, which suggests the PropertyValueCollection data type. But if you check the data type closer, it is too tightly related to ActiveDirectory.

Offhand I can implement the property as:

- an array of the type
- a collection of the type
- a list of the type

but which one is the most used approach in out-of-the-box .NET objects when they need multiple-value properties? I'm not long enough in the .NET ecosystem to have observed what is generally used.

Note: this is not an opinion-based question. I'm asking developers who have been with the framework long enough which approach is already used, so I can be consistent with standards.

Edit: clarity was improved after the first answerer wrote he misread the question.
.NET Framework standard container type for multi-value property?
.net;standards;properties
Okay, in the meantime I've invested additional time and made my own study on the topic. Findings:

- Checking through the Object Inspector how properties containing the word "Items" are declared in existing libraries reveals somewhat of a surprise: I found every form (array, collection, list and custom types).
- The dominant types are these two: collections (generic or inherited) and lists (mostly generic).
- Answers in this question shed light on when to prefer a collection and when a list. The key is in the character of the class: collections are recommended for exposed classes (APIs etc.), while lists are preferred for internal implementation stuff.
- Here it must be noted that Collection<T> is no longer part of System.Collections.Generic since .NET 3.5; it has been moved to System.Collections.ObjectModel.
- If a property will have some obvious benefit from some specific type, then ignore the above recommendation and use that type. An example is the out-of-the-box String.Chars, which is an array.
- Defining a custom multi-value container type (with the item list based on one of the above) has its place if I want to add some custom properties/methods to the items or for manipulation of the list, etc.
- Some property design guidelines are also available from Microsoft, but for this topic they are not very helpful.
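As a concrete illustration of the "collection for exposed classes" recommendation above, here is a minimal sketch; the class and property names are taken loosely from the question's example and are otherwise illustrative:

using System.Collections.ObjectModel;

public class HidScannerInfo
{
    // Get-only property exposing a mutable Collection<T>:
    // callers can Add/Remove items, but cannot swap out the collection instance.
    public Collection<string> KnownHidScanners { get; } = new Collection<string>();
}

// Usage:
// var info = new HidScannerInfo();
// info.KnownHidScanners.Add(@"\\?\ACPI#IDEA0100#4&b74a345&0#{884b96c3-56ef-11d1-bc8c-00a0c91405dd}");

If callers should only read the values, exposing ReadOnlyCollection<string> (or IReadOnlyList<string>) over a private List<string> is a common alternative.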
_unix.188433
Using OpenBSD's pf. Question: how can we modify the OpenBSD firewall to allow network access ONLY for a given group? If somebody isn't in that group, they shouldn't have layer 3 or layer 2 network access.
How to only allow a group for network access?
group;openbsd;pf
null
_unix.150830
(I am a Linux-only user and MELT software developer, and have used Linux since kernel 0.99.15, i.e. the previous century.)

I have a VPS host ovh.starynkevitch.net (running Linux) which is also the MX host for my email. It is getting emails only for less than a dozen users (my relatives, i.e. kids, grand-children, wife), so quite small traffic. The MTA is exim4.

I specifically don't want to use gmail.

I have secure ssh access (public-key, so without passwords) to that machine from every laptop or desktop machine I am working on. I want to be able to read (and post, but SMTP forwarding is not an issue) my email through it. Up to now, I forwarded my email (to some home server). In practice, I want to run some graphical MUA (e.g. sylpheed or evolution) because I am tired of mutt.

I also want to be able to use some spam filter like spamoracle; it has to run on my VPS host (not locally on the MUA machines). And I need to conveniently be able (with a single keystroke!) to improve its filtering (by the usual machine learning techniques).

I was thinking of installing some IMAP server (like the imapd from GNU mailutils). But then it is less easy to use spamoracle with it. I thought of:

- Using imapd over SSL or some other secure transport. But then I am not familiar with the configuration, and I have no idea if the SSL should be related to my ssh settings or not.
- Doing port redirection with ssh.

Any hints or some other solutions?
mailing thru ssh
email;imap;mail transport agent
null
_softwareengineering.284186
I'm looking at jBPM specifically, but I think this applies to other systems too.

As I understand it: when placing an item into a workflow, one does not generally put a reference to that item; one puts the whole serialized item or document into the workflow. This means that the state of the workflow and the item undergoing workflow can all be kept in the workflow system's database, making it easy to make atomic updates.

At the end of a workflow, or sometimes in explicit steps along the way, calls can be made to downstream services, and these may make changes to the state of their databases. How does one handle this? Is it:

- The workflow and services should all be deployed in a single application server container, and a transaction manager used to keep everything consistent?
- You call a service, then the workflow state is updated, but not in a transaction. Failure and restart of the workflow may cause duplicate calls to be made, if the state update failed.
- You decouple them using a transactional JMS queue. Then downstream systems can read from that queue in their own transaction. Updates happen transactionally and in the correct order.

Or something else, or whatever combination of the above that you like.

I should add, where I work we have micro-services (whatever that means). It means we do not have an application server, so global transactions across service endpoints are not possible (no WS-TX either; the services are REST/JSON). Is the JMS solution the only one that is open to us?
How do workflow tools (jBPM) keep transactional state consistent?
workflows;transaction
Not being a Java guy, my answer may be disappointing.

"When placing an item into a workflow, one does not generally put a reference to that item, one puts the whole serialized item or document into the workflow."

I think the word "generally" is key here. The decision about placing a reference or the whole item should be made case by case. If the object is constant across workflow instances and the workflow needs to know when it changes, do not place it on the workflow; use a reference instead. It's easier than creating a mechanism to notify instances about the change. Registry data is my prime example.

How you update the database from the workflow instance is also a case-by-case decision. Some updates should be atomic (the workflow should not be allowed to continue to order fulfillment if the database transaction updating the client balance fails). Even if not in a transaction, the workflow can be modeled around it. Other times the workflow can continue even if the database fails to update, with just a warning to the user ("We were unable to update your address in our registry at this time, but your order will be delivered to the new address"). You can also queue the update.

So, use transactions when you must, fail gracefully when you need to, and queue things that are not time sensitive. No silver bullet. Sorry.
_unix.363689
I have a requirement where I need to compare two files with respect to each column and write the corresponding differences to another file, along with some identification showing the mismatched columns. Pointing out the mismatched columns is my main problem statement. For example, we have files like:

File 1
1|piyush|bangalore|dev
1|piyush|bangalore|QA
2|pankaj|bangalore|dev
3|rohit|delhi|QA

File 2
1|piyush|bangalore|QA
1|piyush|bangalore|QA
2|pankaj|bangalore|dev
3|rohit|bangalore|dev

The expected output file looks somewhat like:

File 1
1|piyush|bangalore|**dev**
File 2
1|piyush|bangalore|**QA**
File 1
3|rohit|**delhi**|**QA**
File 2
3|rohit|**bangalore**|**dev**

I want to achieve something like this, where I can see the mismatched columns as well as the mismatched rows. I have tried

diff File1 File2 > Diff_File

but this is giving me only the mismatched records or rows. I can't find any way to point out the mismatched columns as well. Please help me out if it's possible to do this using a shell script or the awk command, as I am very new to this. Thanks in advance.
Comparing two files and writing mismatched rows along with mismatched columns. Pointing out the mismatched columns is my main problem statement
shell script;text processing;awk;diff;file comparison
Python 3.x solution. diff_marked.py script:

import sys

file1_name = sys.argv[1]
file2_name = sys.argv[2]

with open(file1_name, 'r') as f1, open(file2_name, 'r') as f2:
    f1_lines = f1.readlines()  # list of lines of File1
    f2_lines = f2.readlines()  # list of lines of File2
    for k, l in enumerate(f1_lines):
        f1_fields = l.strip().split('|')  # splitting a line into fields by separator '|'
        if k < len(f2_lines) and f2_lines[k]:
            has_diff = False
            f2_fields = f2_lines[k].strip().split('|')
            for i, f in enumerate(f1_fields):
                if f != f2_fields[i]:  # comparing respective lines 'field-by-field' between two files
                    f1_fields[i] = '**' + f + '**'  # wrapping differing fields
                    f2_fields[i] = '**' + f2_fields[i] + '**'
                    has_diff = True
            if has_diff:
                print(f1.name)  # print file name
                print('|'.join(f1_fields))
                print(f2.name)
                print('|'.join(f2_fields))

Usage (you may have another Python version; the current case has been tested on Python 3.5):

python3.5 diff_marked.py File1 File2 > diff_output

diff_output contents:

File1
1|piyush|bangalore|**dev**
File2
1|piyush|bangalore|**QA**
File1
3|rohit|**delhi**|**QA**
File2
3|rohit|**bangalore**|**dev**
_unix.37924
I am currently writing a small setup script for a Linux application that needs the user to edit a configuration file before the application is started. I've chosen to make the script simply open the configuration file in Nano, and resume the script afterwards. I do, however, need to detect whether the user saved the changes (to then continue starting the application), or whether he discarded them (which would indicate the user doesn't want to continue).I have already checked whether this is possible with the returned exit code from Nano, and it apparently isn't - it always returns 0 even if the changes were discarded. Is there another way to figure out whether the file was changed and saved, or will I have to do this in an entirely different way?
How do I detect whether changes in nano were discarded or saved?
bash;scripting;nano
You can use the stat command to check the file's modification time before and after nano. Something like:

oldtime=`stat -c %Y $filename`
nano $filename
if [[ `stat -c %Y $filename` -gt $oldtime ]] ; then
    echo "$filename has been modified"
fi

Of course, this won't detect whether nano modified the file, or some other program did, but that could be considered a feature. (You can use some other program to edit the file, and then exit nano without saving.)
_codereview.148445
I'm currently working on a website which provides content to users based on their region. As a result, the website has subdirectories for the appropriate regions. For example, a pricing page would look something like this:

http://mydomain.com/ie/pricing
http://mydomain.com/gb/pricing
http://mydomain.com/fr/pricing

In order to target the appropriate search engines and signify to Google that there are different versions of the page, I've implemented hreflang tags. A good example of a site that uses these would be view-source:https://www.tripadvisor.ie/. If you view the source you will see a long list of hreflang tags to denote multiple regional versions of the page.

So, for my site, I decided to write a PHP script that would output the relevant hreflang tags on the correct page:

<?php $url = $_SERVER['REQUEST_URI']; ?>
<?php if(strpos($url, "pricing") == true): ?>
<link rel="alternate" href="http://www.mydomain.com/ie/pricing/" hreflang="en-ie" />
<link rel="alternate" href="http://www.mydomain.com/gb/pricing/" hreflang="en-gb" />
<link rel="alternate" href="http://www.mydomain.com/fr/pricing/" hreflang="en-fr" />
<link rel="alternate" href="http://www.mydomain.com/pricing/" hreflang="en" />
<link rel="alternate" href="http://www.mydomain.com/pricing/" hreflang="x-default" />
<?php endif; ?>

However, the problem I'm facing is that if the URL contains "pricing" ANYWHERE it will output the above, which is not correct if the URL was something like http://www.mydomain.com/pricing-types.

To counteract this I added another if statement which checks to ensure that the word "types" is not present in the URL:

<?php $url = $_SERVER['REQUEST_URI']; ?>
<?php if(strpos($url, "pricing") == true && strpos($url, "types") == false): ?>
<link rel="alternate" href="http://www.mydomain.com/ie/pricing/" hreflang="en-ie" />
<link rel="alternate" href="http://www.mydomain.com/gb/pricing/" hreflang="en-gb" />
<link rel="alternate" href="http://www.mydomain.com/fr/pricing/" hreflang="en-fr" />
<link rel="alternate" href="http://www.mydomain.com/pricing/" hreflang="en" />
<link rel="alternate" href="http://www.mydomain.com/pricing/" hreflang="x-default" />
<?php endif; ?>

Would anyone have any recommendations on how I could improve my script to read the URLs and match them to the appropriate hreflang tags while being more explicit in the URLs I read?
PHP script to generate hreflang tags
php;url;i18n
null
_codereview.23792
There are some things that I know that need to be fixed, such as mysql_* needing to be converted to PDO, and using a better hash. I am working on building a social networking site, and I've been having issues with some of the mysql like mysql_real_escape_string and implementing newer techniques. Any criticism and/or help would be very appreciated.

register script

<?
$reg = @$_POST['reg'];
//declaring variables to prevent errors
//registration form
$fn = (!empty($_POST['fname'])) ? $_POST['fname'] : '';
$ln = (!empty($_POST['lname'])) ? $_POST['lname'] : '';
$un = (!empty($_POST['username'])) ? $_POST['username'] : '';
$em = (!empty($_POST['email'])) ? $_POST['email'] : '';
$em2 = (!empty($_POST['email2'])) ? $_POST['email2'] : '';
$pswd = (!empty($_POST['password'])) ? $_POST['password'] : '';
$pswd2 = (!empty($_POST['password2'])) ? $_POST['password2'] : '';
$d = date("y-m-d"); // Year - Month - Day
if ($reg) {
    if ($em==$em2) {
        // Check if user already exists
        $statement = $db->prepare('SELECT username FROM users WHERE username = :username');
        if ($statement->execute(array(':username' => $un))) {
            if ($statement->rowCount() > 0) {
                //user exists
                echo "Username already exists, please choose another user name.";
                exit();
            }
        }
        //check all of the fields have been filled in
        if ($fn&&$ln&&$un&&$em&&$em2&&$pswd&&$pswd2) {
            //check that passwords match
            if ($pswd==$pswd2) {
                //check the maximum length of username/first name/last name does not exceed 25 characters
                if (strlen($un)>25||strlen($fn)>25||strlen($ln)>25) {
                    echo "The maximum limit for username/first name/last name is 25 characters!";
                } else {
                    //check the length of the password is between 5 and 30 characters long
                    if (strlen($pswd)>30||strlen($pswd)<5) {
                        echo "Your password must be between 5 and 30 characters long!";
                    } else {
                        //encrypt password and password 2 using md5 before sending to database
                        $pswd = md5($pswd);
                        $pswd2 = md5($pswd2);
                        $db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING );
                        $sql = 'INSERT INTO users (username, first_name, last_name, email, password, sign_up_date)';
                        $sql .= 'VALUES (:username, :first_name, :last_name, :email, :password, :sign_up_date)';
                        $query=$db->prepare($sql);
                        $query->bindParam(':username', $un, PDO::PARAM_STR);
                        $query->bindParam(':first_name', $fn, PDO::PARAM_STR);
                        $query->bindParam(':last_name', $ln, PDO::PARAM_STR);
                        $query->bindParam(':email', $em, PDO::PARAM_STR);
                        $query->bindParam(':password', $pswd, PDO::PARAM_STR);
                        $query->bindParam(':sign_up_date', $d, PDO::PARAM_STR);
                        $query->execute();
                        die("<h2>Welcome to Rebel Connect</h2>Login to your account to get started.");
                    }
                }
            } else {
                echo "Your passwords do not match!";
            }
        } else {
            echo "Please fill in all fields!";
        }
    } else {
        echo "Your e-mails don't match!";
    }
}
?>

login script

<?
//Login Script
if (isset($_POST["user_login"]) && isset($_POST["password_login"])) {
    $user_login = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["user_login"]); // filter everything but numbers and letters
    $password_login = preg_replace('#[^A-Za-z0-9]#i', '', $_POST["password_login"]); // filter everything but numbers and letters
    $password_login=md5($password_login);
    $db = new PDO('mysql:host=localhost;dbname=socialnetwork', 'root', 'abc123');
    $db->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING );
    $sql = $db->prepare("SELECT id FROM users WHERE username = :user_login AND password = :password_login LIMIT 1");
    if ($sql->execute(array(
        ':user_login' => $user_login,
        ':password_login' => $password_login))) {
        if ($sql->rowCount() > 0) {
            while($row = $sql->fetch(PDO::FETCH_ASSOC)){
                $id = $row["id"];
            }
            $_SESSION["id"] = $id;
            $_SESSION["user_login"] = $user_login;
            $_SESSION["password_login"] = $password_login;
            exit("<meta http-equiv=\"refresh\" content=\"0\">");
        } else {
            echo 'Either the password or username you have entered is incorrect. Please check them and try again!';
            exit();
        }
    }
}
?>

profile.php

<? include("inc/incfiles/header.inc.php"); ?>
<?
if(isset($_GET['u'])){
    $username = mysql_real_escape_string($_GET['u']);
    if(ctype_alnum($username)) {
        //check user exists
        $check = mysql_query("SELECT username, first_name FROM users WHERE username='$username'");
        if(mysql_num_rows($check)===1){
            $get = mysql_fetch_assoc($check);
            $username = $get['username'];
            $firstname = $get['first_name'];
        } else {
            echo "<meta http-equiv=\"refresh\" content=\"0; url=http://localhost/tutorial/findfriends/index.php\">";
            exit();
        }
    }
}
?>
<div class="postForm">Post form will go here</div>
<div class="profilePosts">Your posts will go here</div>
<img src="" height="250" width="200" alt="<? echo $username; ?>'s Profile" title="<? echo $username; ?>'s Profile" /><br />
<div class="textHeader"><? echo $username; ?>'s Profile</div>
<div class="profileLeftSideContent"> Some content about this person's profile </div>
<div class="textHeader"><? echo $username; ?>'s Friends</div>
<div class="profileLeftSideContent">
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
<img src="#" height="50" width="40"/>&nbsp;&nbsp;
</div>
Review my PHP login and register script, and profile page, and how to improve them
php;pdo
null
_unix.106825
Fedora suggests the Gnash plugin as a replacement for the Flash plugin and I am seriously considering it. My question is: how good is the Gnash plugin as a replacement for the Flash plugin?In my case, I'm not interested in anything advanced, just being able to watch youtube videos.
How good is gnash as a replacement for flash?
adobe flash;multimedia
null
_softwareengineering.313940
I have a class Car which has 2 properties: int price and boolean inStock. It also holds a List of abstract class State (empty class). There are 2 states which can be applied on the car and each is represented by its own class: class Upgrade extends State and class Shipping extends State.A Car can hold any number of each of the 2 states. The states have the following rules:Upgrade: adds 1 to the price for each state applied to the car after itself.Shipping: if there is at least 1 Shipping state in the list, then inStock is set to false.For example, starting with price = 1 and inStock = true:add Shipping s1 --> price: 1, inStock: falseadd Upgrade g1 --> price: 1, inStock: falseadd Shipping s2 --> price: 2, inStock: falseadd Shipping s3 --> price: 3, inStock: falseremove Shipping s2 --> price: 2, inStock: falseremove Upgrade g1 --> price: 1, inStock: falseremove Shipping s1 --> price: 1, inStock: falseremove Shipping s3 --> price: 1, inStock: trueI was thinking about the observer pattern where each add and remove operations notify observers. I had something like this in mind, but it doesn't obey the rules I posed:abstract class State implements Observer { public abstract void update();}class Car extends Observable { List<State> states = new ArrayList<>(); int price = 100; boolean inStock = true; void addState(State state) { if (states.add(state)) { addObserver(state); setChanged(); notifyObservers(); } } void removeState(State state) { if (states.remove(state)) { deleteObserver(state); setChanged(); notifyObservers(); } }}class Upgrade extends State { @Override public void update(Observable o, Object arg) { Car c = (Car) o; int bonus = c.states.size() - c.states.indexOf(this) - 1; c.price += bonus; System.out.println(c.inStock + + c.price); }}class Shipping extends State { @Override public void update(Observable o, Object arg) { Car c = (Car) o; c.inStock = false; System.out.println(c.inStock + + c.price); }}Obviously, this doesn't work. When a Shipping is removed, something has to check if there is another state setting inStock to false, so a removal of Shipping can't just inStock = true. Upgrade increases price at each call. I then added constants for the default values and attempted a recalculation based on those.I am by no means trying to impose any pattern, I'm just trying to find a solution for the above requirements. Note that in practice Car contains many properties and there are many states that can be applied in this manner. I thought about a few ways to do this:Since each observer receives Car, it can look at all the other observers currently registered and make a change based on that. I don't know if it's smart to entangle observers like this.When an observer is added or removed in Car, there will be a recalculation. However, this recalculation will have to be done on all observers regardless of the one which was just added/removed.Have an external manager class which will call the add and remove methods and do the recalculation.What is a good design pattern to implement the described behavior and how would it work?
Is the observer pattern suitable when the observers are not independent of each other?
java;design patterns;observer pattern
I ended up going with option 3 - using an external manager. The manager is responsible for adding and removing States from Cars and for notifying the observers when these changes happen.

Here is how I modified the code. I removed the Observable/Observer of the JDK because I'm doing my own implementation.

Each State keeps a reference to the Car it is applied to.

abstract class State {
    Car car;

    State(Car car) {
        this.car = car;
    }

    public abstract void update();
}

class Upgrade extends State {
    Upgrade(Car car) { super(car); } // subclasses must pass the car up, State has no no-arg constructor

    @Override
    public void update() {
        int bonus = car.states.size() - car.states.indexOf(this) - 1;
        car.price += bonus;
        System.out.println(car.inStock + " " + car.price);
    }
}

class Shipping extends State {
    Shipping(Car car) { super(car); }

    @Override
    public void update() {
        car.inStock = false;
        System.out.println(car.inStock + " " + car.price);
    }
}

Car only holds its state (to avoid confusion: properties) and does not handle adding and removing States:

class Car {
    List<State> states = new ArrayList<>();
    int price = 100;
    boolean inStock = true;
}

Here is the manager. It has taken over the Car's (the observable's) job of managing its States (observers).

class StatesManager {
    public void addState(Car car, State state) {
        car.states.add(state);
        for (State s : car.states)
            s.update();
    }

    public void removeState(Car car, State state) {
        car.states.remove(state);
        for (State s : car.states)
            s.update();
    }
}

A few things to keep in mind:

All observers are notified about every change. A more clever event distribution scheme can eliminate unneeded calls to the observers' update method.

The observers might want to expose more update-like methods for different occasions. Just as an example, they can split the current update method into updateOnAdd and updateOnRemove if they are interested only in one of these changes. Then the addState and removeState methods would be updated accordingly. Along with the previous point this approach can end up as a robust, extensible and flexible mechanism.

I did not specify what gives the instruction to add and remove States and when that happens, as it is not important for the question. However, in the case of this answer, there is the following point to consider. Since a State must now be created with its Car (no empty constructor exposed) prior to calling the manager's methods, the addState and removeState methods do not need to take a Car and can just read it from state.car.

The observers are notified in order of registration on the observable by default. A different order can be specified.
_softwareengineering.116114
I've been reading about the C10K Problem, and of particular note is the part that refers to asynchronous server I/O. http://www.kegel.com/c10k.html#aioI believe this pretty much summarises what Node.js does on the server, by allowing threads to process user requests whilst relying on I/O interrupts (events) to notify threads of jobs that are completed, rather than have the thread be responsible for the full CPU job. The thread can get on with other things (non-blocking), and be notified of when a job is done (e.g. a file is found or a video is compressed).This subsequently means that a thread is more 'available' to sockets and therefore to users on the server.Then I found this: http://teddziuba.com/2011/10/straight-talk-on-event-loops.htmlThe writer here claims that although the event driven framework (interrupted threading), may free up threads, it doesn't actually reduce the amount of work a CPU has to do! The rationale here is that if, say, a user requests to compress a video they uploaded, the CPU still has to actually do this job, and will be blocking while it does it (for simplicity's sake, lets forget about parallelism here - unless you know better!).I'm a straightforward coder, not a server admin or anything like that. I'm just interested to know: is Node.js a gift from the gods of 'cloud computing' or is it all hot air, and won't actually save companies time and/or money by improving scalability?Many thanks.
Does Node.js actually increase scalability?
scalability;node.js;server
Of course any CPU bound work is going to utilize the CPU. It's going to block the CPU in whatever language or framework you write it in.Node.js is great for when you have I/O bound work, not CPU bound. I wouldn't do heavy lifting in Node, though it can be done. Node.js solves real problems, not fictional or imagined ones like fibonacci number servers. It's not hot air.
_unix.368884
tree shows... template_one_file.sh template_sed.sh testy.sh tmp file1.html.13567_old file2.html.13567_old file3.html.13567_old file4.html.13567_old use_case_to_add_etc_2_numbers vi vim_tips_and_chearsheet vi_stuff160 directories, 713 filesand I can ignore the tmp directory (for example) with tree -I 'tmp' -L 4... template_multiple_files.sh template_one_file.sh template_sed.sh testy.sh use_case_to_add_etc_2_numbers vi vim_tips_and_chearsheet vi_stuff157 directories, 709 filesbut why can I not ignore individual files such as the _old files withtree -I 'old' -L 4which still shows the _old files... template_multiple_files.sh template_one_file.sh template_sed.sh testy.sh tmp file1.html.13567_old file2.html.13567_old file3.html.13567_old file4.html.13567_old use_case_to_add_etc_2_numbers vi vim_tips_and_chearsheet vi_stuff160 directories, 713 filesI tried '.*old' and '.*old$' patterns but that didn't help.
Why does 'tree' command ignore directories but not files
tree
Use a wildcard:

tree -I '*_old' -L 4

Reference: man tree (my emphasis):

-I pattern
    Do not list those files that match the wild-card pattern.
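If you want to keep ignoring the tmp directory at the same time, many builds of tree also accept several patterns separated by | in a single -I argument (check your version's man page), for example:

tree -I 'tmp|*_old' -L 4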
_unix.203580
I'm writing a collection of scripts to talk to an external program. My problem is that the scripts are only alive for a short time (triggered by a keypress in a larger program) but the external program needs to stay running between calls, and was originally designed for interactive use (think debugger).If I just wanted to write a single script to run the program, I'd open a PTY to it and send/receive data on that. This works (my scripts are in LUA and lpty can deal with the PTY) but it can't keep the program running when the script terminates.If the external program offered me a socket to connect on (like gdb does), I could just save the name somewhere and have each script connect to that socket. But it was only meant to be used interactively.I could write a daemon that starts the external program, opens a PTY to it and then listens on a socket itself. The scripts could then connect to the daemon's socket and send data, which the daemon would forward to the program over the PTY and send the results back.The format of the data I'm exchanging with the external program is line-based, but I don't know in advance how many lines I'm going to get back from a single command. Not a problem with a PTY, but a bit more work with luasocket.I'm wondering if there isn't a better way to do this. Can I somehow open a PTY to a program and get its address, from which I can later connect and disconnect multiple times from my scripts? (Only one script can ever run at a time so concurrency is not a problem.) That would avoid using sockets at all.Or is there some combination of options in socat that does exactly this already?
using a pty like a socket?
socket;pty
You can't open a PTY to a program. A PTY is a pseudo-terminal; it requires a terminal end. If you want to communicate with a program directly via a terminal, the program would need to create a terminal (i.e. to behave like a terminal emulator).Instead, you can run the program in a terminal, such as screen. Screen makes it easy to inject input and read output from programs.However, for what you're doing, a terminal would introduce useless complications. A socket is exactly the right tool for what you want to do direct bidirectional communication between two programs. (For unidirectional communication, a pipe would be the right tool.) The only advantage of using a terminal is if you don't control one of the ends and it insists on buffering output by block rather than by line if the output isn't on a terminal.
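To make the screen route concrete, here is a minimal sketch; mydebugger stands in for your interactive program and the session name is arbitrary:

# start the program detached in a named screen session; it stays alive between script runs
screen -dmS mysession mydebugger

# from a short-lived script: inject one line of input into the program
screen -S mysession -X stuff $'some command\n'

# dump the current window contents to a file so the script can read the output
screen -S mysession -X hardcopy /tmp/mysession.out

For your use case a socket (or a pair of named pipes) is still the cleaner design; this is only the quick way to keep a terminal-oriented program running between invocations.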
_softwareengineering.226178
I'm moving over to work on a library that a fellow developer has been writing. It's full of == true and == false, which I find crazy frustrating to read.I've tried asking him to quit doing it, but he just says that it makes it more straightforward to understand what is going on. Is there a good reason why this isn't a great practice? Or is it purely a stylistic thing?
Why is it bad to use redundancy with logical operators?
operators
null
_softwareengineering.271605
I'm sure this is probably out there but I can't seem to find it.UpdatesI have tested some different code, and reduced my time to complete the task from 50/60 seconds to 10 seconds. This is what I changed:For k = 0 To 28 For v = 1 To countryLastRow(k) If Sheet3.Cells(v + 3, countryRanges(k) + 3) = Sheet1.Range(C6) Then Sheet3.Range(Sheet3.Cells(v + 3, countryRanges(k)), Sheet3.Cells(v + 3, countryRanges(k) + 6)).Copy Destination:=tbl.DataBodyRange(p, countryRanges(k) - 1).End(xlUp).Offset(1, 0) End If Next vNext kUsing the range to copy and paste whole rows at once rather than 1 cell at a time has drastically improved my macro.Still looking out their for any more performance increases people know with the code I have but this seems to be good now.Basic rundownIn Excel I have multiple rows and multiple columns that need to be searched through then depending on criteria, copy it and paste it into another tab using VBA.I want to find the fastest/most efficient way of coding this search/copy/paste function.What I've TriedSo I went for a simple brute force code where I search through all cells in the Sheet2 table columns, use if formula to match the criteria then .copy destination:= to copy the row into the next empty row of table in Sheet1 then move onto the next Sheet2 table and repeat until all 29 tables have been searched.Quite quickly I found I needed to improve the speed of this and I added the following:Application.ScreenUpdating = FalseApplication.Calculation = xlCalculationManualApplication.EnableEvents = Falsewhich improved the speed when I was only using one table. Since adding the second table it starting to become slower as 2148 rows were added.Now I am only using 2 tables worth of data (first table with 680 rows, second table with 2148 rows) and when running a wider search criteria rather than very specific search it takes around 30 seconds to complete. Considering this is only 2 tables, I believe that when all 29 tables are added it could take 10+ minutes to complete since these two tables are quite small compared to others (Biggest table will have around 8000 rows).Below is an example of the tables. 
Please see the example excel below for a better view of more rows.SKU, Count and Size aren't filled in since these are unimportant to the search, just copy and pasted where the Manufacturer, Brand, Sub Brand and Flavour match.This is a snippet of the search function:For k = 0 To 28 For v = 1 To countryLastRow(k) If Sheet3.Cells(v + 3, countryRanges(k) + 3) = Sheet1.Range(C6) Then Sheet3.Cells(v + 3, countryRanges(k)).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k) - 1) 'Manufacturer Sheet3.Cells(v + 3, countryRanges(k) + 1).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k)) 'Brand Sheet3.Cells(v + 3, countryRanges(k) + 2).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k) + 1) 'SubBrand Sheet3.Cells(v + 3, countryRanges(k) + 3).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k) + 2) 'Flavour Sheet3.Cells(v + 3, countryRanges(k) + 4).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k) + 3) 'SKU Sheet3.Cells(v + 3, countryRanges(k) + 5).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k) + 4) 'Count Sheet3.Cells(v + 3, countryRanges(k) + 6).Copy Destination:=tbl.DataBodyRange(v, countryRanges(k) + 5) 'Size End If Next vNext kwhereK = 0 to 28 scrolls through the country table columnsv = 1 to countryLastRow(k) scrolls through the rows for the country tablecountryLastRow(k) is the number of rows in the table on sheet3 so it won't just search through loads of blanks if the max number is more than the current table contains.countryRanges(k) is an array holding the starting column number for each tableThis snippet of code is 1 of 16 versions as there are multiple ways to define the criteria (used in a select case function) where it finds all cells in column 1 of a country table which matches the criteria of a ComboBox box on Sheet1 and the rest copy/pastes it.If the ComboBox on Sheet1 said WEETABIX LTD, All Brands, All Sub Brands, All flavours then it would copy both WEETABIX rows from AE Table and both WEETABIX rows from BE table leaving out the Kelloggs rows.I tried to change the way my search worked, attempted to only look in 1 column at a time rather than all rows in 1 column (for table1), next row then when ended going to next table however this was actually slower by a few seconds when I timed it.I've got an idea to do some data manipulation upon entering the data into tables (which uses a code to extract data from external Excel workbooks. So it is all alphabetical and find the first cell and last cell with the criteria and do a .range.copy .pastespecial but will take a while to write the code to do this as it will require multiple levels of sorting.Another Idea, where the condition is true, get the column1 value and .copy .pastespecial of the range using .offset to get the whole row rather than .copy destination each cell in the rowFull Code and excel table example links belowExample table in excel link here:http://www.filedropper.com/showdownload.php/example_6Full code in pastebin here:http://pastebin.com/WpeE84fXIf you want access to the full document let me know and I will send it privately.
Excel: Vba Copy and paste efficiency with 000's of rows and columns
excel;vba
null
_cogsci.8191
Most people are held accountable for their actions. If diagnosed with a mental disorder however, they are often forgiven/treated more leniently and exempted from punishment. The same thing couldn't be said about a bad personality trait however. No one is exempted from murder for say not being very compassionate or being an asshole.Mental disorders however only seem to be a term of describe a set of bad personality traits that are displayed concurrently though. So why the special treatment?What's the difference between a mental disorder and a bad personality trait?
What's the difference between a mental disorder and a bad personality trait?
personality;terminology;clinical psychology
null
_unix.216564
Is it possible to boot an install CD to RAM? I want to boot the CD and eject it before I proceed with the setup. And if not, where would changes have to be made to get a toram boot option?
Boot Debian netinst CD to RAM?
debian;boot;livecd;debian installer
It does not seem possible to copy the base packages from the boot media to RAM to build an alternative APT repository for installation with the current Debian Installer.

But you might be able to eject the media after boot and continue the installation using the netboot image, which downloads everything from the internet rather than from the boot media. You can remove the media permanently once the netboot installer has started. You will be asked to configure networking at a very early step of the installation process.

You can find the netboot ISO as mini.iso in a folder such as http://ftp.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/ of the official repository.
_webapps.77982
I have a Google Presentation template (the title page and a slide) and I want to extract the graphics from it (so that I can use it later with LaTeX/Beamer).How do I do that? PS. Note that this is not about an individual image but about a style template.
How do I extract the graphics from a Google Presentation template for use with Beamer (LaTeX)?
google presentations
If one could do that, I think Google Apps Script would be my best bet. Currently it is not possible in Google Apps Script to access a Google Slide that way. See this enhancement request in the Google Apps Script issue tracker, to make that possible: issue 1573As mentioned in the issue itself:To subsequent readers: If you are also interested in this requested feature, please click the star next to the issue number.
_webapps.105019
In the YouTube overview page, the individual videos are displayed as still images showing a frame which was taken from that video.Is there a way to find out from which timestamp of the video this preview thumbnail was generated?
From which timestamp of a YouTube video is the preview thumbnail taken?
youtube
null
_softwareengineering.144130
Let's assume you are convinced that the extra time spent unit testing has merit and improves production. Does that still hold up when everyone working on the same code doesn't use them? This question makes me wonder if fixing tests that everyone doesn't use is a waste of time. If you correct a test so the new code will pass, you're assuming the new code is correct. The person updating the test better have a firm understanding of the reasoning behind the code change and decide if the test or the new code needs to be fixed. This much inconsistency in a team when it comes to testing is probably an indication of other problems as well. There is a certain amount of risk involved that someone else on the team will alter code that is covered by testing. Is this the point where testing becomes counter-productive?
Testing loses its effectiveness if all programmers don't use them
unit testing;programming practices
One's tests must be trusted to be effective. A man with two watches does not know what time it is. That is: bad, inaccurate, unmaintained tests ruins the good stuff. Further, benign neglect is a death spiral, ultimately to the point that the test suite as a whole will be discredited and abandoned.A test may be bad because it is not current or it lies. It does not matter which, the damage to trust is the same.When it comes to tests, maintained accuracy and quality over quantity is key.
_webapps.52584
My case: these days I send my CV every day to lots of contacts, each time sending the same mail but to a different To address. To avoid re-uploading the CV for each send, I uploaded the CV (a doc file) to Google Drive, and when I want to send it I do: right click > Share > Email as attachment. Now, how can I create a fixed body for these mails instead of re-typing it for each mail?
Gmail - how to create fixed mail?
gmail;google drive
In Gmail you can make templates. To enable templates you have to use a Labs feature called Canned Responses.

Click the gear icon in the top right.
Select Settings from the menu.
Click on the Labs tab.
Look for Canned Responses.
Enable it.
Make sure you click Save Changes at the bottom.

To set up a template:

Compose a new message as per normal.
Once done, click on the more options dropdown (next to the Discard button).
Select Canned Responses.
Select New Canned Response.
Give it a name. Done.

To use a template:

Compose a new message.
Click on the more options dropdown.
Select Canned Responses.
Insert your template.

Please note: Gmail only allows you to save the body of an email as a template, and not the attachments, but these can be added before sending just as with a normal email.

Source
_unix.20026
I did something like

convert -page A4 -compress A4 *.png CH00.pdf

But the 1st page is much larger than the subsequent pages. This happens even though the image dimensions are similar. These images are scanned & cropped and thus may have slight differences in dimensions.

I thought -page A4 should fix the size of the pages?
convert images to pdf: How to make PDF Pages same size
pdf;conversion;imagemagick
Last time I used convert for such a task I explicitly specified the size of the destination via resizing:

$ i=150; convert a.png b.png -compress jpeg -quality 70 \
      -density ${i}x${i} -units PixelsPerInch \
      -resize $((i*827/100))x$((i*1169/100)) \
      -repage $((i*827/100))x$((i*1169/100)) multipage.pdf

The convert command doesn't always use DPI as default density/page format unit, thus we explicitly specify DPI with the -units option (otherwise you may get different results with different versions/input format combinations). The new size (specified via -resize) is the dimension of a DIN A4 page in pixels. The resize argument specifies the maximal page size. What resolution and quality to pick exactly depends on the use case - I selected 150 DPI and average quality to save some space while it doesn't look too bad when printed on paper.

Note that convert by default does not change the aspect ratio with the resize operation:

Resize will fit the image into the requested size. It does NOT fill, the requested box size. (ImageMagick manual)

Depending on the ImageMagick version and the involved input formats it might be ok to omit the -repage option. But sometimes it is required and without that option the PDF header might contain too small dimensions. In any case, the -repage shouldn't hurt.

The computations use integer arithmetic since bash only supports that. With zsh the expressions can be simplified - i.e. replaced with $((i*8.27))x$((i*11.69)).
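If you want to verify what actually ended up in the file, and poppler-utils happens to be installed, its pdfimages tool can list the embedded images together with their effective resolution (the file name here is just the one from the example above):

pdfimages -list multipage.pdf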
_cstheory.34614
I am given a simple undirected graph $G(V, E)$. I want to partition $V$ into $b$ Maximal cliques: $\{C_1, C_2, ..., C_b\}$ such that the number of edges that cut across two cliques is the minimum. $b$ is arbitrary i.e. there is no restriction on $b$.I think the decision version of this problem is NP-Complete. It may be reduced to a weighted independent set problem. My question is:Is there any known approximate algorithm for this problem?Thank you for your answers!
Maximal Clique partition of vertices with smallest number of cut edges
clique;partition problem
This problem is the Cluster Edge Deletion problem.

Given a graph $G = (V,E)$ and an integer $k$, can we delete at most $k$ edges $F \subseteq E$ such that $G-F$ is a cluster graph?

A cluster graph here is a graph whose connected components are cliques.

The approximability of Cluster (Edge) Deletion was studied by Shamir, Sharan, and Tsur (Cluster graph modification problems. Discrete Applied Mathematics, 144(1):173-182, 2004). They show that the problem is NP-hard to approximate within a constant factor:

Theorem 12. There is some constant $\epsilon > 0$ such that it is NP-hard to approximate Cluster Deletion to within a factor of $1 + \epsilon$.
_unix.33285
On an embedded Linux platform, I have a network adapter attached to an SDIO interface. There is no Card Detect signal on this particular bus. If for instance, I turn the network adapter power on or off, is there any way I can force a re-scan of the SDIO bus from user space?
How can one force a re-scan of an SDIO bus from Linux user space?
embedded;sd card;sdio
null
_unix.354893
I have a Unix server that has started rebooting every few minutes. I tried to trace the source of the problem by logging the process tree at the time reboot is called, as described by this question's answer. However, I don't understand where to look next.

The log contains these lines (among many others):

root      1     0  0 16:49 ?  00:00:00 /sbin/init
root   2894     1  0 16:53 ?  00:00:00 /bin/bash /sbin/shutdown -r now Control-Alt-Delete pressed

To me, it looks like the server's startup process is calling a reboot with shutdown -r. In the system log, all I see is this line:

sshd[2433]: Received signal 15; terminating.

Also, this is an Amazon Web Service Unix instance that only allows connections from my IP address. It's also protected by a private key.

What are the next steps I can take to find the source of the problem?
Unix server constantly rebooting
bash;shutdown;reboot;aws
null
_unix.358096
I have seen advice in several places to use the following shebang line

#!/usr/bin/env bash

instead of

#!/usr/bin/bash

My knee-jerk reaction is, what if somebody substitutes this executable for their own in, say, ~/.local/bin? That directory is often set up in the user's path before the system-wide paths. I see this raised as a security issue often as a side note rather than anything to take seriously, but I wanted to test the theory.

To try this out I did something like this:

echo -e "#!/usr/bin/python\nprint 'Hacked!'" > $HOME/.local/bin/bash
chmod 755 $HOME/.local/bin/bash
PATH=$HOME/.local/bin env bash

This yields

/usr/bin/env: bash: No such file or directory

To check whether it was picking up anything at all I also did

echo -e "#!/usr/bin/python\nprint 'Hacked!'" > $HOME/.local/bin/perl
chmod 755 $HOME/.local/bin/perl
PATH=$HOME/.local/bin env perl

which prints, as I expected,

Hacked!

Can someone explain to me why the substitute bash is not found, but the substitute perl is? Is this some sort of security measure that (from my point of view) misses the point?

EDIT: Because I have been prompted: I am not asking how /usr/bin/env bash is different from using /bin/bash. I am asking the question as stated above.

EDIT2: It must have been something I was doing wrong. I tried again today (using an explicit path to env instead of an implicit one), and there was no such "not found" behaviour.
/usr/bin/env as a shebang - and its security implication
bash;security;environment variables
what if somebody substitutes this executable for their own in say ~/.local/bin ?Then the script doesn't work for them.But that doesn't matter, since they could conceivably break the script for themselves in other ways, or run another program directly without messing with PATH or env.Unless your users have other users' directories in their PATH, or can edit the PATH of other users, there's really no possibility of one user messing another one.However, if it wasn't a shell script, but something that grants additional privilege, such as a setuid wrapper for some program, then things would be different. In that case, it would be necessary to use an absolute path to run the program, place it in a directory the unprivileged users cannot modify, and clean up the environment when starting the program.
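If you want to see the lookup behaviour for yourself, a quick (self-inflicted) experiment along the lines of the question works fine; the paths and the fake interpreter below are only an illustration:

mkdir -p "$HOME/.local/bin"
printf '#!/usr/bin/python\nprint "Hacked!"\n' > "$HOME/.local/bin/bash"
chmod 755 "$HOME/.local/bin/bash"

# env simply walks PATH, so with that directory first the fake bash is what runs...
PATH="$HOME/.local/bin:/usr/bin:/bin" env bash -c 'echo real'

# ...but only for the user whose PATH says so; nobody else is affected.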
_reverseengineering.14599
I have an obfuscated .Jar file and I want to debug it. I tried debugging it with Eclipse's Bytecode Visualizer:But when debugging, I don't see stack windows. Where can I find them?
Where are the stack windows when debugging Java bytecode with Bytecode Visualizer?
obfuscation;java;byte code
null
_unix.362426
When I switch to vi mode in the shell (bash or ksh), shortcuts that are very useful to me, such as C-p and C-n to go back and forth in the command history, disappear. I don't want to rely on Up and Down for that, and I don't want to add key bindings every time for every shell. I just want to know whether there are alternative, native vi-mode commands for navigating the history. By the way, C-l for clearing the screen disappears too. Is there a default key binding for clearing the screen in vi mode?
History navigation in Vi mode of Bash shell
bash;shell;keyboard shortcuts;command history;vi
The default key-bindings for moving up or down in the command history in all shells that I know of that supports Vi key-bindings is k for the previous command and j for the next command.These are the same as the corresponding movement commands in the Vi editor.For them to work, you will have to be in normal mode, i.e. you have to press Esc once.To clear the screen, use the command clear.
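In bash you can see what is already bound in vi command mode with readline's bind builtin, and the same mechanism can restore the emacs-style keys in insert mode if you ever change your mind about custom bindings. The exact bindings below are only an example and can go once into ~/.inputrc instead of per-shell setup:

# list the history-related bindings available in vi command mode
bind -m vi-command -p | grep -i history

# optional: bring back C-p / C-n / C-l while typing (vi insert mode)
bind -m vi-insert '"\C-p": previous-history'
bind -m vi-insert '"\C-n": next-history'
bind -m vi-insert '"\C-l": clear-screen'

ksh does not use readline, so there the defaults (Esc then k / j, and the clear command) are the portable answer.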
_unix.198504
I have a folder full of text files named 000.txt to 181.txt. How can I process all of them with the same awk script (program.awk) and send them to their respective output files (output000.txt - output181.txt)?
Multiple input files and output files in awk
bash;shell;awk;perl;gawk
for f in ???.txt
do
    awk -f program.awk "$f" > "output$f"
done

will process all files whose names are three characters (any characters) followed by .txt. To restrict it to only files whose names are three digits followed by .txt, use

for f in [0-9][0-9][0-9].txt
_unix.301324
I have a board with Cyclone V SE, which contains ARM CortexTM-A9 MPCore (single core). On this board I run linux 4.1.15 built using Buildroot. When testing USB it turned out that while Bulk OUT transfers run at about 20MB/s, Bulk IN transfers run at about 10MB/s. For this measurement I used g_zero on device and a simple libusb based program on host.The second measurement was done using g_mass_storage on device side and dd on host side. Same results.Last test was done using a combination of ConfigFS, FunctionFS and my userspace application that read/wrote data from/to RAM. There I got 10MB/s IN, but up to 40MB/s OUT. I expected the speeds to be roughly equal (at least when working with RAM).I checked the bulk in protocol in USB in a nutshell and don't see any obvious reason why IN should be significantly slower than OUT.Now I know that there are too many things that can cause this and I don't expect a The slowness is caused by... answer. But where should I dig and what tools should I use to track it down?
How to diagnose slow USB transfer from embedded linux?
usb;embedded
null
_codereview.102335
After writing a rather long document in Markdown and using pandoc to convert it to a PDF, I found, to my dismay, that many of the images were out of place, and that they all had their alternate text proudly displayed underneath them as captions. My document is rather instructional, so this rearrangement was harmful to its readability.

I eventually found a way to display the images as inline. I still wanted to write the document in standard Markdown, though, so I wrote a Python script to convert all the standalone images in a document to this inline form.

pandoc_images.py:

import sys

# Convert standalone images in standard Markdown
# to inline images in Pandoc's Markdown
# (see http://pandoc.org/README.html#images)
with open(sys.argv[1], 'r') as markdown:
    lines = markdown.read().splitlines()
    for index, line in enumerate(lines):
        is_first_line = index == 0
        preceding_blank = True if is_first_line else not lines[index - 1]
        is_last_line = index == len(lines) - 1
        following_blank = True if is_last_line else not lines[index + 1]
        is_standalone = preceding_blank and following_blank
        is_image = line.startswith('![') and '](' in line and line.endswith(')')
        print(line + ('\\\n' if is_standalone and is_image else ''))

Example (text.md):

This is some text.

![This is an image.](image.png)

### This is a header.

Running python3 pandoc_images.py text.md would produce:

This is some text.

![This is an image.](image.png)\

### This is a header.

It seems like a lot of mess (enumerate, bounds checking, etc.) for such a simple job, though. Is there any way I can improve any of this code?
Converting Pandoc Markdown images from captioned to inline
python;markdown;pdf
Instead of a ternary, use the or syntax for setting your booleans here. If is_first_line is True then True is returned; if it's not, then not lines[index - 1] is evaluated and the result of that is returned, whether it's True or False.

preceding_blank = is_first_line or not lines[index - 1]

But since you're setting is_first_line one line before and never using it again, I'd even fold that into this expression.

preceding_blank = index == 0 or not lines[index - 1]

I'd make the same changes with is_last_line. Also, I would use i instead of index, since i is idiomatic for an index anyway and will save some characters.

One last thing that you may want: adding .strip() inside the not checks will strip out any whitespace on the neighbouring line and make sure that even a whitespace-only line is considered blank, because any character in a string will make it evaluate as True. This may or may not be beneficial to you, as whitespace could be meaningful.

with open(sys.argv[1], 'r') as markdown:
    lines = markdown.read().splitlines()
    for i, line in enumerate(lines):
        preceding_blank = i == 0 or not lines[i - 1].strip()
        following_blank = i == len(lines) - 1 or not lines[i + 1].strip()
        is_standalone = preceding_blank and following_blank
        is_image = line.startswith('![') and '](' in line and line.endswith(')')
        print(line + ('\\\n' if is_standalone and is_image else ''))
_unix.298530
I'm trying to convert character strings like 123456a to 123456A or test to Test, but leave existing uppercase as is, for example testHW becomes TestHW.I have tried many attempts circling around:sed 's/[[:alpha:]]./\u\1/'without luck - any ideas?
Using sed to uppercase the first non-numeric character, leave others as is
sed
\1, or the general form \n where n is a digit, shall be replaced by the text matched by the corresponding back-reference expression, which you define by grouping the text between \(...\) with BRE or (...) with ERE.

With GNU sed:

$ echo 123456a | sed 's/\([[:alpha:]]\)/\u\1/'
123456A

or:

$ echo 123456a | sed -E 's/([[:alpha:]])/\u\1/'
123456A

You can also use & to refer to the matched text instead of a back-reference:

$ echo 123456a | sed 's/[[:alpha:]]/\u&/'
123456A

Note that [:alpha:] matches both lowercase and uppercase characters, so something like 123456Aa will be left as-is.

If you want to replace the first lowercase character with the corresponding uppercase one, you must use [:lower:]:

$ echo 123456Aa | sed 's/[[:lower:]]/\u&/'
123456AA
_unix.269070
I've used my own domain with the email service of yandex.com. Now I want to migrate to another email service provider. I've chosen zoho.1) How can I copy all my email from yandex to zoho? Or general, how can I copy all my email from a email service A to email service B?2) I'm concerned about privacy and I'd like not to store my email on a server anymore and I'd better store them locally. At the same time it's vital for me to be able to access my email from a laptop and phone and tablet. So should I keep using IMAP?
How to migrate from one email provider to another?
email;migration;data
null
_cs.45127
This is a somewhat big question since there are many variations on the VRP. The most studied seems to be the capacitated version, the CVRP, but variations considering time windows, backhaul/linehaul, etc. are also studied, and of course characteristics of the graph and number of vehicles matter too. The most general VRP with no capacity constraint and one vehicle is the TSP, which of course can't be approximated within any constant, so neither can the VRP.Is there research on the approximation complexity classes for variations? Such as where the graph is Euclidean, has triangle inequality, is symmetric v asymmetric, CVRP, VRPTW, DVRP, etc.I was specifically looking at the Clarke-Wright savings algorithm for these questions. I haven't found anything on its approximation ratio, and I realized it might be because the entire problem doesn't have approximation ratios.
Does the vehicle routing problem and variations have approximation ratios or a PTAS?
algorithms;approximation;heuristics
null
_webapps.50064
I know about Doodle and I've used it for a while to schedule meetings. But it has one big drawback: if I'm available between 10:00 and 13:00 for a meeting that should take 1 hour, I might schedule it like this:

10-11
11-12
12-13

But I'm also available for 11:30-12:30. That, however, doesn't show up. And if someone can't make 10-11 or 11-12 but is free only 11:30-12:30, that person can't say so.

So I want a way to say that I'm available between 10:00 and 13:00 for a 1-hour meeting. How can I do this?
Schedule meeting
scheduling;doodle.com
null
_cs.32275
I am quoting a paragraph from the book Operating System Principles by Galvin.Usually, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of $2^{32}$ physical page frames. If frame size is 4 kB, then a system with 4-byte entries can address $2^{44}$ bytes (or 16 TB) of physical memory.Now, I know we have $2^{44}$ bytes of memory because we have $2^{32}$ page frames and each frame size is 4 kB, i.e. $2^{12}$ bytes of memory, so physical memory is $2^{32}\cdot2^{12} = 2^{44}$.Please help me to understand the following:How did we get the number of frames as $2^{32}$?If the logical memory space is $2^{32}$ then what should the physical memory space be (considering both the fully used and partially used logical address space concept)?
Operating System Paging concept
operating systems;memory management;virtual memory;paging
null
_softwareengineering.98836
I look for a tool or a best practice whose focus is on managing how the current staff of programmers can get assigned and re-assigned to different projects depending on changing priorities. Progress on projects should be tracked, too. Our department is not faced with a single, overly complex project (for which I'd know several good tools to manage it), but with a seemingly ever increasing stack of new project requests of small to moderate size. Most projects can be completed by a single programmer within 1 to 4 weeks, and only some rare beasts might take several months. It does happen that work on a project is interrupted because a more important job pops up. Usually a programmer is working on several projects at once (at example, one short-term, one medium-term, and one long-term project, or helping out at someone else's project because of his special knowledge). Most projects are done by a single programmer of the pool. I speak of a team of less than 10 programmers.Since most project requests are unrelated to each other, the order in which they are done is mostly decided arbitrarily or by how loud the request is shouted. The requests come from different in-house departments and are dictated by internal demand and not with a pareto optimum in mind. Needless to say this leads to constant discussions about why a certain project needs to be done first.I know that there is a management problem behind it, but my question is not about how to change the way the company works, but about a tool whichs helps to improve coordination and communication in the meantime.What I am looking for is a tool that can visualize the current assignments, give projections about how long a programmers is already booked up by projects, reflects how changing an assignment change the timeplan of projects etc. Its purpose is less about self-coordination of the team, but more to provide a high-level view that can be communicated to and is easily understandable by other departments. It should provide a solid foundation for discussing priorities and for deciding the order of projects. The ability to track time boundaries (like must be finished Oct 1st latest) and cost factors would be a bonus.Can a ticketing system fit the bill or is Microsoft Project the answer? Has anyone got any experience with such a situation and can suggest a good solution?
Any good planning tool or method for managing several competing projects?
productivity;team;project;planning
null
_unix.278879
I hope someone will be able to help here, I don't see much support about OpenElec... But let's try !SO. I'm buidling an HTPC and would like to multi-boot it with OpenElec + Mint; with EFI boot.I use the OpenElec 7 Beta 1 (OpenELEC-Generic.x86_64-6.95.2.img.gz) as my Intel NUC6I3SYH is not supported below.All my problems comes from this : the installation of OpenElec is quite simple (which is nice but), you don't have any choice : you have to format your disk. It creates 2 partitions :1 partition of ~500 Mo with the system (FAT16; SYSTEM)1 partition of the remainging space for the datas (EXT4; STORAGE).So when you wan to add another OS to it, it's quite a pain : you have to resize/reorder partitions, etc.During my last attempt, I started by installing OpenElec normally; and I made an image of the SYSTEM partition, that I stored on an external hard drive (using Mint live CD).I haven't made an image of the STORAGE partition since it was empty.sudo cat /dev/sda1 > /media/Mint/Backup/OESystem.isoI formatted the disk, and installed Mint regulary.I recreated the OpenElec partitions with gParted; and loaded the iso image on it.Note that the SYSTEM partition was FAT16 and so I had problems when extending it with gParted (a was warned about a cluster size problem or something); so I used EXT4 for it.Then; I installed reFIND to manage the UEFI boot; and when I boot OpenElec, I got this message : cp: can't stat '/flash/SYSTEM'I know my installation has been a little experimental, but I don't see why it should not work. I also made several other attemps, but this one was the best :)Also, I saw a lot of other people getting this error, but there were no answer about this particular problem.The only thing interesting I read was this on Github, but it's too technical for me and I didn't understood it.Can someone help here ?I suspect that the problem comes from partition access permissions, or from the OpenELEC boot system.Thanks !
OpenELEC : cp: can't stat '/flash/SYSTEM'
linux mint;dual boot;uefi;openelec
null
_softwareengineering.208136
I have a simple console application that will be deployed as a scheduled job. Below is its pseudo code:

Main(string[] args)
{
    //get xml string from database
    Reports reports = ReportsDB.GetReport(args[0].ToString());
    //Generate xml file using xmlwriter
    //post the file to sftp site
}

I was wondering how to include logging functionality for this job. Also, will it suffice to use a text writer to write to a text file every time it is run, or should I use libraries such as log4net?

Edit

I should be able to log the execution of the job, like start time, debug info, etc.; that will be for myself.
How to do logging in console application
c#;logging
First, in general the answer is the same way you'd do logging in any other application.I would strongly recommend using something like log4net or nlog rather than rolling your own using a TextWriter for a number of reasons:There are great, free options out there that will probably result in you writing and maintaining less code -- just making logging target a configuration option is worth the price of admission alone.The great, free options are very performant -- logging is kind of easy until you get into how do I log this without blocking the execution?The great, free options are very reliable -- logging is easy until you get into how do I make sure this fatal exception that killed the program gets flushed to the log?
_unix.323362
I'm using U-Boot on a Raspberry Pi Compute Module. The boot process is:

RPi firmware --> U-Boot --> Linux

I'm setting up some things about devices in config.txt, which is used by the RPi firmware. But when I re-load the DTB with U-Boot, it erases the settings made by the RPi firmware, and some devices won't work in Linux.

I boot using the bootz command, and I can't use it without giving it a DTB, or it will crash when booting Linux.

Do you guys have an idea of how I can boot without reloading a fresh DTB?
U-Boot : boot without reloading DTB
boot;u boot;device tree
null
_webapps.107618
I have a Google Sheet that looks like this:

[screenshot of the sheet]

That Google Sheet has some Google Script that looks like Google This:

function myFunction() {
  var ss = SpreadsheetApp.getActiveSheet();
  var rng = ss.getActiveCell();
  Logger.log("Range " + rng.getA1Notation() + " value = " + rng.getValue());
}

When I run that script with the sheet as seen above, the log has this text:

Range C3 value = Charlie's Horses

That's as expected. Now, I add a Filter View to the sheet so it looks like this:

[screenshot with the filter view applied]

When I run the code again, I get the same output:

Range C3 value = Charlie's Horses

However, when I sort that filter view, things go haywire.

I've sorted by column A descending but I still have Charlie's Horses selected. When I run the script, though, the logger has an unexpected output:

Range C9 value = Eye Doctor

It has the expected range address of C9 but the value is not what I'm seeing. As far as I can tell, that's because .getValue() is pulling the data from the original, unfiltered view of the sheet.

Is there any way in Google Sheets / GAS to reference cells in a sorted filter view rather than the data in the unfiltered view?

The script that raised this issue needs to collect the values for cells on the same row in column A and C as well as getting / updating the comment on the cell in column C. If I'm able to reference the correct range object, I could work it out from there.
Reference Cells in Sorted Filter View
google spreadsheets;google apps script
null
_unix.330404
I'm not getting audio from a guest OS. I'm using virt-manager. These are my settings: guest OS Cloudera (CentOS 6.7), display Spice, sound ich6 and video QXL. Can anyone help me?
qemu KVM audio not working on guest OS
centos;audio;kvm;qemu
After searching through http://www.linux-kvm.org/page/Guest_Support_Status#CentOS I found out that CentOS 6.7 is not yet supported, so I have switched to VMware Workstation.
_codereview.27506
All,I barely have a week of angular behind me and I am still struggling with some of the concepts. Below is a piece of code I came up with. The idea is to let the user add/delete entries in a model (in practice, the model is read from a json and is more complex than what is presented). While this code seems to work, I'm wondering whether it's proper angular:Is it OK to leave the add_item function in the controller ? I understand that any DOM manipulation should be put in a directive, but it's only manipulating the model, right?Shouldn't the del_item be in the controller too, then?Using the index to control which item to delete look a bit dangerous, what would be a better alternative?Thanks for any input.<html ng-app=main> <head > <script src=//ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js></script> </head> <body> <div ng-controller=BaseController> {{items}} <div> <button type=button ng-click=add_item()> New item </button> </div> <ul> <li ng-repeat=i in items> <display-item idx={{$index}}></display-item> </li> </ul> </div><script> angular.module(main, []) .controller(BaseController, function($scope){ $scope.items = [{key:1, a:'A'},{key:2, a:'B'},{key:3, a:'C'}]; $scope.last = $scope.items.length; $scope.add_item = function() { var item={key: $scope.last + 1, a:'?'}; $scope.items.push(item); $scope.last += 1; }; } ) .directive('displayItem', function() { return { restrict: 'E', transclude: true, scope: {idx: '@'}, template: '<strong>{{idx}}:</strong>'+ '{{$parent.items[idx].key}}={{$parent.items[idx].a}}'+ '<button ng-click=del_item(idx)>X</button>', link: function(scope, element, attrs) { scope.del_item=function(i){ scope.$parent.items.splice(i,1)} } } } );</script>
Manipulating model elements in angular
javascript;angular.js
Right, you're only manipulating the model; updating the model about a change (and thus updating it in the controller) is perfectly fine, and it's what the controller does.

del_item should be in the controller too, you are correct. That's where it belongs.

Using the index to delete the item is dangerous. What I'd do is pass the item itself to a function and then use indexOf to locate the element you're deleting, and delete that.

If I may ask, why are your display items in a directive in the first place and not in the DOM? You could use an ng-repeat in the DOM to repeat the items instead and drop the directive altogether.

Directives are useful for code reuse; are you reusing this specific component in a lot of other places throughout your code? Behaviors affecting your models, and not just your presentation logic, should probably not be in the directive to begin with (like del_item in your example); a directive should encompass presentational behavior, not business logic.
_unix.9707
In my Linux box, sleep accepts seconds, minutes and hours. So:

sleep 10m

sleeps for 10 minutes (or 600s).

sleep on Mac only accepts seconds as an argument. sleep 10m doesn't work, only sleep 600s.

What can I do? Create a function named sleep that converts 10m or 10h to seconds when passed as a parameter, and calls the builtin sleep?
Different sleep binaries on Mac (Darwin) and in Linux. How to properly handle the differences?
linux;shell;osx;sleep
You could use homebrew for Mac OS X: https://github.com/mxcl/homebrew and install the coreutils package from there. That will allow you to install the GNU version of sleep that handles the same parameters as the linux version.Note that by default it installs the binaries with a 'g' prefix, so the command will actually be named gsleep, but the package provides a script file to alias all commands.
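If you would rather not install anything, the wrapper function you describe is also easy to write. Here is a sketch that converts an optional s/m/h/d suffix to seconds before calling the stock sleep; the function name, the supported suffixes and the single-argument limitation are my assumptions, not something the system provides. Put it in your shell startup file:

sleep() {
    local arg=$1 n unit
    n=${arg%[smhd]}      # numeric part
    unit=${arg#"$n"}     # suffix, if any
    case $unit in
        m) n=$((n * 60)) ;;
        h) n=$((n * 3600)) ;;
        d) n=$((n * 86400)) ;;
    esac
    command sleep "$n"   # call the real sleep binary, not the function
}

With this in place, sleep 10m runs command sleep 600 on the Mac, and plain numeric arguments pass through unchanged.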
_codereview.129174
After reading noobs, I'm interested in possible way to improve readability of the following short snippet. a struct pictdb_file is a small file system containing a metadata array and a file descriptor fpdb.the function allocates the resources, and forwards initialization to db_create_logic.How can I improve the readability of this snippet ? Please comment about: the syntax, white-lines, comments, function extracting, etc.Also, which version do you prefer and why ?version 1:enum error_codesdb_create_resources(const char* db_filename, struct pictdb_file* db_file){ //allocate metadata db_file->metadata = calloc(db_file->header.max_files, sizeof(pict_metadata)); if (db_file->metadata == NULL) { return ERR_OUT_OF_MEMORY; } //open fpdb db_file->fpdb = fopen(db_filename, wb); if (db_file->fpdb == NULL) { free(db_file->metadata); db_file->metadata = NULL; return ERR_IO; } //forward enum error_codes e; e = db_create_logic(db_filename, db_file); //release resources free(db_file->metadata); db_file->metadata = NULL; db_close(db_file); return e;}version 2: (comment and blank-lines changes)enum error_codesdb_create_resources(const char* db_filename, struct pictdb_file* db_file){ db_file->metadata = calloc(db_file->header.max_files, sizeof(pict_metadata)); if (db_file->metadata == NULL) { return ERR_OUT_OF_MEMORY; } db_file->fpdb = fopen(db_filename, wb); if (db_file->fpdb == NULL) { free(db_file->metadata); db_file->metadata = NULL; return ERR_IO; } enum error_codes e; e = db_create_logic(db_filename, db_file); free(db_file->metadata); db_file->metadata = NULL; db_close(db_file); return e;}
Two variants of a db_create_resources() function
c;comparative review;database
There's no specification. What does this function do? What arguments should I pass? What are the preconditions on db_file? (It looks as if I might have to fill in some of the fields, but which ones?)resources is spelled thus.The function does not check the preconditions.Mixing fopen and db_close seems confusing and error-prone. fopen should be paired with fclose, and db_close should be paired with db_open.If the call to fopen fails, it looks as if you lose information about the cause of the error. fopen can fail due to permissions (EACCES), lack of disk space (ENOSPC), missing directory (ENOENT), and so on, but these all get converted to ERR_IO.The function uses db_file->metadata to hold a pointer to some memory, but this memory is not actually used by the function, and by the end of the function the memory is freed and the pointer set to NULL. It seems wrong to use the db_file structure this way. If db_create_logic needs some temporary memory, it should allocate it itself (and preferably in a local variable, not in the db_file structure).The cleanup logic for db_file->metadata appears twice. This is a bad idea: if you changed it in one of the places, would you remember to change it in the other? Consider refactoring the error handling code to look like this:enum error_codesdb_create_resources(const char* db_filename, struct pictdb_file* db_file){ enum error_codes e; db_file->metadata = calloc(db_file->header.max_files, sizeof *db_file->metadata); if (db_file->metadata == NULL) { e = ERR_OUT_OF_MEMORY; goto fail_calloc; } db_file->fpdb = fopen(db_filename, wb); if (db_file->fpdb == NULL) { e = ERR_IO; goto fail_fopen; } e = db_create_logic(db_filename, db_file); db_close(db_file);fail_fopen: free(db_file->metadata); db_file->metadata = NULL;fail_calloc: return e;}
_cs.77078
I'm learning about floating point numbers and I don't quite understand when one should interpret an exponent as moving the decimal point to the left. My book shows an example of converting -10.5 into binary in normalised form (using 8 bits): First we convert it into binary without normalisation, using a fixed-point representation, giving 10101.100. We know that to normalise it, the decimal point must move four places to the left (so to recover the number from the normalised form, the decimal point would have to move four places right). So in normalised form we get a mantissa of 1.0101100 and an exponent of 4. Now, my book represents four as 100, but it also says to use two's complement. Therefore, can't I interpret 100 as -4? When would I know to interpret an exponent as negative or positive? I.e., when would I know to move the decimal point to the left when given a mantissa?
When do you know whether an exponent represents a movement of the decimal point to the left?
floating point;real numbers
null
_unix.78630
After years of Linux I would like to give *BSD a try, but I'm uncertain whether I should use FreeBSD or PC-BSD. With KDE out of the box, a graphical package manager and such, PC-BSD strikes me as simple to set up and use. On the other hand, more people use FreeBSD and it seems to be better documented, so it sets the standard. I'm leaning towards FreeBSD, but would still like something similar to the GUI goodness of PC-BSD. So my question: can FreeBSD be set up to work similarly to PC-BSD, with KDE, a GUI login prompt and GUI management tools? Is it difficult, or just a matter of building the relevant packages from the ports tree? Would it perhaps even be possible to install PC-BSD packages directly on FreeBSD?
FreeBSD or PC-BSD?
freebsd;bsd
PC-BSD is FreeBSD with many enhancements to make a convenient, comfortable desktop environment, their Push Button Installer package management tool, a network utility and a unified control panel for easy access to admin utilities. So, yes, FreeBSD can be made to work like PC-BSD—that's exactly what the PC-BSD team have done!If you want a graphical desktop system to get you started learning *BSD, then I would think PC-BSD is the ideal place to start—it gets you up and running with one of several popular desktop environments from the get-go, so you can then focus on learning other aspects of the system. If, on the other hand, you want to get your hands dirty from the beginning, learning how to install FreeBSD and additional software, you can use the ports system to add the extras you want.As for the documentation, the vast majority of documentation relevant to FreeBSD will also apply to PC-BSD without modification, so the PC-BSD team focus their efforts on documenting the differences.You can install PBI packages on a FreeBSD system—simply install the ports-mgmt/pbi-manager port, which provides the command line utilities for managing PBI packages. There is also sysutils/easypbi, which aims to provide a simple interface for creating PBI modules from FreeBSD ports. There are also ports of the PC-BSD network utility, their warden jail utility and others.
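As a concrete starting point, the extra tooling mentioned above is installed like any other FreeBSD port; the port names are the ones cited in the answer, but check the ports tree on your release before relying on them:

    # Install the PBI command line tools from the ports tree
    cd /usr/ports/ports-mgmt/pbi-manager && make install clean

    # Optional helper for building PBI modules from ports
    cd /usr/ports/sysutils/easypbi && make install clean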
_cogsci.223
Psychology in the time of Freud was occupied with dreams. Relaying these to one's analyst was an important part of treatment.Fast-forward to less than 100 years later, and we know so much about the importance of sleep in consolidating memories during REM [1][2] and slow-wave sleep [3] (among countless other references I have omitted). Let us assume dreaming is largely relegated to REM (ignoring dreams experienced during stage 2 sleep). If we recall our dreams at a later time then is it detrimental to any subsequent process of memory consolidation?If our dream represents a state through which our brain is passing to consolidate memory, what is the effect of recalling that intermediate state and potentially storing it back in long-term memory?ReferencesIshikawa A, Kanayama Y., et al (2006). Selective rapid eye movement sleep deprivation impairs the maintenance of long-term potentiation in the rat hippocampus. Eur J Neurosci.,24 (1),243-8.Louie K, Wilson MA. (2001). Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep. Neuron,29 (1),145-56. FREE PDFJi D, Wilson MA (2007) Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci, 10(1),100-7. FREE PDF
Does dream recall disturb the processes of memory consolidation?
memory;sleep;dreams
I am not aware of any study that specifically addresses dream recall, but there is a growing literature about memory reconsolidation or post-reactivation plasticity, the idea that memory reactivation (recall) can temporarily return a memory to a state of high fragility and susceptibility to interference, after which a process similar to consolidation makes it more resistent again. Most memory reconsolidation work has been with animals, but there are a few studies with human subjects [1][2]. [3] is an interesting paper about (re)consolidation of motor memory during slow-wave sleep. [4] and [5] are good reviews of the field.[1] Hupbach, A., Gomez, R., Hardt, O., & Nadel, L. (2007). Reconsolidation of episodic memories: A subtle reminder triggers integration of new information. Learning & Memory, 14(1-2), 47 -53. doi:10.1101/lm.365707[2] Forcato, C., Burgos, V. L., Argibay, P. F., Molina, V. A., Pedreira, M. E., & Maldonado, H. (2007). Reconsolidation of declarative memory in humans. Learning & Memory, 14(4), 295-303. doi:10.1101/lm.486107[3] Walker, M. P., Brakefield, T., Hobson, J. A., & Stickgold, R. (2003). Dissociable stages of human memory consolidation and reconsolidation. Nature, 425(6958), 616620. doi:10.1038/nature01930[4] Nader, K., & Hardt, O. (2009). A single standard for memory: the case for reconsolidation. Nature Reviews Neuroscience, 10, 224234. doi:10.1038/nrn2590[5] Schiller, D., & Phelps, E. A. (2011). Does Reconsolidation Occur in Humans? Frontiers in Behavioral Neuroscience, 5. doi:10.3389/fnbeh.2011.00024
_webapps.23762
When I load the contacts list in Gmail, I can view the list of contacts but cannot open any. Merge and delete appear to work, and I can also access the contacts on my Android device, but not via the Gmail UI. Each time I try to open a contact I get the error "There was an error loading the contact." How do I resolve this? I'm trying to avoid deleting everyone and re-importing them. I have tried on several computers (Windows 7 and XP) with various browsers (Chrome, IE8 and IE9).
Can't open any contacts in Gmail, merge and delete still work though
gmail;google contacts
This seems like it might be a temporary issue on Google's side, not your browser. How long has it happened? If it is recent, I would wait 20 min, an hour, several hours and try again.If this has been happening for a long time, there may be something wrong with your Gmail account specifically, and I would encourage you to send a bug report to Google directly.Another suggestion to try:There is an old version of the contacts manager at http://mail.google.com/mail/#cm1
_cogsci.3531
My research is on work-related stress and burnout amongst healthcare professionals and the role that personality plays in either reducing or increasing levels of stress and burnout. I'm using the Big Five personality factors to research how each of these factors affects the relationship between stress and burnout. How does personality affect levels of stress in individuals? Can a person's personality be a mediating or moderating factor?
How does personality moderate or mediate the relationship between stress and burnout?
io psychology;personality;well being;stress
null
_codereview.60650
I have made a simple MVC framework of my own for my personal website to learn a thing or two about how this whole thing even works. I think I've got the idea, but there's one thing I'm not sure about.Now I know that a model shouldn't need anything specific - everything has to be generalized so it's accessible from multiple controllers which do something different. My main issue, though is: How the hell do I tell the model to save something the way I want it to? If it's generalized, it should have no information about any save table, any output message or any saving pattern to begin with.Then I remembered that there's this new fancy thing in PHP called closures and this might be a good way to use them. I have decided to deal with validation in the model in this fashion:Controller:private function sendMail($data){ $rebuilt['name'] = array(type => string, value => $data['name'], required => true); $rebuilt['email'] = array(type => email, value => $data['email'], required => true); $rebuilt['subject'] = array(type => string, value => $data['subject'], required => false); $rebuilt['message'] = array(type => string, value => $data['message'], required => true); $callback = function() use (&$rebuilt) { $headers = array(); $headers[] = MIME-Version: 1.0; $headers[] = Content-type: text/plain; charset=utf-8; $headers[] = From: {$rebuilt['name']['value']} <{$rebuilt['email']['value']}>; $headers[] = Subject: {$rebuilt['subject']['value']}; $headers[] = X-Mailer: PHP/.phpversion(); mail(some@email, $rebuilt['subject']['value'], $rebuilt['message']['value'], implode(\r\n, $headers)); }; $this->model->validateInput($rebuilt, $callback, 'contact', SuccessEmailReceived, 'contact');}Model validateInput() method:public function validateInput(&$validation, $function, $failAnchor = 'top', $successMessage = GeneralActionSuccessful, $successAnchor = 'top'){ foreach ($validation as $key=>$value) { // trim the string before validation trim($validation[$key]['value']); if($value['required'] == true && !$value['value']) { $this->error = ErrorFieldMissing; $this->anchor = $failAnchor; $this->formFields = $validation; return; } switch($value['type']) { case 'email': if(!filter_var($value['value'], FILTER_VALIDATE_EMAIL)) { $this->error = ErrorInvalidEmail; $this->anchor = $failAnchor; $this->formFields = $validation; return; } break; } } // success, call the function passed $function(); $this->success = $successMessage; $this->anchor = $successAnchor;}Basically, what this does, is the controller calls the model validateInput() method while passing the list of data for validation as a reference. An anonymous callback function, which also uses the same validation input by reference, is then called in the model. This way, no matter how the validator deals with the input array, the function behaves accordingly.Please note that I only have email validation set up right now, because I don't need anything else. I just need to know if I'm doing things right, or if there's something I should re-learn.
MVC Model validation callback
php;mvc;validation;callback;closure
null
_codereview.30824
I am trying to evaluate if someone has read and seen all the questions of a book.I am using PHP to average everything out after I get the data. I know MySQL can avg, but I can't seem to wrap my head around how I can average the box number of the questions (similar to spaced repetition learning) if they haven't seen every question.How can this function be better? I feel there might be a more elegant way of doing it.SELECT COUNT(s.id), (SELECT COUNT(u.id) FROM user_sections u WHERE u.chpt_id = ? && u.user_id = ? ), (SELECT COUNT(q.id) FROM question q WHERE q.chapter_id = ? && (q.module = 1 || q.module = ?) ), (SELECT COUNT(v.id) FROM user_questions v WHERE v.chpt_id = ? && u.user_id = ? ), (SELECT AVG(v.box) FROM user_questions v WHERE v.chapter_id = ? && u.user_id=? ), (SELECT AVG(m.$mkts_col) FROM mkts m WHERE m.chapter_id = ? && (m.module = 1 || m.module = ?) ) FROM sections s WHERE s.chapter_id = ? && (s.module = 1 || s.module = ?)
Determining if someone has read and seen all the questions of a book
performance;sql;mysql;mysqli
null
_webapps.29748
Does anyone know of a site / service that provides an updated list of all sites which are known to be affected by some kind of malware?Ie sites that get the Chrome/ Firefox malware popup (This site has been affected by malware!)I asked this on SuperUser but was downvoted into oblivion because, apparently, some people have way too much time on their hands. Thanks in advance for your help!
Site affected by malware
malware
I don't know of any (well known) site that does that, but there is Google's Safe Browsing API if you want to do something programmatically.
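For the programmatic route, a lookup is roughly shaped like the sketch below. The v4 "threatMatches:find" endpoint, the JSON fields and the need for an API key are recalled from the public Safe Browsing documentation rather than stated in the answer, so verify them before use; YOUR_API_KEY and the example URL are placeholders:

    # Hypothetical check of one URL against the Safe Browsing v4 Lookup API.
    curl -s -X POST \
      "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=YOUR_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "client": {"clientId": "example-client", "clientVersion": "1.0"},
            "threatInfo": {
              "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
              "platformTypes": ["ANY_PLATFORM"],
              "threatEntryTypes": ["URL"],
              "threatEntries": [{"url": "http://example.com/"}]
            }
          }'

As far as I recall, an empty JSON object in the response means the URL is not currently flagged, but that behaviour should also be confirmed against the current docs.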
_codereview.97463
I am teaching a python class to some high school seniors. I was thinking of doing an oracle game where the computer would come up with a number, and the player would guess it. These are brand new coders, and I'll teach them what they need to know to get there, but is this a good first project? I have my code here as my implementation of this game. I want to make sure that I don't teach them anything bad or give them bad habits.My software requirements are:Must generate a random (or psuedo-random) numberMust import a libraryMust use a loopMust take user inputMust tell the user to guess higher or lower# importsimport randomimport timeimport sysimport os# set global variablesMIN_NUMBER = 1MAX_NUMBER = 10# important functionsdef get_random(min, max): print Coming up with a number. time.sleep(2) random_number = random.randint(min, max) print Got it! return random_numberdef to_int(x): try: x = int(x) return x except ValueError: print That is not a whole number. return Falsedef how_close(guess, right_answer): return right_answer - guessdef hi_low_done(delta): if delta < 0: print Guess lower. return False elif delta > 0: print Guess higher. return False else: print You got it! return True# main Program#clear screenos.system('cls') # on windowsprint Welcome. I am the Oracle.print I will think of a number between %s and %s. Guess it and you will win! % (MIN_NUMBER, MAX_NUMBER)while True: user_answer = raw_input(Ready? <y/n> ) if user_answer.lower() == 'y' or user_answer.lower() == 'yes': print Great! Let's go. break; elif user_answer.lower() == 'n' or user_answer.lower() == 'no': print Okay. Consult the Oracle when you are ready. sys.exit()oracles_number = get_random(MIN_NUMBER, MAX_NUMBER)number_of_guesses = 0while True: number_of_guesses = number_of_guesses + 1 user_answer = raw_input(\nGuess> ) user_answer = to_int(user_answer) # if user answer is outside min max... else... if user_answer: delta = how_close(user_answer, oracles_number) done = hi_low_done(delta) if done: break;print It took you %s guesses to guess my number. % (number_of_guesses)I guess my concerns with my code are:Is this simple enough? It isn't the first thing they will code, but the first project they will do. (maybe group project?)Did I make any blatant errors or In python, you should do this instead errors? Python isn't my first language so I'm not sure.I have created a gist file with my most up-to-date version of this.
Number guessing game for beginners
python;python 2.7;number guessing game
In my opinion this is a very, very simple problem, even more so for high school seniors. The amount of code in your project is also too much; that will put students off and is not Pythonic. The aim with first-time students should be to show them how beautiful coding can be and the sheer amount of power you can get from a few lines of code. Below is how I would implement it.

1st solution: without a loop. Uses recursion, a global counter, etc.

    import random
    def check():
        global count  # good example of use of global
        guess = int(raw_input("Take a guess\n"))
        if guess == no:
            print "Good job, %s! You guessed my number in %d guesses!" % (name, count)
        if guess < no:
            print "Your guess is too low."
            count += 1
            check()
        if guess > no:
            print "Your guess is too high"
            count += 1
            check()

    name = raw_input("Hello! What is your name?\n")
    print "Well, " + name + ", I am thinking of a number between 1 and 20"
    no = random.randint(1, 20)
    global count
    count = 1
    check()

2nd solution: with a loop. Less code, showcases the importance of while loops, and satisfies all five of your software requirements.

    import random
    name = raw_input("Hello! What is your name?\n")
    print "Well, " + name + ", I am thinking of a number between 1 and 20"
    no = random.randint(1, 20)
    guess = int(raw_input("Take a guess\n"))
    count = 1
    while guess != no:
        if guess < no:
            print "Your guess is too low."
        if guess > no:
            print "Your guess is too high"
        count += 1
        guess = int(raw_input("Take a guess\n"))
    print "Good job, %s! You guessed my number in %d guesses!" % (name, count)

Best would be to showcase both approaches so that they can see the difference, appreciate while loops, and get a liking for Python. Avoid piling on extra constructs (input checking, error handling, etc.); testing, OOP, error handling and the like can be kept for later stages.
_webapps.15433
Is there a way to download all the messages from a Google Group? I've tried searching for a good solution, but it seems the only way is to actually download the individual .html files for each thread or use some screen-scraping method (which doesn't work so well). Any other ideas?Thanks.
Backup archive of Google Groups messages?
google;backup;google groups
null
_webmaster.8679
What should I do if I suspect that Google has banned (refuses to list pages for) my site?
What to do if Google has banned my site?
seo;google
null
_unix.154690
This question may look like a duplicate, but only at first glance.Of course, I would no longer need help in how to code a one-liner that extracts a fixed number of continuous lines (e. g. 5 in this example) from a data source, e. g. top:$ top -b -n1 | awk 'BEGIN {printf %23s %7s\n,cpu,mem} NR==8,NR==12 {printf %-16s %6s%% %6s%%\n,$12,$9,$10}'This is even a very handy one-liner that will show the processes in the system that take most CPU, with the memory usage being printed in an additional column.So far, so good ... however, it's not that trivial. To get this list, top is necessary and may (on low system load) show up itself as process in this list. I'd rather not want that, since these calls are done in intervals and would regularly spawn top (if only for a short while).It is known that we want to begin at line 8 (NR==8). However, what if a second top in another virtual desktop was forgotten about in a terminal which messes up the list as well? In this case, two top processes must be omitted, so the last line to process will be 14.So to improve this output and to filter out every top line that is in there, a counter seems mandatory (perhaps a for loop that we exit with a break?).Unfortunately my attempts with a for loop and i = <number> have been fruitless so far, because it would rather print every line as many times as i indicates.I've come up with a rather hackish solution, which works but may be unsuitable for more complex cases:top -b -n1 | grep -v ' \btop\b$' | awk 'BEGIN {printf %23s %7s\n,cpu,mem} NR==8,NR==12 {printf %-16s %6s%% %6s%%\n,$12,$9,$10}'(Note: This may give unwanted results if the user name in the second column happens to be top as well)Anyways, could I get a clue how to do that in awk please (and get rid of the grep)?Thanks in advance.
awk: Extracting a fixed number of rows where the last row number may vary
text processing;awk
null
_unix.288562
On Mac, I open a new shell and enter nettop. For a split second I see chrome (as I expected) and some other things going on (had to record screen to catch them); then I'm only shown data on UserEventAgent.11, configd.18 and mDNSResponder.40.Is this something I should be concerned about, for one? What do I need to do to view chrome network traffic with nettop?
Network traffic monitor nettop morphing away from expected results
shell;networking;terminal
null
_webmaster.55201
My homepage has several internal links which I would like to use as header tags. For example: <h2> <a href=http://www.example.com/about/> About example.com </a> </h2>Does this pass PageRank to these internal links and therefore take some away from the homepage? Is there any SEO downside to using header tags as links, as exemplified above?
Does using HTML header tags as links have any SEO downside?
seo;html;links;pagerank
null
_webapps.16706
What is the time limit to a video that I can upload to YouTube? Is there an upper video length that I have to make sure my uploaded videos are under?Are there other limits or restrictions (quality, resolution, etc) to a YouTube video?
Time and video quality limits on YouTube
youtube;limit
Generally the limit is 15 minutes, although YouTube has raised this for users who have a good history of compliance with their rules.
_webapps.69261
Is it possible to use the likes of a Facebook Page, that you do not own, as the target audience of a Facebook ad?
Use someone else's Facebook Page likes as target audience?
facebook;facebook pages;facebook ads
null
_codereview.12882
I am developing data-interfacing that converts between two different data models.However, I must be sure that all required fields exist. Therefore I have written this utility class that I can easily use to verify required fields.However I am unsure whether this is the best way because of the expression that needs to be compiled and the usage of reflection. Any feedback is welcome!Usage public OutputDataElement DetermineRetailTransactionShopperType(IHeaderEntity headerEntity) { Ensure.IsNotNull(() => headerEntity); Ensure.IsNotNull(() => headerEntity.ShopId, headerEntity); // Some mapping logic removed from the example } Utility/// <summary>/// Helper class able to ensure expectations/// </summary>public static class Ensure{ public static void IsNotNull<T>(Expression<Func<T>> property) where T : class { IsNotNullImpl(property, null); } public static void IsNotNull<T>(Expression<Func<T>> property, string paramName) where T : class { IsNotNullImpl(property, paramName); } private static void IsNotNullImpl<T>(Expression<Func<T>> property, string paramName) where T : class { // Compile the linq expression Func<T> compiledFunc = property.Compile(); // Invoke the linq expression to get the value T fieldValue = compiledFunc.Invoke(); // Check whether we have a value if ((fieldValue is string && fieldValue.ToString() == string.Empty) || (fieldValue == null)) { // We have no value. Get the initial expression var expression = (MemberExpression)property.Body; // log information about the expression that failed throw new ArgumentException(string.Format(Missing required field '{0}', expression.Member.Name), string.IsNullOrEmpty(paramName) ? null : paramName); } }}
Usage of Expression<Func<T>>
c#;linq;reflection
I don't see any unnecessary reflection in your code. One thing that could be simplified is your check for string.Empty:

    if (fieldValue == null || fieldValue as string == string.Empty)
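As a side note that is not part of the original answer (and uses C# syntax newer than the question), the same null-or-empty-string test can also be written with pattern matching, which avoids the as cast entirely:

    // Equivalent check using C# 7+ pattern matching (illustration only)
    if (fieldValue == null || (fieldValue is string s && s.Length == 0))
    {
        // ... report the missing required field as before ...
    }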
_unix.19168
Is there a linux command for getting only the files in a folder and inside its subfolders? I have used the find command, but it is displaying all the files and folders. I am executing this shell command using the php exec() function.
Listing only the files in a folder and inside its subfolders
files;find;utilities
You should use the -type f option to get only files:

    find /path_to_find -type f
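Since the question mentions running this through PHP's exec(), a minimal sketch of that call could look like the following; the directory path is a placeholder and real code would want more error handling:

    <?php
    // Collect every regular file under the directory, one path per array entry.
    $dir = escapeshellarg('/path/to/folder');   // placeholder path
    exec("find $dir -type f", $output, $status);
    if ($status === 0) {
        foreach ($output as $path) {
            echo $path, "\n";
        }
    }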
_unix.371329
I'm using CentOS 7 and want to configure Bind 9 to work on both simple queries & Reverse DNS lookups.so far Bind works on queries but not on reverse ones.this is part of named.conf file: NOTE: sample IP: a.b.c.d sample doamin: example.comzone c.b.a.in-addr.arpa IN { type master; file rev.example.com.db; allow-update { none; };};and corresponding reverse zone filerev.example.com.db:$TTL 3600@ IN SOA ns1.example.com. admin.example.com. ( 2017061514 10800 1800 43200 3600)@ IN NS ns1.example.com.ns1 IN A a.b.c.dd IN PTR ns1.example.com.but using dig command, I get empty answers:$ dig -x a.b.c.d; <<>> DiG 9.9.5-9+deb8u11-Debian <<>> -x a.b.c.d;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57861;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0;; QUESTION SECTION:;d.c.b.a.in-addr.arpa. IN PTR;; Query time: 3 msec;; SERVER: 192.168.43.1#53(192.168.43.1);; WHEN: Thu Jun 15 18:20:48 +0430 2017;; MSG SIZE rcvd: 45When I try to query directly from my server I get following results:$ dig @a.b.c.d -x a.b.c.d; <<>> DiG 9.9.5-9+deb8u11-Debian <<>> @a.b.c.d -x a.b.c.d; (1 server found);; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45281;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4096;; QUESTION SECTION:;d.c.b.a.in-addr.arpa. IN PTR;; ANSWER SECTION:d.c.b.a.in-addr.arpa. 3600 IN PTR ns1.example.com.;; AUTHORITY SECTION:c.b.a.in-addr.arpa. 3600 IN NS ns1.example.com.;; ADDITIONAL SECTION:ns1.example.com. 3600 IN A a.b.c.d;; Query time: 42 msec;; SERVER: a.b.c.d#53(a.b.c.d);; WHEN: Thu Jun 15 18:33:20 +0430 2017;; MSG SIZE rcvd: 116What am I missing in zone file?
Bind - proper reverse config
centos;dns;bind9
Your reverse zone seems to be set up correctly on the BIND side. However, resolving your own reverse DNS names is a very common point of confusion for small installations and VPS customers (I have seen it countless times). If you do not own the reverse address space, or if you own it but have not requested at the country/registry level that the delegation be activated through their mechanisms, the root DNS servers will not know which name servers to hand the query to, in a hierarchical process much like the one used to resolve your forward domain name. So the symptom is exactly what you see: locally, inside your own network/VPS, you can resolve the reverse zone; from outside, you cannot.

I managed an ISP for years, and while small customers ran their own domain(s)/DNS servers, for the reverse zones they had to ask us to register their names, as we were the owners of those netblocks. We obviously complied, provided their tier had a fixed/static IP address or the possibility of buying one; otherwise we would advise them to upgrade to a higher tier and, if they wished, to buy our DNS services. (The lower tiers with dynamic address ranges actually had the SMTP port blocked to the world, due to endemic malware/zombie abuse.)

From the IP address you gave, I can see it belongs to Kabardian-Balkar Telecommunications Company, which is a similar story; this is confirmed by the reverse name they define for it, net-x-x-x-x.kbrnet.ru. To run an email server with such a setup, you would need to ask them to register the reverse with the correct name. When the forward name and the reverse do not match, you will accumulate spam points, and some more zealous sysadmins may refuse your email outright.

As far as DNS is concerned, that is the answer to the question.

An additional word of caution: when using VPSes, check whether your terms of service allow email servers, and whether the SMTP port is blocked by default or has to be unblocked on request. It is also possible, with your provider being located in the Russian Federation, that their netblock has a somewhat higher probability of being blacklisted. I would look for hosted domain services as a whole package; nowadays it is too much of a hassle to run your own email server. In fact I have bad news: I ran your IP address through a well-known blacklist checking service, and it is listed on the BARRACUDA, "Rats Dyna" and Spamhaus ZEN blacklists. That issue will have to be taken up with your provider's helpdesk.
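To see where the delegation breaks, it helps to ask the public DNS tree directly rather than your own server. A couple of standard dig invocations (with a.b.c.d standing in for the real address, as in the question) are enough:

    # Follow the delegation chain for the PTR record from the root down;
    # this shows at which level the in-addr.arpa delegation stops.
    dig +trace -x a.b.c.d

    # Ask which name servers are currently delegated for the /24 reverse zone.
    dig NS c.b.a.in-addr.arpa

If the second query does not return your ns1.example.com, the zone is simply not delegated to you yet, which matches the diagnosis above.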
_softwareengineering.350836
I'm trying to build a simple file transfer program that transfers a file from a server to a client. I gave it a try myself by simply writing the file to an ObjectOutputStream. This didn't work. I did some research and I found a whole swath of solutions that utilize a byte[] in which the contents of the file is written to that array and written to a OutputStream. Unfortunately none of the writers explain the code very well. I was curious if someone could explain to me why a byte[] is needed to read the file as opposed to my solution or even just reading the contents of the file into a String and sending that. What is it about a file and streams that require a solution like this? Thanks for any help.
File Transfer over Socket
java;sockets;files
A stream is simply a sequence of objects. If you are attempting to stream a file of unknown type, i.e. you are simply pushing raw data and not interpreting its contents, the usual representation is a byte stream. In Java, "object streams" are designed to send Objects: even if you send a byte array, it will send an array object, not a sequence of bytes, and it uses serialization (which adds metadata) to send more complex objects. It sounds like you simply want the sequence of bytes that constitute the file contents.

There are several ways to make this work, using either old-school sockets and streams or NIO. I suggest starting with the old way so you understand how everything works, then moving on to NIO (which is generally just an abstraction of lower-level primitives anyway).

Your server should open a ServerSocket, and the client should connect to the server. On the server you open an output stream from the socket, while the client opens an input stream. Note that the socket API hands you the plain InputStream/OutputStream abstractions, not anything like object streams; if you want different stream types, you need to wrap them yourself. I suggest wrapping each end in a buffered input or output stream and sending the bytes across using that. The key is that both ends need to use the same stream abstraction: if one end uses an object stream and the other does not, it will not work. By requesting streams from the sockets you get private implementations that wrap the socket and are compatible with each other.

Once you understand how all the parts fit together, I suggest moving up to NIO, which makes this a lot easier but hides important details that it appears you need to learn about before you can safely abstract them away.
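A minimal sketch of the classic approach described above might look like the following; the class names, port number and file paths are placeholders, and real code would add argument handling and better error reporting:

    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Server side: accept one connection and stream a file as raw bytes.
    class FileSendServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000);        // placeholder port
                 Socket client = server.accept();
                 InputStream file = new BufferedInputStream(
                         new FileInputStream("data.bin"));            // placeholder path
                 OutputStream out = new BufferedOutputStream(client.getOutputStream())) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = file.read(buffer)) != -1) {
                    out.write(buffer, 0, n);                          // forward raw bytes
                }
                out.flush();
            }
        }
    }

    // Client side: connect, read bytes until end of stream, write them to disk.
    class FileReceiveClient {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 9000);
                 InputStream in = new BufferedInputStream(socket.getInputStream());
                 OutputStream file = new BufferedOutputStream(
                         new FileOutputStream("copy.bin"))) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    file.write(buffer, 0, n);
                }
            }
        }
    }

Because the server closes its socket when it finishes, the client's read() sees end-of-stream and stops; to send multiple files over one connection you would need some framing, for example sending each file's length first.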
_unix.109403
I'm lost with this, hopefully I can find some help here:I have Ubuntu 12.04 LTS webserver.For example I have user myuser. There is a public directory which is my web root. I chowned it and all files in it as www-data:www-data. This enabled me proper work of PHP which can create/write files as PHP and web server run under www-data./home/myuser/public/My problem is that when I transfer files with rsync through ssh my permissions, user and group for the files are changed. This mean after every transfer I have to do my chown again.rsync -azv . -e ssh [email protected]/public/I'm not sure if chown-ing the files is the right way, but I followed this tutorial at DigitalOcean: https://www.digitalocean.com/community/articles/how-to-install-wordpress-with-nginx-on-ubuntu-12-04 and they do the following:sudo chown www-data:www-data * -R sudo usermod -a -G www-data usernameAs well as in other tutorials.I need some way when I will be not need to chown/chmod or whatever my files after each transfer (same as it works on shared hostings)Any ideas how to achieve that?
How to properly setup web server directory permissions
ubuntu;permissions;rsync;webserver;nginx
null
_computergraphics.370
Looking at Star Swarm, a demo for the Nitrous engine, I found this little line:Nitrous uses Object Space Lighting, the same techniques used in film, including real-time film-quality motion blur.I tried looking around for anything on object space lighting or object space rendering but couldn't come up with anything. When I hear object space I think of doing it per-object but I was hoping to find a more detailed description of the method.Does anyone know anything about object space lighting and if so could you go into some technical details(how its done, pros, cons, etc)?
What is Object Space Lighting?
rendering;real time;lighting;space
According to the Star Swarm developers this helps them with LOD and enables greater shading scaling. Based on this I guess its simply texture space lighting.Because we do what were calling object space lighting, we calculate the projected size of each of those objects on screen, and based on that we shade it in a priority manner based on how large they are. We can scale the shading quality at a different frequency than we scale the geometry level or something else. A deep dive into the making of the eye-popping Star Swarm demo (interview)
_cs.14458
I've converted an NFA to a DFA. But even after checking over it a few times, it still doesn't feel right. I'm sure this is trivial, but I'd like someone to give me an idea where I went totally wrong on this.NFA:DFA:
Converting NFA to DFA
automata;finite automata
Well, of course you have to merge the two empty states and there should be two transitions $(\{q_5\}, a, \emptyset)$ and $(\{q_5\}, b, \emptyset)$ if you want a complete automaton, but otherwise, it looks right and I agree with Subhayan: both automata accept $ab^+ \cup ab^+a$.
_unix.288858
I've installed ArchLinux on a Dell XPS 15 L521X. For some reason when Gnome boots it finds no audio devices.If I switch to a virtual terminal and use the terminal based music player cmus to play some music, all of a sudden they appear. The devices in question:mike@longshot:~ lspci -nn | grep -i audio00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)01:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev ff)Interestingly, even running that lspci -nn | grep -i audio command causes Gnome to suddenly find the audio devices. How can I get Gnome to reliably find these devices by itself on startup?
Gnome finds no audio devices
arch linux;gnome;audio
null
_webapps.25050
For some reason when I like something on some webpage and click post to Facebook, this is posted to my profile with a lock icon, meaning that only I can see it. I then need to manually go to Facebook, find this like on my profile page and share it.Where can I set the default visibility of Facebook likes so that my likes are shared immediately, without the need to go to facebook.com?
How to set default visibility of FB likes?
facebook;facebook like
The privacy of pages you Like is set on your timeline or profile. If you have timeline, go to your timeline and click on Likes, and then Edit in the upper right corner (Edit does not show up until you hover over it):Then you can set the privacy of each category of Likes using the button to its right. Most of them are in the category Other Pages You Like:If you still have the old profile, go to your profile by clicking your name at the top, then click Edit Profile, then the appropriate category that you want to change in the left sidebar. Other Pages you Like can be found on the Activities and Interests tab. Change the privacy setting using the button to its right:
_softwareengineering.261276
There are different conventions of representing the new line character in different types of OSes. Does the newline convention have nothing to do with what encoding is used?Is the newline convention not part of any(most) encoding method (e.g. ASCII, utf8, cp1253, ...)?
Is a newline convention not part of an encoding?
character encoding
While the said encodings specify the code for new line and carriage return, it is not part of the encoding whether or not you should use both, as is normal in a Windows environment, or just new line which is normal in a Unix environment.Also worthy of note is that on Unix, the newline is treated as a sort of end of line marker, and thus the last line in the file should also end with a newline. It is not common in Windows applications* to follow this practice, and thus normally text files created by a Windows application do not have an end-of-line marker after the last line.* unless that application is in fact an application written for a Unix environment, and ported to Windows, e.g. vim.
_softwareengineering.339468
I am developing a website where client needs that any notification should reach as soon as it is created. so i am using setinterval function of jquery and using ajax requests to get the notifications. the time interval I set is 2 seconds. and its not the only ajax request which is going this way. there are following ajax request being done within interval of 2 secget notificationsget messages.get counts.some other checksI am worried because i think sending this much request at very short time period may disturb the system. and worse if the number of users increase.Please tell me your opinions and solutions to this if this is wrong aproach
sending ajax request with setinterval . is it good?
php;jquery;ajax;notifications
null
_scicomp.17465
I have some vector $V$ which can be decomposed into the eigenspace of the hermitian sparse operator $M$:$V = \sum_i v_i \hat{m}_i$Is there a way to find the $\hat{m}_i$ (the eigenvector itself) that correspond to the largest $v_i$ (in magnitude)?I essentially want the largest few terms of the sum, including the eigenvectors of $M$, which I don't know ahead of time.Specifically, I want to simultaneously find the eigenvectors of $M$ that correspond to the largest $|v_i|$, along with finding the largest $v_i$. Preferably without finding the entire spectra of $M$ first.Some possibilities that I have been thinking about:We can inflate the matrix using the opposite of Wieldant's Deflation:$M_1 = M + \sigma \left[ \Sigma_i v_i \hat{m}_i \right] V^H = M + \sigma V V^H$The eigenvalues for different $\hat{m}_i$ are shifted $\lambda_i + \sigma |v_i|^2$. I believe we can then extract $\sigma$ and $v_i$ because the eigenvectors don't change. The problem is that the outer product of $V$ is dense.another possibility:The power method (keep multiplying $M$ by our vector $V$ until the convergence) finds the component of $V$ with the largest eigenvalue. The downside of this method is that we don't control for the magnitude of $v_i$, so we would end up finding ALL the components, and then finding the largest.Is there some way to control this so that we only converge on the largest component?
calculating eigenvector components of a given vector
sparse;eigensystem
null
_unix.362843
I have wrote following meta-project file for qmake that is designed to build all .pro files in the following subdirectories (for historical reasons and because of other toolchains, the file names do not match folder names). Which essentially boils down to something like this (throwing aside project-specific stuff)THIS_FILE=make-all.proTEMPLATE=subdirsFIND= find -name \'*.pro\' -printf \'%p\\n\'AWK= awk \'{$1=substr($1,3); print}\'RMTHIS= awk \'!/$$THIS_FILE/\'SUBDIRS= $$system($$FIND | $$AWK | $$RMTHIS )I need to get a list of folders that contain .pro files within a Bash script, so I decided to copy the method #!/bin/bashFIND=find -name '*.pro' -printf '%h\\nAWK=awk '{\$1=substr(\$1,3);printf}'SUBDIRS=$($FIND | $AWK)Apparently this doesn't work, awk was spewing error invalid char '' in the expression. Trying to execute same lines in Bash directly had shown that awk actually works only if double quotes are used find -name '*.pro' | awk {\$1=substr(\$1,3);printf}Replacing the line in question withAWK='awk {$1=substr($1,3);printf} 'gave no working result, the output of script is empty, unlike the output of manually entered command. Apparently find -name '*.pro' In script finds only files in current folder, while its counterpart in command line of bash find it in subfolders. What is wrong and why qmake works differently as well?
Escaping double quotes for variables in bash and qmake
bash;find;escape characters
I'm not familiar with qmake syntax, but from your sample it uses quotes and variables in very different ways than shells. So you can't just use the same code.http://unix.stackexchange.com/questions/131766/why-does-my-shell-script-choke-on-whitespace-or-other-special-characters covers what you need to know, so here I'll just summarize what's relevant for your question.In a nutshell, you cannot simply stuff a shell command into a string. Shells such as bash do not parse strings recursively. They parse the source code and build commands as lists of strings: a simple command consists of a command name (path to an executable, function name, etc.) and its arguments.When you use an assignment like AWK=awk '{\$1=substr(\$1,3);printf}', this sets the variable AWK to a string; when you use the variable as $AWK outside double quotes, this turns the value of the variable into a list of strings by parsing it at whitespace but it does not parse it as shell code, so characters like ' end up being literally in the argument. This is rarely desirable, which is why the general advice on using variables in the shell is to put double quotes around variable expansions unless you know that you need to leave them off and you understand what this entails. (Note that my answer here does not tell the whole story.)In bash, you can stuff a simple command into an array.#!/bin/bashFIND=(find -name '*.pro' -printf '%h\\n')AWK=(awk '{$1=substr($1,3);print}')SUBDIRS=$(${FIND[@]} | ${AWK[@]}) But usually the best way to store a command is to define a function. This is not limited to a simple command: this way you can have redirections, variable assignments, conditionals, etc.#!/bin/bashfind_pro_files () { find -name '*.pro' -printf '%h\\n'}truncate_names () { awk '{$1=substr($1,3);print}'}SUBDIRS=$(find_pro_files | truncate_names)I'm not sure what you're trying to do with this script (especially given that you change the find and awk code between code snippets), but there's probably a better way to do it. You can loop over *.pro files in subdirectories of the current directory withfor pro in */*.pro; do subdir=${pro%/*} basename=${pro##*/} doneIf you want to traverse subdirectories recursively, in bash, you can use **/*.pro instead of */*.pro, but beware that this also recurses into symbolic links to directories.
_cogsci.6419
Human muscles are controlled by action potentials that travel along the nerves. Below is an image of a train of action potentials that are decoded by the brain into a sensation or interpreted by a muscle group as a command.I'm interested if it is possible to record such action potential sequence and then play it back for that person. For example, a person is moving a leg, a recording of action potentials is made. Can such recording be transferred back to the nerves to create an illusion of movement or repeat the muscle movement?
Is it possible to imitate two way communication between a brain and a limb?
experimental psychology;cognitive modeling
null
_webmaster.86693
I have a Drupal 7 site served exclusively using SSL with the bare domain (no www).But, I see that in Google Webmaster Tools, I can add the following:http://example.comhttp://www.example.comhttps://example.comhttps://www.example.comI have added all of these for my site, accessed Site settings, and set it to Display URLs as example.com.This is true for the example.com + www.example.com pair and the https://example.com + https://www.example.com pair. Do I need to do anything more to mark my https sites as duplicates of the non-https sites?
Google Webmaster Tools- Do I need to mark my SSL domain as a duplicate of my non-SSL domain?
google search console;https;http;no www
SOURCEAdd all variations of your site to WMTWhile the site address move tool may not treat protocols, url changes and sub domains as new sites, the rest of Webmaster Tools does treat protocols and sub domains as separate sites. You should add all variations of your site, below is an example of my site BYBE added to WMT with all variations, you should do the same. (recommended by John Mueller from Google, See comments below this answer).301 redirects recommended by GoogleIf you plan to serve the website as partial ssl or complete then you should setup good redirects, as recommended by Google:SOURCEPrepare for 301 redirects Once you have a mapping and your new site is ready, the next step is to set up HTTP 301 redirects on your server from the old URLs to the new URLs as you indicated in your mapping. Keep in mind the following: Use HTTP 301 redirects. Although Googlebot supports several kinds of redirects, we recommend you use HTTP 301 redirects if possible. Avoid chaining redirects. While Googlebot and browsers can follow a chain of multiple redirects (e.g., Page 1 > Page 2 > Page 3), we advise redirecting to the final destination. If this is not possible, keep the number of redirects in the chain low, ideally no more than 3 and fewer than 5. Chaining redirects adds latency for users, and not all browsers support long redirect chains.Test the redirects. You can use Fetch as Google for testing individual URLs or command line tools or scripts to test large numbers or URLs. Setting up the redirect in ApacheSetting up redirects in Apache, ngInx, IIS is pretty straight forward, below is examples of redirecting 301 from HTTP to HTTPS in Apache2 .htaccess file.SOURCEEnforce SSL on specific pages and disable on restThis script will remove SSL on all other pages part from the login page and register page, you can add more just use | as the separator between file names.mod_rewrite: RewriteCond %{HTTPS} onRewriteCond %{SCRIPT_FILENAME} !\/(login|register)\.php [NC]RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]Enforce SSL on the entire siteIf you want to enforce SSL on the complete site then you can use mod_rewrite to detect HTTPS off.mod_rewrite:RewriteEngine OnRewriteCond %{HTTPS} offRewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}Sitemap changesSince you are changing protocol you need to add a new property to Google as HTTPs, this will have no sitemap submitted as default, you will need to ensure that your sitemap contains all the new URLS and then submit it under the HTTPS property variation.You should also inform Google your preferred domain (variation):SOURCESpecify a preferred domain:On the Search Console Home page, click the site you want.Click the gear icon , and then click Site Settings.In the Preferred domain section, select the option you want.
_unix.164364
My unsuccessful proposalfind ./ -newerct '1 week ago' -print | grep TODONo output although should be. Files are text files likeLorem% TODO check this outLorem ipsunHow can you find less than 1 week old files matching TODO? Output should be the line after TODO. Perl solution is also welcome, since I am practising it too.
Find files less than 1 week old with match TODO in files
grep;find;perl
Change this:find ./ -newerct '1 week ago' -print | grep TODOto this:find ./ -newerct '1 week ago' -exec grep TODO {} +or this:find ./ -newerct '1 week ago' -print | xargs grep TODOExplanationYour grep doesn't interpret the output of find as a list of files to search through, but rather as its input. That is, grep tries to match TODO in the names of files rather than their contents.From the grep(1) man page:grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name)To match the line after TODO:find ./ -newerct '1 week ago' -exec grep -A1 TODO {} + | grep -v TODOThis assumes you have GNU grep.
_codereview.32863
I cache the data and use local database in my Windows Phone app. The algorithm is very simple:Get data from DB and show in UIGet data from a web service and show in UIUpdate data in DB from the web serviceSome data need to be saved in DB, and some do not need to.For these purposes, I have the following classes.GYDataAccessLayer is an entry point. This class sets and changes the data source. Example:GYDataAccessLayer da = new GYDataAccessLayer();da.GetInfo((result, error) => { if (error != null) { return; } // handling exception GYUser user = result; });public class GYDataAccessLayer{ public void GetUserInfo(Action<GYUser, Exception> act, string id) { GYDataSource dataSource = new GYDataSource(new GYLocalData()); dataSource.GetUserInfo(act, id); dataSource.SetDataSource(new GYWebData()); dataSource.GetUserInfo(act, id); }}public class GYDataSource{ IGYDataAccess _gyDataAccess; public GYDataSource(IGYDataAccess gyDataAccess) { _gyDataAccess = gyDataAccess; } public void SetDataSource(IGYDataAccess gyDataAccess) { _gyDataAccess = gyDataAccess; } public void GetInfo(Action<GYUser, Exception> act, string uid) { _gyDataAccess.GetUserInfo(act, uid); }}I have a two classes to work with Local and Web data source. GYWebData gets data from web and calls update method in GYLocalData.public interface IGYDataAccess { void GetUserInfo(Action<GYUser, Exception> act, string id); }public class GYWebData : IGYDataAccess{ public void GetUserInfo(Action<GYUser, Exception> act, string id) { GYUserAPI.GetInfo(id, (result, error) => { if (error != null) { act.Invoke(null, error); return; } act.Invoke(result.Result, null); GYLocalData gyLocalData = new GYLocalData(); gyLocalData.UpdateUserInfo(result.Result); }); }} public class GYLocalData : IGYDataAccess{ private const string ConnectionString = @isostore:/Cache.sdf; CacheDataContext DataBase; CacheDataContextProfiles DataBaseProfiles; private void CheckDataBase() { using (DataBase = new CacheDataContext(ConnectionString)) { if (!DataBase.DatabaseExists()) { DataBase.CreateDatabase(); } } } public void GetUserInfo(Action<GYUser, Exception> act, string id) { CheckDataBase(); using (DataBaseProfiles = new CacheDataContextProfiles(ConnectionString)) { try { var user = DataBaseProfiles.GetInfo(id); if (user != null) { act.Invoke(user, null); } } catch (Exception ex) { act.Invoke(null, ex); } } } public void UpdateUserInfo(GYUser user) { CheckDataBase(); using (DataBaseProfiles = new CacheDataContextProfiles(ConnectionString)) { DataBaseProfiles.UpdatePersonProfile(user); } } }With this architecture it is inconvenient to work and hard to support. You have to write a lot of code and be very attentive (eg do not forget to call the update data method in the DB). Are there good patterns for such problems? How can I improve this code?
Architecture to cache data
c#;design patterns;.net;cache
Let's go over the basics first!The indentation is quite a mess. But I suppose that might be related to the SE formatting. Otherwise, you should make your indentation coherent. Stuff like : public interface IGYDataAccess { void GetUserInfo(Action<GYUser, Exception> act, string id); }shouldn't be in production code, it looks bad and gives headaches to maintainer (that could be you!)Why are your classes prefixed of GY? Maybe there's a good reason, but I'm pretty sure your names would make just as much sense without that prefix. The pattern you're using for caching uses the Repository design pattern. It abstracts the implementation to access to data. So that pattern name should be found in your naming! IRepository, LocalDataRepository, WebDataRepository. Some might argue that it's useless, but I think it's good to see the pattern's name in the class name, so we know what we're dealing with.Multi-line lambdas are... okay, I guess. But I think in your case they should be in a method. A lambda is an anonymous function, something that doesn't matter that much in your code. But that code is important, don't let it crawl in the shadows, that code deserves a method. (The lambda in GetUserInfo)Don't keep your connection string as a const in your code, that's a very bad practice. It should be in a configuration file, and you should get it using the ConfigurationManager class (In System.Configuration).private members should be camelCased, not PascalCased, so DataBase -> dataBase (In GYLocalData). Also, you should specify the private accessor, so I know it wasn't forgotten or something like that. See, if it's not there. I don't know your clear intent. Maybe you wanted this field to be public but you forgot because you were out of coffee. I'll never know because you didn't specify it.I want to give a little warning that might be unnecessary. The CheckDatabase method isn't thread safe. That means if your GYLocalData object can be called from multiple threads at once, you might create the database twice. And that's a costly operation. That's bad.You should only use var when you can figure the Type by reading the code. In the example below, I can't figure it out, so it's harder to know what's happening and even harder to review! :)var user = DataBaseProfiles.GetInfo(id);You could use some Dependancy Injection in your GYWebData class. You want to store everything that is took from the web API in a another storage. So the GYWebdata class should receive in the constructor a GYLocalData instance where you would store the information : public class GYWebData : IGYDataAccess{ private GYLocalData storage; public GYWebData(GYLocalData storage) { this.storage = storage; } public void GetUserInfo(Action<GYUser, Exception> act, string id) { GYUserAPI.GetInfo(id, (result, error) => { if (error != null) { act.Invoke(null, error); return; } act.Invoke(result.Result, null); storage.UpdateUserInfo(result.Result); }); }}Now this isn't exactly Dependency Injection because I give a concrete class as parameter. 
You would need another interface, IDataUpdater, maybe : public interface IDataUpdater{ void UpdateUserInfo(GYUser user);}The LocalData class should implement that interface, and then : public class GYWebData : IGYDataAccess{ private IDataUpdater updater; public GYWebData(IDataUpdater updater) { this.updater = updater; } public void GetUserInfo(Action<GYUser, Exception> act, string id) { GYUserAPI.GetInfo(id, (result, error) => { if (error != null) { act.Invoke(null, error); return; } act.Invoke(result.Result, null); updater.UpdateUserInfo(result.Result); }); }}Boom, you now have zero dependency between the local storage and the web storage.Finally, looking at your workflow, you could use the Proxy design pattern to help a bit. The Proxy would be used to get info from the web and cache it in the local storage. The proxy would be the only class you use for storage. I don't want to give it all to you, because you'll learn better by yourself! :) But know that the Proxy is a good case for this!
_unix.149378
I have installed Linux (Debian) onto my computer (which is not partitioned). What I want to do is create an NTFS partition so that I can install Windows 7 on the same machine and dual-boot Linux/Windows 7. So is it possible to create an NTFS partition in Linux without losing any Linux files?
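For illustration, a rough sketch of the usual shrink-then-create approach. The device names and sizes below are hypothetical, the steps assume an ext4 root filesystem on /dev/sda1, and you should have a backup before resizing anything:

    # Run from a live USB/CD so the root filesystem is not mounted.
    e2fsck -f /dev/sda1          # check the filesystem before shrinking
    resize2fs /dev/sda1 60G      # shrink the ext4 filesystem to 60 GiB
    # Next, shrink the partition itself to no less than 60 GiB and create a new
    # partition in the freed space; GParted (or parted/fdisk) can do both steps.
    mkfs.ntfs -Q /dev/sda2       # quick-format the new partition as NTFS
    # (The Windows 7 installer can also format that partition for you.)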
Create NTFS partition (in Linux) for dual-boot into Linux/Windows 7
debian;partition;windows;dual boot;ntfs
null
_unix.19970
I have a text (configuration) file, but the program that reads the file unfortunately doesn't allow using any kind of variables. So I'd like to use a preprocessor that replaces a set of placeholders in the config file before passing it to the program.

I can define the format of the variables any way I want (e.g. SOME_DIR). If I had just a few variables, I would probably use sed:

    sed -e "s*SOME_DIR*$SOME_DIR*g" my.conf | target_prog

But the list of variables is pretty long and it should be easy to configure, so I'd prefer to put the variables in a properties file like

    SOME_DIR=...
    OTHER_DIR=...
    ...

and then call

    some_replace_tool my.properties my.conf | target_prog

I'm looking for some_replace_tool. Any ideas?
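A minimal sketch of what such a tool could look like, assuming the properties file holds simple KEY=VALUE lines. The script below is illustrative, not an existing tool, and values containing *, & or backslashes would need extra escaping:

    #!/bin/sh
    # Usage: some_replace_tool my.properties my.conf
    props=$1
    conf=$2
    script=$(mktemp) || exit 1
    # Turn every KEY=VALUE line into a sed command of the form: s*KEY*VALUE*g
    sed -n 's/^\([A-Za-z_][A-Za-z0-9_]*\)=\(.*\)$/s*\1*\2*g/p' "$props" > "$script"
    sed -f "$script" "$conf"
    rm -f "$script"

Another common option, if the placeholders may look like ${SOME_DIR}, is envsubst from GNU gettext: export the variables (for example by sourcing the properties file with set -a) and run envsubst < my.conf | target_prog.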
How to replace a list of placeholders in a text file?
text processing;search;replace
null
_cs.42780
Let us fix an alphabet $\Sigma$ of size $c$; then we have the finite language $\Sigma^n$, the set of all length-$n$ words. For each $N,M$, how many words are there in $\Sigma^n$ such that no sequence of $N$ characters repeats $M$ times in a row?

For instance, with $N=2$ and $M=3$, 101010 would not be counted, but 111010 would be.

Here is some data I found on it with a simulation (columns: c, n, N, M, total, percent):

    c n N M total percent
    2,4,2,2,10,0.625
    2,5,2,2,16,0.5
    2,6,2,2,26,0.40625
    2,6,3,2,48,0.75
    2,6,2,3,48,0.75
    2,7,2,2,42,0.328125
    2,7,3,2,88,0.6875
    2,7,2,3,88,0.6875
    2,8,2,2,68,0.265625
    2,8,3,2,162,0.6328125
    2,8,4,2,216,0.84375
    2,8,2,3,162,0.6328125
    2,8,2,4,216,0.84375
    2,9,2,2,110,0.21484375
    2,9,3,2,298,0.58203125
    2,9,4,2,416,0.8125
    2,9,2,3,298,0.58203125
    2,9,3,3,416,0.8125
    2,9,2,4,416,0.8125
    2,10,2,2,178,0.173828125
    2,10,3,2,548,0.53515625
    2,10,4,2,802,0.783203125
    2,10,5,2,928,0.90625
    2,10,2,3,548,0.53515625
    2,10,3,3,802,0.783203125
    2,10,2,4,802,0.783203125
    2,10,2,5,928,0.90625
    3,4,2,2,66,0.8148148148148148
    3,5,2,2,180,0.7407407407407407
    3,6,2,2,492,0.6748971193415638
    3,6,3,2,666,0.9135802469135802
    3,6,2,3,666,0.9135802469135802
    3,7,2,2,1344,0.6145404663923183
    3,7,3,2,1944,0.8888888888888888
    3,7,2,3,1944,0.8888888888888888
    3,8,2,2,3672,0.5596707818930041
    3,8,3,2,5676,0.8651120256058528
    3,8,4,2,6318,0.9629629629629629
    3,8,2,3,5676,0.8651120256058528
    3,8,2,4,6318,0.9629629629629629
    3,9,2,2,10032,0.509678402682518
    3,9,3,2,16572,0.8419448254839201
    3,9,4,2,18792,0.9547325102880658
    3,9,2,3,16572,0.8419448254839201
    3,9,3,3,18792,0.9547325102880658
    3,9,2,4,18792,0.9547325102880658
    3,10,2,2,27408,0.4641568866534573
    3,10,3,2,48384,0.819387288523091
    3,10,4,2,55896,0.9466036681400193
    3,10,5,2,58158,0.9849108367626886
    3,10,2,3,48384,0.819387288523091
    3,10,3,3,55896,0.9466036681400193
    3,10,2,4,55896,0.9466036681400193
    3,10,2,5,58158,0.9849108367626886
    4,4,2,2,228,0.890625
    4,5,2,2,864,0.84375
    4,6,2,2,3276,0.7998046875
    4,6,3,2,3936,0.9609375
    4,6,2,3,3936,0.9609375
    4,7,2,2,12420,0.758056640625
    4,7,3,2,15552,0.94921875
    4,7,2,3,15552,0.94921875
    4,8,2,2,47088,0.718505859375
    4,8,3,2,61452,0.93768310546875
    4,8,4,2,64704,0.9873046875
    4,8,2,3,61452,0.93768310546875
    4,8,2,4,64704,0.9873046875
    4,9,2,2,178524,0.6810150146484375
    4,9,3,2,242820,0.9262847900390625
    4,9,4,2,258048,0.984375
    4,9,2,3,242820,0.9262847900390625

The values seem to go to $c^n$ as $N,M$ increase and appear to be a function of $c$ and $N+M$. The fraction also seems to decrease as you increase $n$ and increase as you increase $c$. This all makes intuitive sense to me. If I had to guess, I would say it is exponential in $n$, polynomial in $c$, and exponential in $N+M$.

A secondary question I have is: how would you represent the language defined by just the strings counted?
How many restricted length strings are there without significant repetitions
formal languages;word combinatorics
null
_softwareengineering.349217
First off: I'm starting to build a standard XML format/structure for our users. The objectives are:

- XML that can be used by multiple organisations
- XML that can be used to map external system data to our system data
- an XML structure that follows best practices
- XML that is adaptable to change

Given those objectives, our initial structure, a users XML, would look like this:

    <users>
      <user>
        <firstName></firstName>
        <lastName></lastName>
        <email></email>
        <!-- .. some user child node here -->
      </user>
    </users>

So I'm wondering what happens if this structure grows and different objects get associated with a user, something like:

    <users>
      <user>
        <firstName></firstName>
        <lastName></lastName>
        <email></email>
        <element1>
          <child1></child1>
        </element1>
        <element2>
          <child1></child1>
          <child2>
            <innerChild1></innerChild1>
          </child2>
        </element2>
      </user>
    </users>

Then I would have to introduce a urn namespace to uniquely identify same-named <element> nodes. This is where a namespace is useful. (A small namespaced sketch follows below.)

My questions are:

1. Do I have to introduce the namespace in the initial XML sample already?
2. When should I use an attribute instead of creating a child element?
3. What best practices can I use to keep our XML adaptable to change?
4. What are the things I should avoid when building or structuring the XML?

Note: we are using XML instead of JSON because most of our users use XML.
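For illustration only, a sketch of how the grown structure could disambiguate two same-named elements with namespaces; the URIs and prefixes here are invented for the example, not part of any proposed standard:

    <users xmlns="urn:example:users"
           xmlns:ord="urn:example:orders"
           xmlns:bill="urn:example:billing">
      <user>
        <firstName></firstName>
        <lastName></lastName>
        <email></email>
        <ord:element>
          <ord:child1></ord:child1>
        </ord:element>
        <bill:element>
          <bill:child1></bill:child1>
        </bill:element>
      </user>
    </users>

The point of declaring everything on the root and using prefixes is that both <element> nodes stay unambiguous without having to rename them.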
Tips and tricks in implementing XML or XML schema
design;data structures;xml;xsd
Honestly, my answer would initially be: don't use XML. I've been working with XML for many years and the reality is that it's a terrible format for data exchange. JSON has its own flaws but it is much better. XML is actually not a bad way to create documents, but even that usage is being replaced with HTML5. However, given that you are 'forced' to do this, here's my list of recommendations:

XML

- XML namespaces suck to deal with, but if you might need them, you are better off using them from the start. Retrofitting them in is a huge pain in my experience.
- Use attributes only for metadata. Elements are much more powerful. When something you thought was simple becomes complex, it is still an element. An element can also be more than one thing depending on the context.
- Never ever ever allow mixed content. That is, don't allow text nodes and child elements as content at the same time. It's either/or.
- Do not allow entity references. This is a serious security risk.
- Declare all namespaces in the root and use prefixes. Putting namespace declarations on every element will add a lot of bloat to an already bloated document.
- Remove all whitespace if you are doing any sort of encryption or signatures.

XSD

- Forget all the stuff about salami slices and venetian blinds. Create element definitions at the schema level only for those things that you want to use as the root of a document. Everything else should be a type (there's a small sketch below).
- Use sequences pretty much always. Choice elements can be useful but complicate things.
- Do not specify nillable="true". Use minOccurs="0". The element is either there with a value, there and empty, or not there. Introducing null values at the interface level is a bad idea.
- You can't say things like "at least 2 of the following three options" in XSD without getting nutty. Let it go and move on.

I will add more if I can think of anything.
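A minimal sketch of what those XSD recommendations look like in practice; the target namespace and type names are invented for the example:

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:u="urn:example:users"
               targetNamespace="urn:example:users"
               elementFormDefault="qualified">

      <!-- Only the document root is declared as a schema-level element... -->
      <xs:element name="users" type="u:UsersType"/>

      <!-- ...everything else is a named type. -->
      <xs:complexType name="UsersType">
        <xs:sequence>
          <xs:element name="user" type="u:UserType" minOccurs="0" maxOccurs="unbounded"/>
        </xs:sequence>
      </xs:complexType>

      <xs:complexType name="UserType">
        <xs:sequence>
          <xs:element name="firstName" type="xs:string"/>
          <xs:element name="lastName" type="xs:string"/>
          <!-- optional rather than nillable -->
          <xs:element name="email" type="xs:string" minOccurs="0"/>
        </xs:sequence>
      </xs:complexType>

    </xs:schema>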
_unix.34712
Here is what I currently have online; as you can see, there is no information about my Debian server. (While installing I tried to follow these instructions.)

What I have changed in the default gmond.conf:

    cluster {
      name = dspproc
      owner = unspecified
      latlong = unspecified
      url = dspproc
    }

    udp_send_channel {
      mcast_join = 127.0.0.1
      port = 8649
      ttl = 1
    }

    udp_recv_channel {
      mcast_join = 127.0.0.1
      port = 8649
      bind = 127.0.0.1
    }

And this is what I changed in gmetad.conf:

    data_source dspproc 10 127.0.0.1
    authority http://195.19.243.13/ganglia/
    trusted_hosts 127.0.0.1 195.19.243.13
    case_sensitive_hostnames 0

My question is: what am I doing wrong, and how do I make Ganglia show info about the machine it is installed on?

Update

Following this answer I changed it to:

    udp_send_channel {
      host = 127.0.0.1
      port = 8649
      ttl = 1
    }

    /* You can specify as many udp_recv_channels as you like as well. */
    udp_recv_channel {
      host = 127.0.0.1   /* line 41 */
      port = 8649
      bind = 127.0.0.1
    }

and got this on restart:

    Starting Ganglia Monitor Daemon: /etc/ganglia/gmond.conf:41: no such option 'host'

and still Hosts up: 0 in the web UI.

Update 2

So... when I read the answer again and followed the link, I made the following changes to the configuration and it all worked out! Thank you noffle! Now that block of gmond.conf looks like:

    udp_send_channel {
      host = 127.0.0.1
      port = 8649
      ttl = 1
    }

    udp_recv_channel {
      port = 8649
      family = inet4
    }

    udp_recv_channel {
      port = 8649
      family = inet6
    }

and all seems to work.
How to configure ganglia-monitor on a single Debian machine?
debian;configuration;monitoring
I seem to remember having a similar problem when setting up Ganglia many moons ago. This may not be the same issue, but for me it was that my box/network didn't like Ganglia's multicasting. Once I set it up to use unicasting, all was well.

From the Ganglia docs:

    If only a host and port are specified then gmond will send unicast UDP messages to the hosts specified.

Perhaps try replacing the mcast_join = 127.0.0.1 with host = 127.0.0.1.
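For concreteness, a sketch of what the unicast send/receive blocks in gmond.conf can end up looking like; this mirrors what the question's Update 2 arrived at, so adjust the address and port to your own setup:

    udp_send_channel {
      host = 127.0.0.1
      port = 8649
      ttl = 1
    }

    udp_recv_channel {
      port = 8649
      family = inet4
    }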
_unix.119345
Where can I find the source code of the screens of the Debian installation process?

I've tried:

    apt-get source debian-installer

but in this package I do not see the source code.

To be more specific, I'm looking for the source code of this screen:
Source code of the screens of debian-installer
debian;source
null
_unix.258690
I recently upgraded my NAS to kernel 4.4 and now my md RAID 5 array doesn't assemble itself at boot. dmesg reveals this:

    [ 2.299666] md: personality for level 5 is not loaded!

and doing lsmod | grep raid shows that neither md_mod nor raid456 is loaded. If I do mdadm --stop /dev/md0 and then mdadm --assemble /dev/md0, the array assembles successfully, and doing lsmod | grep raid again shows all the required modules loaded.

I've tried running mkinitramfs -o /boot/initrd.img-4.4.0 4.4.0 with the array assembled and mounted, but that doesn't seem to fix my issue. How can I make the module load at boot so my array assembles?
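One thing worth checking (a sketch, not a guaranteed fix; it assumes the stock Debian initramfs-tools layout and the initrd name taken from the question): make sure raid456 is actually listed for, and present in, the initramfs:

    # List the module so initramfs-tools includes and loads it, then rebuild:
    echo raid456 >> /etc/initramfs-tools/modules
    update-initramfs -u -k 4.4.0

    # Verify the module really ended up inside the image:
    lsinitramfs /boot/initrd.img-4.4.0 | grep raid456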
MD RAID 5 module doesn't load at boot
linux;mdadm;md
null
_unix.325863
Sometimes the disk spins up when the computer is inactive, and it lasts and lasts and lasts. I have no idea how to figure out what it is.

Usually I notice this while lying down, and by the time I get there and start iotop it's gone. It feels like I am using Windows and my data is just being read from the disk without my permission and sent God knows where.

I have atop installed but I have no idea how to use it for this use case. atop -d shows data in real time and doesn't filter it for long continuous use.
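A sketch of one way to capture this after the fact, using iotop's batch mode: it records every process that actually performs I/O, with timestamps, and the "longer than 5 seconds" part becomes a post-processing step over the log. The log path is just an example:

    # Run as root, e.g. inside tmux/screen or from a systemd service:
    #   -b    batch (non-interactive) mode
    #   -o    only show processes/threads actually doing I/O
    #   -t    prefix each line with a timestamp
    #   -qqq  suppress headers and summary lines
    iotop -b -o -t -qqq >> /var/log/disk-activity.log

    # Afterwards, inspect the log for the same PID/command appearing
    # in many consecutive one-second samples.

That gives you a per-second record you can search later for long runs of the same process.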
How to log a process that accesses the disk for longer than 5 seconds continuously?
disk usage;disk
null