Dataset columns: id, question, title, tags, accepted_answer
_unix.6075
I'm looking into encrypting my home folder with Truecrypt, and mounting it after I've logged in, which should be pretty straightforward. However, it occurred to me that it should be theoretically possible to mount it as I'm logging in, as long as my account password is the same as my Truecrypt password. Is there a way to get PAM to run a command on login and pass the command my password as an argument? Or is there some other way to accomplish the same effect, without me needing to provide the password multiple times?
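For reference, a hedged sketch of the PAM mechanism the question is asking about, using pam_exec's expose_authtok option (which hands the typed password to a helper on stdin). The PAM service file, the helper path, and the mount command are all assumptions here, not a tested TrueCrypt recipe:

```
# /etc/pam.d/login (assumed service file)
auth    optional    pam_exec.so expose_authtok /usr/local/sbin/mount-home.sh
```

```sh
#!/bin/sh
# mount-home.sh -- hypothetical helper: pam_exec feeds the password on stdin,
# and PAM_USER is set in the environment by pam_exec.
IFS= read -r password
printf '%s' "$password" | /usr/local/bin/mount-encrypted-home "$PAM_USER"
```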
Automatically mount encrypted home folder on login
encryption;pam;truecrypt;home
null
_webapps.19564
If I have multiple (independent) checklists on a card, is it possible to change the order in which they appear? I know I can change the order of items within a checklist.
Can multiple Checklists on a Trello card be re-arranged?
trello
null
_codereview.88914
Title says it all. I'm basically initializing a two-dimensional array and changing the fields one at a time. It might be a good idea to do this OO. A Grid class seems right, perhaps as a subclass of Array. So far I haven't been able to successfully port it. Pointers would probably be a good idea as well. Note this is kind of a sandbox project at the moment. I'm a novice in JS and trying to wrap my head around how I can handle data efficiently (particularly memory efficient, CPU is not as relevant). Partial reviews are also welcome.

```javascript
Array.initialize = function(iRows, iColumns, cInit) {
    var array = [];
    for (var i = 0; i < iRows; ++i) {
        var columns = [];
        for (var j = 0; j < iColumns; ++j) {
            columns[j] = initial;
        }
        array[i] = columns;
    }
    return array;
}

Array.shoot = function(row, column, array) {
    array[row][column] = "H";
    return array;
}

var myGrid = Array.initialize(12, 12, "U");
myGrid = Array.shoot(5, 5, myGrid);
myGrid = Array.shoot(0, 4, myGrid);
console.log(myGrid);
```
Two-dimensional array, modified one field at a time
javascript;memory optimization
A few things:

Someone's getting a head-start on Battleship, eh? :)

Don't modify native objects/prototypes. Array is the native JavaScript array constructor (a.k.a. class), and you're adding initialize and shoot methods to it. Don't. Well, ok, you can extend native objects, but it's more hygienic to leave it alone. And in this case the methods you're adding are super-specific to your task, and not generic functionality, so that's a double don't.

I'm pretty sure you have a bug: cInit isn't used for anything, but instead you have a previously undeclared (and therefore undefined) variable called initial. I'm guessing they were supposed to be one and the same.

Now, in terms of memory efficiency: Don't worry about it. JavaScript has automatic memory management and garbage collection, but it's buried deep in the runtime, so it's not something you can fiddle with. It also has dynamic typing, so it's kinda hard to even guess at memory consumption at times. For instance, the number 32 is 8 bytes long, since all numbers are 64-bit IEEE floating point values, yet "32" == 32 happens to be true due to dynamic typing/type-coercion. So the string "32" is, for some purposes, the same number, but it's probably only 2-3 bytes long. Maybe? I don't know. Because behind the scenes, all JS runtimes do a ton of voodoo, so in the end it's really hard to tell what's going on. Besides, the runtime's own memory consumption is probably way beyond whatever data you're handling. The runtime might well trade memory for speed somewhere. And another runtime might do the opposite, but they'll both run your code the same.

Point is, memory just isn't a thing you really can worry about in JavaScript the same way you can worry about it in other languages. At least not on the same low level. Of course you can still run out of memory and things like that, but you'd have to try pretty hard. Especially with a simple little 2D array.

Aside: Don't bother with Hungarian notation either. Well, again, you can, but it's a lot of busywork when there's dynamic typing and type-coercion. You have parameters like iRows, which, yeah, should be an integer, but as mentioned all integers are just the Number type - which is really a float. And you could also pass in a numeric string and have it work. So it just gets confusing in a hurry.

Of course, you still want efficient data structures, but since the idea here is to construct a 2D array, well, you're probably best off using... a 2D array. So again, don't worry too much.

As for the code, you have the basics well in hand. As mentioned, you shouldn't be appending functions to the native Array constructor, but otherwise it's fine. Fixing that little thing (and renaming the inner array so it doesn't shadow the columns parameter) you get:

```javascript
function createArray(rows, columns, initialValue) {
    var array = [];
    for (var i = 0; i < rows; ++i) {
        var row = [];
        for (var j = 0; j < columns; ++j) {
            row[j] = initialValue;
        }
        array[i] = row;
    }
    return array;
}

function shoot(row, column, array) {
    array[row][column] = "H";
    return array;
}
```

Still, I'd use push to append stuff to an array, rather than assigning things to an index.

But you said you might want a more OO-style implementation. However, sub-classing Array would be tricky. Especially as there are no classes in JavaScript, per se.
It's easier to go with object composition; make your own constructor, and have your array be a property (aka member, aka instance variable) on the instances that constructor creates.

So let's make a Grid constructor and let's give it a shoot method:

```javascript
function Grid(rows, columns) {
    var r, c, row;

    this.cells = []; // instance variable

    for (r = 0; r < rows; r++) {
        row = [];
        for (c = 0; c < columns; c++) {
            row.push("U");
        }
        this.cells.push(row);
    }
}

Grid.prototype.shoot = function (row, column) {
    this.cells[row][column] = "H";
};
```

Now, much like before, you can say:

```javascript
var grid = new Grid(12, 12);
grid.shoot(5, 5);
grid.shoot(0, 4);
console.log(grid.cells); // notice you still have access to the raw array
```

Of course, you may want to do some bounds-checking in shoot to make sure the coordinates make sense, but otherwise... yeah, that's pretty much it.
_unix.179988
According to the systemd docs, journalctl is recommended for browsing logs, rather than the /var/log/* file tree. While man 1 journalctl describes how to use it, I still don't know what arguments it needs to give me the list of stuff I want. For example, I want to see a list of user logins. I'm aware sshd handles ssh logins, but what about local logins and general user authentication?

Here's what I've tried:

```
# shows all logs. huge
journalctl

# limit history and search for login
journalctl --since yesterday | grep login

# a week ago rather than just a day
journalctl --since `date +%Y-%m-%d --date "last week"` | grep login
... systemd-logind[678]: New session 81 of user jozxyqk.
```

This seems to give some indications but definitely not the most robust method. What's the correct way?
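As an aside, journalctl can also filter structurally instead of grepping; a hedged sketch based on the systemd-logind line in the output above (the unit name is an assumption about where local logins get logged):

```
# entries from the login manager only
journalctl -u systemd-logind.service --since yesterday

# or match the syslog identifier seen above
journalctl SYSLOG_IDENTIFIER=systemd-logind --since yesterday
```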
How can I get a list of login attempts with journalctl?
fedora;logs;journald
null
_unix.19250
I have 2 dedicated servers both running CentOS 5.3/Plesk 10. I've transferred a website (domain) from the old server to a new one via the Plesk migration manager, and the website (domain) shows in the domain list on Plesk, and its files are in /var/www/vhosts/domain.com/httpdocs/

I tried to open the website like this: http://xx.xx.xx.xx/domain.com where xx.xx.xx.xx is a shared IP that I've assigned to the domain on the new server during transfer and domain.com is the website's domain name. Instead of the website loading I get a 404. How can I open the website and see if everything is ok? Essentially, what is the path (using IP) to the website until it gets DNS sorted out?

UPDATE: the apache error_log file shows:

```
[Tue Aug 23 11:05:15 2011] [error] [client xx.xx.xx.xx] File does not exist: /var/www/vhosts/default/htdocs/domain.com
```

This is where the problem exists. I've expected it to follow a path like this: /var/www/vhosts/domain.com/htdocs/ and instead the server apparently tries this: /var/www/vhosts/default/httpdocs/domain.com

Note the difference - htdocs and httpdocs, which is the actual folder on the server. I need to know that the website is running ok, otherwise I cannot assign DNS to make it live...

UPDATE2: I am able to access the website if I edit the hosts file on my PC with something like that:

```
1.2.3.4 domain.com
```

So how come I can't load it like this: http://1.2.3.4/domain.com/ ?
Can't open transferred website from old DS to new DS
dns;apache httpd;plesk
null
_webmaster.51878
I'm looking for a free hosting provider which provides an SSL Certificate at no additional charge. Does anyone know of a hosting provider which can provide such a service? Thanks in advance, Amani Swann
Free Basic Hosting w SSL Certificate
web hosting;https;free
null
_unix.156150
Background

I have an ssh and OwnCloud server. Frequently, my desktop's OwnCloud client disconnects. Attempting to ssh into my server results in Connection refused. I can ping the server, but cannot connect via ssh or the OwnCloud client. Oddly enough, I can connect to the OwnCloud webpage fine.

After several minutes, I can connect via ssh. However, I sometimes get kicked from the session, and cannot connect again for a few minutes. This just happened, allowing me to look through /var/log for newly-modified logs. The following had all been modified recently, but none contained anything interesting: wtmp lastlog auth.log ufw.log syslog messages kern.log.

After being locked out, I've also tried to restart, but this does not solve the problem immediately. In the past, I could always connect about 60 seconds after a restart. Now, I cannot connect for several minutes. As above, ping works, but I cannot connect immediately.

How can I make my ssh/OwnCloud server work all the time?

Other information

The server is a Raspberry Pi running Raspbian Jessie (testing). I also use ufw and fail2ban.
Why is my server intermittently refusing ssh connections?
ssh;raspberry pi;raspbian
It seems that this problem was caused by another connected device competing for the same IP address. I'm not sure how it managed to confuse the router, but when I disconnected the other device (a Volumio Raspberry Pi), I could connect to the main server fine again.
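For anyone hitting the same symptom, a hedged sketch of how the duplicate-IP hypothesis could be tested from another box on the LAN (interface name and address are placeholders):

```
# Duplicate Address Detection mode: reports a reply if some other
# host already answers ARP for the address
sudo arping -D -I eth0 -c 3 192.168.1.50
```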
_unix.121510
I have written a bash script to calculate the size of a PostgreSQL database and print the output along with the date when the script was executed in a text file. The script code is as follows:

```bash
#!/bin/bash
date +%d:%m >> dbdata.growth
psql -h 192.168.2.173 -U postgres -c "select pg_database_size('ddb');" | sed -n '3,3p' | numfmt --to=iec >> dbdata.growth
psql -h 192.168.2.173 -U postgres -c "select pg_database_size('dpkidb');" | sed -n '3,3p' | numfmt --to=iec >> dbdata.growth
```

The script produces output in the format shown below:

```
26:03
134G
4.4M
26:03
134G
4.4M
```

The issue for me is that I want all the three columns in the same row. How can I achieve that?
Bash Scripting : Printing column data in the same row
bash;sed;scripting;echo
Replace all output lines, such as:

```
date +%d:%m >> dbdata.growth
```

with lines such as:

```
date +%d:%m | tr -d $'\n' >> dbdata.growth
```

This uses tr to delete newline characters before they are put in the output file. tr is a translate or delete utility. In this case, the use of the -d option tells it to delete. The character that we ask it to delete is the newline character, expressed as $'\n'.
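Applied to the script in the question, a sketch of the result (the trailing echo is an addition here, as an assumption that each run should still terminate its row with a newline):

```bash
#!/bin/bash
date +%d:%m | tr -d $'\n' >> dbdata.growth
psql -h 192.168.2.173 -U postgres -c "select pg_database_size('ddb');" \
    | sed -n '3,3p' | numfmt --to=iec | tr -d $'\n' >> dbdata.growth
psql -h 192.168.2.173 -U postgres -c "select pg_database_size('dpkidb');" \
    | sed -n '3,3p' | numfmt --to=iec | tr -d $'\n' >> dbdata.growth
echo >> dbdata.growth    # end the row
```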
_codereview.158527
I have a few resources set in my class coins, diamonds, hearts, an enum of these resources' names, and a method for increasing a specified resource. I want to make a single Increase method for all resources that wouldn't have a switch statement. Sort of a single-line, not overloaded, generic method, independent of which property to increase. Below is the best that I have achieved so far. Please advise on the best approach to be taken for the described Increase method.

```csharp
public enum ResourceType { Coins, Diamond, Hearts };

private static int Coins { get { return resourceBox[ResourceType.Coins]; } }
private static int Diamonds { get { return resourceBox[ResourceType.Diamonds]; } }
private static int Hearts { get { return resourceBox[ResourceType.Hearts]; } }

private static Dictionary<ResourceType, int> resourceBox = new Dictionary<ResourceType, int>();

private void Init()
{
    resourceBox.Add(ResourceType.Coins, 0);
    resourceBox.Add(ResourceType.Diamond, 0);
    resourceBox.Add(ResourceType.Hearts, 0);
}

public static void Increase(ResourceType _res, int _income)
{
    resourceBox[_res] += _income;
}
```

However this leads to a situation where the Dictionary is the primary resource storage, which may not be desired in future. I wonder what could be another (not necessarily dictionary-based) way to create the Increase method as described above.
One method for changing one of the properties
c#;dictionary;enum;properties
null
_unix.230520
How can I change Ubuntu server IP Address to the web address? For example, the address that I need to access on the browser is 192.168.x.xxx. How can I change to dev.robi.local? Thanks!
How can I change Ubuntu server IP Address to the web address?
linux;ubuntu;ip;webserver;web
null
_datascience.22266
I have a dataset with Scores and Categories and I would like to calculate the summary statistics for each of these categories. The data look something like this:

```
Category  Score
AAAA      1
AAAA      3
AAAA      1
BBBB      1
BBBB      100
BBBB      159
CCCC      -10
CCCC      9
```

What I would then like would be something like this:

```
Category  Count  Mean  Std  Min  25%  50%  75%  Max
AAAA
BBBB
CCCC
```

I have been looking at using pandas with a combination of both .groupby() and .describe(), like this:

```python
df.groupby('Category')['Score'].describe()
```

and this almost looks like what I want, but when I come to view this as a Dataset, all of the stats are in the index. I would like the data to be in the form of a table so I can output it and create a visualization off of the back of it. Any ideas? Thanks
Summary statistics by category using Python
python;pandas
IIUC:

```python
In [80]: df.groupby("Category")['Score'].describe().reset_index()
Out[80]:
  Category  count       mean        std   min    25%    50%     75%    max
0     AAAA    3.0   1.666667   1.154701   1.0   1.00    1.0    2.00    3.0
1     BBBB    3.0  86.666667  79.839422   1.0  50.50  100.0  129.50  159.0
2     CCCC    2.0  -0.500000  13.435029 -10.0  -5.25   -0.5    4.25    9.0
```
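Since the stated goal is a table to output and visualize, a hedged follow-on (variable and file names here are made up):

```python
summary = df.groupby("Category")["Score"].describe().reset_index()
summary.to_csv("score_summary.csv", index=False)  # flat table on disk
summary.plot.bar(x="Category", y="mean")          # quick visual check
```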
_codereview.80524
Control like if..else and while

If \$f\$ is a numerical function and \$n\$ is a positive integer, then we can form the \$n\$th repeated application of \$f\$, which is defined to be the function whose value at \$x\$ is \$f(f(...(f(x))...))\$. For example, if \$f\$ adds 1 to its argument, then the \$n\$th repeated application of \$f\$ adds \$n\$. Write a function that takes as inputs a function \$f\$ and a positive integer \$n\$ and returns the function that computes the \$n\$th repeated application of \$f\$:

```python
def repeated(f, n):
    """Return the function that computes the nth application of f.

    f -- a function that takes one argument
    n -- a positive integer

    >>> repeated(square, 2)(5)
    625
    >>> repeated(square, 4)(5)
    152587890625
    """
    "*** YOUR CODE HERE ***"
```

The solution is implemented using a functional paradigm style, as shown below:

```python
from operator import mul
from operator import pow

def repeated(f, n):
    """Return the function that computes the nth application of f.

    f -- a function that takes one argument
    n -- a positive integer

    >>> repeated(square, 2)(5)
    625
    >>> repeated(cube, 2)(5)
    1953125
    """
    assert n > 0

    def apply_n_times(x):
        count = n
        next_acc = f(x)
        count = count - 1
        while count > 0:
            next_acc = f(next_acc)
            count = count - 1
        return next_acc

    return apply_n_times

def square(x):
    return mul(x, x)

def cube(x):
    return pow(x, 3)

print(repeated(square, 0)(2))
```

My questions:

1. If the code looks correct from a programming style perspective, can I further optimize the code written in the function repeated?
2. Can the naming conventions and error handling be better?
Functional programming approach to repeated function application
python;python 3.x;functional programming
I see that you have grasped the concept of higher-order functions. To be more generalized, though, your repeat function should be able to handle n = 0 correctly too. That is, repeat(square, 0)(5) should return 5.

However, your solution is not written in the functional style, due to the use of count = count - 1. In fact, if you consider that such counting statements are forbidden in functional programming, you'll realize that iteration loops using while can't possibly be used in functional programming either. You are then left with recursion as the only viable approach. (Python isn't designed to do efficient or deep recursion, but as an academic exercise, we ignore such considerations.)

An exception is probably more appropriate than an assertion. You should assert conditions that you know to be true, not ones that you hope to be true.

It is customary to combine the two imports into a single import statement.

```python
from operator import mul, pow

def repeat(f, n):
    def apply_n_times(n):
        if n < 0:
            raise ValueError("Cannot apply a function %d times" % (n))
        elif n == 0:
            return lambda x: x
        else:
            return lambda x: apply_n_times(n - 1)(f(x))
    return apply_n_times(n)
```

Note that lambda is merely a handy way to define an unnamed function on the spot. You could use explicitly named functions as well, but it would be less elegant.

```python
def repeat(f, n):
    def identity(x):
        return x

    def apply_n_times(n):
        def recursive_apply(x):
            return apply_n_times(n - 1)(f(x))

        if n < 0:
            raise ValueError("Cannot apply a function %d times" % (n))
        elif n == 0:
            return identity
        else:
            return recursive_apply

    return apply_n_times(n)
```
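A quick usage check of the recursive version above (square is assumed to be the same helper as in the question):

```python
def square(x):
    return x * x

print(repeat(square, 2)(5))  # 625
print(repeat(square, 0)(5))  # 5, the n = 0 case now returns the identity
```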
_softwareengineering.157842
I mean everything, not just schema changes. Even a simple SELECT on a primary key can't go into production, even though it has been code-reviewed by other developers (in context), without a DBA review of every statement, extracted from the code and submitted with EXPLAIN output, details of how often it will be called, etc, etc.

If you've been in such an environment, did you find it to be a net gain, or a drag on development? As someone who has been working with relational databases for years, I find the "developers can't be trusted to use databases" attitude unwarranted, and I'm curious as to how common this situation is. For what it's worth, this is just for web development, not something critical.
Every SQL statement has to be reviewed by a DBA -- common?
code reviews;dba
null
_codereview.159406
I have a wind rose with 72 evenly-spaced directions, spanning 360 degrees, each describing a direction-specific average wind speed and associated probability. We must condense this information to 36 evenly-spaced direction bins. Probability is the normalized frequency of occurrences; the probabilities should sum to one. If 5 degrees has a 0.1 probability, that means that the wind came from that direction 10% of the time. I average across the midpoints of each bin, which results in undesirably shifting the wind rose so that the center is at 2.5 degrees. Is there a more elegant way to do this in theory or implementation?

```python
import pandas as pd
import numpy as np
import wget

# Load data!
url = 'https://raw.githubusercontent.com/scls19fr/windrose/master/samples/amalia_directionally_averaged_speeds.txt'
wget.download(url, 'amalia_directionally_averaged_speeds.txt')
dat = pd.read_csv('./amalia_directionally_averaged_speeds.txt', sep=r'\s+',
                  header=None, skiprows=[0])
dat.columns = ['direction', 'average_speed', 'probability']

# two steps
# 1. average frequency across speeds
# 2. compute average speed weighted by frequencies
# (the scheme is to average across both midpoints. this
#  shifts the centers by 2.5 degrees and bins by 10)
directions, speeds, frequencies = [], [], []
for i in range(dat.shape[0] / 2):
    directions.append(dat.direction[i * 2] + 2.5)
    frequencies.append(np.mean(dat.probability[[i * 2, i * 2 + 1]].values))
    speeds.append((dat.direction[i * 2] * dat.average_speed[i * 2]
                   + dat.direction[i * 2 + 1] * dat.average_speed[i * 2 + 1])
                  / (dat.direction[i * 2] + dat.direction[i * 2 + 1]))

# save data
done = pd.DataFrame({'direction': directions, 'speed': speeds,
                     'probability': frequencies})
done.to_csv('wind_rose.csv')
```
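For the pairwise-binning mechanics themselves, a hedged vectorized sketch: the probability averaging matches the loop above, while the speed line weights by probability, which is an assumption about the intent rather than what the loop literally does (the loop weights by direction):

```python
p = dat.probability.values.reshape(-1, 2)     # pair adjacent 5-degree bins
s = dat.average_speed.values.reshape(-1, 2)

frequencies = p.mean(axis=1)                  # same as np.mean over each pair
speeds = (p * s).sum(axis=1) / p.sum(axis=1)  # probability-weighted mean speed
directions = dat.direction.values[::2] + 2.5  # still midpoint-shifted
```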
Average across direction-specific average speeds and probabilities
python;statistics;pandas
null
_cogsci.15647
I gave up long ago on formulating what seems like a new concept, as I came to peace with the fact that billions of people alive and dead means there's very little room for new concepts (just an anecdote, not the focus of my question).

So I want to know what theory or concept describes the idea in which constantly comparing two things (or more?) in a yin-and-yang-like opposition would create rivalry between the two things even if none existed before the intentional creation of this comparison.

This idea came to me through observation of how politics and media work, whether scientifically intended or merely learned through experience, in placing seemingly unrelated concepts in a comparison situation, and enforcing that comparison on the masses to create such rivalry between the two concepts; then having the ability to influence external factors to skew the mass opinion one way or the other. It works because they're using it with the oldest trick in the books: divide and conquer. So they get those two things, put them in such a rivalry comparison where if one existed the other cannot exist at the same time, creating division and teams of sorts, then appealing to each team in some way or the other to win its loyalty.

So the question is: What theory or concept describes the creation of rivalry between two ideas to instigate division?

One idea I had was really just describing it as competition. But this doesn't work; in competition, there are variable degrees of existence of each competing idea. However in the case of my question, where one idea exists in some context, the other idea can't.
What is a principle that describes competitive advantage in this case
social psychology
Binary Opposition

> Systems are binary when they are composed of only two parts. It's easy to imagine things in opposition, like the Boston Red Sox and the New York Yankees, or the World War II alliances known as the Axis Powers and the Allies. For an opposition to be truly binary, however, the opposing classes of thing/idea must be mutually exclusive. That is, membership in one class must make impossible membership in the other. (source)

My observation is that this is being politicized and used in conjunction with other techniques to exert control. So they're creating Binary Oppositions.
_unix.184728
I would like to see what the actual contents of a directory file are (not the files that are contained within that directory). Specifically I would like to be able to do something like:

```
od -c .
```

I know this works (used to work?) on Unix, as it is shown on page 59 of 'The Unix Programming Environment'. Yet when I attempt it on my version of Arch Linux I get the following:

```
read error: Is a directory
```

Is there another command I should be using to inspect the bytes of a directory?
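One route that may still work on ext-family filesystems is to ask the filesystem tools, rather than the kernel, to dump the directory's bytes; a heavily hedged sketch (the device, the path, and whether debugfs's cat applies to directory inodes on your setup are all assumptions):

```
# dump the raw directory entries of /home/user on /dev/sda2, then od them
sudo debugfs -R "cat /home/user" /dev/sda2 | od -c | head
```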
Look at the bytes in a directory?
linux;files;filesystems
null
_unix.81085
Getting the following error message:

```
Fatal: Module ndiswrapper not found
```

Problem is I did install the ndiswrapper module. I even uninstalled and re-installed it. Still nothing. Any reason? On Linux Mint 14.

My network controller is a Realtek Semiconductor Co., Ltd. RTL8188CE and something about (rev 01). It is using module rtl8192ce. Not sure if it should be using _common or not.
Fatal: Module ndiswrapper not found
linux mint;wifi;ndiswrappers
null
_softwareengineering.100615
Assuming:

- The employee and company are parting on good terms
- The new company is OK with a little contract work as long as it doesn't distract from the new job
- The former company wants the help and the employee is willing to work a few hours a week

Is it a good idea to do contract work for a former full time employer? Has anyone ever done this? How did it work out for you? I can see some possible pitfalls, so I wonder how good an idea it really is. What difficulties do you encounter doing development work outside the office and on a different schedule from the rest of the programming staff, and how can they be overcome? If I do go forward, what would be a fair amount to request as compared to current salary?
Is it a good idea to do contract work for a former full time employer?
freelancing
Lots of people do this. We have two people that I know of doing it right now for our company. The key is to keep the work separate from the new job and make it clear you will be working on this only after hours and that you are not available for questions during hours when you are at your new job. Whatever you do, you don't want to risk your new job, so make it clear that if your new job tells you that you can no longer do this, you will be giving notice.

Keep everything in source control of course, and make sure the repository is kept up-to-date in a branch that you update after each work session, so the old company can access the code you are working on if something becomes too urgent to wait until you can work on it that night. Negotiate how long this will go on if you only want to do this for the short-term until someone else gets up-to-speed. You can usually get them to give you a higher hourly wage than when you worked for them as well.
_unix.209630
So, here's my case: I have a Writer file; into this LibreOffice Writer file I copied a few tables from LibreOffice Calc (I actually did this for 6 hours) and pasted them in Writer. When I went to see my file, a lot of the tables weren't there anymore. I believe this happened after I renamed both files. Is there any way I could recover those images/tables? Or how can I see where these images are linking? I tried looking in the Trash and renaming those files back, but I simply can't find them. I have a lot of 'empty images' in my file. I would not like to lose another 6 hours copying again.
Recover File Libreoffice
libreoffice
null
_reverseengineering.8116
Hey, I have a very time consuming problem, and I thought I might find someone here with better experience than mine that could help me out.

I am reverse-engineering an application which at some point uses the NdrClientCall2 API to use a remote procedure of some other service (which I don't know which one that is).

Now, before I hear comments about not trying anything myself: there are some really good applications to accomplish what I want, like NtTrace, Strace, and roughly oSpy can achieve the same result as well eventually. But my application has some really hard anti-debugging techniques which force me to do everything manually.

What I eventually want to achieve is to know what procedure is being called and on what service \ process.

Here is the NdrClientCall2 declaration from MSDN:

```c
CLIENT_CALL_RETURN RPC_VAR_ENTRY NdrClientCall2(
    __in PMIDL_STUB_DESC pStubDescriptor,
    __in PFORMAT_STRING  pFormat,
    __in_out ...
);
```

So it uses the PMIDL_STUB_DESC struct, whose definition is the following:

```c
typedef struct _MIDL_STUB_DESC {
    void *RpcInterfaceInformation;
    void* (__RPC_API *pfnAllocate)(size_t);
    void (__RPC_API *pfnFree)(void*);
    union {
        handle_t *pAutoHandle;
        handle_t *pPrimitiveHandle;
        PGENERIC_BINDING_INFO pGenericBindingInfo;
    } IMPLICIT_HANDLE_INFO;
    const NDR_RUNDOWN *apfnNdrRundownRoutines;
    const GENERIC_BINDING_ROUTINE_PAIR *aGenericBindingRoutinePairs;
    const EXPR_EVAL *apfnExprEval;
    const XMIT_ROUTINE_QUINTUPLE *aXmitQuintuple;
    const unsigned char *pFormatTypes;
    int fCheckBounds;
    unsigned long Version;
    MALLOC_FREE_STRUCT *pMallocFreeStruct;
    long MIDLVersion;
    const COMM_FAULT_OFFSETS *CommFaultOffsets;
    const USER_MARSHAL_ROUTINE_QUADRUPLE *aUserMarshalQuadruple;
    const NDR_NOTIFY_ROUTINE *NotifyRoutineTable;
    ULONG_PTR mFlags;
    const NDR_CS_ROUTINES *CsRoutineTables;
    void *Reserved4;
    ULONG_PTR Reserved5;
} MIDL_STUB_DESC, *PMIDL_STUB_DESC;
```

And here is how it looks in windbg, when I put a breakpoint on the NdrClientCall2 function:

```
0:006> .echo Arguments:; dds esp+4 L5
Arguments:
06d9ece4  74cc2158 SspiCli!sspirpc_StubDesc
06d9ece8  74cc2322 SspiCli!sspirpc__MIDL_ProcFormatString+0x17a
06d9ecec  06d9ed00
06d9ecf0  91640000
06d9ecf4  91640000

0:006> .echo PMIDL_STUB_DESC:; dds poi(esp+4) L20
PMIDL_STUB_DESC:
74cc2158  74cc2690 SspiCli!sspirpc_ServerInfo+0x24
74cc215c  74cca1cd SspiCli!MIDL_user_allocate
74cc2160  74cca1e6 SspiCli!MIDL_user_free
74cc2164  74ce0590 SspiCli!SecpCheckSignatureRoutineRefCount+0x4
74cc2168  00000000
74cc216c  00000000
74cc2170  00000000
74cc2174  00000000
74cc2178  74cc1c52 SspiCli!sspirpc__MIDL_TypeFormatString+0x2
74cc217c  00000001
74cc2180  00060001
74cc2184  00000000
74cc2188  0700022b
74cc218c  00000000
74cc2190  00000000
74cc2194  00000000
74cc2198  00000001
74cc219c  00000000
74cc21a0  00000000
74cc21a4  00000000
74cc21a8  48000000
74cc21ac  00000000
74cc21b0  001c0000
74cc21b4  00000032
74cc21b8  00780008
74cc21bc  41080646
74cc21c0  00000000
74cc21c4  000b0000
74cc21c8  00020004
74cc21cc  00080048
74cc21d0  21500008
74cc21d4  0008000c

0:006> .echo PFORMAT_STRING:; db poi(esp+8)
PFORMAT_STRING:
74cc2322  00 48 00 00 00 00 06 00-4c 00 30 40 00 00 00 00  .H......L.0@....
74cc2332  ec 00 bc 00 47 13 08 47-01 00 01 00 00 00 08 00  ....G..G........
74cc2342  00 00 14 01 0a 01 04 00-6e 00 58 01 08 00 08 00  ........n.X.....
74cc2352  0b 00 0c 00 20 01 0a 01-10 00 f6 00 0a 01 14 00  .... ...........
74cc2362  f6 00 48 00 18 00 08 00-48 00 1c 00 08 00 0b 00  ..H.....H.......
74cc2372  20 00 2c 01 0b 01 24 00-a2 01 0b 00 28 00 b8 01   .,...$.....(...
74cc2382  13 41 2c 00 a2 01 13 20-30 00 f8 01 13 41 34 00  .A,.... 0....A4.
74cc2392  60 01 12 41 38 00 f6 00-50 21 3c 00 08 00 12 21  `..A8...P!<....!
```

So how exactly do I figure out what is the remote process it is going to communicate with, or what pipe it is using to communicate? As far as I understand from MSDN, it is supposed to call a remote procedure. If I understand that right, it means it should call a remote function as if it's an exported dll function. How can I set a breakpoint there?

P.S: The main reason I'm posting this is because the NdrClientCall2 function seems to be pretty huge.
at the rpcrt4!NdrClientCall2 function - how does it know which pipe to use in order to transfer data to another process?
windbg;anti debugging
> So how exactly do i figure out what is the remote process it is going to communicate with, or what pipe it is using to communicate?

The first step is to find the RPC client interface. This can be found via the first argument to NdrClientCall2(), named pStubDescriptor. In your question, pStubDescriptor points to SspiCli!sspirpc_StubDesc:

```
0:006> .echo Arguments:; dds esp+4 L5
Arguments:
06d9ece4  74cc2158 SspiCli!sspirpc_StubDesc
```

SspiCli!sspirpc_StubDesc is a MIDL_STUB_DESC, and on my computer, here are its associated values (via IDA Pro):

```
struct _MIDL_STUB_DESC const sspirpc_StubDesc MIDL_STUB_DESC <
    offset dword_22229B8,
    offset SecClientAllocate(x),
    offset MIDL_user_free(x),
    <offset unk_22383F4>,
    0, 0, 0, 0,
    offset word_22224B2,
    1,
    60001h,
    0,
    700022Bh,
    0, 0, 0, 1, 0, 0, 0>
```

As documented on MSDN, the first field in the structure above points to an RPC client interface structure. Thus, we can parse the data at that address as an RPC_CLIENT_INTERFACE struct:

```
stru_22229B8 dd 44h               ; Length
    dd 4F32ADC8h                  ; InterfaceId.SyntaxGUID.Data1
    dw 6052h                      ; InterfaceId.SyntaxGUID.Data2
    dw 4A04h                      ; InterfaceId.SyntaxGUID.Data3
    db 87h, 1, 29h, 3Ch, 0CFh, 20h, 96h, 0F0h ; InterfaceId.SyntaxGUID.Data4
    dw 1                          ; InterfaceId.SyntaxVersion.MajorVersion
    dw 0                          ; InterfaceId.SyntaxVersion.MinorVersion
    dd 8A885D04h                  ; TransferSyntax.SyntaxGUID.Data1
    dw 1CEBh                      ; TransferSyntax.SyntaxGUID.Data2
    dw 11C9h                      ; TransferSyntax.SyntaxGUID.Data3
    db 9Fh, 0E8h, 8, 0, 2Bh, 10h, 48h, 60h ; TransferSyntax.SyntaxGUID.Data4
    dw 2                          ; TransferSyntax.SyntaxVersion.MajorVersion
    dw 0                          ; TransferSyntax.SyntaxVersion.MinorVersion
    dd offset RPC_DISPATCH_TABLE const sspirpc_DispatchTable ; DispatchTable
    dd 0                          ; RpcProtseqEndpointCount
    dd 0                          ; RpcProtseqEndpoint
    dd 0                          ; Reserved
    dd offset _MIDL_SERVER_INFO_ const sspirpc_ServerInfo ; InterpreterInfo
    dd 4000000h                   ; Flags
```

From the RPC_CLIENT_INTERFACE struct above, we can extract the InterfaceId GUID: 4F32ADC8-6052-4A04-8701-293CCF2096F0

We can now look up that interface GUID with RpcView to find the associated DLL, running process, and endpoints.

To find out which specific endpoint is being used by the SSPI RPC server in the LSASS process, we can reverse engineer sspisrv.dll. In the exported function SspiSrvInitialize(), we see the following call:

```c
RpcServerUseProtseqEpW(L"ncalrpc", 0xAu, L"lsasspirpc", 0);
```

To figure out which specific function is being called in sspisrv.dll, we need to look at the pFormat data passed to NdrClientCall2. In your example code above, the pFormat data is:

```
00 48 00 00 00 00 06 00-4c 00 30 40 00 00 00 00 ...
```

If we parse the pFormat data as an NDR_PROC_HEADER_RPC structure, we get:

```
handle_type = 0x00
Oi_flags    = 0x48
rpc_flags   = 0x00000000
proc_num    = 0x0006
stack_size  = 0x004C
```

From proc_num, we can see that this RPC call is calling the 6th RPC function in sspisrv.dll. We can use RpcView again to get the address for the 6th RPC function. And with IDA Pro, we can see the function in sspisrv.dll at address 0x7573159D:

```
.text:7573159D __stdcall SspirProcessSecurityContext(x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x) proc near
```

RpcView also shows us a decompilation of that function's prototype. (Note that on your computer, the 6th function might not be at virtual address 0x7573159D, and furthermore, the 6th function might not be SspirProcessSecurityContext(), but this is the approach you would use nonetheless.)

As such, we can now say the following:

- The RPC server code for your NdrClientCall2() call is in sspisrv.dll
- The RPC server for your NdrClientCall2() call is running in LSASS's process
- The endpoint for your NdrClientCall2() call is named lsasspirpc
- The RPC server function called by your NdrClientCall2() call in sspisrv.dll is SspirProcessSecurityContext()
_unix.216913
I have just installed Linux Mint on my netbook (HP Compaq Mini 700). I have managed to make it connect to WiFi after downloading a driver, but it won't connect via cable. Strangely enough, the other day it suddenly did connect via cable, without me having done anything. Then the next day it was back to normal again. There is no problem with the cable or router, as other computers connect easily. Please help!
Can't connect to ethernet Linux Mint 17.1 Xfce
linux mint;xfce;ethernet;internet;netbook
null
_webapps.54536
I've got a Google Voice account that I'm using for voicemail because I like features like web-based voicemail. I am not using the number directly. So I activated Google voicemail for my mobile line, but have no forwarding turned on. But when I dial my mobile number, NOT my GV number, it rings a line that it is evidently being forwarded to.

I know that using Google voicemail in fact works by forwarding the call, but the weird part is that the number it's trying to call is an old one that I removed from Google Voice. I simply deleted it, but Google Voice keeps forwarding to it. Why is it doing this, and how can I make it stop?
Mobile phone forwards even though I've turned forwarding off
google voice
null
_unix.234176
I have noticed I can't lock the screen because vlock hangs. chvt also hangs on VT_WAITACTIVE and only attempts to switch the terminal (I see the image blinking). Pressing keys that usually switch VTs in text mode (Alt+arrows, Alt+Fx) also makes the Xorg picture blink and hides the cursor for a second. The keyboard input also flows to the text console in addition to Xorg. I noticed a stray session at tty2. I can log in to tty2 without leaving Xorg:

```
# ps aux | grep qwjerewqr
vi       27051  0.0  0.0   3644  1144 tty2    S+   20:38   0:00 grep qwjerewqr
root     27053  0.0  0.0   3644  1236 pts/27  SN+  20:38   0:00 grep qwjerewqr
```

So:

- What may have caused this?
- How do I bring the system back to normal without rebooting (or restarting X applications)?
- Shall I file a bug report somewhere?

Update: Finally just rebooted it (because of another unrelated problem).
Why can't I switch virtual terminal now?
linux;xorg;chvt
null
_unix.77235
Note: This situation is evolving. Please see the output at the end of this question for the current situation.

I'm using Linux Mint 14. Recently, I repartitioned one of my hard drives, and I backed up all my data to an external USB 500GB drive. Now I am trying to copy that data back. However, the drive is behaving strangely. When I plug it in, it shows up on my desktop... but when I right click and select to view Properties, underneath the Permissions tab, it says:

```
The permissions of USB500 could not be determined.
```

Also, it shows that 359.7 GB of the drive is used, which is about the right amount for the data I backed up... but when I look into the drive, most of the files are not visible to me. For example, when I look at the properties of the dave directory, where most of my files are backed up, it says it's only 1.3 MB in size. There should be more like 300GB in that folder.

Why are most of my files seemingly missing when the data used on disk is so much larger? Why are the permissions not determined? Most importantly, what is the safest process for me to diagnose problems and retrieve the data off this disk?

Update: This output was requested in an answer below:

```
$ mount | grep USB500
/dev/sdd1 on /media/dave/USB500 type ext4 (rw,nosuid,nodev,uhelper=udisks2)
```

When I ran the following command, I got a huge stream of lines that ended with Input/output error (too many to reproduce here, but they're all basically the same):

```
$ sudo find /media/dave/USB500 -ls | less
find: `/media/dave/USB500/dave/.guayadeque': Input/output error
find: `/media/dave/USB500/dave/.compiz-1': Input/output error
find: `/media/dave/USB500/dave/.anthy': Input/output error
find: `/media/dave/USB500/dave/.compiz': Input/output error
find: `/media/dave/USB500/dave/Apache_Logs': Input/output error
find: `/media/dave/USB500/dave/.avidemux': Input/output error
find: `/media/dave/USB500/dave/.dvdrip': Input/output error
```

On the one hand, this seems to indicate a possible hardware failure(?). On the other hand, it is listing all the files I'm hoping to recover... so is it possible they are accessible? I really hope so...

Update 2: I tried running fsck in hopes of repairing the drive and recovering at least some data, but I got this response:

```
$ sudo fsck -y /dev/sdb
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
fsck.ext2: Attempt to read block from filesystem resulted in short read while trying to open /dev/sdb
Could this be a zero-length partition?
```

Sometimes the drive does not seem to mount; I have not determined a pattern as to what causes it to become un-mountable. After turning it off and on, or after a reboot, I seem to be able to get it back. I removed the hard drive from its USB interface and connected it to the computer's internal P/SATA bus, but the drive behaved exactly the same, with identical symptoms.

I believe this drive is dying, so right now the goal is to try to find some way to access the data that should be there just long enough to copy as much of it as possible. Suggestions on how I might do that would be most appreciated. There are actually just a few key directories I would like to access, so hopefully, if the failure is not too global, I might be able to get what I need.

Update 3: I'm currently running the following command. I started it a day ago, and it's still going. I hope it is doing something useful. If anyone can confirm for me what the output below indicates about its progress, that would be very helpful.

```
$ sudo ddrescue -r3 /dev/sdb /home/dave/RECOVERY/usb500.image /home/dave/recovery_usb500.logfile
Press Ctrl-C to interrupt
Initial status (read from logfile)
rescued:         0 B,  errsize:       0 B,  errors:       0
Current status
rescued:         0 B,  errsize:  500 GB,  current rate:        0 B/s
   ipos:   5366 MB,   errors:       1,    average rate:        0 B/s
   opos:   5366 MB,   time from last successful read:       1 d
Splitting failed blocks...
```
USB 500GB external drive mounted with undetermined permissions, missing data
permissions;usb;hard disk;data recovery;external hdd
null
_unix.168608
I want to develop a C++ library that can be used on multiple Linux distributions like RHEL, Suse, Ubuntu etc...If I compile my source code into a .so (shared library) on one Linux environment(say RHEL), will it work on other environments also without being recompiled? Are the gcc and C/C++ libraries different in different environments?
Common library for all linux distributions
gcc;libraries;c++;shared library
> If I compile my source code into a .so (shared library) on one Linux environment (say RHEL), will it work on other environments also without being recompiled?

In general, no. You want to use a build system that supports portability. Autotools is the standard. An alternative is CMake.
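As one concrete illustration, a minimal CMake setup for building a shared library (project, target, and file names here are invented):

```cmake
cmake_minimum_required(VERSION 3.10)
project(mylib CXX)

# Produces libmylib.so on Linux from the listed sources
add_library(mylib SHARED src/mylib.cpp)
target_include_directories(mylib PUBLIC include)
```

Rebuilt from source on each target distribution (or built against a suitably old glibc/libstdc++), the same CMakeLists.txt works unchanged on RHEL, SUSE, and Ubuntu; it is the binary .so, not the build description, that does not port.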
_softwareengineering.197531
What work paths exist to identify reasonable vertical slices of a classic n-tier platform code base and infrastructure (enterprise size)? Regardless of refactor or a new solution, I think there will be the same approach (more or less?), so I'm primarily interested in the steps and in defining modules and functions, then testing whether each is a functional unit or approved as a vertical.

To extend my thoughts: I understand that a vertical should be an actor, a kind of business value. But is a communication system an actor, or a functional unit in a vertical slice? Probably yes and no (depends on the enterprise), but how to decide? Is shipment a part of Order, or are both Order and Shipment different verticals? What should a vertical slice consist of to be a true vertical, in terms of levels of functions, like its own data store, its own client configuration tool, its own business intelligence calculation, and so on? By the way: it's a requirement that the verticals should be deployed and operable independently of each other.

Maybe there is a kind of yes/no schema to use? Is this X used for Y? Yes - then it's a vertical. Is this X used with Z? Yes - then it's an anti-pattern for a vertical. Does X depend on the function YY? If yes, then... and so on. Links to success stories about defining verticals are also of interest.
Best practice refactor n-tier into vertical slices architecture
architecture;refactoring;enterprise architecture
null
_webapps.12427
How do I set up a filter in Gmail account so that invitations from people not in my contact list are automatically deleted (and the event removed from my calendar)?
Filter out Google Calendar invites from people not in my contact list
google calendar;google contacts
null
_codereview.89976
I will start by explaining the situation and then show some code. I have an app that has an array of default settings to display on a user's dashboard. The user can modify these settings to enable or disable certain items that appear on their dashboard. I want to be able to update the default settings, and if I add a new item then it will automatically update the user's settings as well, but without overwriting their personal enable/disable setting.

I have this array of dictionaries, the default:

```objc
dashboardItems = @[
    @{ @"id" : @1, @"order" : @0, @"title" : @"Steps Traveled",    @"unit" : @"",    @"type" : HKQuantityTypeIdentifierStepCount,              @"sampleUnit" : @"count",     @"enabled" : @1 },
    @{ @"id" : @2, @"order" : @1, @"title" : @"Flights Climbed",   @"unit" : @"",    @"type" : HKQuantityTypeIdentifierFlightsClimbed,         @"sampleUnit" : @"count",     @"enabled" : @1 },
    @{ @"id" : @3, @"order" : @2, @"title" : @"Distance Traveled", @"unit" : @"mi",  @"type" : HKQuantityTypeIdentifierDistanceWalkingRunning, @"sampleUnit" : @"mi",        @"enabled" : @1 },
    @{ @"id" : @4, @"order" : @3, @"title" : @"Active Calories",   @"unit" : @"",    @"type" : HKQuantityTypeIdentifierActiveEnergyBurned,     @"sampleUnit" : @"kcal",      @"enabled" : @1 },
    @{ @"id" : @5, @"order" : @4, @"title" : @"Weight",            @"unit" : @"lbs", @"type" : HKQuantityTypeIdentifierBodyMass,               @"sampleUnit" : @"lb",        @"enabled" : @1 },
    @{ @"id" : @6, @"order" : @5, @"title" : @"Cycling",           @"unit" : @"mi",  @"type" : HKQuantityTypeIdentifierDistanceCycling,        @"sampleUnit" : @"mi",        @"enabled" : @1 },
    @{ @"id" : @7, @"order" : @6, @"title" : @"Heart Rate",        @"unit" : @"BPM", @"type" : HKQuantityTypeIdentifierHeartRate,              @"sampleUnit" : @"count/min", @"enabled" : @1 },
    @{ @"id" : @8, @"order" : @7, @"title" : @"BMI",               @"unit" : @"",    @"type" : HKQuantityTypeIdentifierBodyMassIndex,          @"sampleUnit" : @"count",     @"enabled" : @1 }
];
```

So the user can use a switch to change the enabled from a 1 to a 0 or vice-versa to turn the item on or off, with me so far?

When the user signs up, the app saves the default settings to NSUserDefaults and also to the live server as a backup in case the user ever deletes the app. If the user logs out or deletes the app, on next login we check to see if they currently have any NSUserDefaults saved; if not, we pull from the server. Then I compare the 2 arrays to see if any items were changed or added to the defaults. Is this the best way to go about doing it?

```objc
// here we need to check if the user does not have any preferences saved; if not, pull them from Parse.
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSMutableArray *dashboardItems = [defaults objectForKey:@"dashboardItems"];
if (dashboardItems == nil) {
    // user has no preferences so load them in
    dashboardItems = [NSJSONSerialization JSONObjectWithData:[user[@"preferences"] dataUsingEncoding:NSUTF8StringEncoding] options:0 error:nil];
    [defaults setObject:dashboardItems forKey:@"dashboardItems"];
    [defaults synchronize];
}

// compare current preferences against defaults to see if any new items were changed
NSArray *defaultPreferences = [[PreferencesManager sharedManager] dashboardItems];
for (NSDictionary *item in defaultPreferences) {
    for (NSDictionary *userItem in dashboardItems) {
        if ([item[@"id"] isEqualToString:userItem[@"id"]]) {
            // we have a match, compare name and change if necessary
            if (![item[@"title"] isEqualToString:userItem[@"title"]]) {
                // set user's item title to default title
                [userItem setValue:item[@"title"] forKey:@"title"];
            }
        } else {
            // we did not find this in user preferences, so add it
            [dashboardItems addObject:item];
        }
    }
}

// save defaults
[defaults setObject:dashboardItems forKey:@"dashboardItems"];
[defaults synchronize];
```
Comparing 2 arrays of dictionaries and saving user preferences
objective c;ios
```objc
// we have a match, compare name and change if necessary
if (![item[@"title"] isEqualToString:userItem[@"title"]]) {
    // set user's item title to default title
    [userItem setValue:item[@"title"] forKey:@"title"];
}
```

This sort of code is a bit redundant, and also shows inconsistency between using and not using the modern syntax for dictionaries. We can rewrite it as simply:

```objc
userItem[@"title"] = item[@"title"];
```

We don't need to check whether or not they're different titles.

The amount of vertical white space in your code is excessive and distracting. We can eliminate most of these empty lines. I don't think they add anything other than making me work my scroll wheel a bit more.

Calling synchronize on NSUserDefaults is largely unnecessary. NSUserDefaults loads the defaults into memory. When you set a value, it sets it in memory (but doesn't save it to permanent storage yet). Next time you access the value for that key, it will serve you the value you set (which it has in memory), regardless of whether or not it has taken the time to write it to permanent storage yet.

NSUserDefaults tries to find opportune, low-cost times to write to permanent storage when it knows it has pending changes. Worst case scenario, it will be sure to write to permanent storage just before your app is exited. The only way to lose something that is in NSUserDefaults in memory but not yet written to permanent storage is if your app crashes without NSUserDefaults having an opportunity to write.

All that calling synchronize does is make your app take the time to force NSUserDefaults to write its current state to permanent storage. That's it. It's completely unnecessary, and your app shouldn't be crashing and losing its temporary storage stuff anyway.
_unix.26879
I have a home network behind a router, and on it my proxy server with ports forwarded through the router. Some clients connect to the Internet via my proxy, but there is a little problem - sometimes they cannot send files by email or HTML form (I'm using Squid as the proxy). So I've tried to set up a PPTP VPN with PoPTop, but connecting through my router causes a 619 error. The server runs Fedora 16. What is the best way to provide an Internet connection to clients (VPN, another proxy configuration, or something else)?
Proxy or VPN behind the router
fedora;vpn;openvpn;squid
null
_webapps.62752
What is the keyboard shortcut for "filter messages like this" in Gmail?
What is the hotkey for Gmail's `filter messages like this`?
gmail
As mentioned already, there is no direct shortcut for the "Filter messages like this" menu option. However, providing you have Keyboard Shortcuts enabled in settings, you can:

- Hit . (period / full stop) to drop down the More menu
- Then use the cursor keys to navigate down to the relevant menu option
- Then hit Enter
_unix.185444
> Thank you for the reply. This is not what I am after. The ssh from server 0 to (1,2,3) needs no password, but a remote script on servers (1, 2 & 4) which starts a service/application needs a password for it to start. I want to be able to ssh and run this automatically, so ssh would expect a password prompt from the remote script and answer with a password. - Sal Allan

I want to create the cron job to run every Monday morning at 5AM. Example:

```
ssh -n -o StrictHostKeyChecking=no server1 /app/pkg/solaris/start_script
ssh -n -o StrictHostKeyChecking=no server2 /app/pkg/solaris/start_script
ssh -n -o StrictHostKeyChecking=no server3 /app/pkg/solaris/start_script
```

When I run the above script from a cron job on a remote server every Monday morning, the application/start_script will prompt for a password to start the script. This is the application password; say the password is BOB.

```
Please enter password:
```

Is there a way to answer the password prompt (raised by the start_script) without any manual interaction? The password prompt is for the application and not the server I ssh to. It's all a trusted domain from one admin group.
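Since no answer is recorded here, a hedged sketch of the classic tool for this job, expect; the prompt string and password are taken from the question, everything else is an assumption:

```
#!/usr/bin/expect -f
# answer-start.exp -- hypothetical wrapper for one server.
# ssh's -n is dropped so the remote script can read the answer from stdin.
set timeout 120
spawn ssh -o StrictHostKeyChecking=no server1 /app/pkg/solaris/start_script
expect "Please enter password:"
send "BOB\r"
expect eof
```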
I want to create a cron job that runs a command on a remote linux box using ssh but I want ssh to answer password
bash;shell script;ssh;autocomplete
null
_unix.386763
I have a service file in systemd that has this:

```
ExecStart=/sw/consumerlag/prometheus-kafka-consumer-group-exporter --consumer-group-command-path=/sw/confluent/bin/kafka-consumer-groups --extra-kafka-args '--command-config /sw/config/ssl.properties' --max-concurrent-group-queries 4 --listen ':9099' node01:9092
```

but I keep getting this error:

```
Aug 17 18:23:01 node01 prometheus-kafka-consumer-group-exporter: time=2017-08-17T18:23:01Z level=fatal msg=listen tcp: address tcp/9099\: unknown port
```

but if I just run this in the terminal itself, it works:

```
/sw/consumerlag/prometheus-kafka-consumer-group-exporter --consumer-group-command-path=/sw/confluent/bin/kafka-consumer-groups --extra-kafka-args --command-config /sw/config/ssl.properties --max-concurrent-group-queries 4 --listen :9099 node01:9092
```

Why can't systemd run this though?
systemd not reading command correctly
linux;systemd
null
_unix.139539
So, I am installing a java app that I wrote which uses jnetpcap. This requires libpcap of at least v1.0.0. My CentOS 5.8 only has libpcap 0.9.4, which is required by other installed packages. I've got the RPM for libpcap 1.4.0 built, but when I try to install it, I get the following:

```
# rpm -Uvh /root/rpmbuild/RPMS/i386/libpcap-1.4.0-1.i386.rpm
error: Failed dependencies:
        libpcap.so.0.9.4 is needed by (installed) ppp-2.4.4-2.el5.i386
        libpcap.so.0.9.4 is needed by (installed) isdn4k-utils-3.2-56.el5.i386
        libpcap >= 14:0.8.3-6 is needed by (installed) ppp-2.4.4-2.el5.i386
```

and checking the dependencies of one of those:

```
# rpm -qR ppp-2.4.4-2.el5.i386
...
libpcap >= 14:0.8.3-6
libpcap.so.0.9.4
```

Updating the OS is a non-starter; plus, it's a closed system, never net connected, so it matters little. Now, I may be able to remove the packages which are stalling things, but, assuming I cannot, how can I force the install of this package such that it will satisfy the older dependency requirements? i.e.: have it provide libpcap 0.9.4 so as to satisfy the requirements of already installed software.
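No answer is recorded, but a hedged sketch of the spec-file route the question points at: declaring the old capabilities so existing packages stay satisfied (version strings copied from the error output; whether ppp actually runs against the 1.4 ABI is a separate problem, since RPM would be satisfied but the loader still needs a real libpcap.so.0.9.4):

```
# in libpcap.spec (hypothetical additions)
Provides: libpcap.so.0.9.4
Provides: libpcap = 14:0.9.4
```

The blunt alternative is forcing it past the check entirely with rpm -Uvh --nodeps, which installs without satisfying the recorded dependencies.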
Make RPM Satisfy Dependency Requirements of Other Installed Packages
linux;centos;rpm;dependencies
null
_codereview.160946
I just started picking up Elixir (I don't have any functional programming background either); I would appreciate any feedback you have!

The kata is about writing two functions:

- uppercase/1: takes a string and upcases every other word
- unvowel/1: takes a string and removes the vowels from every other word

And here's my solution:

```elixir
defmodule Schizo do
  @moduledoc false

  require Integer

  @vowels ~w[a i u e o]

  @doc ~S"""
  Upcases every other word in a sentence.

  ## Examples

      iex> Schizo.uppercase("elixir is pretty rad")
      "elixir IS pretty RAD"
  """
  def uppercase(str) do
    str |> transform_every_other_word_with(&String.upcase/1)
  end

  @doc ~S"""
  Removes the vowels from every other word in a sentence.

  ## Examples

      iex> Schizo.unvowel("how are things really?")
      "how r things rlly?"
  """
  def unvowel(str) do
    str |> transform_every_other_word_with(&(String.replace(&1, @vowels, "")))
  end

  defp transform_every_other_word_with(str, transformer) do
    str
    |> String.split(" ")
    |> transform_every_other_item_with(transformer)
    |> Enum.join(" ")
  end

  defp transform_every_other_item_with(list, transformer) do
    list
    |> Stream.with_index
    |> Enum.map(transform_odd_index_item_with(transformer))
  end

  defp transform_odd_index_item_with(transformer) do
    fn
      {item, index} when Integer.is_even(index) -> item
      {item, index} when Integer.is_odd(index) -> transformer.(item)
    end
  end
end
```
Schizo kata in elixir
beginner;elixir
null
_cs.2792
I have recently implemented Paull's algorithm for removing left-recursion from context-free grammars:

Assign an ordering $A_1, \dots, A_n$ to the nonterminals of the grammar.

for $i := 1$ to $n$ do begin
$\quad$ for $j:=1$ to $i-1$ do begin
$\quad\quad$ for each production of the form $A_i \to A_j\alpha$ do begin
$\quad\quad\quad$ remove $A_i \to A_j\alpha$ from the grammar
$\quad\quad\quad$ for each production of the form $A_j \to \beta$ do begin
$\quad\quad\quad\quad$ add $A_i \to \beta\alpha$ to the grammar
$\quad\quad\quad$ end
$\quad\quad$ end
$\quad$ end
$\quad$ transform the $A_i$-productions to eliminate direct left recursion
end

According to this document, the efficiency of the algorithm crucially depends on the ordering of the nonterminals chosen in the beginning; the paper discusses this issue in detail and suggests optimisations.

Some notation: We will say that a symbol $X$ is a direct left corner of a nonterminal $A$, if there is an $A$-production with $X$ as the left-most symbol on the right-hand side. We define the left-corner relation to be the reflexive transitive closure of the direct-left-corner relation, and we define the proper-left-corner relation to be the transitive closure of the direct-left-corner relation. A nonterminal is left recursive if it is a proper left corner of itself; a nonterminal is directly left recursive if it is a direct left corner of itself; and a nonterminal is indirectly left recursive if it is left recursive, but not directly left recursive.

Here is what the authors propose:

- In the inner loop of Paull's algorithm, for nonterminals $A_i$ and $A_j$, such that $i > j$ and $A_j$ is a direct left corner of $A_i$, we replace all occurrences of $A_j$ as a direct left corner of $A_i$ with all possible expansions of $A_j$.
- This only contributes to elimination of left recursion from the grammar if $A_i$ is a left-recursive nonterminal, and $A_j$ lies on a path that makes $A_i$ left recursive; that is, if $A_i$ is a left corner of $A_j$ (in addition to $A_j$ being a left corner of $A_i$).
- We could eliminate replacements that are useless in removing left recursion if we could order the nonterminals of the grammar so that, if $i > j$ and $A_j$ is a direct left corner of $A_i$, then $A_i$ is also a left corner of $A_j$.
- We can achieve this by ordering the nonterminals in decreasing order of the number of distinct left corners they have.
- Since the left-corner relation is transitive, if $C$ is a direct left corner of $B$, every left corner of $C$ is also a left corner of $B$. In addition, since we defined the left-corner relation to be reflexive, $B$ is a left corner of itself. Hence, if $C$ is a direct left corner of $B$, it must follow $B$ in decreasing order of number of distinct left corners, unless $B$ is a left corner of $C$.

All I want is to know how to order the nonterminals in the beginning, but I don't get it from the paper. Can someone explain it in a simpler way? Pseudocode would help me to understand it better.
Removing Left Recursion from Context-Free Grammars - Ordering of nonterminals
algorithms;context free;formal grammars;efficiency;left recursion
This is actually not very complicated. I'll assume that epsilon productions have already been eliminated from the language, because that will only obscure the key concept of left corner.

Form a graph G where the vertices are all the non-terminals of the grammar. Now draw a directed edge from A to B if there is any production rule that looks like A -> B[...]. The paper calls B a direct left corner of A. More generally, some other non-terminal C is called a left corner of A if there is some path from A to C along the edges of this graph. This can be done by computing the transitive closure of G, call it H.

The paper suggests ordering the vertices by counting the number of left corners each vertex A has (i.e. how many other non-terminals you can reach from A, or the out-degree of A in the graph H), then sorting them in decreasing order by this number.

One handwavy motivation for this policy is that if there is an important non-terminal (say the starting symbol S) with connections to many other symbols, then it makes sense to purge left-recursion from S early on, because if you leave it till later there will be more copies of S that you need to expand out. I think the explanation in the paper is more convincing, but perhaps less obvious.
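Since the question explicitly asks for pseudocode, here is a small Python sketch of just the ordering step described above (the grammar representation is an assumption: a dict from nonterminal to lists of right-hand sides):

```python
def order_nonterminals(grammar):
    """Sort nonterminals in decreasing order of their number of left corners."""
    # direct-left-corner edges, restricted to nonterminals (graph G)
    direct = {A: {rhs[0] for rhs in prods if rhs and rhs[0] in grammar}
              for A, prods in grammar.items()}
    # reflexive-transitive closure via a naive fixpoint loop (graph H)
    closure = {A: direct[A] | {A} for A in grammar}
    changed = True
    while changed:
        changed = False
        for A in grammar:
            for B in list(closure[A]):
                if not closure[B] <= closure[A]:
                    closure[A] |= closure[B]
                    changed = True
    return sorted(grammar, key=lambda A: len(closure[A]), reverse=True)

# tiny check: S -> A x and A -> A y | z gives S the left corners {S, A},
# while A only has {A}, so S comes first
g = {"S": [["A", "x"]], "A": [["A", "y"], ["z"]]}
print(order_nonterminals(g))  # ['S', 'A']
```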
_webapps.62531
Say I'm an admin of Page X on Facebook, i.e. I can go to it and see:

"You are posting, commenting, and liking as Page X"

and there's a button to go back to using my own account. The WordPress Sharing settings have one simple button to "Connect to Facebook". If I click it, it offers to connect the WordPress blog to my own personal account, not Page X. Is there any way to connect WordPress sharing to the Facebook page without logging out as me and logging in as Page X? I don't have the page's login details at the moment.
Connect Wordpress blog to a Facebook account where I'm an admin
facebook;wordpress
null
_unix.34660
I work on Linux on my laptop. I could not access a particular website using its URL, so I ran sudo /etc/init.d/nscd restart in order to clear the DNS cache, but the URL still throws 'Server Not Found' in Firefox. I have also tried Chrome, and it is still not working. Other friends can see the web page, but I cannot. So what could be the main cause of this? I can surf other sites fine. Weirdly enough, when I try the IP address behind that particular URL, it shows me a different page than what other people see. I appreciate any help on this matter.
Clearing DNS Cache in Linux
linux;dns
Unless you are running bind by accident, you should check your nscd configuration file, located at /etc/nscd.conf. It will list the caches that are kept:

enable-cache            hosts   yes
positive-time-to-live   hosts   3600
.......

# nscd -?
-g, --statistics           Print current configuration statistics
-i, --invalidate=TABLE     Invalidate the specified cache

# nscd -g
hosts cache:
    yes     cache is enabled
    no      cache is persistent
    yes     cache is shared
    211     suggested size
    216064  total data pool size
    384     used data pool size
    600     seconds time to live for positive entries
    0       seconds time to live for negative entries
    0       cache hits on positive entries
    0       cache hits on negative entries
    128     cache misses on positive entries
    0       cache misses on negative entries
    0%      cache hit rate
    3       current number of cached values
    7       maximum number of cached values
    2       maximum chain length searched
    0       number of delays on rdlock
    0       number of delays on wrlock
    0       memory allocations failed
    yes     check /etc/{hosts,resolv.conf} for changes

# nscd -i hosts

This will invalidate the cache. But after doing it there was no change to the hosts entries in nscd -g. After restarting nscd it was flushed:

service nscd restart

# nscd -g
hosts cache:
    yes     cache is enabled
    no      cache is persistent
    yes     cache is shared
    211     suggested size
    216064  total data pool size
    0       used data pool size
    600     seconds time to live for positive entries
    0       seconds time to live for negative entries
    0       cache hits on positive entries
    0       cache hits on negative entries
    0       cache misses on positive entries
    0       cache misses on negative entries
    0%      cache hit rate
    0       current number of cached values
    0       maximum number of cached values
    0       maximum chain length searched
    0       number of delays on rdlock
    0       number of delays on wrlock
    0       memory allocations failed
    yes     check /etc/{hosts,resolv.conf} for changes

Unless you are running bind, this is the only way to clear the cache, short of finding the database for nscd and deleting it, which could cause other issues. I would follow the troubleshooting procedures for IP resolution; I outlined some in the comments to your question. This is a link to a pretty good Linux Journal article on Troubleshooting Network Problems.
_unix.299076
I am currently writing something to parse some Apache logs, yet the system command seems to be putting itself above print. I haven't done a whole lot with awk, so it could be something very simple.

IFS=$'\n'
for ja in `cat test.apache.access_log | awk '{print $1}' | sort -n | uniq -c | sort -rn | head -3`
do
    echo $ja | awk '{print "Count\tIP\t\tNSLookup"}{print $1"\t",$2,"\t\t",system("nslookup "$2"|grep name")}'
done

What I get:

Count   IP              NSLookup
RR.ZZ.YY.XX.in-addr.arpa        name = ja.server.net.
241     XX.YY.ZZ.RR              0

What I would like to see is:

Count   IP              NSLookup
241     XX.YY.ZZ.RR     RR.ZZ.YY.XX.in-addr.arpa name = ja.server.net.
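What is likely happening here (a hedged explanation, since I can't test against your exact log): awk's print output goes through a stdio buffer, while system() starts a child process whose output is written straight to the terminal, so the child's lines jump the queue; on top of that, system() returns the command's exit status, which is the stray 0 in the output above. One common fix is to capture the command's output with getline instead of system() (a sketch; in gawk, calling fflush() before system() also works):

echo $ja | awk '{
    cmd = "nslookup " $2 " | grep name"
    cmd | getline result      # read the output instead of letting it race print
    close(cmd)
    print "Count\tIP\t\tNSLookup"
    print $1 "\t" $2 "\t\t" result
}'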
AWK output help
awk
null
_cs.73992
Problem: Let $G = (V,E)$ be an infinite digraph, such that $V = \mathbb{N}$ and $E \subset \mathbb{N}\times \mathbb{N}$ is a decidable set. Does it imply that $\delta (i,j)$ is a total function? (Here $\delta (i,j)$ is the shortest-path function between vertices $i$ and $j$.)

I'm having a hard time trying to understand the problem, so any help would be appreciated. Well, this is what I know (hopefully I'm not misunderstanding anything here):

A language (or set) $L$ is decidable if $\exists$ an algorithm $A$ such that if $v \in L$, then $A(v) = \text{Accept}$ and halts, and if $v\not\in L$, then $A(v) = \text{Reject}$ and halts.

A function $f$ is total if $\exists$ an algorithm $B$ that computes it $\forall v\in \mathrm{Dom}(f)$ and always halts.

My attempt: Suppose $\delta (i,j)$ is a total function and let algorithm $B$ be the Bellman-Ford algorithm ($BFA$). The relaxation step in $BFA$ is given by

// Step 2: relax edges repeatedly
for i from 1 to size(vertices)-1:
    for each edge (u, v) with weight w in edges:
        if distance[u] + w < distance[v]:
            distance[v] := distance[u] + w
            predecessor[v] := u

Because $|V| = \infty$, we have that the algorithm never halts, since size(vertices) - 1 $= |V| -1 = \infty$. This implies that $BFA$ doesn't compute $\delta (i,j) \implies \delta (i,j)$ is not a total function. (So, to my understanding, it would be only a partial function, since $\exists$ some vertices $u,\ v$ for which the function $\delta (u,v)$ is undefined.)

Although at first glance this makes some sense to me, I guess I'm wrong, mainly because I didn't consider the fact that $E$ is a decidable set.
Let $G = (V,E)$ be an infinite digraph: $E \subset \mathbb{N}\times \mathbb{N}$ is decidable. Does it imply that $\delta (i,j)$ is a total function?
algorithms;graph theory;computability
This function is not necessarily total, even when the set of edges is decidable. Note that you haven't proved it, but simply showed why we can't directly use one classic algorithm for finite graphs; perhaps some smarter algorithm could still exist.

Imagine $G$ is a universal configuration graph. To explain what I mean, let's first discuss what the configuration graph of a Turing machine $M$ is. Given a Turing machine $M$, let $G_{M}$ denote the graph whose vertices are configurations of the form $C=(q,s_1,s_2)$ where $q$ is the machine state, and $s_1,s_2$ are the strings to the right and left of the read/write head correspondingly (the first symbol of $s_2$ is the current symbol under examination). $(C_1,C_2)\in E$ if you can go from $C_1$ to $C_2$ in a single step. The number of vertices in $G_M$ is countable (since we go over all possible contents of the tape).

Now I want to define $G$ as $\bigcup\limits_{M} G_{M}$. The graph $G$ is the disjoint union of all possible configuration graphs (so we go over all Turing machines; note that the number of vertices remains countable). To tie each vertex (configuration) in $G$ to the graph $G_{M}$ it belongs to, we can add the encoding of the machine to the description of each vertex. The set of edges of $G$ is clearly decidable, since given two vertices $v_1=\left(\langle M\rangle , C_1\right)$ and $v_2=\left(\langle M'\rangle , C_2\right)$, we check if $\langle M\rangle=\langle M' \rangle$, and if so, we check if $M$'s transition function allows us to move from $C_1$ to $C_2$ in a single step.

If reachability in $G$ were decidable, then you could decide the halting problem. Given a Turing machine $M$ and input $x$, check if the accepting configuration of $M$ is reachable from the initial configuration of $M$ on input $x$ (for simplicity, you can assume there is a unique accepting configuration).
_cs.3273
Updated Algorithm: There was a major flaw in my original presentation of the algorithm which could have impacted the results. I apologize for the same. The correction has been posted underneath. The original algorithm posted had a major flaw in its working. I tried my best but could not get the desired accuracy in presenting the algorithm in pseudocode and/or set-theory notation. I am thus posting Python code, which has been tested and produces the desired results. Note that my question, however, remains the same: what is the time complexity of the algorithm (assuming that powersets are already generated)?

# mps is a set of powersets; below is a sample (test case)
mps = [
    [ [], [1], [2], [1,2] ],
    [ [], [3], [4], [3,4] ],
    [ [], [5], [6], [5,6] ]
]

# Core algorithm
# enumerate(mps) may not be required in languages like C which support indexed loops
len = mps.__len__()
for idx, ps in enumerate(mps):
    if idx > len - 2:
        break
    mps[idx + 1] = merge(mps[idx], mps[idx+1])  # merge is defined below

# Takes two powersets and merges them
def merge(psa, psb):
    fs = []
    for a in psa:
        for b in psb:
            fs.append(list(set(a) | set(b)))
    return fs

Output: mps[-1]  # Last item of the list

Running the above example will result in listing out the powerset of $\{1,2,3,4,5,6\}$.
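For what it's worth, a back-of-the-envelope bound (my own sketch, assuming the $n$ ground elements are split into blocks of sizes $k_1, \dots, k_m$ and that a single union of sets with at most $n$ elements costs $O(n)$): the $i$-th call to merge combines $2^{k_1+\dots+k_i} \cdot 2^{k_{i+1}} = 2^{k_1+\dots+k_{i+1}}$ pairs, and the sum over all calls is dominated by its last term, i.e. $O(2^n)$ pairs in total. With the $O(n)$ union cost this gives $O(n \cdot 2^n)$ overall, which is essentially optimal: just writing down the final powerset already costs $\sum_{S \subseteq \{1,\dots,n\}} |S| = n 2^{n-1}$.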
What is the complexity of this subset merge algorithm?
algorithms;time complexity;algorithm analysis;check my algorithm
null
_softwareengineering.251185
TL;DR: Where, and possibly how, should I implement scope-based logic in the example code?

I have got an ASP.NET Web API. The API uses OData (on top of REST) for data endpoints and OAuth 2.0 authentication. Now I want to add functionality to handle an Access Token in several different scopes. Some of the scopes are:

User
Organization
Customer

The result set the API returns should be based on this scope. The same applies if someone wants to add, update or delete an entity. Now take for example the following endpoint:

http://myapi/Employees

Someone with a Token in the Organization scope does a GET and should only get the employees that are in the Organization that is defined within the Token. A user in the Customer scope should get the employees for all organizations within the customer defined in the Token. This functionality, however, is not available for someone with a Token in the User scope. The same applies to a POST: someone within the Organization scope should only be allowed to add a new employee to his own organization.

The functionality for the different scopes will usually only differ a little bit. In the case of getting the employees, the difference is only a WHERE on organization or otherwise on customer. In the case of a POST there should be a check to see if, for example, the organization is part of the customer that is bound to the customer Token.

Current implementation: currently we have a base ApiController which is generically typed. The base controller calls a Query- or CommandHandler with the entity type, and all the implementation details are inside these handlers.

ApiController

public abstract class ApiController<TEntity> : EntitySetController<TEntity, Guid> where TEntity : class, IApiEntity
{
    protected readonly IQueryProcessor QueryProcessor;
    protected readonly ICommandProcessor CommandProcessor;

    protected ApiController(IQueryProcessor queryProcessor, ICommandProcessor commandProcessor)
    {
        this.QueryProcessor = queryProcessor;
        this.CommandProcessor = commandProcessor;
    }

    [HttpGet]
    public override IQueryable<TEntity> Get()
    {
        var command = new GetEntityCommand<TEntity>();
        return this.QueryProcessor.Process(command);
    }
}

EmployeeController

public class EmployeeController : ApiController<Employee>
{
}

GetEmployeeHandler

public class GetEmployeeCommandHandler : IQueryHandler<GetEntityCommand<Employee>, IQueryable<Employee>>
{
    public IQueryable<Employee> Handle(GetEntityCommand<Employee> command)
    {
        // Code that knows how to get a collection of employees.
        // This collection contains all employees
    }
}

PostEmployeeHandler

public class PostEmployeeCommandHandler : ICommandHandler<PostEntityCommand<Employee>, Employee>
{
    public Employee Handle(PostEntityCommand<Employee> command)
    {
        // Code that knows how to add an employee to the database.
    }
}

The question: where, and possibly how, should I implement the scope-based logic? I know there has to be a smart way to do this and a good place to implement it, I just don't see it (yet).
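One direction worth considering (a sketch only; it assumes your QueryProcessor resolves handlers through a container that supports decorators, and ITokenScopeProvider is a hypothetical service that reads the scope and ids out of the Access Token): keep the handlers scope-agnostic and wrap them in a decorator that narrows the query before it executes.

using System.Linq;

// Sketch: a scope-aware decorator around any query handler that
// returns an IQueryable<TEntity>.
public class ScopeFilterQueryHandlerDecorator<TQuery, TEntity>
    : IQueryHandler<TQuery, IQueryable<TEntity>>
{
    private readonly IQueryHandler<TQuery, IQueryable<TEntity>> decorated;
    private readonly ITokenScopeProvider scope;  // hypothetical token reader

    public ScopeFilterQueryHandlerDecorator(
        IQueryHandler<TQuery, IQueryable<TEntity>> decorated,
        ITokenScopeProvider scope)
    {
        this.decorated = decorated;
        this.scope = scope;
    }

    public IQueryable<TEntity> Handle(TQuery query)
    {
        // The inner handler builds the full query; the decorator appends
        // the scope restriction (organization/customer) as a Where clause.
        return this.scope.ApplyFilter(this.decorated.Handle(query));
    }
}

A POST would get a matching command-handler decorator that validates the incoming entity against the token's scope and rejects it otherwise; that way the scope rules live in exactly one place per verb instead of being repeated in every handler.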
API - How to handle scope based functionality?
design;design patterns;separation of concerns;aspect oriented
null
_unix.327364
My C program uses fork() to create new processes, and I measure the time it takes for each one of these processes to do its work. In fact, I let them do the work 10000 times and measure that time. The measurements are done with gettimeofday(...) and the results are printed after each process is finished. To the point: with 24 parallel processes these times are centered around 180,000 usec. But when I increase to 25 processes, the time drops down and centers around 100,000 usec. Is this something Linux-specific? That by creating more processes I am given more resources by the OS? Or is it completely impossible and I am measuring the time wrong? It should be noted that the measurements increase from 1 to 24 processes and drop on the 25th.
C on Linux: Running 25 parallel processes is significantly quicker than running 24
linux;process;c
null
_unix.40082
I went through examples and the man page but couldn't figure out the difference between signalfd and sigwaitinfo. Apart from syntax, both are doing the same thing, i.e. waiting for a signal and storing its details into some structure. In the man page it is written:

This provides an alternative to the use of a signal handler or sigwaitinfo(2), and has the advantage that the file descriptor may be monitored by select(2), poll(2), and epoll(7).

Can anyone please explain what this exactly means?
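What that sentence is getting at: sigwaitinfo can only ever wait for signals, so a thread blocked in it cannot simultaneously watch sockets, pipes or timers. signalfd instead turns signal delivery into bytes readable from a file descriptor, so a single ordinary select/poll/epoll loop can multiplex signals together with any other descriptors. A minimal sketch (Linux-specific; error handling omitted):

#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void) {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigprocmask(SIG_BLOCK, &mask, NULL);   /* block normal delivery first */

    int sfd = signalfd(-1, &mask, 0);      /* signals become readable data */

    struct pollfd fds[1] = {{ .fd = sfd, .events = POLLIN }};
    poll(fds, 1, -1);                      /* sockets etc. could be watched here too */

    struct signalfd_siginfo si;
    read(sfd, &si, sizeof si);             /* analogous to the siginfo_t of sigwaitinfo */
    printf("got signal %d\n", si.ssi_signo);
    close(sfd);
    return 0;
}

In both cases the signals must be blocked with sigprocmask first; otherwise they are delivered the normal way instead of becoming readable.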
difference between signalfd and sigwaitinfo?
signals
null
_webmaster.72305
I'm launching a website as a hobby; it is somehow a Stack Overflow clone. I noticed a very weird thing that I can't understand, and I don't know if this is normal, because I'm pretty new to the webmastering thing. Within two hours after I launched the site (it still had no posts), some users started registering. What I can't figure out is how they reached the site: I didn't tell anyone about it, and it didn't appear in search engines... These users don't do anything; they just logged in and left (except for one who posted spam, so I deleted it). Some of them confirm their emails and some don't. There isn't any relation between them: from their IP addresses I know that they are from different countries, such as United States, Canada, Switzerland... I tried to message them but they didn't reply. I'm dying to know who these guys are and how they discovered this site. Why do they have interest in an empty site?

My questions:

Is it possible to know how they reached the site (what are the possibilities)?
Is it safe to keep them? What should I do?

Note: To request any further information please comment.
How did new users come to my site and is it safe to keep them?
web hosting;analytics;users
You just got introduced to bots :) Looks like you are running a vanilla version of software like phpBB or WordPress etc.

What are bots? (Source: Wikipedia)

Bots are an army of (mostly compromised) machines doing whatever their bot herder asks them to do. In most cases, they just post spam messages with links. Read more about command-and-control botnets at this Wikipedia page.

Is it possible to know how they reached the site (what are the possibilities)?

Constant crawling/scanning of networks, WHOIS databases and many other sources can reveal your site. Once I hosted one of my websites on Amazon EC2. The elastic IP address that I got assigned was one of the IP addresses that pinterest.com had been using before. As soon as my site was up, I got hundreds and thousands of hits from some sort of a Pinterest desktop client which (for some reason) was using the IP address (rather than the domain name) to reach the website. This is just one of many examples of how they can reach you.

Is it safe to keep them? What should I do?

The primary aim for most of them is to post links to websites, either to get SEO karma on those links, or because the links point to malicious websites and the aim is to lure your website's users into clicking on them and getting infected. They do not pose a direct threat to your server's security; however, posting a tonne of spam goes against a website's reputation and usefulness. There are plugins for WordPress/phpBB software (Akismet comes to the top of my head) which help prevent/block spam. Also, consider putting a CAPTCHA on your registration form. This alone will reduce 90% of the automated/bot registrations.

Finally, welcome to the Wild Wild Web (WWW)
_cs.65272
I don't know how to solve this. The main thing is, I do know how buffers work, but I don't know about display locations and lookup tables. A color frame buffer has 3 bits per display location and 6 bits per entry in three lookup tables. What is the total number of colors in the palette from which this frame buffer has to choose?
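One common reading of this kind of exercise, as a worked computation (treat it as a hint, since the wording varies between textbooks): the 3 bits per display location index one of $2^3 = 8$ lookup-table entries, so 8 colors can be on screen simultaneously; each entry carries $6+6+6 = 18$ bits across the three (red, green, blue) lookup tables, so the palette those 8 colors are chosen from has $2^{18} = 262144$ colors.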
Number of colors in frame buffer
graphics
null
_codereview.166165
I use this method as a platform-independent tick counter. Unfortunately, it consumes ~10% CPU time in one of my methods. Is there a faster way to solve the issue?

inline uint32_t GetTickCount() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}
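One possible answer, as a hedged sketch: on Linux, CLOCK_MONOTONIC_COARSE reads a kernel-maintained timestamp through the vDSO instead of querying the clock hardware, so it is typically far cheaper; the price is tick-granular resolution (usually 1-4 ms), which matches GetTickCount semantics anyway. Something like:

#include <chrono>
#include <cstdint>
#if defined(__linux__)
#include <ctime>
#endif

inline uint32_t GetTickCount() {
#if defined(__linux__)
    // coarse clock: tick resolution, but no hardware clock read
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
    return static_cast<uint32_t>(ts.tv_sec) * 1000u
         + static_cast<uint32_t>(ts.tv_nsec / 1000000);
#else
    using namespace std::chrono;
    return static_cast<uint32_t>(
        duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count());
#endif
}

Whether this actually removes the ~10% depends on how hot the call site is; it is worth profiling before and after.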
Optimize platform independent GetTickCount()
c++;performance;datetime;linux;windows
null
_unix.382693
I am looking for the largest among the files held open by all processes. lsof already lists the open files with their sizes, so this may be a matter of passing the right parameters to lsof and processing the output.
How to find the largest open files?
files;sort;size
You can use the -F option of lsof to get almost unambiguous output which is machine-parseable with only moderate pain. The output is ambiguous because lsof rewrites newlines in file names to \n.

The lsof output consists of one field per line. The first character of each line indicates the field type and the rest of the line is the field value. The fields are: p=PID (only for the first descriptor in a given process), f=descriptor, t=type (REG for regular files, the only type that has a size), s=size (only if available), n=name. The awk code below collects entries that have a size and prints the size and the file name. The rest of the pipeline sorts the output, keeps the entries with the largest sizes (42 here), and strips the sizes again.

lsof -Fnst | awk '
    { field = substr($0,1,1); sub(/^./, ""); }
    field == "p" { pid = $0; }
    field == "t" { if ($0 == "REG") size = 0; else next; }
    field == "s" { size = $0; }
    field == "n" && size != 0 { print size, $0; }
' | sort -k1n -u | tail -n42 | sed 's/^[0-9]* //'
_unix.216891
How do I make sudo remember my password for longer so that I don't have to keep typing it? I do not want to sudo su and execute commands as root all the time. I am on Arch Linux and have tried to Google this, but I get the change command, which is not what I'm after.
How do I make sudo remember my password for longer?
sudo;password
There is a timestamp_timeout option in your /etc/sudoers. You can set this option to a number of minutes; after that time it will ask for your password again. More info in man sudoers. And make sure you edit your sudoers file using visudo, which checks your syntax and will not leave you with a wrong configuration and an inaccessible sudo.
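For example (an illustrative snippet; edit with visudo):

Defaults    timestamp_timeout=30    # cache credentials for 30 minutes

A value of 0 makes sudo always ask, and a value below 0 makes the timestamp never expire.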
_webmaster.86309
Will there be any harm to Google indexing or Google ranking from showing random posts/ad listings on the front page of a website? The 5-6 posts on the front page will change every time a user refreshes the page.
Can it be a harm to google indexing or ranking on random posts?
google search console
In a nutshell, ideally ads should have no effect (good or ill) on rankings (which should be a measure of the site content only). Ultimately, though, you need to avoid ads that may affect your search engine rankings. Otherwise, ads are fine. This article is reasonably current and covers the basics, including an extended section on general ad types and considerations for rankings. As it states:

"Any advertisement that passes PageRank is against the Google guidelines."

Furthermore:

"A paid link or advertisement must have a nofollow or must be redirected through a file that is excluded by your robots.txt file. If you know what those are, you can use the varvy.com nofollow tool to check your pages."

Here is a link to the current Google Webmaster Guidelines:

"Make reasonable efforts to ensure that advertisements do not affect search engine rankings. For example, Google's AdSense ads and DoubleClick links are blocked from being crawled by a robots.txt file."
_opensource.719
Suppose I am working on a project. I publish this project on a website. I release it without a copyright notice OR a licence. But in the project title /description I say it is open source.My question:Under these circumstances is the project open sourced? If yes, under what conditions?
Can a project be open source even if you don't have a license?
unlicensed code
No.You can claim it's open source, but it wouldn't be true. If you don't declare under what agreement people can use the work, or what rights they have, they should legally assume they have no rights (i.e. it is entirely your copyright).The fact that you haven't included a copyright notice doesn't matter: a CN is a nicety which is in fact there to remind people of the fact that it belongs to you and you can't use it in any way not allowed by the license. The Lack of a CN doesn't remove your copyright.
_opensource.1525
There are systems consisting of wholly free software, and in some cases the hardware is also driven by free software.What about mice? Do mice have software on them which would make them nonfree? I've plugged this mouse into Ubuntu, Debian and Trisquel Linux distributions and was never prompted to install any nonfree software, which leads me to assume that mice are generic and just send keycodes to the computer.Computer Mice seem like trivial devices. But I do own a more sophisticated mouse which can be loaded up with macros, so it doesn't seem like mice are necessarily so trivial, and the software driving those macros is probably nonfree. Would using either of these mice exempt me from running a wholly free system? (Or would the supposed triviality of such software--perhaps it is unable to make network requests, for instance--make it a non-issue?)
Can a mouse be free software?
hardware
Trivially, a mouse can't be free software, because a mouse is not software. So what other questions could this be?

Can a mouse's drivers be free and/or open-source software?
Can a mouse's design be free/open?
Can a mouse's firmware be free/open?

To all questions, the answer is, obviously, yes, it can be. Whether there exist advanced mice with non-free firmware (the code on the chips in the mouse), I don't know, but from a gut feeling* I assume it won't be all that many. The same goes for the design. I doubt many high-end manufacturers would go for an open design, as one of their main selling points is often the design itself. As for the drivers, there usually exist free drivers for all consumer peripherals.

*warning: answer contains a gut feeling. There are hardware manufacturers that embraced open technology, but not many, and gaming mice, the kind that usually have macros, are not generally closely affiliated with the open source or free software movements.
_unix.167690
I am new to the *nix world. I have a Lenovo T440 laptop with Ubuntu 13.10 and am looking to dock it in its station. I have 2 ASUS external monitors connected to it. Ideally, I am looking to have the display on the external monitors, with an extended view and no display on the laptop, when I dock it. But I get a twin view on the external monitors, which are extended views of my laptop. When I check the display settings, it recognizes just one external monitor instead of 2. It has an Intel graphics card and does not have an NVIDIA one. How do I fix this?
Ubuntu external dual monitors set up problem
dual monitor;x server;display settings
null
_webmaster.81513
There are some URLs like http://domain.com/user/:username and http://domain.com/post/:id on my website; the username and id are parameters from the database. I am trying to let the Google crawler work more efficiently. Can I use Google Webmaster Tools > URL Parameters > Add parameter? I read the documentation but still don't get how to set it up: what should I fill in for the Parameter column and the rest? Or is this tool only for GET-method parameters?
proper use google webmaster tools URL Parameters
seo;web crawlers;google search console;googlebot
null
_unix.307905
I have a CentOS server (release 6.7). In order to do tmux shortcuts more quickly (ctrl-b), I thought of mapping the caps lock key as another ctrl key. How do I go about doing this? I have looked this up on Google and have tried adding XKBOPTIONS="ctrl:nocaps" to /etc/sysconfig/keyboard, to no avail.
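If the mapping is for an X session (the x11 tag suggests so), you can also test it immediately without editing any files; this is per-session, so to persist it you would put it in e.g. ~/.xinitrc:

setxkbmap -option ctrl:nocaps

One caveat: if you reach this server over SSH from another machine, the remapping has to happen on the client's keyboard configuration, not on the CentOS server itself.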
Mapping Caps Lock to Ctrl, CentOS
x11;keyboard shortcuts
null
_cstheory.32078
New to this forum, so please let me know if my question format is incorrect.

For linear KP with $n$ items and $c$ capacity, dynamic programming can find exact solutions in $\mathcal{O}(nc)$. I have seen some results extending dynamic programming to approximate solutions to the QKP (link), giving a lower bound within 0.05% of true solutions in $\mathcal{O}(n^2c)$.

Does there exist an algorithm giving at least as accurate results in at least as good time?
What is the current state-of-the-art solver for quadratic knapsack problems?
np hardness;dynamic programming;packing
null
_webapps.72685
I want to use hyper-linked images for navigation between Google Sites pages.Right now the hyperlink creates a new tab when opened.Is there a way for the hyperlink to 'replace' the current page - rather than open a new tab?
Can a hyperlink close the previous window in Google Sites?
links;google sites
You can have it open in the same page by unchecking the box labeled Open this link in a new window:https://sites.google.com/site/siteshelphowtos/google-sites-instructions/images/imagesaslinks
_unix.363576
I know the thread "How can I inner join two csv files in R", which has a merge option that I do not want. I have two CSV data files and I am thinking about how to query them SQL-style with R. Two CSV files where the primary key is data_id. In data.csv it is OK to have IDs not found in log.csv (e.g. 4):

data_id, event_value
1, 777
1, 666
2, 111
4, 123
3, 324
1, 245

log.csv, where there are no duplicates in the ID column, but duplicates can occur in name:

data_id, name
1, leo
2, leopold
3, lorem

Pseudocode in partial PostgreSQL syntax:

Let data_id=1
Show name and event_value from data.csv and log.csv, respectively

Pseudocode as a partial PostgreSQL select:

SELECT name, event_value FROM data, log WHERE data_id=1;

Expected output:

leo, 777
leo, 666
leo, 245

R approach:

file1 <- read.table("file1.csv", col.names=c("data_id", "event_value"))
file2 <- read.table("file2.csv", col.names=c("data_id", "name"))
# TODO here something like the SQL query
# http://stackoverflow.com/a/1307824/54964

Possible approaches, where I think sqldf can be sufficient here:

sqldf
data.table
dplyr

PostgreSQL schema pseudocode to show what I am trying to do with the CSV files:

CREATE TABLE data (
    data_id SERIAL PRIMARY KEY NOT NULL,
    event_value INTEGER NOT NULL
);
CREATE TABLE log (
    data_id SERIAL PRIMARY KEY NOT NULL,
    name INTEGER NOT NULL
);

R: 3.3.3
OS: Debian 8.7
Related: PostgreSQL approach in the relevant thread How to SELECT with two CSV files/ on PostgreSQL?
How to select on CSV files by R sqldf/data.table/dplyr?
csv;r;sql
sqldf approach. One approach, which shows a caveat with the join approach: you cannot filter WHERE data_id on both tables if you join by data_id.

Code 1

file1 <- read.table("data.csv", col.names=c("data_id", "event_value"))
file2 <- read.table("log.csv", col.names=c("data_id", "name"))
library(sqldf)
df3 <- sqldf("SELECT event_value, name FROM file1 LEFT JOIN file2 USING(data_id)")
df3

Output is wrong, because the filter data_id = 1 should be active too:

Loading required package: gsubfn
Loading required package: proto
Loading required package: RSQLite
Loading required package: tcltk
Warning message:
Quoted identifiers should have class SQL, use DBI::SQL() if the caller performs the quoting.
  event_value    name
1 event_value    name
2         777     leo
3         666     leo
4         111 leopold
5         123    <NA>
6         324   lorem
7         245     leo

Code 2

df3 <- sqldf("SELECT event_value, name FROM file1 LEFT JOIN file2 USING(data_id) WHERE data_id = 1")

Output is blank (likely an artifact of how the files were parsed; see the note below):

[1] event_value name
<0 rows> (or 0-length row.names)

Code 3

Do the WHERE earlier:

df3 <- sqldf("SELECT event_value, name FROM file1 WHERE data_id = 1 LEFT JOIN file2 USING(data_id)")

Output is a syntax error, because in SQL the WHERE clause must come after the joins:

Error in rsqlite_send_query(conn@ptr, statement) : near "LEFT": syntax error
Calls: sqldf ... initialize -> initialize -> rsqlite_send_query -> .Call
In addition: Warning message:
Quoted identifiers should have class SQL, use DBI::SQL() if the caller performs the quoting.
Execution halted

Code 4

Using two SELECTs with JOIN:

df3 <- sqldf("SELECT event_value, name FROM file1 WHERE data_id = 1 LEFT JOIN (SELECT data_id, name FROM file2 WHERE data_id = 1) USING(data_id)")

Output is an error again:

Error in rsqlite_send_query(conn@ptr, statement) : near "LEFT": syntax error
Calls: sqldf ... initialize -> initialize -> rsqlite_send_query -> .Call
In addition: Warning message:
Quoted identifiers should have class SQL, use DBI::SQL() if the caller performs the quoting.
Execution halted

Again a syntax error: the WHERE still precedes the LEFT JOIN.
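For completeness, here is a sketch that I would expect to produce the desired three rows. My educated guess from the outputs above is that read.table was splitting on whitespace and reading the header line as a data row, so data_id became a text column (values like "1,") and both the join and the WHERE misbehaved. Reading with read.csv and normalising the names avoids that:

library(sqldf)

file1 <- read.csv("data.csv", strip.white = TRUE)
file2 <- read.csv("log.csv",  strip.white = TRUE)
names(file1) <- c("data_id", "event_value")   # normalise the padded headers
names(file2) <- c("data_id", "name")

sqldf("SELECT f2.name, f1.event_value
       FROM file1 AS f1
       JOIN file2 AS f2 USING (data_id)
       WHERE f1.data_id = 1")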
_cs.59282
I have studied the pumping lemma carefully and have solved many exercises about it, but I can't get an idea of how to solve this one; can anyone help me? Let L = { w#x | x is a substring of w }. Prove that L is not regular. I have tried setting the pumping length to |w| and to |x|, but I just can't find words that are not in L. Thanks in advance.
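Here is one standard way to do it, sketched (the word choice is mine; many variants work). Note first that the pumping length $p$ is given by the lemma; you do not get to set it to $|w|$ or $|x|$. Assume $L$ is regular with pumping length $p$ and pick $s = a^p b \# a^p b \in L$ (here $w = a^p b$ and $x = a^p b$; every string is a substring of itself). Write $s = \alpha\beta\gamma$ with $|\alpha\beta| \le p$ and $|\beta| \ge 1$; then $\beta = a^k$ for some $k \ge 1$, lying inside the leading $a^p$. Pumping down gives $\alpha\gamma = a^{p-k} b \# a^p b$. The part after $\#$ is still $a^p b$, but it cannot be a substring of $a^{p-k} b$, which has fewer $a$'s before its only $b$. So $\alpha\gamma \notin L$, contradicting the pumping lemma; hence $L$ is not regular.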
Struggling with Pumping Lemma application
formal languages;regular languages;pumping lemma
null
_unix.195045
I'm using logrotate to rotate MySQL output. When cron runs logrotate, I frequently get an e-mail with the following content:

error: Compressing program wrote following message to stderr when compressing log /var/log/mysql/mysqld.err-20150408:
gzip: stdin: file size changed while zipping

indicating that after logrotate moved the file and called gzip on it, the file was still open and MySQL was writing to it. Here's my logrotate config for MySQL:

/var/log/mysql/mysql.err /var/log/mysql/mysql.log /var/log/mysql/mysqld.err {
    monthly
    create 660 mysql mysql
    notifempty
    size 5M
    sharedscripts
    missingok
    postrotate
        [ -f /var/run/mysqld/mysqld.pid ] && /bin/kill -HUP `cat /var/run/mysqld/mysqld.pid`
    endscript
}

This is the unmodified file that is shipped with Gentoo's mysql package, so I doubt there are problems with it. I have no trouble with other logs being rotated. Any ideas what may be going on?
Why does logrotate zip a file before it is closed
logs;mysql;logrotate
The gzip error message states pretty much what is going on -- the file is being written to (by MySQL in this case) during compression. Try using delaycompress (with compress); from the man page:

delaycompress
    Postpone compression of the previous log file to the next rotation cycle. This only has effect when used in combination with compress. It can be used when some program cannot be told to close its logfile and thus might continue writing to the previous log file for some time.
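Concretely, that would mean adding it next to compress inside the stanza (compress itself presumably comes from the global /etc/logrotate.conf here, since the stanza shown doesn't list it):

/var/log/mysql/mysql.err /var/log/mysql/mysql.log /var/log/mysql/mysqld.err {
    ...
    compress
    delaycompress
    ...
}

With delaycompress, the freshly rotated file is left uncompressed for one cycle, so MySQL can keep writing to it harmlessly until the HUP in postrotate takes effect; it only gets gzipped on the next rotation.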
_datascience.13757
I am using a Random Forest classifier with two datasets, one for training and the other for testing and vice-versa, described below.

First dataset $F$: continuous data, 28 features x 58 observations. Two classes: $A$ and $B$. Original cardinalities: $A: 42~obs.$, $B: 16~obs.$ Randomly subsampled to: $A: 16~obs.$, $B: 16~obs.$

Second dataset $S$: continuous data, 28 features x 90 observations. Two classes: $A$ and $B$. Cardinalities: $A: 76~obs.$, $B: 14~obs.$ Randomly subsampled to: $A: 14~obs.$, $B: 14~obs.$

Training methodology: after a procedure of feature selection with mRMR [1], selecting a number of features no greater than $n/5$, where $n$ is the number of observations in the training dataset (as proposed by [2]) -- respectively 6 features for the $F$ dataset and 5 features for the $S$ dataset -- I trained my classifier on the training dataset (in which I downsized the majority class with a random subsample, in order to train each class with the same number of observations).

Results: if I train my Random Forest classifier on $F$ and test it on $S$, I get the following confusion matrix:

$\begin{matrix}&B&A\\True~B&3&11\\True~A&3&73\\\end{matrix}$

And normalised per-row:

$\begin{matrix}&B&A\\True~B&0.214286&0.785714\\True~A&0.039474&0.960526\\\end{matrix}$

Vice-versa, if I train my classifier on $S$ and test it on $F$, I get the following confusion matrix:

$\begin{matrix}&B&A\\True~B&16&0\\True~A&12&30\\\end{matrix}$

And normalised per-row:

$\begin{matrix}&B&A\\True~B&1&0\\True~A&0.285714&0.714286\\\end{matrix}$

How can there be such an asymmetry in classification?

[1] Peng, Hanchuan, Fuhui Long, and Chris Ding. "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy." IEEE Transactions on Pattern Analysis and Machine Intelligence 27.8 (2005): 1226-1238.

[2] Johnstone, Iain M., and D. Michael Titterington. "Statistical challenges of high-dimensional data." Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 367.1906 (2009): 4237-4253.
Differences in classification performance switching training and test set
classification;training
null
_codereview.154347
This piece of code works fine. But I'm wondering if it can be done in a more efficient way. More specifically, this part (*(s1 + i)) if it possible to force it to sequence through entire array character by character via pointer, for example, *s1++.My task to do this function compareStrings without index array []:int compareStrings(const char *s1, const char *s2){ int i = 0, answer; // i - to sequence through array of characters // pointer to character string1 and character string2 while (*(s1 + i) == *(s2 + i) && *(s1 + i) != '\0'&& *(s2 + i) != '\0') { i++; } if ( *(s1 + i) < *(s2 + i) ) answer = -1; /* s1 < s2 */ else if ( *(s1 + i) == *(s2 + i) ) answer = 0; /* s1 == s2 */ else answer = 1; /* s1 > s2 */ return answer;But I want to change it to s1++ and s2++ instead of *(s1 + i) and *(s2 + i). I've tried to implement this idea with pining an extra pointer to the beginning but I've failed.int compareStrings(const char *s1, const char *s2){ int answer; char **i = s1, **j = s2; // i to sequence through array of characters while (*(i++) == *(j++) && *(i++) != '\0'&& *(j++) != '\0'); if (*i < *j) answer = -1; /* s1 < s2 */ else if (*i == *j) answer = 0; /* s1 == s2 */ else answer = 1; /* s1 > s2 */ return answer;}
String comparison using pointers
c;strings;pointers
You don't need pointers to character pointers at all:

int str_cmp(const char* s1, const char* s2)
{
    while (*s1 != '\0' && *s1 == *s2) {
        ++s1;
        ++s2;
    }
    if (*s1 == *s2) {
        return 0;
    }
    return *s1 < *s2 ? -1 : 1;
}

Also, there is a bug in your second implementation: it returns 1 on compareStrings("hello", "helloo"), when the correct result is -1. Hope that helps.
_softwareengineering.284333
Last year, I created a derived work from published source code that was distributed unlicensed on a blog. Recently, I went back to the blog and noticed it was now licensed under CC BY-SA 3.0. Is my derived work also bound by share alike now?
Non-licensed source code later becomes licensed
licensing;creative commons
A license doesn't restrict what you are allowed to do, a license gives you permission to do things that you would not be allowed to do without the license. If that code had no license, then you had no right to use it, and creating a derived work was copyright infringement. You are lucky that it is now licensed.
_webapps.18071
Is there an online XML generator to generate XML code from a list of values? I.e. from

Value1
Value2
Value3
Value4
Value5
Value6

to

<item>Value1</item>
<item>Value2</item>
<item>Value3</item>
<item>Value4</item>
<item>Value5</item>
<item>Value6</item>
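If a local shell is acceptable instead of a web app, this is a one-liner (values.txt and items.xml are placeholder names):

sed 's|.*|<item>&</item>|' values.txt > items.xml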
Is there an online XML generator to generate xml code from a list of values ?
xml
null
_webapps.9870
A customer of mine has created a YouTube channel for their business but is concerned about the advertising distracting from the product.Is there any way to stop or prevent the adverts from appearing?The channel is here - http://www.youtube.com/user/AtholeStillOpera
Block advertising from appearing on YouTube channels
youtube;ads
null
_codereview.80541
What could I do to make this code more beautiful?

exports.index = function (req, res, next) {
  Booking.aggregate(
    [{
      '$group': {
        '_id': '$booking.date',
        'name': { '$first': '$booking.name' },
        'participants': { '$sum': '$booking.participants' },
        'attended': { '$sum': { '$cond': [{ '$eq': ['$attended', true] }, 1, 0] } },
        'bookings': { '$sum': 1 }
      }
    }, {
      $sort: { 'booking.date': -1 }
    }],
    function (error, bookings) {
      if (error) {
        console.log(error);
      } else {
        res.render('admin/bookings/index', {
          moment: moment,
          data: bookings,
        });
      }
    }
  );
};

exports.show = function (req, res, next) {
  Booking.find({ 'booking.date': req.params.id })
    .exec(function (error, booking) {
      if (error) {
        console.log(error);
      } else if (booking.length === 0) {
        res.redirect('admin/reservas');
      } else {
        res.render('admin/bookings/show', {
          moment: moment,
          data: booking,
        });
      }
    });
};

exports.changeStatus = function (req, res, next) {
  Booking.findById(req.params.id, function (error, booking) {
    if (error) {
      console.log(error);
    } else {
      booking.update({
        'attended': booking['attended'] === false ? booking['attended'] = true : booking['attended'] = false
      }, function (error) {
        if (error) {
          console.log(error);
        } else {
          res.end('Success!');
        }
      });
    }
  });
};

exports.destroy = function (req, res, next) {
  Booking.remove({ _id: req.params.id }, function (error) {
    if (error) {
      console.log(error);
    } else {
      res.end('Success!');
    }
  });
};
Booking program
javascript;node.js
Indention in JS is usually personal preference, but given that JS has a knack for nesting a lot of stuff... I suggest 2-space indents instead. Try to keep lines under 80 columns whenever possible. That way, people don't run off to the right.Try not to use _ and $ in key or variable names. While they are valid, it usually doesn't make sense. Also, for valid key values, '' is optional and since $ and _ are valid, then you can remove ''.I saw this code:'$cond': [{ '$eq': ['$attended', true]}, 1, 0]I don't know what the 1 and 0 are for. Better put them in keys instead. Also if 0 and 1 happen to be booleans, better use true and false instead to be more meaningful and we don't assume they are integers for some numeric purpose.cond: { eq: { attended: true }, whateverOneIsFor: 1, whateverZeroIsFor: 0,}if (booking.length === 0) is the same as if(!booking.length) since 0 is loosely equal to false.Not entirely sure but I think you just want to toggle'attended': booking['attended'] === false ? booking['attended'] = true : booking['attended'] = false// is the same as'attended': booking['attended'] = !booking['attended']
_cs.68945
Given a cryptographic fingerprint of a collection, K == hash(X), an arbitrary function F, and a value Y, is it possible to build a short proof that F(X) == Y that doesn't require having X? I.e., a proof that Y is the result of applying F to some X such that hash(X) == K?
Given the hash of a collection, H(X), can I build a proof that F(X) == Y, without having X?
proof techniques;cryptography
Yes, this is possible. Use a zero knowledge proof (of knowledge). If F is computable in polynomial time, then there exists a polynomial-time zero-knowledge proof of the property you want, i.e., given K,Y, the prover can construct a proof that there exists X such that H(X)=K and F(X)=Y. Amazingly, this proof is zero-knowledge: it reveals nothing about X (other than the property is true).If you don't care about the zero knowledge part, you can just use an interactive proof system.Zero-knowledge proofs can be made non-interactive using standard techniques.There's tons of work on this in the cryptographic research literature. I can't explain the ideas in the length of an answer here, so I suggest you spend some quality time studying this material. However, while the theory is elegant and beautiful, you're probably going to find the generic schemes are not very practical and the performance overhead is significant. Therefore, if you actually want to do this in practice, you'll want to focus on a specific function F and see whether you can build a custom protocol specialized to that particular function F.
_unix.326796
I downloaded the openssl 1.0.1e source, then configured it with

./config shared --prefix=/cygdrive/f/dev/progs --openssldir=/cygdrive/f/dev/progs/openssl_n -DENGINE_DYNAMIC_SUPPORT

then did

make && make install_sw

This error occurred during the openssl 1.0.1e build:

installing 4758cca
cp: cannot stat 'lib4758cca.a': No such file or directory

I am using Windows 7 and trying to build openssl on 64-bit Cygwin. What can I do to get it working?
Error during openssl 1.0.1e make install_sw
compiling;openssl;cygwin
null
_codereview.150202
I wrote the following code:

def find_storms_dst(d_f, max, min):
    i = 0
    storms = []
    while i < len(d_f.index):
        dst = d_f['Dst'][i]
        if dst < max:
            if dst < min:
                print 'out of range'
            s = Storm(i, d_f.index[i], dst)
            while i < len(d_f.index)-1:
                i += 1
                dst = d_f['Dst'][i]
                if dst < max:
                    if dst < min:
                        print 'out of range'
                    s.log(i, d_f.index[i], dst)
                else:
                    break
            storms.append(s)
        i += 1
    return storms

It does what I want, but I see some elements in it repeat; is that a recursion issue? How could one write this better?
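It's not a recursion issue; the repetition comes from scanning the same run of values in two nested loops. Here is a possible single-pass rewrite (an untested sketch; it keeps the same Storm interface and renames max/min, which shadow Python built-ins):

def find_storms_dst(d_f, max_dst, min_dst):
    storms = []
    current = None
    for i, (ts, dst) in enumerate(zip(d_f.index, d_f['Dst'])):
        if dst < max_dst:
            if dst < min_dst:
                print 'out of range'
            if current is None:
                current = Storm(i, ts, dst)   # a storm begins
            else:
                current.log(i, ts, dst)       # the storm continues
        elif current is not None:
            storms.append(current)            # the storm just ended
            current = None
    if current is not None:
        storms.append(current)                # a storm ran to the end
    return storms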
Better way to write algorithm, possibly recursive
python;algorithm;python 2.7
null
_unix.131311
I am attempting to move some folders (such as /var and /home) to a separate partition after reading this guide: "3.2.1 Choose an intelligent partition scheme". I was able to move one folder successfully following this guide. However, it doesn't seem to work for multiple folders, and all my files end up dumped into the partition without the proper directory structure. I would like to mount /var, /home and /tmp on the separate partition; can someone guide me on this?
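One way to get all three onto a single extra partition is to mount the partition once and bind-mount its subdirectories (a sketch with placeholder device and mount-point names; copy the data over first, e.g. with cp -a or rsync -aAX from single-user mode or a live CD, before enabling these lines):

# /etc/fstab -- /dev/sda3 and /srv/data are examples
/dev/sda3        /srv/data   ext4   defaults   0 2
/srv/data/var    /var        none   bind       0 0
/srv/data/home   /home       none   bind       0 0
/srv/data/tmp    /tmp        none   bind       0 0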
Moving /var, /home to separate partition
debian;security;partition;fstab
null
_unix.364083
I would like to know how to configure the security of Linux computers in programming companies. The developers need to do everything: run binaries, install packages... but I do not want them to have full root access, nor to be able to change the root password. Is that possible? My main idea was to add them to sudoers and then exclude certain commands, but I think it would not be useful, because they could run sudo su and then, as root, change the password. Any ideas? Thank you.
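Your suspicion is right: with a blanket ALL plus exclusions, sudo su, shells, editors and the like all defeat the blacklist, so this can only ever be damage limitation, not a hard security boundary. For reference, the exclusion syntax looks like this (illustrative group and alias names; edit with visudo):

# /etc/sudoers.d/devs
Cmnd_Alias FORBIDDEN = /bin/su, /usr/bin/passwd root, /usr/sbin/visudo
%developers ALL=(ALL) ALL, !FORBIDDEN

The more robust pattern is the inverse: grant an explicit whitelist of only the commands they actually need.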
Permissions programmers - Linux Sudo
linux;security;sudo;programming
null
_softwareengineering.199939
If we assume we have this little snippet of code:

string str = "checked";
bool test1;
if (str == "checked")
{
    test1 = true;
}
else
{
    test1 = false;
}

Is it bad practice to change a simple statement like this to the following?

bool test2 = (str == "checked");

Because they work exactly the same, and work as required, so I can't imagine how it would be. However, as a young, inexperienced programmer I am not aware of whether such a thing is frowned upon or not. Can anyone tell me, if this is NOT OK, why not? The following test program:

using System;

public class Test
{
    public static void Main()
    {
        string str = "checked";
        bool test1;
        if (str == "checked")
        {
            test1 = true;
        }
        else
        {
            test1 = false;
        }
        bool test2 = (str == "checked");
        bool test3 = (str != "checked");
        Console.WriteLine(test1.ToString());
        Console.WriteLine(test2.ToString());
        Console.WriteLine(test3.ToString());
    }
}

Outputs:

True
True
False

Any insight etc. is appreciated.
Making Simple IF Statements Shorter
programming practices;language agnostic;coding standards;conditions
Is it bad practice to change a simple statement like this to the following?

bool test2 = (str == "checked");

No, it's good practice. To me, the longer code:

if (str == "checked")
{
    test1 = true;
}
else
{
    test1 = false;
}

indicates that the programmer doesn't understand Boolean expressions. The shorter form is much clearer. Similarly, don't write:

if (boolean-expression) {
    return true;
} else {
    return false;
}

Just write return boolean-expression;
_softwareengineering.227867
For an academic exercise, we have been tasked with creating a small website. We have already gathered the requirements and fleshed out the business domain to see the classes we are supposed to support. It is to be on Microsoft stack, so I decided to use ASP MVC with code first Entity Framework. I am now looking for the best way to split up a 6 man team in order to best tackle the project.TLDR: What are some effective ways of splitting up an ASP MVC project so that multiple developers can work on it concurrently?I have researched where projects have been split across the tiers, i.e. database, business, views, and such, but I don't exactly know how to instruct them to go about designing a view for which there is no controller.Is splitting it up between the different business models used a decent strategy? I have read that this has led to an application that didn't mesh well, due to the different aesthetics.
How to split up ASP MVC Project using Code-First Entity Framework
web development;project management;asp.net mvc;entity framework;codefirst
The way our small team approached the problem was to cooperatively develop the domain layer and database logic, and then proceeded to split the work up by Area, so I worked on settings while another person worked on our core CRUD pages. Once that was done we picked another area and off we go again. There are several problems with this approach:Our initial development work produced views that didn't look at all similar. We had to go back and redo the views. This could be avoided by having specifications and design work done thoroughly beforehand (we are doing this now, and it makes things much better).We both developed library methods to do the same thing independently, and refactoring was needed to remove the duplicate (this was sometimes easy, but often difficult as we wrote them quite differently).If I had my time again I would do the following:Write your business logic together and decide on URL endpoints (controllers and actions and their arguments). If you are using separate business entities and MVC models, design the MVC models too.Now one person can write views based on the MVC models and for the URLs decided upon, whilst another person would write the controllers/actions necessary.That's not perfect (can't really test the views until the controllers are written) but it does minimize interaction so more work can be done faster.Another option (although not true MVC) is to write the application as a single-page application, and have one group write everything behind the REST API, and have another group write the JavaScript on the front-end.
_webapps.33370
I registered a Facebook App "iWish" and the namespace "iwishns" a long time ago, and now my app is finally in a state where I feel I can get it out on the App Center. But now there is an "iWishApp" application, and they have registered the Facebook page "iWish", so now all my attempts to register a Facebook page fail, I guess due to its similarities with this iWish page. It doesn't say what is invalid, just that it is. I've tried to name it "iWish-wishlists" and so on, but nothing seems to work. Do I have to throw away my app name altogether?
My facebook app details Display Name page name is occupied
facebook pages;facebook apps
null
_cstheory.20605
I am interested in an algorithm that allows me to construct all inequivalent (non-isomorphic) digraphs of size n, with self-loops allowed. Even though it is trivial to simply construct all such digraphs, I believe that the requirement that non-isomorphic graphs should appear only once makes the problem very difficult. Of course, we can always construct all of them and drop the duplicates, but this approach is too slow. Ideally I would like an algorithm that just creates the inequivalent ones by construction. Does such a thing exist? Any ideas or references would be most welcome. Descriptions of the algorithm either in diagrammatic form or in terms of adjacency matrices are equally acceptable. Thanks in advance!
Constructing all digraphs without repetitions
graph theory;graph algorithms
null
_unix.365257
On a CentOS 7 server, a user is added to the wheel group, but then that same user is not able to run sudo commands. What specific commands need to be added to the below in order for the user named wannabe_sudoer_user to be allowed to run sudo commands?

The specifics: here is the terminal log that shows the failed attempt to add the wannabe_sudoer_user user to the sudoers list:

[wannabe_sudoer_user@localhost ~]$ su -
Password:
Last login: Mon May 15 13:58:21 PDT 2017 on ttyS0
[root@localhost ~]# gpasswd -a wannabe_sudoer_user wheel
Adding user wannabe_sudoer_user to group wheel
[root@localhost ~]# exit
logout
[wannabe_sudoer_user@localhost ~]$ sudo mkdir /opt/atlassian/

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for wannabe_sudoer_user:
wannabe_sudoer_user is not in the sudoers file. This incident will be reported.
Why isn't user being added to sudoers list?
centos;sudo
After a quick glance I would say your session in the above example does not KNOW that your user has been added to wheel. I would close the session (logout), start a new session (login, open a new terminal, etc.) and try again.
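Two quick checks, for reference (on CentOS 7 the wheel rule normally ships enabled, so the stale session is the likely culprit):

# as root, confirm this line is present and uncommented in /etc/sudoers (via visudo):
%wheel  ALL=(ALL)       ALL

# then, from a *new* login of that user, verify the group membership took effect:
id wannabe_sudoer_user    # should now list "wheel"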
_softwareengineering.274653
I've tried finding some resources to help me on my dilemma, but wasn't successful in my approach. So here goes:I am implementing an automated firewall manager for Windows Firewall which will ban some offending IP address for a certain port, for a certain period of time, after which the same firewall manager will remove the ban. My trouble is deciding whether I should just stick to creating a new rule for each IP/port pair or create one rule for each port and only edit the IP list to add/remove an IP address. My main consideration would be if one of the approach would yield better performance than the other. It would suit me better to use one rule for each IP/port pair, but I don't mind going for the other approach if it is significantly better.I've tried asking this on security.stackexchange.com but it's been marked as off-topic there.Thanks!
Windows Firewall single rule with multiple IP addresses vs multiple rules with single IP address
performance;windows
Well this answer on ServerFault shows a single rule with many IP blocks works fine, but as he says it take a lot of CPU to process when updating, I imagine the rules are stored internally in an efficient format, and when the rule is changed WF will re-parse and store the IPs. In this case, it doesn't matter if you have 1 rule with a million IPs or a million rules with 1 IP, except that the time it takes to parse each rule will be different.Personally I'd go with a half-way house, finding the sweet spot between management and rule size. Probably 1 rule per country or per netblock.
_softwareengineering.165615
I lose a ton of productivity by getting distracted while waiting for my tests to run. Usually, I'll start to look at something while they're loading --- and 15-20 minutes later I realize my tests are long done, and I've spent 10 minutes reading online.Make a small change... rerun tests ... another 10-15 minutes wasted!How can I make my computer make some kind of alert (Sound or growl notification) when my tests finish, so I can snap back to what I was doing??
Make audible Ding! sound, or growl notification, when `rake test` finishes!
productivity
While I haven't tried this, a quick Google search brought up growlnotify, which will send growl notifications from the command line.From there, it's just something like this:$ rake test ; growlnotify doneYour syntax may vary.
_webapps.29657
I need to assign people to individual lists and not the entire board so it is all they see. Is there a way to do this? If not, can I have a list from one board feed into another board automatically?
Can I just assign members to Lists on Trello on not the entire board? If not, is Trello going to make this possible?
trello
null
_unix.266624
So I was asked to create a link like ../, but one that goes an extra level up, in Linux. I understand that I must use ln -s to create a symbolic link, but I'm unsure where to go next. I am new to Linux and trying to understand how to create such a link.
Symbolic link ./././
linux;directory
Basically you have to do the following (if I understood your question correctly):

ln -s /path/to/file /path/to/symlink
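If the goal is a link that climbs two levels, the target can simply be a relative path (the link name here is just an example); relative targets are resolved from the directory that contains the link:

ln -s ../.. two-levels-up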
_unix.353536
I was trying to update my Kali live USB using the feature that restarts Kali and updates during boot, rather than through the terminal with apt-get. The update stopped by itself before it was done, breaking my Kali live USB. Now I can boot into Kali, but I can't open anything or see any icons. If I try to open a terminal, it switches to the screen you usually get before the desktop appears on a normal boot, and then does nothing. The way I set up my live USB does not allow me to choose to boot into safe mode. I have some files that I really need to rescue if possible. What can I do to try to restore Kali, or at least to access the filesystem and save those files?
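For rescuing the files, a sketch of the usual route (device names are placeholders; identify yours with lsblk): boot any other working live system, then mount the broken stick's root or persistence partition read-only and copy the data off:

mkdir -p /mnt/rescue
mount -o ro /dev/sdb2 /mnt/rescue     # /dev/sdb2: example partition on the broken USB
cp -a /mnt/rescue/home /path/to/backup/
umount /mnt/rescue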
Restoring broken Kali Live USB or Rescuing files?
linux;kali linux
null
_unix.246853
I'm making a little backup script that includes a function to encrypt the backup. The script runs automatically via a cron job, so the password for gpg is stored in a file; it's just a text file. How can I improve the security of that file so no one can see the plain-text password there? Greez, Nyno
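At a minimum you can lock the file down to the user the cron job runs as, and feed it to gpg without the passphrase ever appearing on a command line (paths are examples; on GnuPG 2.1+ you may additionally need --pinentry-mode loopback):

chmod 600 /root/backup.passphrase                # owner-only access
gpg --batch --symmetric \
    --passphrase-file /root/backup.passphrase \
    backup.tar

Any root user can still read the file, of course; for stronger guarantees, consider public-key encryption, where the backup host only needs the public key and the secret key lives elsewhere.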
how do i make a script to generate a gpg backup saver?
bash;shell script;cron;backup;gpg
null
_webmaster.34336
We've been logging GET requests on our domain to the following:XX/YY/ZZ/CI/MGPGHGPGPFGHCDPFGGHGFHBGCHEGPFHHGGIs this a known kind of attack? What might it be targeting?
Meaning of hack request
hacking
This type of attack can be used to generate a 404 message. Hackers do this in order to find out what kind of web server you have, or to find out what kind of framework you use (based on the structure of the 404 HTML code). If they find out what server you have and what CMS/framework (and version), then they can use this information to hack into your website. I recommend you keep everything up to date. There is not a lot more you can do against those requests.
_datascience.9943
My ultimate goal is to use Jupyter together with Python for data analysis using Spark. The current hurdle I face is loading the external spark_csv library. I am using Mac OS and Anaconda as the Python distribution. In particular, the following:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext('local', 'pyspark')
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('file.csv')
df.show()

when invoked from Jupyter yields:

Py4JJavaError: An error occurred while calling o22.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org

Here are more details:

Setting Spark together with Jupyter

I managed to set up Spark/PySpark in Jupyter/IPython (using Python 3.x).

System initial setting

On my OS X I installed Python using Anaconda. The default version of Python I have currently installed is 3.4.4 (Anaconda 2.4.0). Note that I have also installed a 2.x version of Python, using conda create -n python2 python=2.7.

Installing Spark

This is actually the simplest step; download the latest binaries into ~/Applications or some other directory of your choice. Next, untar the archive: tar -xzf spark-X.Y.Z-bin-hadoopX.Y.tgz. For easy access to Spark, create a symbolic link to it:

ln -s ~/Applications/spark-X.Y.Z-bin-hadoopX.Y ~/Applications/spark

Lastly, add the Spark symbolic link to the PATH:

export SPARK_HOME=~/Applications/spark
export PATH=$SPARK_HOME/bin:$PATH

You can now run Spark/PySpark locally: simply invoke spark-shell or pyspark.

Setting Jupyter

In order to use Spark from within a Jupyter notebook, prepend the following to PYTHONPATH:

export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$SPARK_HOME/python/:$PYTHONPATH

Further details can be found here.
Use spark_csv inside Jupyter and using Python
python;apache spark;pyspark;jupyter
Assuming the rest of your configuration is correct, all you have to do is to make the spark-csv jar available to your program. There are a few ways you can achieve this:

manually download the required jars, including spark-csv and a csv parser (for example org.apache.commons.commons-csv), and put them somewhere on the CLASSPATH.

using the --packages option (use the Scala version which has been used to build Spark; pre-built versions use 2.10):

pyspark --packages com.databricks:spark-csv_2.10:1.3.0

using the PYSPARK_SUBMIT_ARGS environmental variable:

export PACKAGES="com.databricks:spark-csv_2.11:1.3.0"
export PYSPARK_SUBMIT_ARGS="--packages ${PACKAGES} pyspark-shell"

adding a Gradle string to spark.jars.packages in conf/spark-defaults.conf:

spark.jars.packages com.databricks:spark-csv_2.11:1.3.0
_unix.344051
I recently migrated an ext4 filesystem on an LVM volume to btrfs using btrfs-convert. After mounting it later, I realized that there were some problems with it.

As soon as I did something that involved writing data to the disk, it was forced readonly, according to dmesg. I did some research and ran scrub, which found two files with csum errors. Unfortunately, one of them was the ext2_saved image for rollback. I thought that deleting the files with the csum errors would resolve the problem, so I deleted the backup and the other file.

After rebooting, scrub did not find any errors. But when mounting, I now get the following message: bdev /dev/mapper/my-volume errs: wr 0, rd 0, flush 0, corrupt 608, gen 0. It seems like I can now write to the disk (renaming a file worked; I didn't do further tests yet).

Should this message worry me or can it be ignored? Or even better: how can I find the cause of it? scrub and even btrfs check --repair didn't find any issues.

UPDATE:

I ran a Memtest and checked for badblocks. Both tests came out clean.

I also updated my kernel to 4.9.9-gentoo. When compiling the kernel I found out that I had the CONFIG_BTRFS_FS_CHECK_INTEGRITY option enabled, aka "Btrfs with integrity check tool compiled in (DANGEROUS)". I have now disabled this option.

After this I tried to launch Chrome - which obviously did something on the mentioned disk. Shortly after, I read something like this in dmesg:

*Some stacktrace*
btrfs_finish_ordered_io: someline errno=-95 unknown
forced readonly

Unmounting left me with this message:

cleaner transaction attach returned -30

I also had these when I still had the checksum errors that are now resolved. Now I can't find a reason for them.

I ran a scrub again, which passed with 0 errors. When running btrfs check --repair /dev/mapper/my-volume, it now fixed discount file extents for some inodes, which apparently were new errors, as the same command before the update didn't find anything.

I'll probably have to move the data away in readonly to another disk and just format the thing.

UPDATE:

Copying the data away in readonly mode worked, without losing data, so it seems. It looks like the conversion from ext4 to btrfs isn't working perfectly yet.

System info:
Kernel: 4.4.39-gentoo; now: 4.9.9-gentoo
btrfs-progs v4.9
Btrfs forced readonly/corrupted after conversion from ext4
filesystems;lvm;btrfs
btrfs keeps per-device statistics and stores them persistently on disk. You can manually print them using btrfs device stats device|mountpoint. They are also printed on mount (at least if any of them are non-zero).

So what you're seeing is just telling you that corruption has been detected in the past; you can clear the counters with the -z flag: btrfs device stats -z /dev/mapper/my-volume.

Of course, it'd be good to find out what caused the corruption originally. I'm not sure if it's a btrfs-convert bug or if you have something that may continue to cause corruption (unreliable hardware). I recommend at least a memory test if you're not sure.

(And, it should go without saying, but make backups, especially when using a relatively young filesystem like btrfs.)
_webmaster.17398
My website has a PageRank of 6 but it ranks poorly (6th position); the other site holds the 3rd position despite a lower PageRank. The sites are about the same subject. I have title text on links, the content is not Flash, etc. Why?
Another website has a lower PageRank (4) and fewer backlinks, but scores higher than me in search results
seo
You're making the common mistake of confusing PageRank (the Google algorithm score) with ranking. While PR is a ranking factor, it's only one of 200+ (some say 1000+) elements that make up your final page ranking.

And remember: localization, personalization and relevancy to that query all factor greatly into results. You might rank #1 for a different keyword/phrase, or not at all.

Short answer: PageRank has little to do with results rank.
_cs.49079
I understand that the output of Latent Dirichlet Allocation is a distribution over K topics.

Suppose I have a Dx(K+1) matrix, where rows are documents and columns are the topic distribution plus one column for the class. For example, each row represents one movie review: the first K columns represent the topic distribution of the document, and the last column is the classification of this document. For example, if K=5, one row may read as:

2 | 0.25 | 0.4 | 0.1 | 0.15 | 0.1 | 1

where movie review 2 had 25% of the text about topic 1 (pleasure), 40% topic 2 (discontent), 10% topic 3 (personal feelings) ... and the classification of this document was to class #1.

How would I go about creating a Naive Bayes classifier using this data? Typically, I have used a Gaussian Naive Bayes, where the feature space is iid over normally distributed variables, but I do not believe this assumption makes sense for LDA output. Would I need to assume the features are the individual columns distributed in a particular way (say, a Dirichlet probability)?

This exercise is more a proof of concept for using Naive Bayes. I want to use, say, 60% of the data to construct a Naive Bayes classifier and test the accuracy on the remaining 40%. My main concern is how to define the PDF for the Naive Bayes classifier.
Naive Bayes where Feature Space is LDA Output
machine learning
The topic scores (the numbers in the first $K$ columns) come from a Dirichlet distribution. The marginals of the Dirichlet distribution are a beta distribution.Therefore, it would be reasonable to model the distribution on each feature as beta-distributed, not normally-distributed. With this adjustment, all the math for Naive Bayes goes through.Let me flesh this out. Given some data $x=(x_1,\dots,x_k)$, the Naive Bayes classifier computes the likelihood of a class $c$ as$$L(c) = P(c) \prod_{i=1}^k P(x_i | c).$$Here $x_i$ is the value in the $i$th column (the estimated probability the document came from topic $i$). Given the comments above, we will estimate that the conditional distribution $P(x_i | c)$ has a beta distribution with some parameters, i.e., $\text{Beta}(\alpha_i,\beta_i)$. Thus,$$P(x_i | c) = {1 \over B(\alpha_i,\beta_i)} x_i^{\alpha_i-1} (1-x_i)^{\beta_i-1}$$for some parameters $\alpha_i,\beta_i$.Where do we get the parameters from? From the training set, as usual. In other words, we take from the training set just the documents that are classified with class $c=0$, and from those, we fit a beta distribution to the conditional distribution $P(x_i | c=0)$ and find the parameters $\alpha_i,\beta_i$ that make the beta distribution best fit the observed values of $x_i$ (out of the training documents classified as class $c=0$). We use those as our estimate for $P(x_i | c=0)$. How do we find $\alpha_i,\beta_i$? By using standard methods for estimating the parameters of a beta distribution, given a bunch of observations drawn from it.Then, we do the same for the documents in the training set that are classified as $c=1$, to fit a beta distribution and use that as the distribution for $P(x_i | c=1)$.Finally, once you have formulas for $P(x_i | c=0)$ and $P(x_i | c=1)$, you plug that into the definition of $L(c)$ above and use that in the Naive Bayes classifier just as normal.
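To make this concrete, here is a minimal sketch in Python. The names are illustrative and not from the question: X is a D x K NumPy array of topic proportions, y is the vector of class labels, and scipy's beta.fit is used with location and scale pinned to the unit interval.

import numpy as np
from scipy.stats import beta

def fit_beta_nb(X, y):
    """Fit one Beta(alpha_i, beta_i) per feature and per class."""
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    params = {}
    for c in classes:
        # clip away from 0 and 1: the beta density is degenerate at the endpoints
        Xc = np.clip(X[y == c], 1e-6, 1 - 1e-6)
        params[c] = [beta.fit(Xc[:, i], floc=0, fscale=1)[:2]
                     for i in range(X.shape[1])]
    return priors, params

def predict(x, priors, params):
    """Return the class maximizing log P(c) + sum_i log P(x_i | c)."""
    x = np.clip(x, 1e-6, 1 - 1e-6)
    scores = {}
    for c, ps in params.items():
        loglik = sum(beta.logpdf(xi, a, b) for xi, (a, b) in zip(x, ps))
        scores[c] = np.log(priors[c]) + loglik
    return max(scores, key=scores.get)

With this in place you can fit on your 60% training split and score the remaining 40% exactly as you would with a Gaussian Naive Bayes.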
_webmaster.78833
How would you create a sitemap for a forum where pages are created at all hours? On a daily basis, approximately 5-10 new pages are created.

Should you add all of them to the sitemap and update it every day? That doesn't sound right.
How to create a sitemap for a forum
seo;sitemap
null
_softwareengineering.319885
Reading the literature of DDD, I came up with the following layers:

Application Outside World (Controllers, Crons, etc.)
Application Services (or UseCases) - which orchestrate multiple Domain Services or Infrastructure Services. They are called from the Outside World. They know what things have to be done.
Domain Services - which contain how the things are done (relying on repository interfaces).

Question: Are there any best practices for how to communicate between layers?

What I know:
- Application Services should return the data to be exposed, plus some indication of the success of the transaction (warnings, errors, infos).
- The data that an Application Service returns should be gathered from Domain Services and/or Infrastructure Services and composed together.

Controller <-> Application Service <-> Domain Service <-> Infrastructure Service

These are some of my ambiguous thoughts:

Should all methods on an Application Service take a specific DTO that contains the request as a parameter? Like an AddItemToCardCommandDto (that encapsulates all the needed data). How about a generic ResultObject that has only a couple of methods, like getResult, hasErrors or getMessages?

How should a DomainService return data and errors? Should they return errors by throwing exceptions? That seems odd, because to me business validation should happen in DomainServices, as they are part of the business rules.
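For illustration, the kind of generic ResultObject I have in mind might look like this (just a sketch, written in Python for brevity; the names mirror the methods mentioned above):

class Result:
    """Generic result returned by an Application Service (hypothetical)."""
    def __init__(self, value=None, messages=None):
        self._value = value
        self._messages = messages or []  # e.g. ('error', 'text') tuples

    def get_result(self):
        return self._value

    def has_errors(self):
        return any(level == 'error' for level, _ in self._messages)

    def get_messages(self):
        return self._messages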
Communicating between layers in DDD
domain driven design
null
_codereview.144598
I found this question online as an example from a technical interview and it seems to be a flawed question in many ways. It made me curious how I would answer it. So, if you were on a technical Python interview and asked to do the following:

"Write an algorithm that receives a dictionary, converts it to a GET string, and is optimized for big data."

Which option would you consider the best answer? Any other code-related comments are welcome.

Common:

import requests

base_url = "https://api.github.com"
data = {'per_page': 10}
node = 'users/arctelix/repos'

Option 1:

My first thought was to just answer the question in the simplest form and use pagination to control the size of the data returned.

def get_query_str(node, data=None):
    # base query
    query_str = "%s/%s" % (base_url, node)
    # build query params dict
    query_params = "&".join(["%s=%s" % (k, str(v)) for k, v in data.items()])
    if query_params:
        query_str += "?%s" % query_params
    return query_str

print("\n--Option 1--\n")
url = get_query_str(node, data)
print("url = %s" % url)

Option 2:

Well, that's not really optimized for big data, and the requests library will convert a dict to params for me. Secondly, a generator would be a great way to keep memory in check with very large data sets.

def get_resource(node, data=None):
    url = "%s/%s" % (base_url, node)
    print("geting resource : %s %s" % (url, data))
    resp = requests.get(url, params=data)
    json = resp.json()
    yield json

print("\n--Option 2--\n")
results = get_resource(node, data)
for r in results:
    print(r)

Option 3:

Just in case the interviewer was really looking to see if I knew how join() and a list comprehension could be used to convert a dictionary to a string of query parameters. Let's put it all together and use a generator for not only the pages, but the objects as well. get_query_str is totally unnecessary, but again the task was to write something that returned a GET string.

class Github:
    base_url = "https://api.github.com"

    def get_query_str(self, node, data=None):
        # base query
        query_str = "%s/%s" % (self.base_url, node)
        # build query params dict
        query_params = "&".join(["%s=%s" % (k, str(v)) for k, v in data.items()])
        if query_params:
            query_str += "?%s" % query_params
        return query_str

    def get(self, node, data=None):
        data = data or {}
        data['per_page'] = data.get('per_page', 50)
        page = range(0, data['per_page'])
        p = 0
        while len(page) == data['per_page']:
            data['page'] = p
            query = self.get_query_str(node, data)
            page = list(self.req_resource(query))
            p += 1
            yield page

    def req_resource(self, query):
        print("geting resource : %s" % query)
        r = requests.get(query)
        j = r.json()
        yield j

gh = Github()
pages = gh.get(node, data)

print("\n--Option 3--\n")
for page in pages:
    for repo in page:
        print("repo=%s" % repo)
Algorithm that receives a dictionary, converts it to a GET string, and is optimized for big data
python;interview questions;comparative review
There are a bunch of things that are not said or left implicit by the question, so I'm going to assume that the "optimized for big data" part is about the GitHub API response; so I'd go with the third version. But first, some general advice:

- Document your code. Docstrings are missing all around your code. You should describe what each part of your API is doing or no-one will make the effort to figure it out and use it.

- Don't use %, sprintf-like formatting. These are things of the past and have been superseded by the str.format function. You may also want to try and push newest features such as formatted string literals (or f-strings) of Python 3.6: query_str = f'{self.base_url}/{node}'.

- You should use a generator expression rather than a list comprehension in your '&'.joins as you will discard the list anyway. It will save you some memory management. Just remove the brackets and you're good to go.

- You shouldn't use f"{k}={v}" for k, v in data.items(): what if a key or a value contains a '&' or an '='? You should encode the values in your dictionary before joining them. urllib.parse.urlencode (which is called by requests for you) is your friend.

Now about handling the response:

- page = list(self.req_resource(query)) defeats the very purpose of having a generator in the first place. Consider using yield from self.req_resource(query) instead.

- Pagination of the GitHub API should be handled using the Link header instead of manually incrementing the page number. Use the requests headers dictionary on your response to easily get them.

- Consider using the threading module to fetch the next page of data while you are processing the current one.
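For instance, Link-header pagination could look something like this (a sketch; requests exposes the parsed Link header as response.links):

import requests

def iter_pages(url, params=None):
    """Yield each page of a paginated GitHub API resource."""
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        yield resp.json()
        # requests parses the Link header into a dict for us
        url = resp.links.get('next', {}).get('url')
        params = None  # the 'next' URL already embeds the query string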
_softwareengineering.309052
I have written some projects. I try to annotate or comment everything possible (it is hard to say whether I do it correctly in terms of clear explanation). I would like to write documentation for them, but I have a problem with examples.

It is not a problem to write, in the annotation above each function (where it is really needed), examples of all possible uses of that function - but it would be good to describe each of those examples. For this, it would be better to use an additional file where they can be written up as well as possible. But what format and what file type should I use for it?

A secondary question (partly related to the main one) is that I am not sure whether the examples need to be runnable, because in the case of functions stored in traits it is probably very hard to make those examples runnable.
Best practices for writing examples
documentation
null
_codereview.30139
I wrote a script for getting the stats of codereview.SE from the front page into a file. Github link if someone prefers reading there. Here is data_file.txt's current contents:

23-08-2013
dd-mm-yyyy, questions, answers, %answered, users, visitors/day
22-08-2013,9079,15335,88,26119,7407,
23-08-2013,9094,15354,88,26167,7585,

The first line contains the last date at which the data was written into the file. It is a check to make sure that I don't write the data for the same date twice in the file. Also note that there is a newline at the end which isn't being shown above.

What I plan to do is to collect the data for many days and then use matplotlib to get graphs for finding out about the growth of this website. The part about plotting is currently in progress, so I am not putting that here.

The Python script that I am using is below. I would like a general review of this script and any suggestions about its design in particular.

#! python3
import urllib.request
import datetime

FILE_NAME = 'data_file.txt'
CURRENT_URL = 'http://codereview.stackexchange.com/'

def today_date():
    return datetime.date.today().strftime('%d-%m-%Y')

def already_written():
    with open(FILE_NAME, 'a+') as f:
        f.seek(0)
        first_line = f.readline()
    if today_date() == first_line[:-1]:
        return True
    return False

def parse(line):
    """This separates the stat-name and associated number"""
    temp = [0, '']
    braces = False
    for c in line:
        if c == '<':
            braces = True
        elif c == '>':
            braces = False
        elif braces is True or c in [' ', ',', '%']:
            continue
        elif c.isdigit():
            temp[0] *= 10
            temp[0] += int(c)
        else:
            temp[1] += c
    return temp

def write_stats():
    '''This writes the stats into the file'''
    with open(FILE_NAME, 'r') as f:
        data = f.readlines()
    with open(FILE_NAME, 'w') as f:
        f.write(today_date() + '\n')
        f.writelines(data[1:])
        url_handle = urllib.request.urlopen(CURRENT_URL)
        write_this = today_date() + ','
        for line in url_handle:
            temp_line = str(line)[2:-5]
            if 'stats-value' in temp_line and 'label' in temp_line:
                temp = parse(temp_line)
                write_this += str(temp[0]) + ','
        else:
            write_this += '\n'
        f.write(write_this)

def main():
    if not already_written():
        write_stats()

if __name__ == "__main__":
    main()

EDIT: I think this line might need some explanation.

if 'stats-value' in temp_line and 'label' in temp_line:

I chose this for finding out which lines to parse in the HTML source of codereview.SE's front page. Only the 5 lines containing the stats have these 2 strings in them, so they tell me which lines to parse.
Review Python script for getting the stats of codereview.SE into a file for analysis
python;python 3.x
Firstly, you don't need to store the last date at the beginning of the file since it's already at the end.

Secondly, you shouldn't compute today's date twice.

Thirdly, you should use Stack Exchange's API instead of trying to extract information from Code Review's home page. It has everything except visitors/day.

Fourthly, the date format %Y-%m-%d is better than %d-%m-%Y because it allows string comparison of dates.

So, here's what I came up with:

#!/usr/bin/env python3

import datetime
import gzip
import json
from urllib.request import Request, urlopen

FILE_NAME = 'data_file.txt'
API_URL = 'http://api.stackexchange.com/2.1/info?site=codereview'

def last(iterable, default):
    r = default
    for v in iterable:
        r = v
    return r

def main():
    today = datetime.date.today().strftime('%Y-%m-%d')
    with open(FILE_NAME, 'a+') as f:
        f.seek(0)
        last_date = last(f, '').split(',')[0]
        if last_date == today:
            return  # we already have the data for today
        elif last_date > today:
            raise Exception('last date is in the future')
        req = Request(API_URL, headers={'Accept-Encoding': 'gzip'})
        r = gzip.decompress(urlopen(req).read()).decode('utf8')
        d = json.loads(r)['items'][0]
        print(today, d['total_questions'], d['total_answers'],
              d['total_unanswered'], d['total_users'],
              sep=',', file=f)

if __name__ == "__main__":
    main()

The gzip stuff is to work around a bug in Stack Exchange's API server: it always compresses the response even if you don't ask, which is a protocol violation. Using the requests library might make it easier.

The way I get the last line of the file isn't the most efficient but it should be good enough.
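As a rough sketch of the requests version (the library negotiates and decompresses gzip transparently, so the whole workaround disappears):

import requests

d = requests.get(API_URL).json()['items'][0]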
_unix.279840
A video with a buggy, or at least too complex, soundtrack broke my Intel Ivy Bridge (i5-3337U) laptop running Ubuntu 15.10. In the middle of VLC playing the video, a shrill, slightly grating sound interrupted the soundtrack, and now it comes up at every boot.

I know this is not hardware-related as:
- An alternate OS (KaOSx) works perfectly, even with the buggy video.
- The sound follows the output, i.e. it's on the laptop speakers until I connect external ones through the jack.

How can I stop this!? Obviously it's driving me nuts, and the internet is full of people with no sound but quiet about my issue...

My guess would go with some kind of pulseaudio state being messed up...

Update - some details:
- I am using KDE Plasma 5, with sddm (I guess) to log in.
- The sound starts after login, during the KDE startup scrollbar.
Permanent shrill sound on ubuntu since buggy vlc video
ubuntu;audio;kde;pulseaudio;intel
null
_codereview.127639
Despite looking terribly ugly, this program is pretty interesting due to some specific connections to cellular automata theory. I'm looking for ways of simplifying it further. This program takes 2.7 seconds to run on my machine. Is it possible, by merely changing the logic / control flow in magic, to speed it up? How fast can it be?

#include <stdio.h>

// Rewrites 12 consecutive ints based on bizarre logic.
void magic(int* mem){
    static int next = 1000000;
    int ak = mem[0+0], ax = mem[0+1], ay = mem[0+2], az = mem[0+3],
        bk = mem[4+0], bx = mem[4+1], by = mem[4+2], bz = mem[4+3],
        ck = mem[8+0], cx = mem[8+1], cy = mem[8+2], cz = mem[8+3],
        A, B, C, D;
    if (ak == 0 && bk == 0 && ck == 0) return;
    if (ak == 0 && bk == 0 && ck != 0) ak = ck, ax = cx, ay = cy, az = cz, ck = 0, cx = 0, cy = 0, cz = 0;
    if (ak != 0 && bk != 0 && ck == 0) ck = bk, cx = bx, cy = by, cz = bz, bk = 0, bx = 0, by = 0, bz = 0;
    if (ak == 1 && bk == 0 && ck != 0 && ax == -cx) cx = ay, ak = 0, ax = 0, ay = 0, az = 0;
    if (ak == 1 && bk == 0 && ck != 0 && ax == -cy) cy = ay, ak = 0, ax = 0, ay = 0, az = 0;
    if (ak == 1 && bk == 0 && ck != 0 && ax == -cz) cz = ay, ak = 0, ax = 0, ay = 0, az = 0;
    if (ck == 1 && bk == 0 && ck != 0 && cx == -ax) ax = cy, ck = 0, cx = 0, cy = 0, cz = 0;
    if (ck == 1 && bk == 0 && ck != 0 && cx == -ay) ay = cy, ck = 0, cx = 0, cy = 0, cz = 0;
    if (ck == 1 && bk == 0 && ck != 0 && cx == -az) az = cy, ck = 0, cx = 0, cy = 0, cz = 0;
    if (ay == -az && ay < 0) ay *= -1, az *= -1;
    if (by == -bz && by < 0) by *= -1, bz *= -1;
    if (cy == -cz && cy < 0) cy *= -1, cz *= -1;
    if (ak == 1 && ax == -ay) ak = 0, ax = 0, ay = 0, az = 0;
    if (ck == 1 && cx == -cy) ck = 0, cx = 0, cy = 0, cz = 0;
    if (ak > 0 && bk == 0 && ck > 0 && ((ax > 0 && cx < 0) || (ak == 1 && ck != 1 && ax > 0) || (ck == 1 && ak != 1 && cx < 0))){
        ak = ak+ck, ck = ak-ck, ak = ak-ck;
        ax = ax+cx, cx = ax-cx, ax = ax-cx;
        ay = ay+cy, cy = ay-cy, ay = ay-cy;
        az = az+cz, cz = az-cz, az = az-cz;
        if (ax == -cx) ax *= -1, cx *= -1;
        if (ax == -cy) ax *= -1, cy *= -1;
        if (ax == -cz) ax *= -1, cz *= -1;
        if (ay == -cx) ay *= -1, cx *= -1;
        if (ay == -cy) ay *= -1, cy *= -1;
        if (ay == -cz) ay *= -1, cz *= -1;
        if (az == -cx) az *= -1, cx *= -1;
        if (az == -cy) az *= -1, cy *= -1;
        if (az == -cz) az *= -1, cz *= -1;
    };
    if (ak < -1 && bk == 0)
        A = ak, B = ax, C = ay, D = az,
        ak = -A, ax = C, ay = B>0?B+0:-(-B+0), az = B>0?B+1:-(-B+2),
        bk = -A, bx = D, by = B>0?B+2:-(-B+1), bz = B>0?B+3:-(-B+3);
    if (ak > 1 && bk == 0 && ck > 1 && ax == -cx && ak == ck)
        ak = 1, ax = ay, ay = cy, ck = 1, cx = az, cy = cz, az = 0, cz = 0;
    if (ak > 1 && bk == 0 && ck > 1 && ax == -cx && ak != ck)
        A = (next+=6), ak = ak+ck, ck = ak-ck, ak = ak-ck, ak = -ak, ck = -ck, ax = A, cx = -A;
    mem[0+0] = ak, mem[0+1] = ax, mem[0+2] = ay, mem[0+3] = az,
    mem[4+0] = bk, mem[4+1] = bx, mem[4+2] = by, mem[4+3] = bz,
    mem[8+0] = ck, mem[8+1] = cx, mem[8+2] = cy, mem[8+3] = cz;
};

void compute(int *mem, int len){
    for (int j=0; j<3; ++j)
        for (int i=j; i<len-2; i+=3)
            magic(mem+i*4);
};

const int program[] =
{3,2,3,-1,0,0,0,0,3,4,5,-2,0,0,0,0,3,-4,6,7,0,0,0,0,62,-6,8,9,0,0,0,0,3,-8,10,11,0,0,0,0,3,-7,-10,12,0,0,0,0,3,13,14,-12,0,0,0,0,4,15,16,-13,0,0,0,0,5,17,18,-15,0,0,0,0,6,19,20,-17,0,0,0,0,7,21,22,-19,0,0,0,0,8,23,24,-21,0,0,0,0,9,25,26,-23,0,0,0,0,10,27,28,-25,0,0,0,0,11,29,30,-27,0,0,0,0,12,31,32,-29,0,0,0,0,13,33,34,-31,0,0,0,0,14,35,36,-33,0,0,0,0,15,37,38,-35,0,0,0,0,16,39,40,-37,0,0,0,0,17,41,42,-39,0,0,0,0,18,43,44,-41,0,0,0,0,19,45,46,-43,0,0,0,0,20,47,48,-45,0,0,0,0,21,49,50,-47,0,0,0,0,22,51,52,-49,0,0,0,0,23,53,54,-51,0,0,0,0,24,55,56,-53,0,0,0,0,25,57,58,-55,0,0,0,0,26,59,60,-57,0,0,0,0,27,61,62,-59,0,0,0,0,28,63,64,-61,0,0,0,0,29,65,66,-63,0,0,0,0,30,67,68,-65,0,0,0,0,31,69,70,-67,0,0,0,0,32,71,72,-69,0,0,0,0,33,73,74,-71,0,0,0,0,34,75,76,-73,0,0,0,0,35,77,78,-75,0,0,0,0,36,79,80,-77,0,0,0,0,37,81,82,-79,0,0,0,0,38,83,84,-81,0,0,0,0,39,85,86,-83,0,0,0,0,40,87,88,-85,0,0,0,0,41,89,90,-87,0,0,0,0,42,91,92,-89,0,0,0,0,43,93,94,-91,0,0,0,0,44,95,96,-93,0,0,0,0,45,97,98,-95,0,0,0,0,46,99,100,-97,0,0,0,0,47,101,102,-99,0,0,0,0,48,103,104,-101,0,0,0,0,49,105,106,-103,0,0,0,0,50,107,108,-105,0,0,0,0,51,109,110,-107,0,0,0,0,52,111,112,-109,0,0,0,0,53,113,114,-111,0,0,0,0,54,115,116,-113,0,0,0,0,55,117,118,-115,0,0,0,0,56,119,120,-117,0,0,0,0,57,121,122,-119,0,0,0,0,58,123,124,-121,0,0,0,0,59,125,126,-123,0,0,0,0,60,127,128,-125,0,0,0,0,61,-9,129,-127,0,0,0,0,3,-129,-11,130,0,0,0,0,3,-128,-130,131,0,0,0,0,3,-126,-131,132,0,0,0,0,3,-124,-132,133,0,0,0,0,3,-122,-133,134,0,0,0,0,3,-120,-134,135,0,0,0,0,3,-118,-135,136,0,0,0,0,3,-116,-136,137,0,0,0,0,3,-114,-137,138,0,0,0,0,3,-112,-138,139,0,0,0,0,3,-110,-139,140,0,0,0,0,3,-108,-140,141,0,0,0,0,3,-106,-141,142,0,0,0,0,3,-104,-142,143,0,0,0,0,3,-102,-143,144,0,0,0,0,3,-100,-144,145,0,0,0,0,3,-98,-145,146,0,0,0,0,3,-96,-146,147,0,0,0,0,3,-94,-147,148,0,0,0,0,3,-92,-148,149,0,0,0,0,3,-90,-149,150,0,0,0,0,3,-88,-150,151,0,0,0,0,3,-86,-151,152,0,0,0,0,3,-84,-152,153,0,0,0,0,3,-82,-153,154,0,0,0,0,3,-80,-154,155,0,0,0,0,3,-78,-155,156,0,0,0,0,3,-76,-156,157,0,0,0,0,3,-74,-157,158,0,0,0,0,3,-72,-158,159,0,0,0,0,3,-70,-159,160,0,0,0,0,3,-68,-160,161,0,0,0,0,3,-66,-161,162,0,0,0,0,3,-64,-162,163,0,0,0,0,3,-62,-163,164,0,0,0,0,3,-60,-164,165,0,0,0,0,3,-58,-165,166,0,0,0,0,3,-56,-166,167,0,0,0,0,3,-54,-167,168,0,0,0,0,3,-52,-168,169,0,0,0,0,3,-50,-169,170,0,0,0,0,3,-48,-170,171,0,0,0,0,3,-46,-171,172,0,0,0,0,3,-44,-172,173,0,0,0,0,3,-42,-173,174,0,0,0,0,3,-40,-174,175,0,0,0,0,3,-38,-175,176,0,0,0,0,3,-36,-176,177,0,0,0,0,3,-34,-177,178,0,0,0,0,3,-32,-178,179,0,0,0,0,3,-30,-179,180,0,0,0,0,3,-28,-180,181,0,0,0,0,3,-26,-181,182,0,0,0,0,3,-24,-182,183,0,0,0,0,3,-22,-183,184,0,0,0,0,3,-20,-184,185,0,0,0,0,3,-18,-185,186,0,0,0,0,3,-16,-186,-14,0,0,0,0,3,-5,187,188,0,0,0,0,121,-187,189,190,0,0,0,0,3,-189,191,192,0,0,0,0,3,-188,-191,193,0,0,0,0,3,194,195,-193,0,0,0,0,63,196,197,-194,0,0,0,0,64,198,199,-196,0,0,0,0,65,200,201,-198,0,0,0,0,66,202,203,-200,0,0,0,0,67,204,205,-202,0,0,0,0,68,206,207,-204,0,0,0,0,69,208,209,-206,0,0,0,0,70,210,211,-208,0,0,0,0,71,212,213,-210,0,0,0,0,72,214,215,-212,0,0,0,0,73,216,217,-214,0,0,0,0,74,218,219,-216,0,0,0,0,75,220,221,-218,0,0,0,0,76,222,223,-220,0,0,0,0,77,224,225,-222,0,0,0,0,78,226,227,-224,0,0,0,0,79,228,229,-226,0,0,0,0,80,230,231,-228,0,0,0,0,81,232,233,-230,0,0,0,0,82,234,235,-232,0,0,0,0,83,236,237,-234,0,0,0,0,84,238,239,-236,0,0,0,0,85,240,241,-238,0,0,0,0,86,242,243,-240,0,0,0,0,87,244,245,-242,0,0,0,0,88,246,247,-244,0,0,0,0,89,248,249,-246,0,0,0,0,90,250,251,-248,0,0,0,0,91,252,253,-250,0,0,0,0,92,254,255,-252,0,0,0,0,93,256,257,-254,0,0,0,0,94,258,259,-256,0,0,0,0,95,260,261,-258,0,0,0,0,96,262,263,-260,0,0,0,0,97,264,265,-262,0,0,0,0,98,266,267,-264,0,0,0,0,99,268,269,-266,0,0,0,0,100,270,271,-268,0,0,0,0,101,272,273,-270,0,0,0,0,102,274,275,-272,0,0,0,0,103,276,277,-274,0,0,0,0,104,278,279,-276,0,0,0,0,105,280,281,-278,0,0,0,0,106,282,283,-280,0,0,0,0,107,284,285,-282,0,0,0,0,108,286,287,-284,0,0,0,0,109,288,289,-286,0,0,0,0,110,290,291,-288,0,0,0,0,111,292,293,-290,0,0,0,0,112,294,295,-292,0,0,0,0,113,296,297,-294,0,0,0,0,114,298,299,-296,0,0,0,0,115,300,301,-298,0,0,0,0,116,302,303,-300,0,0,0,0,117,304,305,-302,0,0,0,0,118,306,307,-304,0,0,0,0,119,308,309,-306,0,0,0,0,120,-190,310,-308,0,0,0,0,3,-310,-192,311,0,0,0,0,3,-309,-311,312,0,0,0,0,3,-307,-312,313,0,0,0,0,3,-305,-313,314,0,0,0,0,3,-303,-314,315,0,0,0,0,3,-301,-315,316,0,0,0,0,3,-299,-316,317,0,0,0,0,3,-297,-317,318,0,0,0,0,3,-295,-318,319,0,0,0,0,3,-293,-319,320,0,0,0,0,3,-291,-320,321,0,0,0,0,3,-289,-321,322,0,0,0,0,3,-287,-322,323,0,0,0,0,3,-285,-323,324,0,0,0,0,3,-283,-324,325,0,0,0,0,3,-281,-325,326,0,0,0,0,3,-279,-326,327,0,0,0,0,3,-277,-327,328,0,0,0,0,3,-275,-328,329,0,0,0,0,3,-273,-329,330,0,0,0,0,3,-271,-330,331,0,0,0,0,3,-269,-331,332,0,0,0,0,3,-267,-332,333,0,0,0,0,3,-265,-333,334,0,0,0,0,3,-263,-334,335,0,0,0,0,3,-261,-335,336,0,0,0,0,3,-259,-336,337,0,0,0,0,3,-257,-337,338,0,0,0,0,3,-255,-338,339,0,0,0,0,3,-253,-339,340,0,0,0,0,3,-251,-340,341,0,0,0,0,3,-249,-341,342,0,0,0,0,3,-247,-342,343,0,0,0,0,3,-245,-343,344,0,0,0,0,3,-243,-344,345,0,0,0,0,3,-241,-345,346,0,0,0,0,3,-239,-346,347,0,0,0,0,3,-237,-347,348,0,0,0,0,3,-235,-348,349,0,0,0,0,3,-233,-349,350,0,0,0,0,3,-231,-350,351,0,0,0,0,3,-229,-351,352,0,0,0,0,3,-227,-352,353,0,0,0,0,3,-225,-353,354,0,0,0,0,3,-223,-354,355,0,0,0,0,3,-221,-355,356,0,0,0,0,3,-219,-356,357,0,0,0,0,3,-217,-357,358,0,0,0,0,3,-215,-358,359,0,0,0,0,3,-213,-359,360,0,0,0,0,3,-211,-360,361,0,0,0,0,3,-209,-361,362,0,0,0,0,3,-207,-362,363,0,0,0,0,3,-205,-363,364,0,0,0,0,3,-203,-364,365,0,0,0,0,3,-201,-365,366,0,0,0,0,3,-199,-366,367,0,0,0,0,3,-197,-367,-195,0,0,0,0,3,-3,368,-368};

const int node_count = sizeof program / sizeof(int) / 4;
const int space = 4000;
int memory[space*4];

int main(){
    for (int i=0; i < space*4; ++i)
        memory[i] = i < node_count*4 ? program[i] : 0;
    // If this halts, `magic` is probably implemented correctly.
    while (memory[1]!=-1 || memory[2]!=-memory[3])
        compute(memory, space);
    printf("Pass!");
}
Rewrite 12 consecutive ints based on bizarre logic
performance;algorithm;c;cellular automata
null
_unix.293340
I am a beginner at scripting. I am making a tool script for my theme with 2 functions: check for update, and reinstall theme.

So here is the code for the selection menu:

PS3='Choose an option: '
options=("Check for update" "Reinstall theme")
select opt in "${options[@]}"
do
    case $opt in
        "Check for update")
            echo "Checking update"
            ;;
        "Reinstall theme")
            echo "Reinstalling"
            ;;
        *) echo "invalid option";;
    esac
done

When running, it appears like this:

1) Check for update
2) Reinstall theme
Choose an option:

I type 1 and press Enter, and the check-for-update command is performed. The problem is that when the command has finished, the script re-displays "Choose an option:" but not the menu. This makes it hard for users to choose without seeing the menu (especially after a long script):

1) Check for update
2) Reinstall theme
Choose an option: 1
Checking update
Choose an option:

So how can I re-display the menu after an option has been performed?
bash - How can I re-display the selection menu after a selection is chosen and performed?
bash;select
I'm guessing you really want something like this:

function check_update {
    echo "Checking update"
}

function reinstall_theme {
    echo "Reinstalling theme"
}

all_done=0
while (( !all_done )); do
    options=("Check for update" "Reinstall theme")
    echo "Choose an option:"
    select opt in "${options[@]}"; do
        case $REPLY in
            1) check_update; break ;;
            2) reinstall_theme; break ;;
            *) echo "What's that?" ;;
        esac
    done
    echo "Doing other things..."
    echo "Are we done?"
    select opt in "Yes" "No"; do
        case $REPLY in
            1) all_done=1; break ;;
            2) break ;;
            *) echo "Look, it's a simple question..." ;;
        esac
    done
done

I've separated out the tasks into separate functions to keep the first case statement smaller. I've also used $REPLY rather than the option string in the case statements, since this is shorter and won't break if you decide to change the option strings but forget to update them in both places. I'm also choosing not to touch PS3, as that may affect later select calls in the script. If I wanted a different prompt, I would set it once and leave it (maybe PS3="Your choice: "). This would give a script with multiple questions a more uniform feel.

I've added an outer loop that iterates over everything until the user is done. You need this loop to re-display the question in the first select statement.

I've added break to the case statements; otherwise there's no way to exit other than interrupting the script.

The purpose of a select is to get an answer to one question from the user, not really to be the main event-loop of a script (by itself). In general, a select-case should really only set a variable or call a function and then carry on. A shorter version that incorporates a Quit option in the first select:

function check_update {
    echo "Checking update"
}

function reinstall_theme {
    echo "Reinstalling theme"
}

all_done=0
while (( !all_done )); do
    options=("Check for update" "Reinstall theme" "Quit")
    echo "Choose an option:"
    select opt in "${options[@]}"; do
        case $REPLY in
            1) check_update; break ;;
            2) reinstall_theme; break ;;
            3) all_done=1; break ;;
            *) echo "What's that?" ;;
        esac
    done
done
echo "Bye bye!"
_webmaster.98937
I have strange URLs that Bing is trying to crawl. The Umbraco CMS throws an exception when Bingbot requests these URLs. Bing appears to think that these are valid URLs, but the URLs do not exist.

The Bing Webmaster Tools screenshot (not reproduced here):

How can I remove this bug from Bing Webmaster Tools?
Bingbot is requesting strange, invalid URLs
bing;bing webmaster tools;bingbot
null
_unix.313882
I recently got a Dell bluetooth keyboard, and I haven't had any luck with getting it to connect. I've tried with both the Blueman manager and with the process recommended here (https://forums.linuxmint.com/viewtopic.php?t=125166), and either way I get basically the same results with hcidump.

Here's what I get from Blueman:

HCI sniffer - Bluetooth packet analyzer ver 5.37
device: hci0 snap_len: 1500 filter: 0xffffffffffffffff
2016-03-06 12:07:42.634413 < HCI Command: Create Connection (0x01|0x0005) plen 13
    bdaddr 00:07:61:B1:5E:97 ptype 0xcc18 rswitch 0x01 clkoffset 0x0000
    Packet type: DM1 DM3 DM5 DH1 DH3 DH5
2016-03-06 12:07:42.647646 > HCI Event: Command Status (0x0f) plen 4
    Create Connection (0x01|0x0005) status 0x00 ncmd 1
2016-03-06 12:07:47.769619 > HCI Event: Connect Complete (0x03) plen 11
    status 0x04 handle 0 bdaddr 00:07:61:B1:5E:97 type ACL encrypt 0x00
    Error: Page Timeout

And here's what I get when I try from the command line:

HCI sniffer - Bluetooth packet analyzer ver 5.37
device: hci0 snap_len: 1500 filter: 0xffffffffffffffff
2016-03-06 12:10:16.007020 < HCI Command: Inquiry (0x01|0x0001) plen 5
    lap 0x9e8b33 len 8 num 0
2016-03-06 12:10:16.008602 > HCI Event: Command Status (0x0f) plen 4
    Inquiry (0x01|0x0001) status 0x00 ncmd 1
2016-03-06 12:10:26.251518 > HCI Event: Inquiry Complete (0x01) plen 1
    status 0x00
2016-03-06 12:11:32.579945 < HCI Command: Create Connection (0x01|0x0005) plen 13
    bdaddr 00:07:61:B1:5E:97 ptype 0xcc18 rswitch 0x01 clkoffset 0x0000
    Packet type: DM1 DM3 DM5 DH1 DH3 DH5
2016-03-06 12:11:32.593057 > HCI Event: Command Status (0x0f) plen 4
    Create Connection (0x01|0x0005) status 0x00 ncmd 1
2016-03-06 12:11:37.715087 > HCI Event: Connect Complete (0x03) plen 11
    status 0x04 handle 0 bdaddr 00:07:61:B1:5E:97 type ACL encrypt 0x00
    Error: Page Timeout

I'm running Mint 17.1, MATE 1.8.1; the kernel is 3.13.0-39-generic.

Halp?
Dell Bluetooth keyboard, keep getting 'Page Timeout'
linux mint;keyboard;bluetooth;timeout;blueman
null
_softwareengineering.275514
OK, so I have a programming assignment: I need to create a class that represents complex numbers (good so far), add them to a list (good so far), and then output whether each individual number is real (a+0x), imaginary (0+bx), or complex (a+bx). A Complex object is made by initializing the real portion and the imaginary portion:

Complex(int _real, int _imag) { real = _real; imag = _imag; };
Complex(int _real) { real = _real; imag = 0; };
Complex() { real, imag = 0; }

The problem is my professor has silly stipulations, such as: all of the data fields must be private, and there can be no get methods to access any portion of an individual Complex number from outside the class.

Instead, the only public methods I have to work with are plus, subtract, and multiply methods that take another Complex number and return the sum, difference and product of the two respectively, and a conjugate method that returns an individual Complex's conjugate. With only those four public methods at my disposal, what formula can I use to compute whether an individual number is real, imaginary or complex?

This is less a coding issue and more of a logic issue, I know.
Complex Number help
java;object oriented
null
_cs.1731
Define $\mathrm{Prefix}(L) = \{x \mid \exists y.\, xy \in L\}$. I'd love your help with proving that $\mathsf{RE}$ languages are closed under $\mathrm{Prefix}$.

I know that recursively enumerable languages are formal languages for which there exists a Turing machine that will halt and accept when presented with any string in the language as input, but may either halt and reject or loop forever when presented with a string not in the language.

How should I approach this kind of proof?
Proving that recursively enumerable languages are closed under taking prefixes
formal languages;turing machines;closure properties
Below is a hint for working with the Turing machine (TM) formalism for RE languages. But finishing that approach from the hint depends on how you've been working with TMs.

You have a TM, say $T_L$, that accepts $L$, and you want to construct a new TM $T'_L$ for $\mathrm{Prefix}(L)$. You can start $T'_L$ on a string $x$ and then do something to finish up with the hypothetical $y$ that completes $xy \in L$. How you do that depends somewhat on the methods you have been using to work with TMs. But that's only a hint so far.
_unix.171010
I got a homework assignment today. Can you help me out, guys?

"Determine, on any machine of the PC pool, the regular file on the local root file system that has the most hard links! Avoid searching the user home directories or other NFS-mounted directories. Give the file name of a regular file, the number of hard links, and a command with which you could look at all the names of this file!"

With best regards!
Marko
Determine the file with the most hard links
hard link
null
_cstheory.19341
This question was originally posted on MathOverflow. Having obtained no answers after one month, I've decided to cross-post it here.

Let $H = (V, E)$ be a $k$-uniform connected hypergraph, with $n = |V|$ vertices and $m = |E|$ hyperedges. Let $O_w$ be the number of distinct edge-induced subgraphs of $H$ having $w$ vertices and an odd number of hyperedges. Let $E_w$ be the number of distinct edge-induced subgraphs of $H$ having $w$ vertices and an even number of hyperedges. Let $\Delta_w = O_w - E_w$.

Let $b_w$ be the number of bits required to encode $\Delta_w$, i.e. $b_w = \log_2 \Delta_w$. Let $b = \max_w b_w$.

I'm interested in how $b$ grows. I would like to determine the best possible upper bound for $b$ which is expressible as a function of only $n$, $m$ and $k$. More precisely, I would like to determine a function $f(n,m,k)$ having both of the following properties:

1. $b \leq f(n, m, k)$ for any $k$-uniform hypergraph $H$ having $n$ vertices and $m$ hyperedges.
2. $f(n,m,k)$ grows slower than any other function which satisfies the 1st property.

In general, both $O_w$ and $E_w$ are exponential in $m$; therefore I expect that their difference $\Delta_w$ is not exponential in $m$, and thus that $b \in o(m)$. However, for the moment I have no clue how to try to prove this.

Questions
1. How does $b$ grow with respect to $n$, $m$ and $k$?
2. Are there any relevant results in the literature?
3. Any hint on how to try to prove $b \in o(m)$?
Bits required to encode difference between number of subgraphs with odd number of edges and number of subgraphs with even number of edges
graph theory;co.combinatorics
null