Q:
How to deal with modals not opening in PhantomJS?
I have Watir code that uses PhantomJS as a headless browser. I'm trying to fill in a form on a website using this code, because PhantomJS doesn't show the modal on a simple click (browser.link(id: 'choise-sity').click):
require 'watir'
browser = Watir::Browser.new :phantomjs
browser.goto 'http://sales.ubrr.ru/open'
browser.execute_script(" $('#modalCityChoise').show();$('#modalCityChoise').css({'opacity':'1', 'top':'0'})")
browser.link(text: 'Архангельск').click
browser.screenshot.save 'a.png'
The current code does show the modal, but I'm unable to click links in it afterwards.
My question is: is there an easier way to deal with modals in PhantomJS (they said support for modals was added)? Or how do I deal with this particular example via JavaScript injection?
edit: I managed to deal with this modal just by changing the value of a hidden input, browser.execute_script(" $('#OpenBkiForm_city_code').val('4600000100000') "), but the question still stands.
A:
I misinterpreted the issue: modals were handled correctly by PhantomJS, but I couldn't see that, because at the time of my screenshotting they were only beginning to fade in, which I hadn't thought of.
The real issue was that I couldn't press the button in the modal window. It turns out that was because of the window size: even when screenshots captured the button, it wasn't actually accessible to Watir. This resolved the whole issue:
$browser.driver.manage.window.maximize
| {
"pile_set_name": "StackExchange"
} |
Q:
Non uniform continuity of $f(x)=x^3$ on the interval $[10,\infty)$
How do I show that $f(x)=x^3$ is not uniformly continuous on the interval $[10,\infty)$?
I am well aware that $f(x)=x^3$ is not uniformly continuous on the set of real numbers, and I believe that on $[10,\infty)$ it is also not uniformly continuous. However, how do I show this for a given set? I have seen proofs of it on the reals but not on a given interval. How do I use the interval to help me show it's not uniformly continuous?
A:
One shortcut: if the function is continuous and differentiable over a domain, and the derivative is bounded for all $x$ in the domain, then the function is uniformly continuous over that domain.
If the domain is unbounded, e.g. $[3,\infty)$, the derivative of $f(x)=x^3$ goes to infinity as $x$ goes to infinity, and $f(x)$ is not uniformly continuous.
But over the interval $[0, 10^{10})$ it is.
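As a concrete instance of the shortcut on a bounded interval (a quick sketch via the mean value theorem, with $M$ denoting the right endpoint):

```latex
% f(x) = x^3 on [0, M]: |f'(c)| = 3c^2 \le 3M^2 for every c in the interval,
% so by the mean value theorem, for any x, y in [0, M],
|x^3 - y^3| = |f'(c)|\,|x - y| \le 3M^2\,|x - y|,
\qquad \text{so } \delta = \frac{\epsilon}{3M^2} \text{ works uniformly.}
```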
However, it is possible for the derivative to be undefined / unbounded and the function to still be uniformly continuous.
e.g. $f(x) = \sqrt x$
$\lim\limits_{x\to 0^+} f'(x) = \infty$
Yet, $f(x)$ is uniformly continuous over $[0,\infty)$.
If you want to be safe, always check against the definition.
The function is uniformly continuous if:
$\forall \epsilon>0,\exists \delta>0: \forall x,y \in [10,\infty), |x-y|<\delta \implies |x^3 - y^3|<\epsilon$
And therefore is not uniformly continuous if
$\exists \epsilon > 0: \forall \delta > 0, \exists x,y\in [10,\infty): |x-y|<\delta$ and $|x^3 - y^3|\ge\epsilon$
$|x^3 - y^3| = |x-y||x^2 +xy + y^2|$
For any $\epsilon, \delta > 0$ we can choose $x = \max\left(\sqrt{\frac{2\epsilon}{3\delta}},10\right), y = x+ \frac {\delta}{2}$
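A quick check that such a pair witnesses the failure (assuming we take $x \ge \max\left(\sqrt{2\epsilon/(3\delta)},\,10\right)$ and $y = x + \frac{\delta}{2}$):

```latex
% Since y > x > 0 we have x^2 + xy + y^2 > 3x^2, hence
|x - y| = \frac{\delta}{2} < \delta,
\qquad
|x^3 - y^3| = |x - y|\,(x^2 + xy + y^2)
            > \frac{\delta}{2}\cdot 3x^2
            \ge \frac{\delta}{2}\cdot\frac{2\epsilon}{\delta} = \epsilon .
```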
Q:
Firebase database rules only two out of three conditions work
I have a "UID" database (a uid root for each user).
I am trying to share "uid" roots between users, and I want to give read access to a "UID" root in one of three cases:
1. The user who access is the owner.
2. The user who access is located in "UID\PERMITTED_USERS" of target root.
3. The user who access is located in "UID\TEMP_USERS" of target root.
To accomplish this I created the following rule:
".read" : "$uid === auth.uid || (root.child(root.child(auth.uid).child('PRE_SHARE').val()).child('TEMP_USERS').hasChild(root.child(auth.uid).child('TEMP_PERMIT').val()) || root.child(root.child(auth.uid).child('CURRENT_SHARE').val()).child('PERMITTED_USERS').hasChild(auth.uid))"
But I was disappointed to discover that only the first two conditions are checked, and the third is not. (I changed the order of the conditions, and every time I could access using only the first two in a row.)
Is there a way to solve this?
EDIT:
Adding db example:
A:
So after lots of tests I found the problem. My code deletes PRE_SHARE as well as TEMP_USERS, and when the rule tries to access val() of the non-existent PRE_SHARE node, it gets a null pointer exception. Too bad Firebase doesn't log this exception; it would have saved me lots of time...
".read" : "$uid === auth.uid ||
(root.child(auth.uid).hasChild('PRE_SHARE') &&
root.child(root.child(auth.uid).child('PRE_SHARE').val()).hasChild('TEMP_USERS') &&
root.child(auth.uid).hasChild('TEMP_PERMIT') &&
root.child(root.child(auth.uid).child('PRE_SHARE').val()).child('TEMP_USERS').hasChild(root.child(auth.uid).child('TEMP_PERMIT').val())) ||
(root.child(auth.uid).hasChild('CURRENT_SHARE') &&
root.child(root.child(auth.uid).child('CURRENT_SHARE').val()).hasChild('PERMITTED_USERS') &&
root.child(root.child(auth.uid).child('CURRENT_SHARE').val()).child('PERMITTED_USERS').hasChild(auth.uid))"
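The fix above is essentially a guard-before-dereference pattern: confirm each intermediate node exists before calling val() on it. A plain-JavaScript sketch of the same idea (with a hypothetical object shape standing in for the database, purely for illustration):

```javascript
// Return true only if uid's PRE_SHARE target contains uid's TEMP_PERMIT key
// under TEMP_USERS. Every step is guarded, so a missing node yields false
// instead of a null dereference (the failure mode described above).
function canRead(root, uid) {
  const user = root[uid];
  return Boolean(
    user &&
    'PRE_SHARE' in user &&
    root[user.PRE_SHARE] &&
    'TEMP_USERS' in root[user.PRE_SHARE] &&
    'TEMP_PERMIT' in user &&
    user.TEMP_PERMIT in root[user.PRE_SHARE].TEMP_USERS
  );
}

// Example: alice's PRE_SHARE points at docA, and her TEMP_PERMIT key
// is listed in docA's TEMP_USERS, so the check passes.
const db = {
  alice: { PRE_SHARE: 'docA', TEMP_PERMIT: 'k1' },
  docA: { TEMP_USERS: { k1: true } }
};
console.log(canRead(db, 'alice')); // true
console.log(canRead(db, 'bob'));   // false (no such user, no crash)
```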
Q:
Application onCreate not being called in test
I am attempting to run my Android tests using the AndroidJUnitRunner via the command "gradle connectedDebugAndroidTests", and I noticed that when my tests run, my app's Application object is not being created and "onCreate" is not being called. I am guessing this is expected. However, my tests rely on this code being invoked before the tests can run.
Is there a way to get this to happen?
I tried creating a new manifest in the "androidTest" section of my app that defines the "application" attribute, but this doesn't seem to work either :(
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.myapp">
<application
android:name=".MyTestApplication"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round" />
</manifest>
A:
You should create a method annotated with @Before, and in that method use the following code to initialize your application class:
@Before
public void prepareApplication() {
MyTestApplication app = (MyTestApplication) InstrumentationRegistry.getInstrumentation().getTargetContext().getApplicationContext();
app.onCreate();
}
Q:
What is the phrase "Above all the hunt" translated into Latin?
I'm designing a sigil for my special forces team in a sci-fi book I'm writing, and without making this a 10,000-word post with backstory, the phrase on the sigil is "Above all, the hunt". Google and every other translation site out there do literal word-for-word translation, and I know that is not how Latin works, or really how translation between any two languages works, so I'm lost as to what it could be. Any help?
A:
I would suggest "venatio supra omnia".
There are many ways to translate "above all", and what I chose is a literal one.
To get started with future requests, you can look at an online Latin dictionary.
You can find a list in our dictionary list question.
If you type in "hunt", you will get several hunting-related words in Latin.
You can read their English translations and decide which word would have the most suitable tone for your use even if you can't put words together into sentences.
For hunting the case is pretty clear, but it's not so for all words.
If you have some suitable words together and maybe a Google translation, you can give them in your question; sometimes such background work really helps, and most importantly it shows your own effort.
Sometimes short phrases are the hardest ones to translate well, so it might help to give several ways to express your sentence in English and explain what you want to say.
Your translation request was simple enough, but I just wanted to give some advice to you (and others who end up reading this) to make things run smoothly when help is needed again.
Q:
Find row value, copy row and all the range underneath for data reduction
I am trying to use a macro to clean up data files and only copy on Sheet2 what is most relevant.
I have written the code to find the row I want the data to be copied from. However, I can only copy the row itself and not the range underneath. Please note I need the range to go from that row to the last column and last row, as the size of the matrix always varies.
s N s N s N s N s rpm
Linear Real Linear Real Linear Real Linear Real Linear Amplitude
0.0000030 9853.66 0.0000030 5951.83 0.0000030 533.48 0.0000030 476.15 0.0000030 2150.16
0.0000226 9848.63 0.0000226 5948.19 0.0000226 557.02 0.0000226 488.60 0.0000226 2150.16
0.0000421 9826.05 0.0000421 5956.22 0.0000421 615.94 0.0000421 480.75 0.0000421 2150.15
0.0000616 9829.72 0.0000616 5989.72 0.0000616 642.59 0.0000616 476.77 0.0000616 2150.15
So basically the code below finds that first row and copies it to Sheet2. I need the macro to also select the range underneath and copy it onto Sheet2. Please can you help me finish off the script?
Sub SearchForRawData()
Dim LSearchRow As Integer
Dim LCopyToRow As Integer
On Error GoTo Err_Execute
'Start search in row 1
LSearchRow = 1
'Start copying data to row 2 in Sheet2 (row counter variable)
LCopyToRow = 2
While Len(Range("A" & CStr(LSearchRow)).Value) > 0
'If value in column A = "s", copy entire row to Sheet2
If Range("A" & CStr(LSearchRow)).Value = "s" Then
'Select row and range in Sheet1 to copy
Rows(CStr(LSearchRow) & ":" & CStr(LSearchRow)).Select
Selection.Copy
'Paste row into Sheet2 in next row
Sheets("Sheet2").Select
Rows(CStr(LCopyToRow) & ":" & CStr(LCopyToRow)).Select
ActiveSheet.Paste
'Select all Raw Data underneath found Row to Copy
'Paste all Raw Data into Sheet 2
'Move counter to next row
LCopyToRow = LCopyToRow + 1
'Go back to Sheet1 to continue searching
Sheets("Sheet1").Select
End If
LSearchRow = LSearchRow + 1
Wend
'Position on cell A1
Application.CutCopyMode = False
Range("A1").Select
MsgBox "All matching data has been copied."
Exit Sub
Err_Execute:
MsgBox "An error has occurred"
End Sub
A:
You don't need a loop for this if you want to copy the row that has the "s" and everything below it to the target sheet. The following sub finds the row with the "s" in column A and then copies that row and everything below it to the target sheet.
Note that you should always avoid selecting or activating anything in VBA code, and that the normal way to copy and paste relies on selecting. If you use the syntax I've included here, the clipboard is not used and the target sheet does not need to be selected.
Sub CopyRowAndBelowToTarget()
Dim wb As Workbook
Dim src As Worksheet
Dim tgt As Worksheet
Dim match As Range
Set wb = ThisWorkbook
Set src = wb.Sheets("Sheet1")
Set tgt = wb.Sheets("Sheet2")
Dim lastCopyRow As Long
Dim lastPasteRow As Long
Dim lastCol As Long
Dim matchRow As Long
Dim findMe As String
' specify what we're searching for
findMe = "s"
' find our search string in column A (1)
Set match = src.Columns(1).Find(What:=findMe, After:=src.Cells(1, 1), _
LookIn:=xlValues, LookAt:=xlWhole, SearchOrder:=xlByRows, _
SearchDirection:=xlNext, MatchCase:=False, SearchFormat:=False)
' figure out what row our search string is on
matchRow = match.Row
' get the last row and column with data so we know how much to copy
lastCopyRow = src.Range("A" & src.Rows.Count).End(xlUp).Row
lastCol = src.Cells(1, src.Columns.Count).End(xlToLeft).Column
' find out where on our target sheet we should paste the results
lastPasteRow = tgt.Range("A" & tgt.Rows.Count).End(xlUp).Row
' use copy/paste syntax that doesn't use the clipboard
' and doesn't select or activate
src.Range(src.Cells(matchRow, 1), src.Cells(lastCopyRow, lastCol)).Copy _
tgt.Range("A" & lastPasteRow)
End Sub
Q:
Coding the number of visits based on dates and assigning value in new column R
I am relatively new to R and am trying to create a new column for the number of visits (num_visits) based on the admission dates (admit_date).
The sample dataframe is below, and the number of visits has to be created based on the admit_date column. The admit_dates do not necessarily run in sequence.
subject_id admit_date num_visits
22 2010-10-20 1
23 2010-10-20 1
24 2010-10-21 1
25 2010-10-21 1
22 2010-12-30 3
22 2010-12-22 2
23 2010-12-25 2
30 2011-01-14 1
31 2011-01-14 1
33 2011-02-05 2
33 2011-01-26 1
I know I need to group by subject_id and perhaps get the counts based on the sequence of the dates.
I am stuck after the following code; I appreciate any form of help, thank you!
df %>%
group_by(subject_id) %>%
A:
We can use mutate after grouping by 'subject_id':
library(dplyr)
df %>%
arrange(subject_id, as.Date(admit_date)) %>%
group_by(subject_id) %>%
mutate(num_visits = row_number())
or with data.table
library(data.table)
setDT(df)[order(as.IDate(admit_date)), num_visits := rowid(subject_id)][]
# subject_id admit_date num_visits
# 1: 22 2010-10-20 1
# 2: 23 2010-10-20 1
# 3: 24 2010-10-21 1
# 4: 25 2010-10-21 1
# 5: 22 2010-12-30 3
# 6: 22 2010-12-22 2
# 7: 23 2010-12-25 2
# 8: 30 2011-01-14 1
# 9: 31 2011-01-14 1
#10: 33 2011-02-05 2
#11: 33 2011-01-26 1
data
df <- structure(list(subject_id = c(22L, 23L, 24L, 25L, 22L, 22L, 23L,
30L, 31L, 33L, 33L), admit_date = c("2010-10-20", "2010-10-20",
"2010-10-21", "2010-10-21", "2010-12-30", "2010-12-22", "2010-12-25",
"2011-01-14", "2011-01-14", "2011-02-05", "2011-01-26")), row.names = c(NA,
-11L), class = "data.frame")
Q:
Why do people think this question(Like Niobe, all tears) is proofreading?
This question(Is "Like Niobe, all tears" an apposition?) was put on hold as proofreading.
Why do people think it is proofreading?
The question is not about an English composition.
A:
The reason given in the closure message can be fairly arbitrary.
It's the majority reason. That doesn't necessarily mean that three out of five voted for it, though. There could be 2+1+1 votes for three different reasons. If the last vote is for reason A, then the voting is 3+1+1 and reason A is given in the box; if it's for reason B, then the voting is 2+2+1 and reason B wins because that was the last vote.
Because it's a majority decision, all votes count, even those cast before a post is edited. A post which is obviously proof-reading up to an edit could have gained three votes by that time, so that would be the reason given even if there was another, more "valid" reason after the edit.
Voters can make mistakes and choose the wrong close reason. It's not possible to change that, because once you retract a close vote you can't re-cast it. So it needs to be left as a "wrong" vote.
Voters can simply follow prior opinion: they know the question is bad and needs to be put on hold pending improvement, and simply follow the herd even though a different reason might [now] be more suitable.
It doesn't really matter what the reason is. The apposite fact is that the question is on-hold and needs to be improved. The closure message gives a hint as to where that improvement might usefully lie, but it's not foolproof because of the above reasons.
As has been stated on this page and elsewhere, once the post is improved, ask that it be reconsidered. That's the way to get it re-opened. Asking why it was closed in the first place is both pointless and counter-productive. No-one has to give any reason for any vote, and the community has come to a decision, either collectively or via delegation to a moderator, and that should only be gainsaid in the most egregious examples of error.
As to bullying, saying "No, you're wrong" is not bullying. Using the mechanics of the site to put poor questions on-hold pending improvement is not bullying. Providing civil answers pointing out how the site works and advising on the right course of action is not bullying, even if those answers aren't the answers you want.
And this is it. I'm not entering into discussion on this answer. Quod scripsi, scripsi.
A:
If an answer was given, and it doesn't say "We're closing this because of bullying", I'm likely to take it at face value and move on. I might even think those people might be jerks and don't know what they're talking about.
Although, because the responses here have been reasonably civil and, within reason, not personal ("the post is poor quality, shows limited research, is proofreading" vs "the submitter is a pain in the ass"), I'm likely to infer that the repeated, rapid submission of sub-quality requests by a single person on similar topics may mean that the submitter is bullying the site into accepting such posts, feigning ignorance and calling foul in advance. I could be misreading this, though.
I haven't yet had a noticeable opinion on your posts nor have you explained the audience of interest in your post questions or answers besides yourself. (As in, who cares or what problem are you trying to solve? See: don't ask.) Don't jump the gun here. be nice is what we're trying to be here. It doesn't mean that because I used specific words in a comparison, that they actually apply to anyone, present company included.
Your questions have been answered before you asked them.
Your questions have been answered after you asked them.
You have been prompted to ask the original questions in a manner that better reflects the type of questions that the community wants to see on this site.
You have been given information how to ask this meta question better that it might possibly reflect the ability to open the topic of this question.
You have been spending more time arguing the semantics of the why than improving your posts.
You have been accusing others of bullying and continually insisting it is bullying. If you continue this path, the suggestion to look up and understand irony is going to be continually presented. A hint: Those who have said their piece and have stopped returning comments have stopped feeding the troll, I have as yet not learned my lesson. Looks like I need to learn about irony.
Q:
How do I use NTFS links to merge folders A and B?
To save on disk space and to keep things tidy, I want to have two folders, A and B. Folder A contains "stock" files, and Folder B contains "modded" files. I want to have the contents of Folder B in Folder A so as to have a "union."
For example, this is how the files are organized right now:
Folder A Folder B
| |
\-1 \-4
| |
\-2 \-5
| |
\-3 \-6
This is how I want them to be:
Folder A Folder B
| |
\-1 /-------\-4
| | |
\-2 |-------\-5
| | |
\-3 |-------\-6
| |
\-4--/
| |
\-5--/
| |
\-6--/
You can do this easily with regular symlinks, but the catch is that when I add new files in Folder B, they should be automatically seen in Folder A as well.
How can I do this without using any manual scripts or extra software?
A:
You can't. While I've often thought it would be handy, to the point of considering writing software to create one, most file systems (NTFS definitely included) do not support unifying two directories the way you ask.
There are a bunch of problems that you'd need to create some solution for. What happens if you add a file to one folder when a file of the same name already exists in the other folder, or try to add a new file/folder directly to the union (which parent does it appear inside)? What happens to the union if you delete one of the folders, or rename it? What happens if their permissions differ, so one folder is readable by user X but the other is not? All of these questions (and many more that will be encountered trying this) have potential answers, but which answer is best for a given use case or implementation method will differ.
Now, with that said, Windows (Vista and later) has the concept of a "Library" that can store files from multiple directories. For example, each user has a "Music" library that, by default, holds the union of their personal Music folder and also the public (all users) Music folder. Libraries have a bunch of limitations, the most notable of which is that they aren't actually in the file system at all - there's no path to them that you can put in a script, and you can't open a command prompt that points to one - but they might be useful nonetheless. For more info, read here: http://windows.microsoft.com/en-US/windows7/Working-with-libraries
Q:
printing Java hotspot JIT assembly code
I wrote a very stupid test class in Java:
public class Vector3 {
public double x,y,z ;
public Vector3(double x, double y, double z) {
this.x=x ; this.y=y ; this.z=z ;
}
public Vector3 subst(Vector3 v) {
return new Vector3(x-v.x,y-v.y,z-v.z) ;
}
}
Then I wanted to see the code generated by the Java Hotspot JIT (Client VM build 23.7-b01). I used the "-XX:+PrintAssembly" option and the hsdis-i386.dll from http://classparser.blogspot.dk/2010/03/hsdis-i386dll.html
Here is the interesting part of the generated code for the subst method (I have skipped the initialization of the new object). Obviously, ebx is the "this" pointer and edx is the pointer to the argument.
lds edi,(bad)
sti
adc BYTE PTR [ebx+8],al ;*getfield x
mov edx,DWORD PTR [esp+56]
lds edi,(bad) ; implicit exception: dispatches to 0x02611f2d
sti
adc BYTE PTR [edx+8],cl ;*getfield x
lds edi,(bad)
sti
adc BYTE PTR [ebx+16],dl ;*getfield y
lds edi,(bad)
sti
adc BYTE PTR [edx+16],bl ;*getfield y
lds edi,(bad)
sti
adc BYTE PTR [ebx+24],ah ;*getfield z
lds edi,(bad)
sti
adc BYTE PTR [edx+24],ch ;*getfield z
lds edi,(bad)
sti
pop esp
rol ebp,0xfb
adc DWORD PTR [eax+8],eax ;*putfield x
lds ebp,(bad)
jmp 0x02611f66
rol ebp,cl
sti
adc DWORD PTR [eax+16],edx ;*putfield y
lds ebx,(bad)
fistp DWORD PTR [ebp-59]
sti
adc DWORD PTR [eax+24],esp ;*putfield z
Honestly, I am not very familiar with x86 assembly, but does that code make sense to you? What are those strange instructions like "adc BYTE PTR [edx+8],cl" doing? I would have expected some FPU instructions.
A:
Me again. I have built the hsdis-i386.dll using the latest binutils 2.23. It was easier than I expected, thanks to the instructions in http://dropzone.nfshost.com/hsdis.htm
(at least for the x86 version; the 64-bit version compiles but stops the JVM immediately without any error message)
The output now looks much better:
vmovsd xmm0,QWORD PTR [ebx+0x8] ;*getfield x
mov edx,DWORD PTR [esp+0x40]
vmovsd xmm1,QWORD PTR [edx+0x8] ;*getfield x
vmovsd xmm2,QWORD PTR [ebx+0x10] ;*getfield y
vmovsd xmm3,QWORD PTR [edx+0x10] ;*getfield y
vmovsd xmm4,QWORD PTR [ebx+0x18] ;*getfield z
vmovsd xmm5,QWORD PTR [edx+0x18] ;*getfield z
vsubsd xmm0,xmm0,xmm1
vmovsd QWORD PTR [eax+0x8],xmm0 ;*putfield x
vsubsd xmm2,xmm2,xmm3
vmovsd QWORD PTR [eax+0x10],xmm2 ;*putfield y
vsubsd xmm4,xmm4,xmm5
vmovsd QWORD PTR [eax+0x18],xmm4 ;*putfield z
Q:
How to cast all columns of Spark dataset to string using Java
I have a dataset with many columns, and I want to cast all columns to string using Java.
I tried the steps below; I want to know if there is a better way to achieve this.
Dataset<Row> ds = ...;
JavaRDD<String[]> stringArrRDD = ds.javaRDD().map(row->{
int length = row.length();
String[] columns = new String[length];
for(int i=0; i<length;i++){
columns[i] = row.get(i) !=null? row.get(i).toString():"";
}
return columns;});
A:
You can iterate over columns:
for (String c: ds.columns()) {
ds = ds.withColumn(c, ds.col(c).cast("string"));
}
Q:
Delete Duplicate values from specific column in mysql table based on query to other column
I have a table in a SQL database where I record customer-wise sales for specific products. I have a monthly target for each product, like below:
Product A - 50 pcs
Now in my table I am seeing customer-wise sales and the monthly product sale target, which is common to all rows for a product:
Customer Product MonthlyTargetQty
Customer A Product 1 50
Customer B Product 1 50
Customer C Product 1 50
Customer D Product 1 50
I want to keep only one value per product in the MonthlyTargetQty column, without deleting the product name that repeats in the Product column. Please help with a query.
How I want it is:
Customer Product MonthlyTargetQty
Customer A Product 1 50
Customer B Product 1 0
Customer C Product 1 0
Customer D Product 1 0
A:
From the comment it seems you want an update, so I added one:
with cte as
(
select customer, product,
(case when row_number() over (partition by product order by customer) = 1 then monthlytargetqty else 0 end) as monthlytargetqty
from t
)
update a
set a.MonthlyTargetQty = b.monthlytargetqty
from ProductAnalysisTable a join cte b on
a.customer=b.customer and a.product=b.product
btw the 1st part is from sir @gordon, so accept his answer
Q:
What causes overflow on my page?
I'm working on my first project, which is supposed to become a blog one day. I'm currently trying to design the homepage and, until a certain point, everything was fine. But then something happened and an overflow appeared. I don't know what causes it. I'm using box-sizing: border-box just to be sure there are no hidden borders, margins, or padding causing this problem, but it's still there.
By the way, my aim is to make the page responsive; that's why I'm trying to use scalable width and height as much as possible. Maybe that's where the problem lies?
width: calc(100vw); max-width: 4000px;
height: calc(5vh); max-height: 112.5px;
Here's the fiddle: https://jsfiddle.net/u7vqz0cq/
Any ideas?
A:
The sole reason for the overflow here is the use of 100vw. As soon as you set the width of a block tag to 100vw it can overflow horizontally, because 100vw includes the width of the vertical scrollbar. Similarly, 100vh makes the tag overflow vertically.
Using calc(100vw) is also pointless; instead you can use 100% if required, like this:
#header {
width: 100%;
max-width: inherit;
height: calc(5vh); max-height: 112.5px;
}
Here is the updated jsfiddle.
https://jsfiddle.net/u7vqz0cq/1/
Q:
Binding an object in mustache Polymer 2.0
Property:-
static get properties() {
return {
currencies: {
type: Object,
notify: true,
reflectToAttribute: true,
value: {
"name": "currencies"
}
}
}
}
Function:-
_handleCryptoData(response) {
var responseArray = response.detail.__data.response.data;
var btc = responseArray[0];
var eth = responseArray[2];
var ltc = responseArray[3];
this.currencies.btc = btc.amount;
this.currencies.eth = eth.amount;
this.currencies.ltc = ltc.amount;
}
If I do console.log(this.currencies.btc) I get the value I want: some number.
Problem:-
<p>BTC:- <span>{{currencies.btc}}</span> </p>
<p>ETH:- <span>{{currencies.eth}}</span> </p>
<p>LTC:- <span>{{currencies.ltc}}</span> </p>
This is how it is bound in the view. The problem is that, since currencies is an object, currencies.btc does not work in the view. On the other hand, if I bind just {{currencies}} I can see the object output, but {{currencies.btc}} does not work.
How do I make this binding work? What am I doing wrong here?
A:
These changes:
this.currencies.btc = btc.amount;
this.currencies.eth = eth.amount;
this.currencies.ltc = ltc.amount;
are not observable to Polymer.
There are a number of ways to solve it, but perhaps the easiest will be to call:
this.set('currencies.btc', btc.amount);
this.set('currencies.eth', eth.amount);
this.set('currencies.ltc', ltc.amount);
Q:
Use kubernetes secrets in nodejs application?
I have a Kubernetes cluster on GCP, running my Express and Node.js application, which performs CRUD operations with MongoDB.
I created one secret, containing the username and password for connecting to MongoDB, and specified that secret as environment variables in my Kubernetes yml file.
Now my question is: how do I access that username and password in the Node.js application for connecting to MongoDB?
I tried process.env.SECRET_USERNAME and process.env.SECRET_PASSWORD in the Node.js application, but they are undefined.
Any ideas will be appreciated.
Secret.yaml
apiVersion: v1
data:
password: pppppppppppp==
username: uuuuuuuuuuuu==
kind: Secret
metadata:
creationTimestamp: 2018-07-11T11:43:25Z
name: test-mongodb-secret
namespace: default
resourceVersion: "00999"
selfLink: /api-path-to/secrets/test-mongodb-secret
uid: 0900909-9090saiaa00-9dasd0aisa-as0a0s-
type: Opaque
kubernetes.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "4"
creationTimestamp: 2018-07-11T11:09:45Z
generation: 5
labels:
name: test
name: test
namespace: default
resourceVersion: "90909"
selfLink: /api-path-to/default/deployments/test
uid: htff50d-8gfhfa-11egfg-9gf1-42010gffgh0002a
spec:
replicas: 1
selector:
matchLabels:
name: test
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
name: test
spec:
containers:
- env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
key: username
name: test-mongodb-secret
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: test-mongodb-secret
image: gcr-image/env-test_node:latest
imagePullPolicy: Always
name: env-test-node
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2018-07-11T11:10:18Z
lastUpdateTime: 2018-07-11T11:10:18Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 5
readyReplicas: 1
replicas: 1
updatedReplicas: 1
A:
Your kubernetes.yaml file specifies which environment variables your secret is exposed through, so it is accessible to apps in that namespace.
Using the kubectl secrets CLI interface you can upload your secret:
kubectl create secret generic -n node-app test-mongodb-secret --from-literal=username=a-username --from-literal=password=a-secret-password
(The namespace arg -n node-app is optional; otherwise it will upload to the default namespace.)
After running this command, you can check your kube dashboard to see that the secret has been saved.
Then, from your Node app, access the environment variable process.env.SECRET_PASSWORD.
Perhaps in your case the secrets were created in the wrong namespace, hence the undefined values in your application.
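On the Node.js side, once the Deployment's secretKeyRef entries inject the decoded values into the container, they are ordinary environment variables. A minimal sketch of reading them (the host mongo:27017 and database name mydb are assumptions for illustration, not taken from your setup):

```javascript
// Build a MongoDB connection URI from the secret-backed env vars.
// Fails fast with a clear error if the variables were not injected
// (e.g. secret created in the wrong namespace, as discussed above).
function mongoUriFromEnv(env) {
  const user = env.SECRET_USERNAME;
  const pass = env.SECRET_PASSWORD;
  if (!user || !pass) {
    throw new Error('SECRET_USERNAME / SECRET_PASSWORD are not set');
  }
  // encodeURIComponent guards against special characters in the password
  return `mongodb://${encodeURIComponent(user)}:${encodeURIComponent(pass)}@mongo:27017/mydb`;
}

// In the pod you would call mongoUriFromEnv(process.env); shown here
// with literal values for illustration.
console.log(mongoUriFromEnv({ SECRET_USERNAME: 'appuser', SECRET_PASSWORD: 'p@ss' }));
// → mongodb://appuser:p%40ss@mongo:27017/mydb
```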
EDIT 1
Your indentation for container.env seems to be wrong:
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
Q:
Deriving from Binding class (Silverlight 4.0)
Using the existing Binding class, we can write,
<TextBox Text="{Binding Email, Mode=TwoWay}"/>
So we can write anything as Email; there is no validity check by Binding itself. I started writing a class BindingMore, deriving from Binding, so that eventually I could write,
<TextBox Text="{local:BindingMore Email, Validate=SomeMethod, Mode=TwoWay}"/>
where SomeMethod is some ICommand or delegate which will be triggered to validate the Email. That is my objective, and I've not written that yet.
As of now, I've written just this code,
public class BindingMore : System.Windows.Data.Binding
{
public BindingMore() : base()
{
}
public BindingMore(string path) : base(path)
{
}
}
So, at this stage, BindingMore is exactly equivalent to Binding, yet when I write
<TextBox Text="{local:BindingMore Email, Mode=TwoWay}"/>
It's giving me a runtime error. But when I write,
<TextBox Text="{local:BindingMore Path=Email, Mode=TwoWay}"/>
It's working fine. Can anybody tell me why it's giving a runtime error in the first case?
Unfortunately, the error is not shown. All it shows is this:
Also, I get the following error message from XAML (even when it builds perfectly and runs, in the second case):
Type 'local:BindingMore' is used like
a markup extension but does not derive
from MarkupExtension.
A:
Custom Markup Extensions are not supported in Silverlight. Try using an Attached Property approach or a Behavior.
| {
"pile_set_name": "StackExchange"
} |
Q:
Swagger-codegen-maven: Change where files are generated to
I'm currently using swagger-codegen through the maven plugin: https://github.com/swagger-api/swagger-codegen/tree/master/modules/swagger-codegen-maven-plugin.
I've set the place where I want my api files to be generated with the <apiPackage></apiPackage> field in my pom.xml.
But when I generate my apis, they get put in src/main/java/<apiPackage>. I would like to know where src/main/java is declared as part of the path and if it can be changed.
A:
src/main/java is a Maven convention.
This folder structure is hard-coded in Swagger Codegen's code and cannot be changed.
| {
"pile_set_name": "StackExchange"
} |
Q:
What are the strongest weapons in the game?
It seems like, just like the last game, this isn't quite a straightforward estimation. There are a few variable options (although not QUITE as many as last time). Which weapons should I shoot for at endgame, and what's the easiest way to get them? Does which weapons I use depend specifically on what I'm doing (again, much like the first game)?
A:
It really depends on what type of battler you are. For example, you may wish to trade-off Attack/Magic for an ability to increase the Chain quicker which can be very helpful in certain battles.
However, the weapons with the highest base stats are:
Serah: Arcus Chronica which has 140ATK and 200MAG and costs $150,000 (although less with fragment ability). You need Adamantite to buy this weapon off Chocolina (after end game) which you can get from Chocobo Racing in Serendipity or by defeating Long Gui in the Archylite Steppe and you must use two other weapons which you can get from defeating two of the quest bosses (Ochu and Immortal) in the same place under different weather conditions. You just speak to the guy by the weather machine to activate these quests.
Noel: In Paradisum which has 200ATK and 140MAG and costs $150,000 and can be obtained in the same manner as Serahs items.
Lastly, you can also buy the Chaos Crystal from Serendipity and give this to Hope in Academia 4XX AF and he will give you either OdinBlade (Noel) or OdinBolt (Serah), and you can buy whichever one you don't get from Serendipity (for 10,000 coins) or from Chocolina. These can have up to 220ATK/MAG, but it is dependent on the amount of fragments you have collected, with 160 equalling the max 220 on the weapons.
| {
"pile_set_name": "StackExchange"
} |
Q:
Comparing Values of 2 dictionaries in Python and Constructing New dictionary From Comparison
I am using Python 2.7.4. I am trying to compare values from two different dictionaries in python and construct a new dictionary based upon the results of the comparison.
My users input the post positions, mlodds1, and tbodds1 of horses into 3 lists, then I do the following:
ml_dict = dict(zip(postpositions,mlodds1))
tb_dict = dict(zip(postpositions,tbodds1))
to construct two dictionaries from those lists.
I want a new dictionary screened_dict[a, x] made of the entries where the value x in tb_dict[a, x] is less than the value y in ml_dict[a, y]. Thanks in advance.
A:
combined = {}
for x in ml_dict:
try:
if tb_dict[x] < ml_dict[x]: combined[x] = ml_dict[x]
except KeyError: continue
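The same screening can also be written as a dict comprehension over the keys the two dictionaries share. The odds values below are made-up illustration data, not from the question:

```python
ml_dict = {"1": 5.0, "2": 3.5, "3": 8.0}  # post position -> morning-line odds
tb_dict = {"1": 4.0, "2": 6.0}            # post position -> tote-board odds

# Keep only positions present in both dicts where the tote-board
# value is lower than the morning-line value.
screened_dict = {
    pos: ml_dict[pos]
    for pos in ml_dict.keys() & tb_dict.keys()
    if tb_dict[pos] < ml_dict[pos]
}
print(screened_dict)  # {'1': 5.0}
```

Intersecting the key views with & replaces the try/except KeyError handling.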
| {
"pile_set_name": "StackExchange"
} |
Q:
Posting results of html form on the same page
I have the following form code in the nav bar:
<form id="form">
<p>
<label for="textarea"></label>
<textarea name="textarea" id="textarea" cols="100" rows="5">
Post here
</textarea>
<input id="button" type="submit" value="Submit!" name="submit" onclick = "get('textfield')"/>
</p>
<p>
<input type="radio" name="radio" id="radio-1" />
<label for="radio-1">Section 1</label>
<input type="radio" name="radio" id="radio-2" />
<label for="radio-2">Section 2</label>
<input type="radio" name="radio" id="radio-3" />
<label for="radio-3">Section 3</label>
<input type="radio" name="radio" id="radio-4" />
<label for="radio-4">Section 4</label>
<input type="radio" name="radio" id="radio-5" />
<label for="radio-5">Section 5</label>
<input type="radio" name="radio" id="radio-6" />
<label for="radio-6">Section 6</label>
</p>
</form>
And in the main body of the webpage, I have 6 sections. What I am trying to achieve is: if I select one of the radio buttons, write something in the text area and click submit, it should appear within the selected section. So if I write hello world and mark section 5, hello world should appear under section 5.
Is there any naive way of achieving this purely in HTML5? If there isn't, can anyone point to any tutorials/links or offer any suggestions?
Thanks in advance!
A:
Try to use this example:
<p>
<label for="textarea"></label>
<textarea name="textarea" id="textarea" cols="100" rows="5"></textarea>
<input id="button" type="submit" value="Submit!" name="submit" />
</p>
<p>
<input type="radio" name="radio" id="radio-1" />
<label for="radio-1">Section 1</label>
<input type="radio" name="radio" id="radio-2" />
<label for="radio-2">Section 2</label>
<input type="radio" name="radio" id="radio-3" />
<label for="radio-3">Section 3</label>
<input type="radio" name="radio" id="radio-4" />
<label for="radio-4">Section 4</label>
<input type="radio" name="radio" id="radio-5" />
<label for="radio-5">Section 5</label>
<input type="radio" name="radio" id="radio-6" />
<label for="radio-6">Section 6</label>
</p>
<ul>
<li><section id="radio-1"></section></li>
<li><section id="radio-2"></section></li>
<li><section id="radio-3"></section></li>
<li><section id="radio-4"></section></li>
<li><section id="radio-5"></section></li>
<li><section id="radio-6"></section></li>
</ul>
JS
$(function () {
$("#button").click(function(){
var txt = $("#textarea").val();
if(txt.length > 0)
{
var id = $("input[type='radio']:checked").attr("id");
$("li section").text("");
$("li #"+id).text(txt);
}
});
});
| {
"pile_set_name": "StackExchange"
} |
Q:
get all attachment files using ListData.svc
I am trying to get all attachments in a custom list that have been attached to all the items.
I have used /_vti_bin/listdata.svc/MyList/attachmentFiles but it returns "The request URI is not valid". How do I get all attachments in a custom list? I cannot use CSOM or JSOM, because the list is located in a different site collection. I appreciate any kind of advice. (It is a SharePoint 2010 environment.)
A:
First of all, SP.ListItem.attachmentFiles property is not available in SharePoint 2010.
The following REST query returns all attachments in a List in SharePoint 2010:
http://<sitecollection>/<site>/_vti_bin/ListData.svc/Requests?$select=Attachments&$expand=Attachments
| {
"pile_set_name": "StackExchange"
} |
Q:
Parameterizing values for oracle sql
I am using Oracle - SQL developer
Want to check the count of null values for each column .
Currently I am using the below to achieve results.
select COLUMN_NAME from all_tab_columns where table_name = 'EMPLOYEE'
SELECT COUNT (*) FROM EMPLOYEE WHERE <Column_name1> IS NULL
UNION ALL
SELECT COUNT (*) FROM EMPLOYEE WHERE <Column_name2> IS NULL
UNION ALL
SELECT COUNT (*) FROM EMPLOYEE WHERE <Column_name3> IS NULL
UNION ALL ......................
How can we use bind value to run the below query like
DEFINE Column_name = Column_name1
SELECT COUNT (*) FROM EMPLOYEE WHERE &&Column_name IS NULL
A:
You can't use bind variables when constructing the select statement itself: you can pass values via bind variables, but identifiers such as column names cannot be bound. You have to go the dynamic SQL way, using EXECUTE IMMEDIATE.
Here's an example:
DECLARE
v_sql_statement VARCHAR2(2000);
n_null_count NUMBER;
BEGIN
FOR cn IN (SELECT column_name
FROM user_tab_cols
WHERE table_name = 'EMPLOYEE') LOOP
v_sql_statement := 'SELECT COUNT(1) FROM EMPLOYEE where '
|| cn.column_name
|| ' IS null';
EXECUTE IMMEDIATE v_sql_statement INTO n_null_count;
dbms_output.Put_line('Count of nulls for column: '
|| cn.column_name
|| ' is: '
|| n_null_count);
END LOOP;
END;
This is what the above block will print:
Count of nulls for column: EMPNO is: 0
Count of nulls for column: NAME is: 0
Count of nulls for column: JOB is: 0
Count of nulls for column: BOSS is: 1
Count of nulls for column: HIREDATE is: 0
Count of nulls for column: SALARY is: 0
Count of nulls for column: COMM is: 20
Count of nulls for column: DEPTNO is: 0
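For illustration only, the same dynamic-SQL idea can be sketched in Python with sqlite3 (the table and data below are made up, not Oracle objects). The key point carries over: only values can be bind variables, so the column identifier has to be spliced into the statement text, just as EXECUTE IMMEDIATE does above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empno INTEGER, name TEXT, comm REAL)")
conn.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "Ahab", None), (2, "Ishmael", 0.1), (3, None, None)],
)

# Column names come from the catalog, mirroring user_tab_cols.
columns = [row[1] for row in conn.execute("PRAGMA table_info(employee)")]

null_counts = {}
for col in columns:
    # The identifier is interpolated into the statement text; it
    # cannot be passed as a bind variable.
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM employee WHERE %s IS NULL" % col
    ).fetchone()
    null_counts[col] = n
    print("Count of nulls for column: %s is: %d" % (col, n))
```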
| {
"pile_set_name": "StackExchange"
} |
Q:
Hyperledger Sawtooth Supply Chain Send Transaction
I'm trying to add the first block to the supply chain, but the validator doesn't accept the batch.
[2019-11-05 17:40:31.594 DEBUG publisher] Batch c14df4b31bd7d52cf033a0eb1436b98be3d9ff6b06affbd73ae55f11a7cc0cc33aa6f160f7712d628e2ac644b4b1804e3156bc8190694cb9c468f4ec70b9eb05 invalid, not added to block.
[2019-11-05 17:40:31.595 DEBUG publisher] Abandoning block (1, S:, P:eb6af88e): no batches added
I installed Sawtooth and the Supply Chain on Ubuntu 16.04. I'm using the following Python code to send the transaction. I'm not sure about the payload; I got it from the sample data of the fish client example. Is it perhaps necessary to change the keys?
#Creating a Private Key and Signer
from sawtooth_signing import create_context
from sawtooth_signing import CryptoFactory
from hashlib import sha512
context = create_context('secp256k1')
private_key = context.new_random_private_key()
signer = CryptoFactory(context).new_signer(private_key)
public_key = signer.get_public_key()
#Encoding Your Payload
import cbor
payload = {
"username": "ahab",
"password": "ahab",
"publicKey": public_key.as_hex(),
"name": "Ahab",
"email": "[email protected]",
"privateKey": "063f9ca21d4ef4955f3e120374f7c22272f42106c466a91d01779efba22c2cb6",
"encryptedKey": "{\"iv\":\"sKGty1gSvZGmCwzkGy0vvg==\",\"v\":1,\"iter\":10000,\"ks\":128,\"ts\":64,\"mode\":\"ccm\",\"adata\":\"\",\"cipher\":\"aes\",\"salt\":\"lYT7rTJpTV0=\",\"ct\":\"taU8UNB5oJrquzEiXBV+rTTnEq9XmaO9BKeLQQWWuyXJdZ6wR9G+FYrKPkYnXc30iS/9amSG272C8qqnPdM4OE0dvjIdWgSd\"}",
"hashedPassword": "KNYr+guWkg77DbaWofgK72LrNdaQzzJGIkk2rEHqP9Y="
}
payload_bytes = cbor.dumps(payload)
#Create the Transaction Header
from hashlib import sha512
from sawtooth_sdk.protobuf.transaction_pb2 import TransactionHeader
txn_header_bytes = TransactionHeader(
family_name='supply_chain',
family_version='1.1',
#inputs=[],
#outputs=[],
signer_public_key=signer.get_public_key().as_hex(),
# In this example, we're signing the batch with the same private key,
# but the batch can be signed by another party, in which case, the
# public key will need to be associated with that key.
batcher_public_key=signer.get_public_key().as_hex(),
# In this example, there are no dependencies. This list should include
# an previous transaction header signatures that must be applied for
# this transaction to successfully commit.
# For example,
# dependencies=['540a6803971d1880ec73a96cb97815a95d374cbad5d865925e5aa0432fcf1931539afe10310c122c5eaae15df61236079abbf4f258889359c4d175516934484a'],
dependencies=[],
payload_sha512=sha512(payload_bytes).hexdigest()
).SerializeToString()
#Create the Transaction
from sawtooth_sdk.protobuf.transaction_pb2 import Transaction
signature = signer.sign(txn_header_bytes)
txn = Transaction(
header=txn_header_bytes,
header_signature=signature,
payload= payload_bytes
)
#Create the BatchHeader
from sawtooth_sdk.protobuf.batch_pb2 import BatchHeader
txns = [txn]
batch_header_bytes = BatchHeader(
signer_public_key=signer.get_public_key().as_hex(),
transaction_ids=[txn.header_signature for txn in txns],
).SerializeToString()
#Create the Batch
from sawtooth_sdk.protobuf.batch_pb2 import Batch
signature = signer.sign(batch_header_bytes)
batch = Batch(
header=batch_header_bytes,
header_signature=signature,
transactions=txns
)
#Encode the Batch(es) in a BatchList
from sawtooth_sdk.protobuf.batch_pb2 import BatchList
batch_list_bytes = BatchList(batches=[batch]).SerializeToString()
#Submitting Batches to the Validator
import urllib.request
from urllib.error import HTTPError
try:
request = urllib.request.Request(
'http://localhost:8008/batches',
batch_list_bytes,
method='POST',
headers={'Content-Type': 'application/octet-stream'})
response = urllib.request.urlopen(request)
except HTTPError as e:
response = e.file
A:
I found a solution for my case. I used the .proto files from the Supply Chain repo and compiled them to Python files. Then I took addressing.py and supply_chain_message_factory.py from the Supply Chain repo, changed the imports in supply_chain_message_factory.py, and now I can create an agent with main.py.
from sc_message_factory import SupplyChainMessageFactory
import urllib.request
from urllib.error import HTTPError
#Create new agent
new_message = SupplyChainMessageFactory()
transaction = new_message.create_agent('test')
batch = new_message.create_batch(transaction)
try:
request = urllib.request.Request(
'http://localhost:8008/batches',
batch,
method='POST',
headers={'Content-Type': 'application/octet-stream'})
response = urllib.request.urlopen(request)
except HTTPError as e:
response = e.file
| {
"pile_set_name": "StackExchange"
} |
Q:
"He was surpassed in all Hobbit records only by two famous characters of old." - How to interpret this sentence
In the Lords Of The Rings prologue there is a famous phrase:
"Their height is variable, ranging between two and four feet of our
measure. They seldom now reach three feet; but they have dwindled,
they say, and in ancient days they were taller.
According to the Red
Book, Bandobras Took (Bullroarer), son of Isengrim the Second, was
four foot five and able to ride a horse. He was surpassed in all
Hobbit records only by two famous characters of old"
I am not sure how to interpret this sentence:
He was surpassed in all Hobbit records only by two famous characters
of old
What does it mean to be surpassed in all Hobbit records only by two famous characters of old ?
I understand it means he was one of the tallest of all time.
A:
"According to Hobbit records, only two famous characters (ones from times long past) were taller than him."
Since the section involves body height, it's all about being tall, "surpassing" meaning "being taller" in this context.
"Characters of old" - characters from old times, not in recent records.
"in all Hobbit records" - the source of the information.
A:
"of old" is an expression referring to the past, in particular to beyond living memory. It is used more often in literary or poetic works than in spoken English.
The expression should just be taken to mean two famous characters from the past.
A:
I know it's kind of old, but I don't really like the existing answers. They were given before the question was edited, but it appears the asker already knew what it meant, they just didn't understand how the sentence should be parsed to get to that meaning.
So, let's break it down in case anyone else happens across this question and has trouble understanding. Sorry if it's kind of long. I tried to keep it short but didn't do a great job.
Original sentence:
He was surpassed in all Hobbit records only by two famous characters of old.
Central clause:
He was surpassed.
As mentioned in the other answers, surpassed means "beaten" or "bested". The idea here is there's an imaginary contest where the winner is the tallest hobbit in the world. In this case, "he" (Bullroarer) was beaten in this contest, which means another hobbit was taller.
Prepositional phrase #1:
He was surpassed by two characters.
This is called the passive voice. The active voice is often easier to understand:
Two characters surpassed him.
Both sentences mean the same thing. Namely, that two "characters" (just a fancy way to say "two hobbits") were taller than Bullroarer, which means they surpassed him in the imaginary contest to be the tallest hobbit in the world.
The active voice (second sentence) puts the focus on the two characters, while the passive voice (original sentence) keeps the focus on Bullroarer.
Adverb:
He was surpassed only by two characters.
We can rearrange the sentence a bit to see that "only" is an adverb modifying the verb "surpassed".
He was only surpassed by two characters.
"Only" implies that even though Bullroarer wasn't the absolute tallest hobbit in the world, he was still really tall, and third place is very good considering there were thousands or millions of hobbits he was competing against in our imaginary contest.
Prepositional phrase #2:
He was surpassed only by two characters of old.
The phrase "of old" is pretty straightforward, but it's not quite the same as saying "two old characters". "Two old characters" implies the characters are alive, but have lived a long time. "Of old" means the stories of the two hobbits are very old, and generally means the hobbits themselves are long dead.
He was surpassed only by two characters who lived a long time ago.
(It's possible to have a situation where the story is from a long time ago, but the character is a deity or vampire or similar and isn't actually dead. However, that isn't likely in this case since elves are the only people in Lord of the Rings who generally live extremely long, and we're talking about hobbits who don't generally live to be much over a hundred.)
Adjective:
He was surpassed only by two famous characters of old.
The word "famous" is pretty simple here, just meaning they were well-known. There is a little nuance here, since we're not sure if they are currently famous, or were famous when they were alive, or both. My guess is they are currently famous for being very tall hobbits. But it's not really important since their fame doesn't affect the fact that Bullroarer was very tall for a hobbit.
Prepositional phrase #3:
He was surpassed in all Hobbit records only by two famous characters of old.
Taken very literally, this one could be a little tricky. In some contexts, "in all records" could mean that every single record includes a note that Bullroarer has been surpassed. For example:
In all television broadcasts, there is a three-second delay so the producer can stop the broadcast if something too graphic is accidentally shown.
In that sentence, we're stating that every single broadcast has a delay, and you might be tempted to interpret the hobbit sentence the same way. But that would be wrong.
Instead, the meaning is "if we compile a list of tall hobbits using every hobbit record as a source, we will find records of two hobbits taller than Bullroarer".
It's possible the hobbit records missed a third hobbit who was even taller (perhaps the third hobbit lived long before any existing records were written). But the sentence basically says "Bullroarer was the third-tallest hobbit to ever live".
| {
"pile_set_name": "StackExchange"
} |
Q:
NHibernate - auto generate timestamp on create and update?
I am trying to map an entity in NHibernate, that should have an Updated column. This should be the DateTime when the entity was last written to the database (either created or updated). I'd like NHibernate to control the update of the column, so I don't need to remember to set a property to the current time before updating.
Is there a built-in feature in NHibernate, that can handle this for me ?
A:
Use a Listener that implements IPreUpdateEventListener and IPreInsertEventListener. This article explains how. Note that this uses the user's time and that may not be appropriate for your application.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I prevent my AC condensate pipe from making my soil soggy?
I live in a Townhouse in NC, in a community under HOA management. I've been doing some improvements on my patio. One of them that I couldn't find a solution for is what to do with the condensate water from the AC drain. My AC is installed in the attic. 2 drain lines come from it. One is the main one, which ends on my patio; the other line ends in the soffit area of my roof (the secondary / auxiliary line coming from the drip pan in the attic).
The main line, especially in summers when I use the AC a lot, creates a soggy area on my lawn, so wet that even the guys from the HOA landscape company have had a hard time trying to mow there.
Since I live in a Townhouse, and have no way to redirect the water beyond my property line, I've read about several workarounds to solve this problem:
Some suggest connecting indirectly to the sewer system, like this post:
Connecting condensate pump to sewer
Others suggest a mini drain well with gravel:
How do I eliminate stagnant water caused by central A/C draining outside?
https://www.youtube.com/watch?v=bLNb1caVupo
In my case, I would like that mini drain well option; however, my concern is the heavy clay soil I have. After just a few inches of topsoil, I see underneath just pure clay. I guess that's the reason why water is ponding and creating that soggy area (see pic), especially in summers. If I create that dry well with gravel, what are the chances that the rate of water absorption of the clay underneath and around the dry well will be faster than the rate of water saturation of the dry well itself (being continuously watered every day by the AC condensate line) before it starts to pond water around again? I've read that a 2-ton AC can create between 10 - 20 gallons of condensed water per day in summers. Is there a minimum size and depth for this dry well on heavy clay soil to be effective, or is this solution not worth it on clay soil?
A:
Connecting condensate and sump pumps to sewers is rarely legal these days in most places, so concentrating on a dry well approach is probably best unless you have dedicated storm sewers you can legally connect to.
One "crude but functional" test of "size of hole" is to dig it (and not fill it in, though depending on accessibility to the public you may need a temporary fence to keep children and people who should know better from falling into it) and either add water or let the condensate drip, and see how fast it goes away. The point of a dry well is to provide storage, and a larger surface area to drain away into. If you get lucky (don't count on it) you might get through the clay layer and have much better drainage - but in any case you will have a larger area to absorb water. Most dry well systems are amenable to put one in, if insufficient, add another, (connected by pipes underground) until you get sufficient percolation into the subsoil.
| {
"pile_set_name": "StackExchange"
} |
Q:
Seeking special-use fingerprinting/hashing algorithm
For a project I wonder if there exists some kind of fixed-size checksumming/fingerprinting function such that, given the fingerprint of data block 1, it is easy to generate more data blocks that share the same fingerprint/hash key.
Therefore this is unlike an MD5 sum. (In that I don't know how to easily go back from MD5 sum → new matching file.)
Basically, I am looking to generate the set of data blocks that hash to the same fingerprint as data block 1. Data blocks 2, 3, 4 etc... may be the same size or even smaller than 1 – ideally the less information entropy the better – but the series must be deterministic and finite, and computationally easy to find.
Comment#1 helps me tighten some properties.
In particular, considering the mapping
hash[key][index] -> datablock
Given a 32-byte key for instance, index should be consecutive integers like you would expect in arrays, and a fixed index should map to a fixed datablock (for that key). Within that set of datablocks, each datablock should calculate to the same hash key, but when enumerated should maintain their relative index position (eg. datablockYYY should always and only be found at 15 index positions higher than datablockBBB).
The tricky part might be that the index range of hashes should not have to be so vast as to need as many bits of entropy as the data blocks themselves generally, but substantially less: limited to a 31bit unsigned integer for simpler testing, let's say. In fact, when I come to calculating the key1 given my starting datablock1, I hope to find a datablockN whose index in the hash is not too far from datablock1's index. The data blocks need not all be the same size, nor all possible blocks of a certain size be mappable by a particular key (pigeonhole principle in reverse). An onto function? Not sure about the terminology.
Hoping that a much reduced problem using familiar things will help shed some light, though there are significant differences here.
A:
For cryptographic hash functions we usually want to avoid collisions as much as possible (and even more we want to avoid any way to get from the output back to the preimage).
So what you want certainly is not a cryptographic hash function, but something else.
On the first look, something like a CRC (cyclic redundancy check) could fit your bill. These have the property that they reduce arbitrary-length messages to fixed-length checksums ($n$ bit for a CRC-$n$), and given a $n$-bit checksum and a prefix of a string with only $n$ bits missing, it requires just some linear algebra to compute the exactly one postfix missing. (You can have other parts missing instead, but then the computation is a bit more complicated.)
So assuming your datablocks are $m$ bit, we have the functions
$$\def\Z{\mathbb Z}\def\invC{\operatorname{inv}\!CRC} CRC : \Z_2^m \to \Z_2^n,$$
$$ \invC : \Z_2^{m-n} \times \Z_2^n \to \Z_2^m, $$
where $\invC(x,y)$ has $x$ as the first $m-n$ bits, and $CRC(\invC(x,y)) = y$.
I'm not sure if this fits your bill - you could have $y$ as key and $x$ as index, or the other way around.
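The linear-algebra structure this relies on is easy to check with Python's built-in CRC-32 (a sketch only; the actual computation of a missing postfix over GF(2) is omitted). For a fixed message length the CRC is affine, so with three equal-length messages the affine constant cancels and the checksum of the XOR equals the XOR of the checksums:

```python
import binascii
import os

# Three random equal-length "messages".
a, b, c = (os.urandom(32) for _ in range(3))
xored = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

# crc(m) = L(m) ^ k for fixed length, with L linear over GF(2) and k a
# constant; XORing three terms makes the three k's collapse to one.
lhs = binascii.crc32(xored)
rhs = binascii.crc32(a) ^ binascii.crc32(b) ^ binascii.crc32(c)
assert lhs == rhs
```

A cryptographic hash is designed precisely so that no such exploitable structure exists.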
| {
"pile_set_name": "StackExchange"
} |
Q:
Automatically restart stopped VM, Hyper-V
Very little experience of Hyper-V, but I'm sure this should be very simply accomplished. I have a single VM running in Hyper-V. If it fails (stops, reboots, powers down) for whatever reason, I would like Hyper-V to restart the VM.
Basically, I would like the VM to run at all times if possible, with minimal housekeeping.
The server this is running on does nothing else.
Many thanks for your help!
LMT
A:
Unfortunately, this is Microsoft, not VMware! VMware ESXi has High Availability and Fault Tolerance features, which are the most advanced on the market.
You can achieve that kind of feature in Hyper-V, though not exactly the same: it is called Failover Clustering. But in order to achieve Failover Clustering, your physical setup should have a cluster and shared storage.
For more information, read about Hyper-V and Failover Clustering.
Now, if you want to power on your VM programmatically, tell me and I will post that in an edit.
| {
"pile_set_name": "StackExchange"
} |
Q:
Logback stops logging after configuration reload
We are evaluating the use of Logback in a multi-server Weblogic environment. On one machine we have two Weblogic server instances (basically two separate JVM processes) running on a same Weblogic domain. The servers log to the same log file (application.log). The Logback configuration (logback.xml) is the same for both servers (shown below):
<configuration scan="true" debug="true">
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>log/application.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>log/application.%d{yyyy-MM-dd}.log</FileNamePattern>
</rollingPolicy>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{HH:mm:ss.SSS} [%31.31logger] [%-5level] [%28.-28thread] %msg %xEx %n</pattern>
</encoder>
</appender>
<logger name="org" level="ERROR"/>
<root level="DEBUG">
<appender-ref ref="FILE" />
</root>
</configuration>
Everything works fine until the configuration is edited (e.g. the root logging level changed or a new logger added) after which logging stops completely. Nothing gets printed in the logs, and no Logback error message is visible in the console either. Logback is in debug mode already, which is verified by the following being written to each server's console upon server startup:
18:06:37,949 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
18:06:37,951 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
18:06:37,957 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/opt/bea10/user_projects/KG/resources/config/logback.xml]
18:06:39,457 |-INFO in ch.qos.logback.classic.turbo.ReconfigureOnChangeFilter@158ef4f - Will scan for changes in file [/opt/bea10/user_projects/KG/resources/config/logback.xml] every 60 seconds.
18:06:39,457 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter
18:06:39,471 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
18:06:39,556 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
18:06:40,061 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Pushing component [rollingPolicy] on top of the object stack.
18:06:40,533 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - No compression will be used
18:06:40,563 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - Will use the pattern log/application.%d{yyyy-MM-dd}.log for the active file
18:06:40,652 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - The date pattern is 'yyyy-MM-dd' from file name pattern 'log/application.%d{yyyy-MM-dd}.log'.
18:06:40,652 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Roll-over at midnight.
18:06:40,654 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Setting initial period to Wed Oct 20 17:43:20 EEST 2010
18:06:40,685 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Pushing component [encoder] on top of the object stack.
18:06:41,256 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: log/application.log
18:06:41,257 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [log/application.log]
18:06:41,307 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org] to ERROR
18:06:41,307 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [org] to true
18:06:41,307 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to DEBUG
18:06:41,308 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to Logger[ROOT]
18:06:41,351 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
The version of Logback is 0.9.24, slf4j is 1.6.0, Weblogic is 10.3 (doubt if that matters) and Java is 1.6.0_12. OS is Solaris. I even tried putting the Java option
-XX:-UseVMInterruptibleIO
because this was suggested for a Logback problem on Solaris here, but it did not help.
Is there a way to make this work? Is it a bad idea altogether to have the two servers write to the same log file?
A:
Does the prudent property help? It adds overhead but can work around issues with multiple JVMs. I'm not sure your symptoms quite match, but it might be worth a try.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to pass an overloaded member-function as parameter?
Here is the problem I am facing: I have an overloaded function in a class, and I want to pass one of its overloads as a parameter. But when doing so, I get the following error :
"no suitable constructor exists to convert from <unknown-type> to std::function<...>"
Here's a code sample to illustrate that:
#include <functional>
#include <string>
class Foo
{
private:
int val1 , val2;
};
class Bar
{
public:
void function ( ) {
//do stuff;
Foo f;
function ( f );
}
void function ( const Foo& f ) {
//do stuff
}
private:
//random attribute
std::string str;
};
void otherFunction ( std::function<void ( Bar& , const Foo& ) > function ) {
Bar b;
Foo f;
function(b,f);
}
int main ( ) {
otherFunction ( &Bar::function );
^^^
error
}
I understand that the compiler cannot deduce which overload to use, so the next best thing to do is a static_cast, but the following code still has the same error
std::function<void ( Bar& , const Foo& )> f = static_cast< std::function<void ( Bar& , const Foo& )> > ( &Bar::function );
A:
You need to cast to member-function pointer, not to std::function:
otherFunction ( static_cast<void(Bar::*)(const Foo&)>(&Bar::function) );
Live
[EDIT]
Explanation:
otherFunction ( &Bar::function );
otherFunction takes std::function as a parameter. std::function has an implicit constructor (an implicit conversion) from a function pointer (a member function or a free function, and other "callable" types, doesn't matter here). It looks like this:
template< class F >
function( F f );
it's a template parameter
while F is "callable", it doesn't specify the signature of F
This means that the compiler doesn't know which Bar::function you meant, because this constructor doesn't put any restrictions on the input parameter. That's what the compiler is complaining about.
You tried
static_cast< std::function<void ( Bar& , const Foo& )> > ( &Bar::function );
While it looks like the compiler has all the details it needs here (the signature), actually the same constructor is called, so nothing effectively changed.
(Actually, the signature is incorrect, but even correct one wouldn't work)
By casting to a function pointer we provide its signature
static_cast<void(Bar::*)(const Foo&)>(&Bar::function)
So the ambiguity is resolved: there's only one such function, so the compiler is happy.
A:
If you use the typed member function pointer along with a templated otherFunction, your code will work. That means, change your otherFunction() to:
template<typename Class, typename T>
void otherFunction(void(Class::*memFn)(T)) {
Bar b;
Foo f;
(b.*memFn)(f); // call through the passed member-function pointer
}
If the syntax is confusing, use a helper (template) alias for the member function pointer:
template<typename Class, typename T>
using MemFunPtr = void(Class::*)(T);
template<typename Class, typename T>
void otherFunction(MemFunPtr<Class, T> function) {
Bar b;
Foo f;
(b.*function)(f); // invoke via the member-function pointer
}
Now you can call the function without typecasting.
int main()
{
otherFunction(&Bar::function);
return 0;
}
(See Online)
| {
"pile_set_name": "StackExchange"
} |
Q:
HDIV with Richfaces 4
I am using HDIV with JSF. I want to add RichFaces to my application, but the use of rich components like the calendar results in an HDIV exception, since those components create client-side elements which are not part of the JSF component tree. How can I approach this problem?
A:
As you probably know, RichFaces is not supported by HDIV. In order to solve the problem there are two possible solutions:
Define the new client-side parameters as start parameters. In that case HDIV is not going to validate them, but it will work. It seems they are editable parameters, so maybe that is enough.
Extend the RichFaces component in order to register the parameters within the JSF tree. This solution could work but takes much more effort.
| {
"pile_set_name": "StackExchange"
} |
Q:
In tmux I only have 2 groups
In tmux I only have 2 groups, as opposed to the expected 5:
$ groups
username sudo staff website1 website2
$ tmux
$ groups
username sudo
Why is this and how do I fix it?
A:
Perhaps your tmux server was started before you were added to the additional groups. The server process and any processes which it starts will only have the permissions that were in place when the server was started.
You can fix this by closing all sessions and starting a new server. Once you've quit any programs that you care about which are running inside tmux sessions you can use tmux kill-server to ensure that the old server process is ended. Then when you run tmux again it will automatically start a new server which should have all of your current permissions.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why can an elf never become overweight?
Elves and human share many similarities, from body type to the ability to perform magic. However, elves retain a slim, lean build throughout their life, and are incapable of becoming obese. The biggest they could get would be that of a swimmer's body, toned and lightly muscled. This is true regardless of how much they consume. My original hypothesis for this was "because magic", but humans are able to match them in this regard. What biological reason would there be for elves not being able to become fat?
A:
From a genetic heritage standpoint, the Elves evolved in Underhill, which is an ideal garden-like environment, free from both harsh winters and famine-inducing dry months.
As a result, their bodies never developed the ability to store calories for times when food was scarce. Food was always abundant, so fat elves had no survival advantage over thin elves.
and in the end, Shakespeare was right...
Let me have men about me that are fat,
Sleek-headed men and such as sleep a-nights.
Yon Cassius has a lean and hungry look,
He thinks too much; such men are dangerous.
A:
The long-eared folk have a metabolism just like that of humans. They don't usually get obese, not because of a faster metabolism, but because of a combination of factors:
They have less adipose tissue. Meaning that they don't store fat as efficiently as we do.
Their bowels naturally produce tetrahydrolipstatin. Trust me, you don't want to see a chamber pot that has been used by an elf.
On top of that, they willingly and ritually eat tapeworm infested food. They insist that as part of nature, those tapeworms are symbiotes and not parasites.
And that's how they stay slim without doing any exercise.
| {
"pile_set_name": "StackExchange"
} |
Q:
R foreach error when using formula notation in randomForest
I have an issue running a randomForest in parallel using foreach.
See this example, I create some data,then a formula notation.
The formula works on a randomForest by itself.
But it fails when used in a foreach parallel loop...?
# rf on big training set
# use parallel foreach
library(foreach)
library(doMC)
registerDoMC(4) # change the 4 to your number of CPU cores
# info on parallel backend
getDoParName()
getDoParWorkers()
# bogus data
set.seed(123)
ssize <- 100000
x1 <- sample( LETTERS[1:9], ssize, replace=TRUE, prob=c(0.1, 0.2, 0.15, 0.05,0.1, 0.2, 0.05, 0.05,0.1) )
x2 <- rlnorm(ssize,0,0.25)
x3 <- rlnorm(ssize,0,0.5)
y <- sample( c("Y","N"), ssize, replace=TRUE, prob=c(0.05, 0.95))
df <- data.frame(x1,x2,x3,y)
df$p_y <- as.numeric(df$y)-1
# use strata to sample whole dataset
library(sampling)
s1 = strata(df,stratanames = "y", size = c(2500,2500))
s2 = strata(df,stratanames = "y", size = c(2500,2500))
s3 = strata(df,stratanames = "y", size = c(2500,2500))
s4 = strata(df,stratanames = "y", size = c(2500,2500))
s_list <- list(s1$ID_unit, s2$ID_unit, s3$ID_unit, s4$ID_unit)
# model function
rf.formula <- as.formula(paste("y","~",paste("x1","x2",sep="+")))
library(randomForest)
# simple stuff works but takes some time
model.rf <-randomForest(y ~ x1 + x2, df, ntree=100, nodesize = 50)
# build rf with dopar on explicit formula works and is quick
model.rf.dopar <- foreach(subset=s_list, .combine=combine, .packages='randomForest') %dopar%
randomForest(y ~ x1 + x2, df, ntree=100, nodesize = 50, subset=subset)
# build rf with dopar on rf.formula fails
model.rf.s.b2 <- foreach(subset=s_list, .combine=combine, .packages='randomForest') %dopar%
randomForest(rf.formula, df, ntree=100, nodesize = 50, subset=subset)
# > model.rf.s.b2 <- foreach(subset=s_list, .combine=combine, .packages='randomForest') %dopar%
# + randomForest(rf.formula, df, ntree=100, nodesize = 50, subset=subset)
# Error in randomForest(rf.formula, df, ntree = 100, nodesize = 50, subset = subset) :
# task 1 failed - "invalid subscript type 'closure'"
The error:
model.rf.s.b2 <- foreach(subset=s_list, .combine=combine, .packages='randomForest') %dopar%
+ randomForest(rf.formula, df, ntree=100, nodesize = 50, subset=subset)
Error in randomForest(rf.formula, df, ntree = 100, nodesize = 50, subset = subset) :
task 1 failed - "invalid subscript type 'closure'"
Any suggestions?
Tx
A:
The problem seems to be due to an indexing operation going wrong deep down in the model.frame.default function, which is indirectly called by randomForest.formula. I'm not at all sure what is triggering the problem because there are a lot of tricky evals happening in model.frame.default, but modifying the environment of the formula seems to fix the problem:
r <- foreach(subset=s_list, .combine='combine', .multicombine=TRUE,
.packages='randomForest') %dopar% {
environment(rf.formula) <- environment()
randomForest(rf.formula, df, ntree=100, nodesize = 50, subset=subset)
}
In particular, this causes subset to be evaluated correctly, otherwise it evaluates to the subset function. I tried renaming the iteration variable, but it didn't help.
Note that I also set .multicombine to TRUE since the randomForest combine function accepts multiple objects, and that can improve performance significantly.
Update
The problem can be reproduced with:
fun <- function(subset) {
randomForest(rf.formula, df, ntree=100, nodesize = 50, subset=subset)
}
fun(s_list[[1]])
If the variable subset is changed to s, for example, it also fails, but with a less misleading error message:
> fun <- function(s) {
> randomForest(rf.formula, df, ntree=100, nodesize = 50, subset=s)
> }
> fun(s_list[[1]])
Error in eval(expr, envir, enclos) : object 's' not found
Calls: fun ... eval -> model.frame -> model.frame.default -> eval -> eval
Execution halted
As with the foreach example, resetting the environment of the formula seems to work-around the problem.
| {
"pile_set_name": "StackExchange"
} |
Q:
Regarding spatial resolution conversion programming - bmp image
I want to ask something about simple spatial resolution manipulation just using the C language. I have written the program below; it compiles, but for some reason the program gets stuck in the middle when I try to run it. I really hope you guys can help. I am an extreme beginner at this.
#include<stdio.h>
#define width 640
#define height 581
int main(void)
{
FILE *fp;
unsigned char header[54];
unsigned char img_work[width][height][3];
char input_file[128],output_file[128];
int v, h, w, i, c, s, ave_w[width], ave_h[height], average_h, average_w;
/*------------Reading image------------*/
printf("Enter name of the file\n---");
scanf("%s",input_file);
printf("The file that would be processed is %s.\n", input_file);
fp=fopen(input_file,"rb");
fread(header,1,54,fp);
fread(img_work,1,width*height*3,fp);
fclose(fp);
/*------------Spatial Resolution Program------------*/
printf ("enter level of spatialization-- ");
scanf ("%d", &v);
for (i=0; i<v; i++) {
s = s + s;
}
for(c=0; c<3; c++){
for(h=0; h<height; h++){
for(w=0; w<width; w=w+s){
average_w = 0;
for (i=0; i<s; i++) {
ave_w = img_work[w+i][h][c] / s;
average_w = average_w + ave_w;
}
for (i=0; i<width; i=i+s) {
img_work[w+i][h][c] = average_w;
}
}
}
}
for(c=0; c<3; c++){
for(w=0; w<width; w++){
for(h=0; h<height; h=h+s){
average_h = 0;
for (i=0; i<s; i++) {
ave_h = img_work[w][h+i][c] / s;
average_h = average_h + ave_h;
}
for (i=0; i<height; i=i+s) {
img_work[w][h+i][c] = average_h;
}
}
}
}
/*------------Writing File------------*/
printf("Enter the name of the file that would be saved.\n---");
scanf("%s",output_file);
printf("Name of the file that would be saved is %s.\n",output_file);
fp=fopen(output_file,"wb");
fwrite(header,1,54,fp);
fwrite(img_work,1,width*height*3,fp);
fclose(fp);
printf("End.\n");
return 0;
}
I am really a beginner, so, sorry if this is lacking too much.
A:
There are several issues with your code:
s is uninitialised. Hence, when you access its value in the assignment s = s + s, the result is undefined. s may even be negative. Initialise it: s = 1;
You've got the representation of your image wrong. You read the pixel data verbatim from the file. The BMP format is row major, so your pixel data should be img_work[height][width][3] and all accesses should have their first and second dimensions swapped.
The BMP format also requires padding at the end of each row. Your fixed-size width of 640 doesn't require it, but it's worth keeping in mind when you want to make your implementation more general.
You don't really need the auxiliary variables ave_w and ave_h. Most importantly, you don't need them to be arrays.
Your height isn't evenly divisible by s. That means that in the last pass through the loop, h + i will go out of bounds. (The same applies to the width, but the value 640 is safe up to at least a level of 7.) You could calculate an "actual s" that would be adjusted for the top and right sides.
When you calculate the average, it is better to sum the values first and then divide by s once. You are dealing with integers and integer division truncates. For example 3/4 is zero. Consequently, (3/4 + 3/4 + 3/4 + 3/4) is also zero, but (3 + 3 + 3 + 3) / 4 is 3. You can notice the effect for larger levels of reduction, where a predominantly white image becomes darker if you divide on summation.
Here's a program based on yours, that puts the points raised above into practice:
#include <stdio.h>
#define width 640
#define height 581
int main(void)
{
FILE *fp;
unsigned char header[54];
unsigned char img_work[height][width][3];
char input_file[128];
char output_file[128];
int v, h, w, i, c, s;
/*------------Reading image------------*/
printf("Enter name of the file\n---");
scanf("%s",input_file);
printf("The file that would be processed is %s.\n", input_file);
fp=fopen(input_file,"rb");
fread(header,1,54,fp);
fread(img_work,1,width*height*3,fp);
fclose(fp);
/*------------Spatial Resolution Program------------*/
printf("enter level of spatialization-- ");
scanf("%d", &v);
s = 1;
for (i = 0; i < v; i++) {
s = s + s;
}
for (c = 0; c < 3; c++) {
for (h = 0; h < height; h++) {
for (w = 0; w < width; w = w + s) {
int average_w = 0;
int ss = s;
if (w + ss > width) ss = width % s;
for (i = 0; i < ss; i++) {
average_w = average_w + img_work[h][w + i][c];
}
for (i = 0; i < ss; i++) {
img_work[h][w + i][c] = average_w / ss;
}
}
}
}
for (c = 0; c < 3; c++) {
for (w = 0; w < width; w++) {
for (h = 0; h < height; h = h + s) {
int average_h = 0;
int ss = s;
if (h + ss > height) ss = height % s;
for (i = 0; i < ss; i++) {
average_h = average_h + img_work[h + i][w][c];
}
for (i = 0; i < ss; i++) {
img_work[h + i][w][c] = average_h / ss;
}
}
}
}
/*------------Writing File------------*/
printf("Enter the name of the file that would be saved.\n---");
scanf("%s",output_file);
printf("Name of the file that would be saved is %s.\n",output_file);
fp=fopen(output_file,"wb");
fwrite(header,1,54,fp);
fwrite(img_work,1,width*height*3,fp);
fclose(fp);
printf("End.\n");
return 0;
}
That's still a quick-and-dirty program with fixed image sizes. It doesn't enforce that the actual size of the image, which can be read from the header, and the fixed sizes match or that the colour depth is the same or that you even get enough pixel data, for which you should check the return value of fread.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to color cells after creating a QTableView using a custom QAbstractTableModel
I create a class 'pandasModel' based on QAbstractTableModel, shown below:
import sys
import pandas as pd
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
class pandasModel(QAbstractItemModel):
def __init__(self, data, parent=None):
QAbstractItemModel.__init__(self, parent)
self._data = data
def rowCount(self, parent=None):
return self._data.index.size
def columnCount(self, parent=None):
return self._data.columns.size
def data(self, index, role=Qt.DisplayRole):
if index.isValid():
if role == Qt.DisplayRole:
return str(self._data.iloc[index.row(), index.column()])
if role == Qt.EditRole:
return str(self._data.iloc[index.row(), index.column()])
return None
def headerData(self, rowcol, orientation, role):
if orientation == Qt.Horizontal and role == Qt.DisplayRole:
return self._data.columns[rowcol]
if orientation == Qt.Vertical and role == Qt.DisplayRole:
return self._data.index[rowcol]
return None
def flags(self, index):
flags = super(self.__class__, self).flags(index)
flags |= Qt.ItemIsEditable
flags |= Qt.ItemIsSelectable
flags |= Qt.ItemIsEnabled
flags |= Qt.ItemIsDragEnabled
flags |= Qt.ItemIsDropEnabled
return flags
def sort(self, Ncol, order):
"""Sort table by given column number.
"""
try:
self.layoutAboutToBeChanged.emit()
self._data = self._data.sort_values(self._data.columns[Ncol], ascending=not order)
self.layoutChanged.emit()
except Exception as e:
print(e)
Also I create a QTableView to show the Model, shown below:
class TableWin(QWidget):
pos_updown = -1
pos_save = []
def __init__(self):
super(TableWin, self).__init__()
self.resize(200, 100)
self.table = QTableView(self)
self.v_layout = QVBoxLayout()
self.v_layout.addWidget(self.table)
self.setLayout(self.v_layout)
self.showdata()
def showdata(self):
data = pd.DataFrame([[1,2,3,4],[5,6,7,8]])
model = pandasModel(data)
self.table.setModel(model)
def set_cell_color(self, row, column):
'''
Pass two arguments to this function, which is called to set
the background color of the cell corresponding to the row and column
'''
if __name__ == '__main__':
app = QApplication(sys.argv)
tableView = TableWin()
# I want to change cell's color by call function 'set_cell_color' here
# tableView.set_cell_color(row=1,column=1)
tableView.show()
sys.exit(app.exec_())
We can show data in the QTableView now, but the question is how can I call the function 'set_cell_color' to set the background color for the cell at a given row and column. Could you please tell me how to finish the code in def set_cell_color?
At first I wanted to set a cell's color by using 'model.item(row, col).setBackground(QColor(240, 255, 240))', just like with QStandardItemModel, but it raised the error "'model' has no attribute 'item'".
this link shows a method to set cell's color
code shows below:
import sys
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
class Model(QAbstractTableModel):
def __init__(self, parent=None):
super(Model, self).__init__(parent)
self._data = [[['%d - %d' % (i, j), False] for j in range(10)] for i in range(10)]
def rowCount(self, parent):
return len(self._data)
def columnCount(self, parent):
return len(self._data[0])
def flags(self, index):
return Qt.ItemIsSelectable | Qt.ItemIsEnabled | Qt.ItemIsEditable
def data(self, index, role):
if index.isValid():
data, changed = self._data[index.row()][index.column()]
if role in [Qt.DisplayRole, Qt.EditRole]:
return data
if role == Qt.BackgroundRole and data == "In error": # <---------
return QBrush(Qt.red)
def setData(self, index, value, role):
if role == Qt.EditRole:
self._data[index.row()][index.column()] = [value, True]
self.dataChanged.emit(index, index)
return True
return False
if __name__ == '__main__':
app = QApplication(sys.argv)
tableView = QTableView()
m = Model(tableView)
tableView.setModel(m)
tableView.show()
sys.exit(app.exec_())
Using 'return QBrush(Qt.red)' in the 'data' function above can set the background color of cells with the value 'In error', but that background color is already fixed when the QTableView finishes being created. I just want to set a cell's background color when I call the function 'set_cell_color'; that means I can control the cell's background even after the QTableView has been created. I will really appreciate your help.
A:
The logic is to save the information in the model associating the item's position and the item's color, and to update it, the dataChanged signal must be emitted.
Note: Your model is of type table, so you must inherit from QAbstractTableModel and not from QAbstractItemModel.
Considering the above, the solution is:
class pandasModel(QAbstractTableModel):
def __init__(self, data, parent=None):
QAbstractTableModel.__init__(self, parent)
self._data = data
self.colors = dict()
def rowCount(self, parent=None):
return self._data.index.size
def columnCount(self, parent=None):
return self._data.columns.size
def data(self, index, role=Qt.DisplayRole):
if index.isValid():
if role == Qt.DisplayRole:
return str(self._data.iloc[index.row(), index.column()])
if role == Qt.EditRole:
return str(self._data.iloc[index.row(), index.column()])
if role == Qt.BackgroundRole:
color = self.colors.get((index.row(), index.column()))
if color is not None:
return color
return None
def headerData(self, rowcol, orientation, role):
if orientation == Qt.Horizontal and role == Qt.DisplayRole:
return self._data.columns[rowcol]
if orientation == Qt.Vertical and role == Qt.DisplayRole:
return self._data.index[rowcol]
return None
def flags(self, index):
flags = super(self.__class__, self).flags(index)
flags |= Qt.ItemIsEditable
flags |= Qt.ItemIsSelectable
flags |= Qt.ItemIsEnabled
flags |= Qt.ItemIsDragEnabled
flags |= Qt.ItemIsDropEnabled
return flags
def sort(self, Ncol, order):
"""Sort table by given column number.
"""
try:
self.layoutAboutToBeChanged.emit()
self._data = self._data.sort_values(
self._data.columns[Ncol], ascending=not order
)
self.layoutChanged.emit()
except Exception as e:
print(e)
def change_color(self, row, column, color):
ix = self.index(row, column)
self.colors[(row, column)] = color
self.dataChanged.emit(ix, ix, (Qt.BackgroundRole,))
class TableWin(QWidget):
pos_updown = -1
pos_save = []
def __init__(self):
super(TableWin, self).__init__()
self.resize(200, 100)
self.table = QTableView(self)
self.v_layout = QVBoxLayout()
self.v_layout.addWidget(self.table)
self.setLayout(self.v_layout)
self.showdata()
def showdata(self):
data = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]])
self.model = pandasModel(data)
self.table.setModel(self.model)
def set_cell_color(self, row, column):
self.model.change_color(row, column, QBrush(Qt.red))
| {
"pile_set_name": "StackExchange"
} |
Q:
Insert multiple documents referenced by another Schema
I have the following two schemas:
var SchemaOne = new mongoose.Schema({
id_headline: { type: String, required: true },
tags: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Tag' }]
});
var tagSchema = new mongoose.Schema({
_id: { type: String, required: true, index: { unique: true } }, // value
name: { type: String, required: true }
});
As you can see, in the first schema there is an array of references to the second schema.
My problem is:
Suppose that, in my backend server, I receive an array of tags (just the id's) and, before creating the SchemaOne document, I need to verify if the received tags already exist in the database and, if not, create them. Only after having all the tags stored in the database, I may assign this received array to the tags array of the to be created SchemaOne document.
I'm not sure how to implement this. Can you give me a helping hand?
A:
So lets assume you have input being sent to your server that essentially resolves to this:
var input = {
"id_headline": "title",
"tags": [
{ "name": "one" },
{ "name": "two" }
]
};
And as you state, you are not sure whether any of the "tags" entries already exist, but of course the "name" is also unique for looking up the associated object.
What you are basically going to have to do here is "lookup" each of the elements within "tags" and return the document with the reference to use to the objects in the "Tag" model. The ideal method here is .findOneAndUpdate(), with the "upsert" option set to true. This will create the document in the collection where it is not found, and at any rate will return the document content with the reference that was created.
Note that naturally, you want to ensure you have those array items resolved "first", before proceeding to save the main "SchemaOne" object. The async library has some methods that help structure this:
async.waterfall(
[
function(callback) {
async.map(input.tags,function(tag,callback) {
Tag.findOneAndUpdate(
{ "name": tag.name },
{ "$setOnInsert": { "name": tag.name } },
{ "upsert": true, "new": true },
callback
)
},callback);
},
function(tags,callback) {
Model.findOneAndUpdate(
{ "id_headline": input.id_headline },
{ "$addToSet": {
"tags": { "$each": tags.map(function(tag) { return tag._id }) }
}},
{ "upsert": true, "new": true },
callback
)
}
],
function(err,result) {
// if err then do something to report it, otherwise it's done.
}
)
So the async.waterfall is a special flow control method that will pass the result returned from each of the functions specified in the array of arguments to the next one, right until the end of execution, where you can optionally pass in the result of the final function in the list. It basically "cascades" or "waterfalls" results down to each step. This is used here to pass the results of the "tags" creation into the main model creation/modification.
The async.map within the first executed stage looks at each of the elements within the array of the input. So for each item contained in "tags", the .findOneAndUpdate() method is called to look for and possibly create if not found, the specified "tag" entry in the collection.
Since the output of .map() is going to be an array of those documents, it is simply passed through to the next stage. Therefore each iteration returns a document, when the iteration is complete you have all documents.
The next usage of .findOneAndUpdate() with "upsert" is optional, and of course considers that the document with the matching "id_headline" may or may not exist. The same applies here: if the document is there, then the "update" is processed; if not, then it is simply created. You could optionally .insert() or .create() if the document was known not to be there, but the "update" action gives some interesting options.
Namely here is the usage of $addToSet, where if the document already existed then the specified items would be "added" to any content that was already there, and of course as a "set", any items already present would not be new additions. Note that only the _id fields are required here when adding to the array with an atomic operator, hence the .map() function employed.
An alternate case on "updating" could be to simply "replace" the array content using the $set atomic operation if it was the intent to only store those items that were mentioned in the input and no others.
In a similar manner the $setOnInsert shown when "creating"/"looking for" items in "Tags" makes sure that there is only actual "modification" when the object is "created/inserted", and that removes some write overhead on the server.
So the basic principle of using .findOneAndUpdate(), at least for the "Tags" entries, is the most efficient way of handling this. This avoids double handling such as:
Querying to see if the document exists by name
if No result is returned, then send an additional statement to create one
That means two operations to the database with communication back and forth, which the "upsert" actions here simplify into a single request for each item.
| {
"pile_set_name": "StackExchange"
} |
Q:
PostGIS ST_Intersects GeometryCollection
If i'm not terribly mistaken, ST_Intersects does not support GeometryCollection.
ERROR: Relate Operation called with a LWGEOMCOLLECTION type. This is unsupported.
HINT: Change argument 2: 'GEOMETRYCOLLECTION(POLYGON((.......
Currently I am using a wrapper that cycles through the different types in a GeometryCollection and performs an ST_Intersects on each. Needless to say, this is terribly inefficient.
There must be a better way?
A:
Spatial relation operators do not work on geometry collections by design, as inherited from JTS.
Try using ST_CollectionHomogenize, which will return a regular geometry, if possible. Or ST_CollectionExtract to specify a geometry type.
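For example (the table and column names here are hypothetical), extracting only the polygons from a collection before testing intersection might look like:

```sql
-- hypothetical schema: parcels(id, geom)
SELECT p.id
FROM parcels p
WHERE ST_Intersects(
        p.geom,
        ST_CollectionExtract(
            'GEOMETRYCOLLECTION(POLYGON((0 0,1 0,1 1,0 1,0 0)))'::geometry,
            3  -- 3 = polygons; 1 = points, 2 = lines
        )
      );
```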
A:
According to the PostGIS documentation for the ST_Intersects function, the PostGIS version "2.5.0 Supports GEOMETRYCOLLECTION".
It may be good to note that, the PostGIS version 2.5.0 release news states that "this release will work for PostgreSQL 9.4 and above but to take full advantage of what PostGIS 2.5 offers, you should be running PostgreSQL 11beta4+ and GEOS 3.7.0".
| {
"pile_set_name": "StackExchange"
} |
Q:
Alt+Tab and Super+Tab not working after upgrade from 17.04 to 17.10
I just upgraded to Ubuntu Gnome 17.10 after 17.04 went out of support.
After the reboot finalizing the upgrade the Alt+Tab and Alt+Super stopped working.
However, the "Switch Application Windows" Hotkey keeps working, and, if I keep the switcher open by holding down Alt, I can then switch between Applications using Tab.
I already tried resetting those settings and setting the shortcuts using the terminal:
gsettings set org.gnome.desktop.wm.keybindings switch-applications "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-applications-backward "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab', '<Super>Tab']"
gsettings set org.gnome.desktop.wm.keybindings switch-windows-backward "['<Alt><Shift>Tab', '<Super><Shift>Tab']"
I also switched these around a bit to experiment but couldn't get it to work.
But as of now this is pretty annoying.
How can I fix this?
A:
Update:
You can try reloading Gnome via ALT-F2, pressing R and then Enter.
Thanks to Rob Hendricks for this hint.
Original Answer
Interestingly, after issuing the above commands, a reboot fixed it, when reloading the desktop environment didn't.
So for anyone encountering this, set your preferred settings using:
gsettings set org.gnome.desktop.wm.keybindings switch-applications "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-applications-backward "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab', '<Super>Tab']"
gsettings set org.gnome.desktop.wm.keybindings switch-windows-backward "['<Alt><Shift>Tab', '<Super><Shift>Tab']"
And reboot.
| {
"pile_set_name": "StackExchange"
} |
Q:
keySet field in HashMap is null
I am trying to loop over a HashMap with the keySet() method as below:
for (String key : bundle.keySet()) {
String value = bundle.get(key);
...
}
I use a lot of for-each loops on HashMaps in other parts of my code, but this one has a weird behavior: its size is 7 (which is normal) but keySet, entrySet and values are null (according to the Eclipse debugger)!
The "bundle" variable is instantiated and populated as follows (nothing original...):
Map <String, String> privVar;
Constructor(){
privVar = new HashMap<String, String>();
}
public void add(String key, String value) {
this.privVar.put(key, value);
}
A:
What do you mean by keySet, entrySet and values? If you mean the internal fields of HashMap, then you should not look at them and need not care about them. They are used for caching.
For example in the Java 6 VM that I use keySet() is implemented like this:
public Set<K> keySet() {
Set<K> ks = keySet;
return (ks != null ? ks : (keySet = new KeySet()));
}
So the fact that keySet is null is irrelevant. keySet() (the method) will never return null.
The same is true for entrySet() and values().
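A quick sketch (the class name is made up) showing that the view returned by keySet() is never null, regardless of the state of the internal caching field:

```java
import java.util.HashMap;
import java.util.Map;

public class KeySetDemo {
    // true when keySet() is non-null even on a map whose view fields
    // have never been touched before this call
    static boolean keySetIsNeverNull() {
        Map<String, String> bundle = new HashMap<>();
        bundle.put("a", "1");
        bundle.put("b", "2");
        return bundle.keySet() != null && bundle.keySet().size() == 2;
    }

    public static void main(String[] args) {
        System.out.println(keySetIsNeverNull()); // prints true
    }
}
```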
| {
"pile_set_name": "StackExchange"
} |
Q:
Ruby On Rails: Render Collection
I'd like to show a specific image on the first iteration when the partial template is called, but not the rest:
view file:
<% @categories.each do |category,i,category_locales| %>
<div class="featured_deals">
<%= render :partial => "medium_deal", :collection => category_locales, :as => :deal_locale %>
</div>
<% end %>
medium_deal file:
<% deal = deal_locale.deal %>
<%= image_tag 'layout/featured_deal_left_blue.png', :style => 'float: left; padding-top: 11px;' %>
# I only want this image to show for the FIRST element of category_locales, but not the rest.
<div class="featured_deal_wrapper">
Hello
</div>
I tried passing a counter in the first view file, but it is not incremented until the "each" passes through again.
A:
You should be able to do this:
<% if deal_locale_counter == 0 %>
<%= image_tag 'layout/featured_deal_left_blue.png', :style => 'float: left; padding-top: 11px;' %>
<% end %>
See: http://guides.rubyonrails.org/layouts_and_rendering.html#local-variables
| {
"pile_set_name": "StackExchange"
} |
Q:
Meaning of function (global) in Javascript
I am trying to understand what function (global) means in the code below. Also, is 'window' the parameter value passed to the function, or is it the name of a parameter rather than the parameter value?
May be this is simple JavaScript using an uncommon style of coding.
(function (global) {
var mobileSkin = "",
app = global.app = global.app || {};
app.application = new kendo.mobile.Application(document.body,
{ layout: "tabstrip-layout", skin:"flat"});
})(window);
A:
There are common JavaScript patterns in this code:
The namespace pattern.
The immediate function pattern.
The Namespace Pattern
In a browser, the window object is the global scope object. In this example of code that you shared, the programmer created an immediately-invoked function expression and passes the global object window as a parameter, which in the context of the IIFE is bound to the local variable global.
The function, as its name suggests, is immediately invoked when this file is parsed by the browser.
From this point on, global is just an alias for the global scope object window, and the programmer uses it to define a namespace app in it.
The namespace basically avoids cluttering the global scope with the objects you need to define and allows the programmer to be more in control of exactly what is defined within his custom scope.
The idea is that from this point forward, you should define all application globals within this customised scope and not within the window global scope, by this avoiding name collisions with other third-party libraries that you are using. This will be a pseudo-equivalent of packages or namespaces in other languages like Java or C#.
Stoyan Stefanov in his book JavaScript Patterns explains it as follows:
Namespaces help reduce the number of globals required by our programs
and at the same time also help avoid naming collisions or excessive
name prefixing.
JavaScript doesn’t have namespaces built into the language syntax, but
this is a feature that is quite easy to achieve. Instead of polluting
the global scope with a lot of functions, objects, and other
variables, you can create one (and ideally only one) global object for
your application or library. Then you can add all the functionality to
that object.
The Immediate Function Pattern
The immediately-invoked function is another common JavaScript pattern. It is simply a function that is executed right after it is defined.
Stefanov describes its importance as follows:
This pattern is useful because it provides a scope sandbox for your
initialization code. Think about the following common scenario: Your
code has to perform some setup tasks when the page loads, such as
attaching event handlers, creating objects, and so on. All this work
needs to be done only once, so there’s no reason to create a reusable
named function. But the code also requires some temporary variables,
which you won’t need after the initialization phase is complete. It
would be a bad idea to create all those variables as globals. That’s
why you need an immediate function—to wrap all your code in its local
scope and not leak any variables in the global scope:
(function () {
var days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'],
today = new Date(),
msg = 'Today is ' + days[today.getDay()] + ', ' + today.getDate();
alert(msg);
}()); // "Today is Fri, 13"
If this code weren’t wrapped in an immediate function, then the
variables days, today, and msg would all be global variables,
leftovers from the initialization code.
| {
"pile_set_name": "StackExchange"
} |
Q:
MassTransit fault consumer not invoked for request/response
What is the best practice for handling exceptions in MassTransit 3+ with regard to the request/response pattern? The docs here mention that if a ResponseAddress exists on a message, the Fault message will be sent to that address, but how does one consume/receive the messages at that address? The ResponseAddress for Bus.Request seems to be an auto-generated MassTransit address that I don't have control over, so I don't know how to access the exception thrown in the main consumer. What am I missing? Here's my code to register the consumer and its fault consumer using the Unity container:
cfg.ReceiveEndpoint(host, "request_response_queue", e =>
{
e.Consumer<IConsumer<IRequestResponse>>(container);
e.Consumer(() => container.Resolve<IMessageFaultConsumer<IRequestResponse>>() as IConsumer<Fault<IRequestResponse>>);
});
And here's my attempt at a global message fault consumer:
public interface IMessageFaultConsumer<TMessage>
{
}
public class MessageFaultConsumer<TMessage> : IConsumer<Fault<TMessage>>, IMessageFaultConsumer<TMessage>
{
public Task Consume(ConsumeContext<Fault<TMessage>> context)
{
Console.WriteLine("MessageFaultConsumer");
return Task.FromResult(0);
}
}
This approach DOES work when I use Bus.Publish as opposed to Bus.Request. I also looked into creating an IConsumeObserver and putting my global exception logging code into the ConsumeFault method, but that has the downside of being invoked every exception prior to the re-tries giving up. What is the proper way to handle exceptions for request/response?
A:
First of all, the request/response support in MassTransit is meant to be used with the .Request() method, or the request client (MessageRequestClient or PublishRequestClient). With these methods, if the consumer of the request message throws an exception, that exception is packaged into the Fault<T>, which is sent to the ResponseAddress. Since the .Request() method, and the request client are both asynchronous, using await will throw an exception with the exception data from the fault included. That's how it is designed, await the request and it will either complete, timeout, or fault (throw an exception upon await).
If you are trying to put in some global "exception handler" code for logging purposes, you really should log those at the service boundary, and an observer is the best way to handle it. This way, you can just implement the ConsumeFault method, and log to your event sink. However, this is synchronous within the consumer pipeline, so recognize the delay that could be introduced.
The other option is to of course just consume Fault<T>, but as you mentioned, it does not get published when the request client is used with the response address in the header. In this case, perhaps your requester should publish an event indicating that operation X faulted, and you can log that -- at the business context level versus the service level.
There are many options here, it's just choosing the one that fits your use case best.
| {
"pile_set_name": "StackExchange"
} |
Q:
Configuration file of switches format
Do we need different config files for each category [aaa, bfd, ptp, lldp, vlan, etc.] for deployment, or can there be a single config file that contains all of them? And do they need to be in a certain order if they are combined into a single file?
A:
Converting comment to an answer.
For Cisco products, devices typically use one configuration file. Vlans are stored in a database file called vlan.dat. The configuration saves information regarding Vlans and creates them on boot up. The order in which configuration should be loaded is irrelevant for Cisco products.
| {
"pile_set_name": "StackExchange"
} |
Q:
match two strings with letters in random order in python
if I have 2 strings like:
a = "hello"
b = "olhel"
I want to use a regular expression (or something else?) to see if the two strings contain the same letters. In my example a would equal b because they have the same letters. How can this be achieved?
A:
a = "hello"
b = "olhel"
print sorted(a) == sorted(b)
A:
An O(n) algorithm is to create a dictionary of counts of each letter and then compare the dictionaries.
In Python 2.7 or newer this can be done using collections.Counter:
>>> from collections import Counter
>>> Counter('hello') == Counter('olhel')
True
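If collections.Counter is not available (Python < 2.7), the O(n) count-and-compare described above can be hand-rolled with a plain dict:

```python
def same_letters(a, b):
    """Return True if a and b contain exactly the same letters (anagrams)."""
    if len(a) != len(b):
        return False
    counts = {}
    for ch in a:                      # tally letters of the first string
        counts[ch] = counts.get(ch, 0) + 1
    for ch in b:                      # consume the tallies with the second string
        if counts.get(ch, 0) == 0:
            return False
        counts[ch] -= 1
    return True

print(same_letters("hello", "olhel"))  # True
print(same_letters("hello", "world"))  # False
```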
| {
"pile_set_name": "StackExchange"
} |
Q:
How I can find my .RDA has been loaded to R
I have a scenario where I want to check if R has already loaded the .RDA file (which contains a model).
I want this for prediction calls, as I don't want to load the file every time I ask for a prediction.
I tried with this below code
if(!is.na(T2I_Vendor_Eval1.rda)){
print("started")
bar<-load(file = "C:\\T2I_Vendor_Eval1.rda")
print("ended ")
}
Result I get is
Error: object 'T2I_Vendor_Eval1.rda' not found
A:
Instead of doing this
if(!is.na(T2I_Vendor_Eval1.rda)){
print("started")
bar<-load(file = "C:\\T2I_Vendor_Eval1.rda")
print("ended end")
}
I did this
if(!exists("T2I_Vendor_Eval1")){
print("started")
load(file = "C:\\T2I_Vendor_Eval1.rda")
print("ended end")
}
It worked for me.
Thanks for your help @JonGrub
| {
"pile_set_name": "StackExchange"
} |
Q:
Google Play Services partial integration
I've integrated Google Play Services in an Android app, which added 500K to my APK file. All I need from Google Play Services is the ability to +1 a URL.
Is there a way to narrow down the integration and to minimize the impact on the APK size?
A:
I'm not aware of an explicit way to +1 a URL without using the GMS library. You can manually generate a share by passing a URL which might be helpful for you. As an example:
https://plus.google.com/share?url=http://example.com/
Will generate a share link for the site http://example.com. From Android, you could trigger the share by building the link and then encouraging the user to click it or by associating a share button with the action of opening the URL for sharing.
| {
"pile_set_name": "StackExchange"
} |
Q:
find date and time coming in between two given date and time that increment by 5 minutes
Java | How do I find the dates and times between two given dates and times, incrementing by 5 minutes?
For example, if the starting point is [29/1/2017 5:40:00 AM]
And the stopping point is [29/1/2017 6:00:00 AM]
The list (based on the starting and stopping points) must be like this:
29/1/2017 5:40:00 AM
29/1/2017 5:45:00 AM
29/1/2017 5:50:00 AM
29/1/2017 5:55:00 AM
29/1/2017 6:00:00 AM
Please don’t use JODA.
A:
Here's the Java 8 version using the new date-time API. If you're on Java 8, you should use this code.
DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("d/M/u h:m:s a");
LocalDateTime dateTime = LocalDateTime.parse("29/1/2017 5:40:00 AM", dateTimeFormatter);
LocalDateTime endDateTime = LocalDateTime.parse("29/1/2017 6:00:00 AM", dateTimeFormatter);
for(; !dateTime.isAfter(endDateTime); dateTime = dateTime.plusMinutes(5))
System.out.println(dateTime.format(dateTimeFormatter));
| {
"pile_set_name": "StackExchange"
} |
Q:
What's the difference between "to frighten" and "to scare"?
What's the difference between "to frighten" and "to scare"? I've heard both, but have never been able to figure out the difference.
A:
I would suggest that 'frighten' is more intense than 'scare'. Although they are (very) similar, being scared is less serious than being frightened. That is definitely a second-order effect though; to a first approximation, they are (almost) equivalent.
| {
"pile_set_name": "StackExchange"
} |
Q:
Initialization in polymorphism of variables
Suppose you have the following code
class A {
int i = 4;
A() {
print();
}
void print () {
System.out.println("A");
}
}
class B extends A {
int i = 2; //"this line"
public static void main(String[] args){
A a = new B();
a.print();
}
void print () {
System.out.println(i);
}
}
this will print 0 2
Now, if you remove line labeled "this line"
the code will print 4 4
I understand that if there was no int i=2; line,
A a = new B(); will call class A, initializes i as 4, call constructor,
which gives control over to print() method in class B, and finally prints 4.
a.print() will call print() method in class B because the methods will bind at runtime, which will also use the value defined at class A, 4.
(Of course if there is any mistake in my reasoning, let me know)
However, what i don't understand is if there is int i=2.
why is it that if you insert the line, the first part (creating the object) will all of a sudden print 0 instead of 4? Why does it not initialize the variable as i=4, but instead assign the default value?
A:
It is a combination of several behaviors in Java.
Method overriding
Instance variable shadowing
order of constructors
I will simply go through what happened in your code, and see if you understand.
Your code conceptually looks like this (skipping main()):
class A {
int i = 0; // default value
A() {
A::i = 4; // originally in initialization statement
print();
}
void print () {
System.out.println("A");
}
}
class B extends A {
int i = 0; // Remember this shadows A::i
public B() {
super();
B::i = 2;
}
void print () {
System.out.println(i);
}
}
So, when in your original main(), you called A a = new B();, it is constructing a B, for which this happens:
A::i and B::i are all in default value 0
super(), which means A's constructor is called
A::i is set to 4
print() is called. Due to late-binding, it is bound to B::print()
B::print() is trying to print out B::i, which is still 0
B::i is set to 2
Then when you call a.print() in your main(), it is bounded to B::print() which is printing out B::i (which is 2 at this moment).
Hence the result you see
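The underlying hazard is the same across languages: a constructor that calls an overridable method runs the subclass override before the subclass has finished initializing. As an illustration only (note that Python has no field shadowing, so the override observes 4 rather than Java's 0, but it still runs before B's own initialization completes):

```python
class A:
    def __init__(self):
        self.i = 4
        self.log = []
        self.show()            # dispatches to the subclass override

    def show(self):
        self.log.append("A")

class B(A):
    def __init__(self):
        super().__init__()     # A.__init__ runs first; self.i is 4 there
        self.i = 2             # B's value only takes effect afterwards

    def show(self):
        self.log.append(self.i)

b = B()
print(b.log)  # [4] -- the override ran during base-class construction
print(b.i)    # 2
```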
A:
All the instance variables in the new object, including those declared in superclasses, are initialized to their default values - JLS 12.5
Therefore, your variable B::i will be initialized to 0. The constructor in B will be like:
B() {
super();
i = 2;
}
So when you call
A a = new B();
The constructor in A will call the print method in B, which will prints the i in class B, which is 0.
A:
In your case, in class B, the declaration of "i" hides the declaration of "i" in A, and all references to "i" in the child class refer to B.i, not A.i.
And so what you see in A.i is the default value of any int attribute in Java, which is zero.
Java instance variables cannot be overridden in a subclass.
You want to try this for more clarification.
class B extends A {
int i = 2; //"this line"
public static void main(String[] args){
B b = new B();
A a = b;
System.out.println("a.i is " + a.i);
System.out.println("b.i is " + b.i);
}
void print () {
System.out.println(i);
}
}
Ouput:
a.i is 4
b.i is 2
| {
"pile_set_name": "StackExchange"
} |
Q:
SyntaxError: missing ) after argument list in jquery whilst appending
I am getting this "missing )" error even though there seems to be nothing wrong in my jQuery line.
$('.product_area').append('
<div class="product">
<a href="/index.php?store='+$store_username+'&view=single&product='+json[i].product_id'"><div class="product-image">
<img src="<?php echo image_check('+json[i].product_image1+'); ?>" />
</div>
</a>
<div class="product_details"><div class="product-name">
<a class="name" href="/index.php?store='+$store_username+'&view=single&product='+json[i].product_id'">
<span><?php echo substr('+json[i].product_name+',0,23); ?></span>
</a>
</div>
<div class="product-price">
<span class="price">@ Rs. <?php echo number_format('+json[i].product_price+',2); ?></span>/-</div>
<div class="product-discount">
<span class="discount">Discount:
<span class="color-text"><?php echo '+json[i].product_discount+' ?></span>%</span>
</div>
</div>
</div>').animate({width:'toggle'},150);
I have tried to write it as clean as possible. Can anyone check! It's irritating a lot
A:
This looks like a problem with splitting up a string over multiple lines. (Also you have a typo, as others have commented. If this is an error in your code you'll need to fix it, but if it's just a typo here on SO here's the other problem you're facing)
There are a couple of ways you could go about solving this.
1) If you can use ES6, consider using string templates with `` (backticks). This solution might look like this:
let html = `<div class="product">
<a href="/index.php?store=${store_username}&view=single&product...`
Notice that you don't need to use + in this solution, you can just use ${var_name} and get the value of your variable. You can also split over multiple lines and be OK. I think you could also just replace the entire string in your append() method with a string template and be good.
2) Prepackage your HTML into a variable before appending it, using the += operator. Here it might look something like this:
var html = '<div class="product">';
html += '<a href="/index.php?store=';
html += $store_username;
html += '&view=single&product=';
And so on, and then you would
.append(html);
3) Finally, you can split lines with the \
... .append('<div class="product"> \
<a href="/index.php?store= \
'+$store_username+'&view=single&product=' ... );
| {
"pile_set_name": "StackExchange"
} |
Q:
Probation Period UK and sacking notice
Currently working for a Company, had no negative reviews or anything, in fact not had one meeting with my line manager or anyone to give me a performance review. Have assumed that this is a good thing. I have always felt that they are proud of my work.
The boss of the company is a 70 year old who knows very little about IT and wants things done instantly. When I told him the changes he wanted me to make would take a few weeks, He told me he was disappointed and that he would no longer be using my code.
I was always under the impression that in the UK you HAD to have a valid reason to fail someones probation or to extend it. My mother runs a business, has wanted to fail someones probation but has been told that there has to be valid chain i.e. meetings and a plan put in place so that they can resolve before they are allowed to terminate your contract. This was told to her by Peninsula who are the UKs leading HR people.
Having done some research, it looks like you can end someones probation period and terminate the contract just because? any employers or people had experience with this?
A:
From the UK government site (https://www.gov.uk/dismissal)
Dismissal is when your employer ends your employment - they don’t always have to give you notice.
If you’re dismissed, your employer must show they’ve:
a valid reason that they can justify
acted reasonably in the circumstances
They must also:
be consistent - eg not dismiss you for doing something that they let other employees do
have investigated the situation fully before dismissing you - eg if a complaint was made about you
...
You have the right to ask for a written statement from your employer giving the reasons why you’ve been dismissed if you’re an employee and have completed 2 years’ service (1 year if you started before 6 April 2012).
Talk to a lawyer, but your rights are limited until 2 years service.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I detect if my fetched email is bounced or not
I fetched emails from servers using IMAP or POP3 and entered the fetched emails into a database, but I noticed that a lot of bounced emails were entered into the system. So I searched a lot on Google for a way to check a fetched email and, if it's a bounced email, not enter it into the system. I found the library BounceDetectResult to detect whether an email is bounced or not, but this library works only with the message type MimeMessage, so it's useful when I use IMAP but it does not work with the message type OpenPop.Mime.Message, so I can't use it when I use POP3.
var result= BounceDetectorMail.Detect(message);//message type MimeMessage
if (result.IsBounce)
{
em.DelivaryFailure = true;
}
So my problem is that I didn't find a way to detect whether my retrieved message is bounced or not when I use POP3 for retrieving.
A:
It looks like the MailBounceDetector library that you mentioned uses my MimeKit library to detect if a message is a bounced message or not.
The good news is that you can use that library because I also have a library that does POP3 called MailKit, so you can use that instead of OpenPOP.NET.
| {
"pile_set_name": "StackExchange"
} |
Q:
Volume of the solid of revolution generated when the parabola spins around the $x$ axis
Consider the area bounded by the lines $x=0,\;y=1$ and the parabola $y^2=4y-x$; calculate the volume of the solid of revolution generated when this region spins around the $x$ axis.
I think that the volume can be calculated by $$V=\pi\int_{0}^{4}{\left( \sqrt{4-x}+2\right)^{2}dx}+\pi\int_{0}^{3}{4dx}-\pi\int_{0}^{3}{dx}+\pi\int_{3}^{4}{4dx}-\pi\int_{3}^{4}{\left(-\sqrt{4-x}+2\right)^2dx}.$$
A:
Draw a picture. We have a parabola whose axis of symmetry is the line $y=2$, and which opens leftward. The apex of the parabola is at $(4,2)$.
The line $y=1$ divides the part of the parabola to the right of the $y$-axis into two parts, the fat part above the line $y=1$, and a much thinner part below the line $y=1$. It is not clear from the wording which part is being spun. We will assume it is the fat part. Minor modification will take care of things if it is the thinner part.
It is most convenient to use the Method of Cylindrical Shells. Look at a thin horizontal strip, going from height $y$ to height "$y+dy$."
Spin this strip about the $x$-axis. We get a cylindrical shell of thickness "$dy$." The shell has radius $y$ and height $x=4y-y^2$. So the shell has volume approximately equal to $2\pi y(4y-y^2)\,dy$. "Add up" (integrate) from $y=1$ to $y=4$. We get volume
$$\int_{y=1}^4 2\pi y(4y-y^2)\,dy.$$
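Evaluating this integral gives the volume explicitly:
$$\int_{y=1}^4 2\pi y(4y-y^2)\,dy=2\pi\left[\frac{4y^3}{3}-\frac{y^4}{4}\right]_1^4=2\pi\left(\frac{64}{3}-\frac{13}{12}\right)=2\pi\cdot\frac{81}{4}=\frac{81\pi}{2}.$$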
Remark: If we want to integrate with respect to $x$, things get a little more complicated. You did it along those lines, so we write down suitable expressions. The volume from $x=0$ to $x=3$ is equal to
$$\int_{x=0}^3 \pi[(2+\sqrt{4-x})^2-1^2]\,dx.$$
To this we must add the volume from $x=3$ to $x=4$, which is
$$\int_{x=3}^4 \pi[(2+\sqrt{4-x})^2-(2-\sqrt{4-x})^2]\,dx.$$
| {
"pile_set_name": "StackExchange"
} |
Q:
How to clone array of hashes and add key value using each loop
I want to clone an array of hashes into more than one independent copy.
irb(main):001:0> arr = [{a: "one", b: "two"}, {a: "uno", b: "due"}, {a: "en", b: "to"}]
=> [{:a=>"one", :b=>"two"}, {:a=>"uno", :b=>"due"}, {:a=>"en", :b=>"to"}]
irb(main):002:0> arr_1 = arr.clone
=> [{:a=>"one", :b=>"two"}, {:a=>"uno", :b=>"due"}, {:a=>"en", :b=>"to"}]
irb(main):003:0> arr_2 = arr.clone
=> [{:a=>"one", :b=>"two"}, {:a=>"uno", :b=>"due"}, {:a=>"en", :b=>"to"}]
Dynamically I want to add id into the hashes.
irb(main):004:0> arr_1.each { |k| k[:id] = 1 }
=> [{:a=>"one", :b=>"two", :id=>1}, {:a=>"uno", :b=>"due", :id=>1}, {:a=>"en", :b=>"to", :id=>1}]
irb(main):005:0> arr_2.each { |k| k[:id] = 2 }
=> [{:a=>"one", :b=>"two", :id=>2}, {:a=>"uno", :b=>"due", :id=>2}, {:a=>"en", :b=>"to", :id=>2}]
But arr_1's :id values are affected by arr_2's each loop operation and become 2:
irb(main):006:0> arr_1
=> [{:a=>"one", :b=>"two", :id=>2}, {:a=>"uno", :b=>"due", :id=>2}, {:a=>"en", :b=>"to", :id=>2}]
I have tried by using
arr_1 = arr
arr_2 = arr
but it keeps showing the same result.
How to make the arr_1 hashes :id = 1 and arr_2 hashes :id = 2 ?
A:
Let's see what is happening.
arr = [{a: "cat", b: "dog"}, {a: "uno", b: "due"}]
arr.object_id
#=> 4557280
arr1 = arr
arr1.object_id
#=> 4557280
As you see, the variables arr and arr1 hold the same object, because the objects have the same object id.1 Therefore, if that object is modified, arr and arr1 will still both hold that object. Let's try it.
arr[0] = {a: "cat", b: "dog"}
arr
#=> [{:a=>"cat", :b=>"dog"}, {:a=>"uno", :b=>"due"}]
arr.object_id
#=> 4557280
arr1
#=> [{:a=>"cat", :b=>"dog"}, {:a=>"uno", :b=>"due"}]
arr1.object_id
#=> 4557280
If we want to be able to modify arr in this way without it affecting arr1, we use the method Kernel#dup.
arr
#=> [{:a=>"cat", :b=>"dog"}, {:a=>"uno", :b=>"due"}]
arr1 = arr.dup
#=> [{:a=>"cat", :b=>"dog"}, {:a=>"uno", :b=>"due"}]
arr.object_id
#=> 4557280
arr1.object_id
#=> 3693480
arr.map(&:object_id)
#=> [2631980, 4557300]
arr1.map(&:object_id)
#=> [2631980, 4557300]
As you see, arr and arr1 now hold different objects. Those objects, however, are arrays whose corresponding elements (hashes) are the same objects. Let's modify one of arr's elements.
arr[1][:a] = "owl"
arr
#=> [{:a=>"cat", :b=>"dog"}, {:a=>"owl", :b=>"due"}]
arr.map(&:object_id)
#=> [2631980, 4557300]
arr still contains the same objects, but we have modified one. Let's look at arr1.
arr1
#=> [{:a=>"cat", :b=>"dog"}, {:a=>"owl", :b=>"due"}]
arr1.map(&:object_id)
#=> [2631980, 4557300]
Should we be surprised that arr1 has changed as well?
We need to dup both arr and the elements of arr.
arr = [{a: "one", b: "two"}, {a: "uno", b: "due"}]
arr1 = arr.dup.map(&:dup)
#=> [{:a=>"one", :b=>"two"}, {:a=>"uno", :b=>"due"}]
arr.object_id
#=> 4149120
arr1.object_id
#=> 4182360
arr.map(&:object_id)
#=> [4149200, 4149140]
arr1.map(&:object_id)
#=> [4182340, 4182280]
Now arr and arr1 are different objects and they contain different (hash) objects, so any change to one will not affect the other. (Try it.)
Now suppose arr were as follows.
arr = [{a: "cat", b: [1,2]}]
Let's make the copy.
arr1 = arr.dup.map(&:dup)
#=> [{:a=>"cat", :b=>[1, 2]}]
Now modify arr[0][:b].
arr[0][:b] << 3
#=> [{:a=>"cat", :b=>[1, 2, 3]}]
arr1
#=> [{:a=>"cat", :b=>[1, 2, 3]}]
Drat! arr1 changed. We can again look at object ids to see why that happened.
arr.object_id
#=> 4488500
arr1.object_id
#=> 4503140
arr.map(&:object_id)
#=> [4488520]
arr1.map(&:object_id)
#=> [4503100]
arr[0][:b].object_id
#=> 4488560
arr1[0][:b].object_id
#=> 4488560
We see that arr and arr1 are different objects and there respective hashes are the same elements, but the array is the same object for both hashes. We therefore need to do something like this:
arr1[0][:b] = arr[0][:b].dup
but that's still not good enough if arr were:
arr = [{a: "cat", b: [1,[2,3]]}]
What we need is a method that will make a deep copy. A common solution for that is to use the methods Marshal::dump and Marshal::load.
arr = [{a: "cat", b: [1,2]}]
str = Marshal.dump(arr)
#=> "\x04\b[\x06{\a:\x06aI\"\bcat\x06:\x06ET:\x06b[\ai\x06i\a"
arr1 = Marshal.load(str)
#=> [{:a=>"cat", :b=>[1, 2]}]
arr[0][:b] << 3
#=> [{:a=>"cat", :b=>[1, 2, 3]}]
arr
#=> [{:a=>"cat", :b=>[1, 2, 3]}]
arr1
#=> [{:a=>"cat", :b=>[1, 2]}]
Note we could write:
arr1 = Marshal.load(Marshal.dump(arr))
As explained in the doc, the serialization used by the Marshal methods is not necessarily the same for different Ruby versions. If, for example, dump were used to produce a string that was saved to file and later load was invoked on the contents of the file, using a different version of Ruby, the contents may not be readable. Of course that's not a problem in this application of the methods.
1. To make it easier to see differences in object ids, I've only shown the last seven digits. In all cases they are preceded by the digits 4877798.
| {
"pile_set_name": "StackExchange"
} |
Q:
SQL-query to filter on two fields in combination
ArticleNumber Company Storage
01-01227 12 2
01-01227 2 1 'filtered by company/storage in combination
01-01227 5 1
01-01227 12 1 'filtered by company/storage in combination
01-44444 5 4 'filtered by not match the articlenumber
I want to filter so that rows containing (company = 12 and storage = 1) or (company = 2 and storage = 1) will be filtered out of the result set, and also filter on the article number.
This is what I came up with, but surely there must be an easier way to write that query?
SELECT * FROM MyTable
where
(Company=2 and Storage<>1 and ArticleNumber='01-01227')
or
(Company=12 and Storage<>1 and ArticleNumber='01-01227')
or
(Company<>2 and Company<>12 and ArticleNumber='01-01227')
The result I'm after:
ArticleNumber Company Storage
01-01227 12 2
01-01227 5 1
A:
This will return what you're looking for:
select * from t
where articleNumber = '01-01227' and (
(company != 12 or storage != 1) and
(company != 2 or storage != 1)
)
Result:
ARTICLENUMBER COMPANY STORAGE
01-01227 12 2
01-01227 5 1
There is no need to join in this solution, which keeps it quite fast and performant. Besides, you can easily keep adding restrictions as pairs.
| {
"pile_set_name": "StackExchange"
} |
Q:
Guarded Blocks with Join
I need to synchronize over several threads. I don't create the threads, but I do know how many there are. So I wrote this inner guarded block:
private class Guard {
int waiters = 0;
boolean wait;
synchronized void addWaiter() {
++waiters;
wait = true;
while (wait && waiters != threadNum()) {
try {
wait();
} catch (InterruptedException e) {}
}
waiters = 0;
wait = false;
notifyAll();
}
}
This guarded block is executed in a loop. So the problem is that it might get called a second time before all the threads from the first call are released from the wait() loop, which obviously screws up the whole logic of the guard. So I need to have the threads join somehow before they are released from the guard. Is there a design for this? What is the most elegant way of accomplishing it? Keep in mind that the threads are not created by me.
Thanks.
A:
Sounds like a job for a CountDownLatch. You can set the latch to wait for N count downs. See the Javadoc for examples.
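One thing to keep in mind: a CountDownLatch is single-use, so for a guard that runs in a loop you would need a fresh latch per cycle (or Java's CyclicBarrier, which resets itself after each trip). As a language-neutral sketch of that reusable rendezvous, here is the same idea with Python's threading.Barrier:

```python
import threading

N = 3                                  # the known thread count (threadNum())
barrier = threading.Barrier(N)         # reusable: resets after every N waiters
results = []                           # list.append is atomic under the GIL

def worker(tid):
    for cycle in range(2):             # the guarded block runs in a loop
        barrier.wait()                 # block until all N threads arrive
        results.append((cycle, tid))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

cycles = [c for c, _ in results]
print(cycles)  # [0, 0, 0, 1, 1, 1] -- every cycle-0 entry precedes every cycle-1 entry
```

Because the barrier only trips once all N threads are waiting, no thread can start recording cycle 1 until every thread has finished cycle 0, which is exactly the re-entry guarantee the original Guard class was missing.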
| {
"pile_set_name": "StackExchange"
} |
Q:
Dynamic Textbox creation inside a bootstrap modal
I am trying to create a web form for entering marks. Here's my code. I am trying to get some inputs inside the Bootstrap modal. I put a button inside the modal for dynamic textbox creation. When I click the button, a problem occurs: the boxes exceed their range. So please help me to solve the problem.
Thanks in advance...:-)
<body>
<div class="container">
<h2>Modal Example</h2>
<!-- Trigger the modal with a button -->
<button type="button" class="btn btn-info btn-lg" data-toggle="modal" data-target="#myModal">Open Modal</button>
<!-- Modal -->
<div class="modal fade" id="myModal" role="dialog">
<div class="modal-dialog">
<!-- Modal content-->
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal">×</button>
<h4 class="modal-title">Modal Header</h4>
</div>
<div class="modal-body">
<p>Some text in the modal.</p>
<button id="cli" onclick="ctab()">+</button>
<div id="mt">
<table id="mtable">
<tbody id="mbody">
<tr id="mrow"></tr>
</tbody>
</table>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
</div>
<script type="text/javascript">
function ctab(){
var mrows=document.getElementById("mrow");
var data=document.createElement("td");
var tdata=document.createElement("input");
tdata.setAttribute("type","text");
data.appendChild(tdata);
mrows.appendChild(data);
}
</script>
</body>
This is my webpage screenshot
A:
try it
<!DOCTYPE html>
<html lang="en">
<head>
<title>Bootstrap Example</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
</head>
<body>
<div class="container">
<h2>Modal Example</h2>
<!-- Trigger the modal with a button -->
<button type="button" class="btn btn-info btn-lg" data-toggle="modal" data-target="#myModal">Open Modal</button>
<!-- Modal -->
<div class="modal fade" id="myModal" role="dialog">
<div class="modal-dialog">
<!-- Modal content-->
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal">×</button>
<h4 class="modal-title">Modal Header</h4>
</div>
<div class="modal-body">
<p>Some text in the modal.</p>
<button id="cli" onclick="ctab()">+</button>
<div id="mt">
<div class="row">
<div class="col-md-12">
<table id="mtable">
<tbody id="mbody">
</tbody>
</table>
</div>
</div>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
</div>
<script type="text/javascript">
function ctab(){
var mbody=document.getElementById("mbody"); // the table body to grow
var rdata=document.createElement("tr"); // one new row per click
var data=document.createElement("td");
var tdata=document.createElement("input");
tdata.setAttribute("type","text");
data.appendChild(tdata); // input goes inside a cell
rdata.appendChild(data); // cell goes inside the row
mbody.appendChild(rdata); // row goes at the end of the tbody
}
</script>
</body>
</html>
| {
"pile_set_name": "StackExchange"
} |
Q:
Database Design: schedules, including recurrence
I need to develop an application that supports "schedules". Example of schedules:
Jan 1, 2011 at 9am
Jan 1, 2011 from 9am to 10am
Every Monday at 9am, from Jan 1, 2011
Every Monday at 9am, from Jan 1, 2011 to Feb 1, 2011
Every Monday at 9am, from Jan 1, 2011 for 10 occurrences
etc.
If you have seen Outlook's scheduler, that's basically what I need. Here's a screen shot of their UI: http://www.question-defense.com/wp-content/uploads/2009/04/outlook-meeting-recurrance-settings.gif
How would I model such information in a database? Keep in mind that I also need to query this, such as:
What are the scheduled events for today?
What are the next 10 dates/times for a particular recurring scheduled event?
Etc.
I'm using PHP/MySQL, but am open to alternative solutions. Suggestions?
A:
My personal opinion is to create all the events separately, with a start and end date. Then generate a unique identifier for the event (perhaps the event ID of the first you create) and assign it to all events (so you know they are somehow linked).
Advantages:
easy to do (you just calculate when the events should happen and create them all only once)
easy to change (you can save the recurrence, perhaps on the first event, and then rebuild them all - remove and re-create)
easy to delete (they have a common unique ID)
easy to find (same as above)
Disadvantages:
You need a start and end date
-- appended from here --
Proposed Model:
Table Event
id big int (auto increment)
ref_id big int (this is kind of foreign key to the id)
date_start date
date_end date
title string
.. your custom fields ..
saved_recurrence text
Imagine you have an event repeating 4 weeks every Wednesday and Friday:
gather the recurrence stuff in an object and convert it to JSON (recurrence type, period, final date, ..)
calculate the dates of every single event
create the first event on the table Event with ref_id=0 and saved_recurrence=saved JSON object and get the id that was used (auto incremented)
update this first event record and set ref_id=id
create the next events always with this same ref_id (saved_recurrence can be empty here)
You should now have 8 events (2 every week for 4 weeks) that have the same ref_id. This way it's easy to fetch events in any date interval. When you need to edit an event, you just check for ref_id. If it's 0 it's a single isolated event. If not, you have a ref_id that you can search to get all event instances.
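The expansion step above can be sketched in Python. This is a minimal illustration only: the field names mirror the proposed Event table, and the concrete recurrence rule (4 weeks of Wednesday and Friday) is assumed for the example.

```python
from datetime import date, timedelta
import json

def expand_recurrence(start, weeks, weekdays, title):
    """Build one row per occurrence; all rows end up sharing a ref_id
    and the first row carries the recurrence rule as JSON."""
    rows, day, end = [], start, start + timedelta(weeks=weeks)
    while day < end:
        if day.weekday() in weekdays:            # 2 = Wednesday, 4 = Friday
            rows.append({"id": len(rows) + 1, "ref_id": 0,
                         "date_start": day.isoformat(),
                         "title": title, "saved_recurrence": ""})
        day += timedelta(days=1)
    ref_id = rows[0]["id"]                       # steps 4-5: link the group
    rows[0]["saved_recurrence"] = json.dumps(
        {"type": "weekly", "weekdays": sorted(weekdays), "weeks": weeks})
    for r in rows:
        r["ref_id"] = ref_id
    return rows

# 4 weeks of Wednesday + Friday events -> 8 rows, all sharing one ref_id
events = expand_recurrence(date(2011, 1, 3), 4, {2, 4}, "Team meeting")
print(len(events))  # 8
```

On the SQL side this maps to one INSERT per returned row; deleting or fetching the whole series is then a single WHERE ref_id = ? query.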
Q:
script para buscar palabras en un texto txt pero quiero hacerlo sin usar el comando grep
I want to write a bash script on Linux to find words in a txt file without using grep or sed. I intend to read a file, define the file as a variable, then read it line by line, and when I give it a variable (a word), have it tell me whether it is in the text and on which line it appears.
I have the idea of using a loop, but I don't know how to build it.
I would appreciate your help.
A:
I know one shouldn't write code for others, and that this is a site only for specific questions about material you have worked on, as was pointed out in the comments. However, I took the liberty of programming it. I insist, though, as described in the comments, that using bash to do what other programs can do better and faster is a bad idea in terms of efficiency, but not in terms of leisure (especially if you're dealing with unemployment or underemployment).
#!/bin/bash
declare _archivo="archivo1.txt"
declare _frase="frase2 a encontrar"
declare -i _contador_lineas=0
echo Buscando en archivo: "$_archivo"
while read -r linea || [[ -n "$linea" ]] # Read line by line while there are lines
#+ and the line is not null.
do
(( _contador_lineas++ )) # Increment by one on each pass.
[[ "$linea" == *$_frase* ]] \
&& echo Ocurrencia de: "\"$_frase\"" encontrada en linea: $_contador_lineas
# With the == comparator and the * glob,
#+ it finds the phrase regardless of what
#+ comes before or after it
done < "$_archivo" # Feed the while loop with the file's contents.
Suppose you name that file "buscador.sh", give it execute permission with chmod u+x buscador.sh, and have a file called archivo1.txt in the same directory as the script, with the following content:
uno dos
dos
tres
escombros frase a encontrar mas escombros
linea 5
qwer sadf
escombros frase2 a encontrar mas escombros
escombros frase2 a encontrar mas escombros
linea 9
escombros frase2 a encontrar mas escombros
linea 11
linea 12
linea 13
escombros frase2 a encontrar mas escombros
The output of running the program would be the following:
$ ./buscador.sh
Buscando en archivo: archivo1.txt
Ocurrencia de: "frase2 a encontrar" encontrada en linea: 7
Ocurrencia de: "frase2 a encontrar" encontrada en linea: 8
Ocurrencia de: "frase2 a encontrar" encontrada en linea: 10
Ocurrencia de: "frase2 a encontrar" encontrada en linea: 14
Q:
When the screen rotates, the layout changes
I have a project, and when I start it in the emulator the layout is in order, but when I rotate the screen, the layout changes. For example, when the project starts, textview1 is on top of textview2, but after I rotate the screen, textview1 and textview2 are in the same place. How can I handle this when the screen rotates? I want the layout to adapt to the device. For example, if it is installed on a tablet, the layout stays the same; and when it is installed on a smartphone and the screen rotates, I want the layout to stay the same as well.
A:
Create two folders for your layouts: the first is layout-land and the other is layout-port. In each, create an XML file with the same name, and design each layout (XML file) according to landscape view (for layout-land) or portrait view (for layout-port).
The device will automatically pick the XML file matching the orientation.
More info here:
http://developer.android.com/guide/practices/screens_support.
---layout-land(folder name)
---yourlayout.xml // same name file with design according to landscape mode
---layout-port(folder name)
---yourlayout.xml // same name file with design according to portrait mode
Q:
Cannot find symbol in kb.nextInt();
I am writing a program for my class, and I want to test it out. However, I cannot compile it since I am getting this error "error: cannot find symbol" at "menuChoice = kb.nextInt();"
import java.util.*;
public class Program3
{
public static void main (String [ ] args)
{
Scanner kb = new Scanner(System.in);
System.out.println("Please enter a non-negative integer.: ");
int num = kb.nextInt();
Random rand = new Random();
int sum = 0;
int factor = 1;
int menuChoice;
while (num > 0)
{
System.out.println("Number cannot be negative! Please enter a non-negative integer.: ");
num = kb.nextInt();
}
do
{
System.out.println("\nPlease choose an option:");
System.out.println("\t0\tPrint the number");
System.out.println("\t1\tDetermine if the number is odd or even");
System.out.println("\t2\tFind the reciprocal of the number");
System.out.println("\t3\tFind half of the number");
System.out.println("\t4\tRaise the number to the power of 5 (using a Java method)");
System.out.println("\t5\tRaise the number to the power of 5 (using a loop)");
System.out.println("\t6\tGenerate 20 random numbers between 0 and the number (inclusive)");
System.out.println("\t7\tFind the sum of 0 up to your number (using a loop)");
System.out.println("\t8\tFind the factorial of the number (using a loop)");
System.out.println("\t9\tFind the square root of the number (using a Java method)");
System.out.println("\t10\tFind the square root of the number (using a loop, Extra Credit)");
System.out.println("\t11\tDetermine whether the number is prime (using a loop, Extra Credit) ");
System.out.println("\t12\tExit the program");
menuChoice = kb.nextlnt(); //<<< error occurs right here!!!
switch (menuChoice)
{
case 0: System.out.print("Your number is" + num);
break;
case 1: if (num % 2 ==0)
System.out.print(num + " is even");
else
System.out.print(num + " is false");
break;
case 2: if (num == 0)
System.out.print("There are no reciprocal");
else
System.out.print("The reciprocal of " + num + " is 1/" + num);
break;
case 3: System.out.print("half of " + num + " is" + (num/2));
break;
case 4: System.out.print(num + " of the power of 5 is" + (Math.pow(num , 5)));
break;
case 5: for (int i = 1 ; i <= 6 ; i++)
for (int j = 1 ; j <= 6 ; j ++)
System.out.print(num + " of the power of 5 is" + ( i * j));
break;
case 6: for (int i = 1 ; i<=20; i++)
System.out.println(rand.nextInt(6));
break;
case 7: for (int i = 1; i <=num ; i++)
{
sum += 1;
System.out.println(sum);
}
break;
case 8: for (int i = 1 ; i <=num; i++)
factor = factor*i;
System.out.println(factor);
break;
case 9: double theSqrt = Math.sqrt(num);
System.out.println("The square root of " + num + " is " + theSqrt);
break;
case 10: System.out.println("Did not do this extra credit :(");
break;
case 11: boolean isPrime;
for (int i = 2; i<=num; i++)
{
isPrime = true;
for (int divisor = 2; divisor<Math.sqrt(num) && isPrime; divisor++)
{
if (num%divisor == 0)
isPrime = false;
}
if (isPrime)
System.out.println("The prime number of" + num + " is prime");
}
break;
case 12: System.out.println("Exiting the now.");
break;
default: System.out.println("Illegal choice, try again");
break;
}
}while (menuChoice !=12);
}
}
That is the only part that is preventing me from running the program and seeing if my code is correctly written.
Thank you for the help.
A:
you just change the
menuChoice = kb.nextlnt();
to
menuChoice = kb.nextInt();
then your code will work properly.
Q:
Hide and close a hidden div
I have a hidden fixed div and want to display it when I click on a button, and close it when I click anywhere else. Right now, I can hide the div if I click on a button or anywhere on the page.
With the code below, I have achieved half of what I want. However, I want the div to stay open (visible) if I click anywhere in the green box. Can anyone suggest an idea to help me achieve that?
$(function() {
var OPEN = 0; /* Offset to Open */
var CLOSE = -10000; /* Offset to Close */
var t = 0; /* Default time */
var $obtn = $(".obtn-side");
var $cbtn = $(".cbtn-side");
var main = ".main-wrapper";
$obtn.click(function(event) {
event.preventDefault();
var cid = $(this).attr("href"); /* Get the container id */
navEffect(cid, OPEN, 0);
});
$cbtn.click(function(event) {
event.preventDefault();
var cid = $(this).attr("href");
navEffect(cid, CLOSE, 0);
});
$(".side-wrapper").click(function(event) {
navEffect("#side", CLOSE, 0);
});
});
function navEffect(c, o, t) {
var $con = $(c);
$con.animate({
right: o
}, t);
}
.side-wrapper {
position: fixed;
top: 0px;
width: 100%;
height: 100%;
background: black;
right: -1100%;
}
.side-container {
float: right;
background: green;
width: 50%;
height: 100%;
position: relative;
box-shadow: 0px 0px 5px 2px rgba(52, 73, 94, 0.9);
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="side" class="side-wrapper">
<div class="side-container">
<a class="cbtn-side" href="#side">x</a>
<div>
<h1>Side bar</h1>
</div>
</div>
</div>
<a class="obtn-side" href="#side">open</a>
A:
You need to stop event bubbling in case of the click on the green area .side-container:
$(".side-container").click(function(e) {
e.stopPropagation();
});
Check the demo below.
$(function() {
var OPEN = 0; /* Offset to Open */
var CLOSE = -10000; /* Offset to Close */
var t = 0; /* Default time */
var $obtn = $(".obtn-side");
var $cbtn = $(".cbtn-side");
var main = ".main-wrapper";
$obtn.click(function(event) {
event.preventDefault();
var cid = $(this).attr("href"); /* Get the container id */
navEffect(cid, OPEN, 0);
});
$cbtn.click(function(event) {
event.preventDefault();
var cid = $(this).attr("href");
navEffect(cid, CLOSE, 0);
});
$(".side-wrapper").click(function(event) {
navEffect("#side", CLOSE, 0);
});
$(".side-container").click(function(e) {
e.stopPropagation();
});
});
function navEffect(c, o, t) {
var $con = $(c);
$con.animate({
right: o
}, t);
}
.side-wrapper {
position: fixed;
top: 0px;
width: 100%;
height: 100%;
background: black;
right: -1100%;
}
.side-container {
float: right;
background: green;
width: 50%;
height: 100%;
position: relative;
box-shadow: 0px 0px 5px 2px rgba(52, 73, 94, 0.9);
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="side" class="side-wrapper">
<div class="side-container">
<a class="cbtn-side" href="#side">x</a>
<div>
<h1>Side bar</h1>
</div>
</div>
</div>
<a class="obtn-side" href="#side">open</a>
Q:
cufon or typeface - need English LTR and Arabic RTL fancy heading
I need to build a site that uses cufon or typeface.js for headings. But I am struggling to get a definitive answer on whether these will support RTL text.
Does anyone have experience using these for arabic / RTL text? And if they aren't suitable, can you recommend an alternative?
A:
I tried to use cufon for Persian. As you may notice on the GitHub project page, the developers have said they plan to add support for RTL as well.
It's a really awesome solution, but it still has some problems.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there any difference between 속의 and 속에?
Is there any difference between 속의 and 속에? I heard both of them pronounced in the same way, and I think both of them mean "inside". Is there any difference between them?
A:
-의 is (mainly) a possessive post-position, and -에 is (mainly) a lative post-position. They are both pronounced /so.gE/, but each post-position plays its own role.
-의 is a post-position that makes an adjective phrase, with its meaning roughly equivalent to the English preposition 'of'.
-에 is a post-position that makes an adverb phrase, and it can mean in/into/to.
속 is a noun meaning the inside/interior, and also figuratively one's heart or stomach. Following another noun, it would mean "inside (the preceding noun)."
It is difficult to distinguish "속의" (of the interior?) from "속에" (in the interior?) through English translations, but the biggest difference is that -의 makes an adjective phrase while -에 makes an adverb phrase. Making an adjective phrase, -의 connects the preceding and following noun phrases to make a (bigger) noun phrase. A noun phrase is a constituent, that is, it can function as a single grammatical unit.
Consider the following example:
(1) 그물 속의 물고기가 퍼떡였다. The fish of the net-interior splashed.
무엇이 퍼떡였니? What splashed?
(1-1) 그물 속의 물고기. The fish of the net-interior.
(2) 그물 속에 물고기가 퍼떡였다. The fish splashed in the net-interior.
무엇이 퍼떡였니? What splashed?
(2-1) *그물 속에 물고기. The fish in the net-interior.
(* marks ungrammatical.)
In the first sentence, 그물 속의 is an adjective phrase, literally translated as of the net-interior. So 그물 속의 물고기 would be the fish of the net-interior. This phrase can stand alone as a unit, so as to answer the question "What splashed?"
In the second sentence, 그물 속에 (in the net-interior) is an adverb phrase, thus cannot modify a noun. It only functions as an adverb phrase to modify the whole sentence. "물고기가 퍼떡였다. The fish splashed." Where did it splash? "그물 속에. In the net-interior." So modifying a noun "물고기" with an adverb phrase "그물 속에" would be ungrammatical.
What makes it confusing is that the English phrase "The fish in the net-interior" is perfectly grammatical (if you tolerate net-interior as a literal translation of 그물 속). This is only because the English preposition in can play two roles: one to make an adverb phrase, ("It was in the net that the fish splashed.") and the other to make an adjective phrase ("It was the fish in the net that splashed."). The Korean post-position -에 doesn't make an adjective phrase, but only an adverb phrase, so "그물 속에 물고기" is ungrammatical.
To summarize, post-positions -의 and -에 play different grammatical roles, -의 making an adjective phrase and -에 making an adverb phrase (and they also have different meanings). 속의 and 속에 may be confusing because of their identical pronunciation and the ambiguity of their English translations, but their grammatical roles should help you choose which post-position to use in a given context.
Q:
Problem getting wifi connection broadcast event in android
I am trying to implement an activity with a button that toggles wifi state on and off when clicked. Turning on and off works, but at the same time, I would like to change the colow or image of the button accordingly, ie, different image when on and different when off. TO do so, I have created an Intent Filter and a broadcast receiver function. In the broadcast receiver I am checking for several system events such as power connected/disconnected, battery change, wifi on/off, etc. The problem is that I do not receive messages for wifi state change - I do get battery and power related messages but not for wifi (PS. I have the same problem with bluetooth on/off notifications). Can anyone tell me what could be wrong? Here is the code:
protected void onResume() {
super.onResume();
IntentFilter intentFilter = new IntentFilter();
intentFilter.addAction("android.intent.action.SCREEN_ON");
intentFilter.addAction("android.intent.action.SCREEN_OFF");
intentFilter.addAction("android.intent.action.BATTERY_LOW");
intentFilter.addAction("android.intent.action.BATTERY_OKAY");
intentFilter.addAction("android.intent.action.BATTERY_CHANGED");
intentFilter.addAction("android.intent.action.ACTION_POWER_CONNECTED");
intentFilter.addAction("android.intent.action.ACTION_POWER_DISCONNECTED");
intentFilter.addAction("WifiManager.WIFI_STATE_CHANGED_ACTION");
intentFilter.addAction("BluetoothAdapter.STATE_TURNING_OFF");
intentFilter.addAction("BluetoothAdapter.STATE_ON");
registerReceiver(myReceiver, intentFilter);
}
private BroadcastReceiver myReceiver = new BroadcastReceiver(){
@Override
public void onReceive(Context context, Intent intent) {
String str = intent.getAction();
displayMessage("In myReceiver, action = " + str);
Log.d("Settings", "Received action: " + str);
if (str.equals("android.intent.action.BATTERY_CHANGED")) {
displayMessage("battery changed...");
} else if (str.equals("android.intent.action.ACTION_POWER_CONNECTED")) {
displayMessage("power connected");
} else if (str.equals("android.intent.action.ACTION_POWER_DISCONNECTED")) {
displayMessage("power disconnected");
} else if (str.equals("WifiManager.WIFI_STATE_CHANGED_ACTION")) {
int wifiState = intent.getIntExtra(WifiManager.EXTRA_WIFI_STATE, WifiManager.WIFI_STATE_UNKNOWN);
displayMessage("wifi state is " + wifiState);
} else if (str.equals("BluetoothAdapter.STATE_ON")) {
displayMessage("bluetooth on");
} else if (str.equals("BluetoothAdapter.STATE_TURNING_OFF")) {
displayMessage("bluetooth off");
}
}};
A:
I have found out another solution to this, since the default way does not seem to work out for me. What I do is the following:
On the toggle-wifi button click, I first call wifiManager.setWifiEnabled(!enabled) and then call a function that starts a thread and checks the wifiManager for a change of state. The thread runs until there is a change in the wifi state, or until 5 seconds have passed without any change. If the state changes, I send a custom broadcast to the activity to notify it. This seems to work well, so I consider this issue resolved.
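The polling approach described above is language-agnostic. Below is a minimal Python sketch of the same idea, where the Android pieces (the WifiManager query and the custom broadcast) are replaced by hypothetical read_state and on_change callbacks:

```python
import threading
import time

def watch_for_state_change(read_state, on_change, timeout=5.0, interval=0.05):
    """Poll read_state() in a background thread until its value differs
    from the initial one, or until `timeout` seconds pass with no change."""
    initial = read_state()                # capture state before toggling
    def poll():
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            current = read_state()
            if current != initial:
                on_change(current)        # stand-in for sendBroadcast(...)
                return
            time.sleep(interval)
    threading.Thread(target=poll, daemon=True).start()

state = {"wifi_on": False}
seen = []
watch_for_state_change(lambda: state["wifi_on"], seen.append, timeout=2.0)
state["wifi_on"] = True                   # simulate setWifiEnabled taking effect
time.sleep(0.5)
print(seen)  # [True]
```

The timeout guards against the state never changing (e.g. the toggle failing), mirroring the 5-second cutoff described above.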
Q:
embedded shell does not support redirection: exec 2> >(logger -t myscript)
I'm trying to run the command from this question:
exec 2> >(logger -t myscript)
It works great on my desktop linux system, however, on my embedded linux device the same command presents the following error:
-sh: syntax error near unexpected token `>'
So I'm guessing my shell doesn't like part of the command syntax - most likely this portion:
exec 2>>(logger -t myscript)
In fact, while I understand that the 2> is redirecting stderr I don't actually understand the syntax of the second > character in this case, is it another way of representing a pipe?
If I can understand what it is doing then perhaps I can modify my command to work with my limited shell on the embedded linux device.
A:
The syntax in question only works with bash (or other shells with ksh extensions). In the error
-sh: syntax error near unexpected token `>'
...you're trying to use that syntax with /bin/sh.
Be sure your script starts with #!/bin/bash, and that you invoke it with bash yourscript rather than sh yourscript.
A bit more explanation:
>(foo) gets replaced with a filename (of the form /dev/fd/## if supported, or a named pipe otherwise) which receives output from a process named foo. This is the part that requires bash or ksh extensions.
exec <redirection> applies a redirection to the current shell process (thus, exec 2>stderr.log redirects all stderr from the current command and its children to the file stderr.log).
Thus, exec 2> >(foo) modifies the stderr file descriptor (of your current shell session) to go to the stdin of command foo; in this case, foo is logger -t myscript, thus sending the process's stderr to syslog.
To perform the same operation on a more limited (but still POSIX-compliant) shell:
# note: if any security concerns exist, put the FIFO in a directory
# created by mktemp -d rather than hardcoding its name
mkfifo /tmp/log.fifo # create the FIFO
logger -t myscript </tmp/log.fifo & # start the reader in the background first!
exec 2>/tmp/log.fifo # then start writing
rm -f /tmp/log.fifo # safe to delete at this point
Q:
How to study for certificate B1 exam?
I am a student from Delhi, India. My German language exam for certificate B1 is coming up very soon. My school teacher helped me with registration and all, but now she's taking my exam very lightly. I'm looking for some advice on how to properly study for the exam.
A:
You could start by trying to solve the model papers available on Goethe institute website. Once you know your weak spots, you can then focus on making them better with grammar practice books. It may also help to read/listen to small extracts from German media.
Q:
Pandas: Using Unix epoch timestamp as Datetime index
My application involves dealing with data (contained in a CSV) which is of the following form:
Epoch (number of seconds since Jan 1, 1970), Value
1368431149,20.3
1368431150,21.4
..
Currently I read the CSV using the numpy loadtxt method (I can easily use read_csv from Pandas). For my series, I am converting the timestamps field as follows:
timestamp_date=[datetime.datetime.fromtimestamp(timestamp_column[i]) for i in range(len(timestamp_column))]
I follow this by setting timestamp_date as the Datetime index for my DataFrame. I tried searching at several places to see if there is a quicker (inbuilt) way of using these Unix epoch timestamps, but could not find any. A lot of applications make use of such timestamp terminology.
Is there an inbuilt method for handling such timestamp formats?
If not, what is the recommended way of handling these formats?
A:
Convert them to datetime64[s]:
np.array([1368431149, 1368431150]).astype('datetime64[s]')
# array([2013-05-13 07:45:49, 2013-05-13 07:45:50], dtype=datetime64[s])
A:
You can also use pandas to_datetime:
df['datetime'] = pd.to_datetime(df["timestamp"], unit='s')
This method requires Pandas 0.18 or later.
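Putting it together for the use case in the question (column names assumed from the sample CSV), the converted timestamps can serve directly as the DataFrame's index:

```python
import pandas as pd

# Hypothetical data matching the question's CSV layout
df = pd.DataFrame({"timestamp": [1368431149, 1368431150],
                   "value": [20.3, 21.4]})

# Vectorised conversion, then use the result as the index
df.index = pd.to_datetime(df.pop("timestamp"), unit="s")
print(df.index[0])  # 2013-05-13 07:45:49
```

This replaces the per-row datetime.fromtimestamp loop with a single vectorised call; note that the result is timezone-naive UTC, whereas fromtimestamp uses local time.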
Q:
Single word for "embrace and extend and possibly corrupt ideas of a political movement"
What would be a single word for the process in which political parties (or other groups that have something to gain) embrace and extend (and maybe corrupt) the ideas of a political movement that started and grew outside of the regular channels? Recuperate?
A:
Perhaps you want the word co-opt.
Merriam-Webster gives:
a : to take into a group (as a faction, movement, or culture) : absorb, assimilate. the students are co–opted by a system they serve even in their struggle against it — A. C. Danto
b : take over, appropriate. a style co–opted by advertisers
Appropriate (as a verb, pronounced with a long final a, to rhyme with state) may also serve your purpose.
Recuperate is almost always used to mean "to recover from an illness". Its alternate meaning of "to recover something that was lost" doesn't fit here either. Merriam-Webster also gives "to bring back into use or currency : revive" but that seems very uncommon, and is no better.
Q:
CSS Class Selectors
I'm more of a server side person, so for the css sample below, I understand what the first 2 groups of css selectors are doing.
I don't understand the 3rd.
Given that the home class only occurs once in the html, it seems redundant to specify the class twice. This comes from the site clearleft.com. What is the purpose of the last group of selectors?
Thanks in advance.
<ol id="nav">
<li class="home"><a href="/">Home</a></li>
</ol>
#nav li.home a,
#nav li.home a:link,
#nav li.home a:visited {
background-position: 0 0;
}
#nav li.home a:hover,
#nav li.home a:focus,
#nav li.home a:active {
background-position: 0 -119px;
}
.home #nav li.home a,
.home #nav li.home a:link,
.home #nav li.home a:visited,
.home #nav li.home a:hover,
.home #nav li.home a:focus,
.home #nav li.home a:active {
background-position: 0 -238px;
}
A:
Most of you are right, I didn't post the complete html. I figured out the reason. There is a parent div tag with the home class name. It's used to highlight the selected menu item for a given page. Sorry for the confusion, but the responses did lead me to check pages other than the one I was using, which lead me to the answer.
Thanks all.
Q:
How to convert CSV file to multiline JSON?
Here's my code, really simple stuff...
import csv
import json
csvfile = open('file.csv', 'r')
jsonfile = open('file.json', 'w')
fieldnames = ("FirstName","LastName","IDNumber","Message")
reader = csv.DictReader( csvfile, fieldnames)
out = json.dumps( [ row for row in reader ] )
jsonfile.write(out)
Declare some field names; the reader uses CSV to read the file, and the field names to dump the file to a JSON format. Here's the problem...
Each record in the CSV file is on a different row. I want the JSON output to be the same way. The problem is it dumps it all on one giant, long line.
I've tried using something like for line in csvfile: and then running my code below that with reader = csv.DictReader( line, fieldnames) which loops through each line, but it does the entire file on one line, then loops through the entire file on another line... continues until it runs out of lines.
Any suggestions for correcting this?
Edit: To clarify, currently I have: (every record on line 1)
[{"FirstName":"John","LastName":"Doe","IDNumber":"123","Message":"None"},{"FirstName":"George","LastName":"Washington","IDNumber":"001","Message":"Something"}]
What I'm looking for: (2 records on 2 lines)
{"FirstName":"John","LastName":"Doe","IDNumber":"123","Message":"None"}
{"FirstName":"George","LastName":"Washington","IDNumber":"001","Message":"Something"}
Not each individual field indented/on a separate line, but each record on it's own line.
Some sample input.
"John","Doe","001","Message1"
"George","Washington","002","Message2"
A:
The problem with your desired output is that it is not a valid JSON document; it's a stream of JSON documents!
That's okay if it's what you need, but it means that for each document you want in your output, you'll have to call json.dump.
Since the newline you want separating your documents is not contained in those documents, you're on the hook for supplying it yourself. So we just need to pull the loop out of the call to json.dump and interpose newlines for each document written.
import csv
import json
csvfile = open('file.csv', 'r')
jsonfile = open('file.json', 'w')
fieldnames = ("FirstName","LastName","IDNumber","Message")
reader = csv.DictReader( csvfile, fieldnames)
for row in reader:
json.dump(row, jsonfile)
jsonfile.write('\n')
A:
You can use Pandas DataFrame to achieve this, with the following Example:
import pandas as pd
csv_file = pd.DataFrame(pd.read_csv("path/to/file.csv", sep = ",", header = 0, index_col = False))
csv_file.to_json("/path/to/new/file.json", orient = "records", date_format = "epoch", double_precision = 10, force_ascii = True, date_unit = "ms", default_handler = None)
A:
I took @SingleNegationElimination's response and simplified it into a three-liner that can be used in a pipeline:
import csv
import json
import sys
for row in csv.DictReader(sys.stdin):
json.dump(row, sys.stdout)
sys.stdout.write('\n')
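Since each answer above emits one JSON document per line (the JSON Lines convention), reading the result back is just json.loads per line. A quick self-contained sketch of the round trip, with an in-memory buffer standing in for the output file:

```python
import io
import json

rows = [{"FirstName": "John", "LastName": "Doe", "IDNumber": "123"},
        {"FirstName": "George", "LastName": "Washington", "IDNumber": "001"}]

buf = io.StringIO()                 # stands in for file.json
for row in rows:
    json.dump(row, buf)
    buf.write('\n')

# Parse line by line; the whole buffer is NOT one valid JSON document
parsed = [json.loads(line) for line in buf.getvalue().splitlines()]
print(parsed == rows)  # True
```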
Q:
Visual Basic 2008 Mobile Project: How to Filter Data From Combo Box to DataGrid
I'm working on a mobile project, and I have this:
A search textbox (fillby method)
A combobox (bound to the data)
A datagrid
I am able to do this:
input a search query into the textbox using the fillby method and the datagrid shows the appropriate rows.
I need help with this:
To filter the same data with a combobox. If I use the Add Query method (fillby method) to a combobox it creates another textbox search query. I don't want that. I want to be able to filter the datagrid by the combobox.
Here is my code for the ComboBox Sub:
Private Sub CityComboBox_SelectedValueChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles CityComboBox.SelectedValueChanged
Dim RestIDShort As Short 'primary key
Dim RestDataRow As DataRow 'complete data row of selected value
Dim RestDataRows As DataRow() 'holding the data
Try
'get the restID for the selected city
RestIDShort = Convert.ToInt16(CityComboBox.SelectedValue)
'find the row from the table for the selected city
RestDataRow = RestaurantEateriesDataSet.RestaurantTable.FindByRestID(RestIDShort)
'Grab the variables here. Don't really need them. Just to see if I can pull data.
'NameStringShow = RestDataRow("Name")
'FoodTypeStringShow = RestDataRow("FoodCat")
'CityStringShow = RestDataRow("City")
'test to see if we can write to screen
'successfully wrote this to the screen onload but not on combobox change
'TextBox1.Text = NameStringShow
'retrieve the array for the selected data row
'not sure if this is how to call when there is only one table????
RestDataRows = RestDataRow.GetChildRows("RestaurantTable")
'fill the datagrid with the array of selected value rows
'I don't know how to do this part:
Catch ex As Exception
MessageBox.Show(ex.Message)
End Try
End Sub
I do have a query created that I can call (if needed). The query works when I call it in a textbox, so if there is a way to call it in a combo box, and then display the selected fields in the datagrid . . . all would be good.
Any help, much appreciated.
A:
Private Sub bttnFilter_Click(...) Handles bttnFilter.Click
Dim filter As String
filter = InputBox("Enter product name, or part of it")
ProductsBindingSource.Filter = "ProductName LIKE '%" & filter.Trim & "%'"
End Sub
The same applies to the combobox: use combobox1.selectedItem() in place of the filter.
It's the basics of searching. If you have any other questions, you're welcome to ask.
Q:
subtract m rows of dataframe from other m rows
I have a dataframe with n rows. All values in the dataframe can be assumed to be integers. I wish to subtract a particular m rows from another set of m rows. E.g., I wish to do:
df[i:i+m] - df[j:j+m]
This should return a dataframe.
A:
You can use NumPy representation of your sliced dataframes and feed into pd.DataFrame constructor:
res = pd.DataFrame(df.iloc[i:i+m].values - df.iloc[j:j+m].values,
columns=df.columns)
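To see concretely what the `.values` round trip buys you (made-up numbers): subtracting the slices directly makes pandas align on index labels, so slices with non-overlapping labels come out all-NaN, whereas the array subtraction is purely positional:

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30, 40], "b": [1, 2, 3, 4]})
i, j, m = 0, 2, 2

# Direct slice subtraction aligns on index labels 0..1 vs 2..3 -> all NaN.
aligned = df.iloc[i:i+m] - df.iloc[j:j+m]

# Positional subtraction via NumPy arrays, as in the answer above.
res = pd.DataFrame(df.iloc[i:i+m].values - df.iloc[j:j+m].values,
                   columns=df.columns)
print(aligned)  # NaN everywhere
print(res)      # a: [-20, -20], b: [-2, -2]
```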
Q:
Performance issues with saving object reference to array
Why is v1 so much slower than v2?
v1 --
var foo = function (a,b,c) {
this.a=a; this.b=b; this.c=c;
}
var pcs = new Array(32);
for (var n=32; n--;) {
ref = new foo(1,2,3)
pcs[n] = ref; //*****
}
v2 --
var foo = function (a,b,c) {
this.a=a; this.b=b; this.c=c;
}
var pcs = new Array(32);
for (var n=32; n--;) {
ref = new foo(1,2,3)
pcs[n] = 1; //*****
}
I figured that since I'm holding a reference to the new object in 'ref', that simply assigning that reference to an element in the array would be about as fast as assigning a literal value, but it turns out that assigning the reference is considerably slower. Can anyone shed some light on this? Anything I can do to improve the performance here on V1?
Fiddle:
http://jsfiddle.net/a0kw9rL1/1/
A:
simply assigning that reference to an element in the array would be about as fast as assigning a literal value
Yes, it basically is [1]. However, allocating an object probably makes the difference here.
In V2, ref is only allocated once and repeatedly overwritten, it might be allocated on the stack not on the heap, and dead code elimination might even completely optimise it away.
In V1, ref needs to be allocated on the heap, and repeatedly in a new location, as all the different instances are accessible from pcs.
V1 is just eating more memory than V2 [1]. However, due to your very small array, the difference is negligible. If you use a really large one, you can spot the difference: http://jsperf.com/array-reference-assignment/3
[1]: Well, for some reason not really. But I can't explain that, except garbage collection is different when you profile memory usage
Q:
File Upload in Flask - 400 Bad Request
I'm trying to do a file upload to my Flask backend via AJAX in JQuery.
My Python side looks like this:
@app.route('/upload/', methods=['POST', 'GET'])
def upload():
if request.method == 'GET':
return render_template('uploadfile.html')
elif request.method == 'POST':
file_val = request.files['file']
return 'it worked!'
Note that it works when I do a normal submit of the form.
My HTML and AJAX looks like this:
<form id="upload-file" method="post" enctype="multipart/form-data">
<fieldset>
<label for="file">Select a file</label>
<input name="file" type="file">
</fieldset>
<fieldset>
<button id="upload-file-btn" type="button">Upload</button>
    </fieldset>
</form>
$(document).on("click", "#upload-file-btn", function() {
var form_data = new FormData($('#input-file')[0]);
$.ajax({
type: 'POST',
url: '/upload/',
data: form_data,
contentType: false,
cache: false,
processData: false,
async: false,
success: function(data) {
alert("UREKA!!!");
},
error: function(jqXHR, textStatus, errorThrown) {
console.log(jqXHR);
console.log(textStatus);
console.log(errorThrown);
}
});
return false;
});
However, I'm getting a 400 response when the AJAX request is performed. I'm thinking this is something to do with the contentType, but would really appreciate any guidance :)
A:
There is no element with the id input-file, so $('#input-file')[0] is undefined and the FormData ends up empty. You most likely meant your form's id, upload-file — note that the FormData constructor expects a form element, not an input (passing an input throws a TypeError).
You might try this:
var form_data = new FormData($('#upload-file')[0]);
Q:
how accurate are the calorie counters on iPhone apps?
I use Kinetic to measure my regular excercise routine (walking). It calculates / estimates the amount of calories burned during that period.
I am wondering how accurate (generally speaking) these sort of calculations are? Are calories easy to calculate with tasks such as walking and I should trust it fairly well? Or are they usually bad enough that I should allow for about 50% allowance each way?
A:
All devices that display a "calories burned" number are showing estimates based on mathematical formulas. The formulas have been calibrated by gathering data from real people, but there are a lot of variables that affect how many calories you burn, and it's impossible to take them all into account. However, the more variables your program measures, the more likely it is to be accurate.
One very important variable is your weight. Does the device or app you're using ask you to input your weight? If not, it is likely highly inaccurate, since the calorie cost of exercise will vary significantly depending on a person's size. Does it count your steps? That will improve its accuracy. Does it take speed/distance into account, possibly using GPS? That will improve its accuracy. Does it know (again possibly using GPS data) if you're going up/down hill? That will improve its accuracy. Are you wearing a heart rate monitor, and is the device getting data from that? that will improve its accuracy.
The more variables the device has available to it, the better it can guess. The more unknowns there are, the more the numbers may skew toward some "average" or "typical" person's numbers, which may be very different from yours.
Even in the best circumstances, the estimates will likely be off. For example, in this article from Good Morning America/ABC News, a study conducted by The University of California is discussed where 4 different exercise machines calorie claims were compared with VO2 tests (a measure of calories burned by examining the amount of oxygen being consumed by a person's breathing). These are the results they found:
On average, the machines overestimated by 19 percent and the watches overestimated by 28 percent.
Here's the breakdown:
Treadmill: Overestimated calories burnt by 13 percent.
Stationary Bike: Overestimated calories burnt by 7 percent.
Stair Climber: Overestimated calories burnt by 12 percent.
Elliptical: overestimated calories burnt by 42 percent.
This About.com article talks a bit more about the difficulty of estimating calories on a treadmill, and suggests that machines tend to overestimate. Here's another article from CNN. I don't know how similar Kinetic is to any of these machines, but it seems to me to be most like the "fitness watch" class of device. I searched for a study on the calorie-counting accuracy of pedometers, but didn't find anything authoritative.
In a nutshell, the calorie numbers given by machines are estimates that be can off, especially if the machine isn't tracking many variables. If the device or app is taking into account distance, weight, incline, heart rate, and other variables, then it's probably as good of an estimate as you can reasonably get. But still, you should recognize that it would not be unusual for your device to be off by 20% or so, depending on the circumstances.
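To put those numbers to practical use: if you assume your device behaves like the average machine in the study (a hypothetical ~20% overestimate — your device may differ), you can discount its reading accordingly:

```python
displayed_kcal = 500          # hypothetical reading from the app
overestimate = 0.20           # ~20% average machine overestimate from the study above

# Undo the overestimate: displayed = true * (1 + overestimate)
true_estimate = displayed_kcal / (1 + overestimate)
print(round(true_estimate))   # ~417 kcal actually burned
```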
Q:
Python SocketServer: sending to multiple clients?
Well, I'm trying to build a small Python program with a SocketServer that is supposed to send messages it receives to all connected clients. I'm stuck: I don't know how to store clients on the server side, and I don't know how to send to multiple clients. Oh, and my program fails every time more than 1 client connects, and every time a client sends more than one message...
Here's my code until now:
import socket
import threading
import SocketServer

clients = []

class EchoRequestHandler(SocketServer.BaseRequestHandler):
    def setup(self):
        print str(self.client_address[0])+' connected.'
def handle(self):
new=1
for client in clients:
if client==self.request:
new=0
if new==1:
clients.append(self.request)
for client in clients:
data=self.request.recv(1024)
client.send(data)
class Host:
def __init__(self):
self.address = ('localhost', 0)
self.server = SocketServer.TCPServer(self.address, EchoRequestHandler)
ip, port = self.server.server_address
self.t = threading.Thread(target=self.server.serve_forever)
self.t.setDaemon(True)
self.t.start()
print ''
print 'Hosted with IP: '+ip+' and port: '+str(port)+'. Clients can now connect.'
print ''
def close(self):
self.server.socket.close()
class Client:
name=''
ip=''
port=0
def __init__(self,ip,port,name):
self.name=name
self.hostIp=ip
self.hostPort=port
self.s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.s.connect((self.hostIp, self.hostPort))
def reco(self):
self.s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.s.connect((self.hostIp, self.hostPort))
def nick(self,newName):
self.name=newName
def send(self,message):
message=self.name+' : '+message
len_sent=self.s.send(message)
response=self.s.recv(len_sent)
print response
self.reco()
def close(self):
self.s.close()
Obviously I have no idea what I'm doing, so any help would be great.
Thanks in advance!
Edit: I'm using Python 2.7 on Windows Vista.
A:
You want to look at asyncore here. The socket operations you're calling on the client side are blocking (don't return until some data is received or a timeout occurs) which makes it hard to listen for messages sent from the host and let the client instances enqueue data to send at the same time. asyncore is supposed to abstract the timeout-based polling loop away from you.
Here's a code "sample" -- let me know if anything is unclear:
from __future__ import print_function
import asyncore
import collections
import logging
import socket
MAX_MESSAGE_LENGTH = 1024
class RemoteClient(asyncore.dispatcher):
"""Wraps a remote client socket."""
def __init__(self, host, socket, address):
asyncore.dispatcher.__init__(self, socket)
self.host = host
self.outbox = collections.deque()
def say(self, message):
self.outbox.append(message)
def handle_read(self):
client_message = self.recv(MAX_MESSAGE_LENGTH)
self.host.broadcast(client_message)
def handle_write(self):
if not self.outbox:
return
message = self.outbox.popleft()
if len(message) > MAX_MESSAGE_LENGTH:
raise ValueError('Message too long')
self.send(message)
class Host(asyncore.dispatcher):
log = logging.getLogger('Host')
def __init__(self, address=('localhost', 0)):
asyncore.dispatcher.__init__(self)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.bind(address)
self.listen(1)
self.remote_clients = []
def handle_accept(self):
socket, addr = self.accept() # For the remote client.
self.log.info('Accepted client at %s', addr)
self.remote_clients.append(RemoteClient(self, socket, addr))
def handle_read(self):
self.log.info('Received message: %s', self.read())
def broadcast(self, message):
self.log.info('Broadcasting message: %s', message)
for remote_client in self.remote_clients:
remote_client.say(message)
class Client(asyncore.dispatcher):
def __init__(self, host_address, name):
asyncore.dispatcher.__init__(self)
self.log = logging.getLogger('Client (%7s)' % name)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.name = name
self.log.info('Connecting to host at %s', host_address)
self.connect(host_address)
self.outbox = collections.deque()
def say(self, message):
self.outbox.append(message)
self.log.info('Enqueued message: %s', message)
def handle_write(self):
if not self.outbox:
return
message = self.outbox.popleft()
if len(message) > MAX_MESSAGE_LENGTH:
raise ValueError('Message too long')
self.send(message)
def handle_read(self):
message = self.recv(MAX_MESSAGE_LENGTH)
self.log.info('Received message: %s', message)
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
logging.info('Creating host')
host = Host()
logging.info('Creating clients')
alice = Client(host.getsockname(), 'Alice')
bob = Client(host.getsockname(), 'Bob')
alice.say('Hello, everybody!')
logging.info('Looping')
asyncore.loop()
Which results in the following output:
INFO:root:Creating host
INFO:root:Creating clients
INFO:Client ( Alice):Connecting to host at ('127.0.0.1', 51117)
INFO:Client ( Bob):Connecting to host at ('127.0.0.1', 51117)
INFO:Client ( Alice):Enqueued message: Hello, everybody!
INFO:root:Looping
INFO:Host:Accepted client at ('127.0.0.1', 55628)
INFO:Host:Accepted client at ('127.0.0.1', 55629)
INFO:Host:Broadcasting message: Hello, everybody!
INFO:Client ( Alice):Received message: Hello, everybody!
INFO:Client ( Bob):Received message: Hello, everybody!
A:
You can use socketserver to broadcast messages to all connected clients. However, the ability is not built into the code and will need to be implemented by extending some of the classes already provided. In the following example, this is implemented using the ThreadingTCPServer and StreamRequestHandler classes. They provide a foundation on which to build but still require some modifications to allow what you are trying to accomplish. The documentation should help explain what each function, class, and method are trying to do in order to get the job done.
Server
#! /usr/bin/env python3
import argparse
import pickle
import queue
import select
import socket
import socketserver
def main():
"""Start a chat server and serve clients forever."""
parser = argparse.ArgumentParser(description='Execute a chat server demo.')
parser.add_argument('port', type=int, help='location where server listens')
arguments = parser.parse_args()
server_address = socket.gethostbyname(socket.gethostname()), arguments.port
server = CustomServer(server_address, CustomHandler)
server.serve_forever()
class CustomServer(socketserver.ThreadingTCPServer):
"""Provide server support for the management of connected clients."""
def __init__(self, server_address, request_handler_class):
"""Initialize the server and keep a set of registered clients."""
super().__init__(server_address, request_handler_class, True)
self.clients = set()
def add_client(self, client):
"""Register a client with the internal store of clients."""
self.clients.add(client)
def broadcast(self, source, data):
"""Resend data to all clients except for the data's source."""
for client in tuple(self.clients):
if client is not source:
client.schedule((source.name, data))
def remove_client(self, client):
"""Take a client off the register to disable broadcasts to it."""
self.clients.remove(client)
class CustomHandler(socketserver.StreamRequestHandler):
"""Allow forwarding of data to all other registered clients."""
def __init__(self, request, client_address, server):
"""Initialize the handler with a store for future date streams."""
self.buffer = queue.Queue()
super().__init__(request, client_address, server)
def setup(self):
"""Register self with the clients the server has available."""
super().setup()
self.server.add_client(self)
def handle(self):
"""Run a continuous message pump to broadcast all client data."""
try:
while True:
self.empty_buffers()
except (ConnectionResetError, EOFError):
pass
def empty_buffers(self):
"""Transfer data to other clients and write out all waiting data."""
if self.readable:
self.server.broadcast(self, pickle.load(self.rfile))
while not self.buffer.empty():
pickle.dump(self.buffer.get_nowait(), self.wfile)
@property
def readable(self):
"""Check if the client's connection can be read without blocking."""
return self.connection in select.select(
(self.connection,), (), (), 0.1)[0]
@property
def name(self):
"""Get the client's address to which the server is connected."""
return self.connection.getpeername()
def schedule(self, data):
"""Arrange for a data packet to be transmitted to the client."""
self.buffer.put_nowait(data)
def finish(self):
"""Remove the client's registration from the server before closing."""
self.server.remove_client(self)
super().finish()
if __name__ == '__main__':
main()
Of course, you also need a client that can communicate with your server and use the same protocol the server speaks. Since this is Python, the decision was made to utilize the pickle module to facilitate data transfer among server and clients. Other data transfer methods could have been used (such as JSON, XML, et cetera), but being able to pickle and unpickle data serves the needs of this program well enough. Documentation is included yet again, so it should not be too difficult to figure out what is going on. Note that server commands can interrupt user data entry.
Client
#! /usr/bin/env python3
import argparse
import cmd
import pickle
import socket
import threading
def main():
"""Connect a chat client to a server and process incoming commands."""
parser = argparse.ArgumentParser(description='Execute a chat client demo.')
parser.add_argument('host', type=str, help='name of server on the network')
parser.add_argument('port', type=int, help='location where server listens')
arguments = parser.parse_args()
client = User(socket.create_connection((arguments.host, arguments.port)))
client.start()
class User(cmd.Cmd, threading.Thread):
"""Provide a command interface for internal and external instructions."""
prompt = '>>> '
def __init__(self, connection):
"""Initialize the user interface for communicating with the server."""
cmd.Cmd.__init__(self)
threading.Thread.__init__(self)
self.connection = connection
self.reader = connection.makefile('rb', -1)
self.writer = connection.makefile('wb', 0)
self.handlers = dict(print=print, ping=self.ping)
def start(self):
"""Begin execution of processor thread and user command loop."""
super().start()
super().cmdloop()
self.cleanup()
def cleanup(self):
"""Close the connection and wait for the thread to terminate."""
self.writer.flush()
self.connection.shutdown(socket.SHUT_RDWR)
self.connection.close()
self.join()
def run(self):
"""Execute an automated message pump for client communications."""
try:
while True:
self.handle_server_command()
except (BrokenPipeError, ConnectionResetError):
pass
def handle_server_command(self):
"""Get an instruction from the server and execute it."""
source, (function, args, kwargs) = pickle.load(self.reader)
print('Host: {} Port: {}'.format(*source))
self.handlers[function](*args, **kwargs)
def preloop(self):
"""Announce to other clients that we are connecting."""
self.call('print', socket.gethostname(), 'just entered.')
def call(self, function, *args, **kwargs):
"""Arrange for a handler to be executed on all other clients."""
assert function in self.handlers, 'You must create a handler first!'
pickle.dump((function, args, kwargs), self.writer)
def do_say(self, arg):
"""Causes a message to appear to all other clients."""
self.call('print', arg)
def do_ping(self, arg):
"""Ask all clients to report their presence here."""
self.call('ping')
def ping(self):
"""Broadcast to all other clients that we are present."""
self.call('print', socket.gethostname(), 'is here.')
def do_exit(self, arg):
"""Disconnect from the server and close the client."""
return True
def postloop(self):
"""Make an announcement to other clients that we are leaving."""
self.call('print', socket.gethostname(), 'just exited.')
if __name__ == '__main__':
main()
Q:
asp.net WebMethod not working
I have a simple webpage which has a WebMethod in it.
But it's not working even after I tried everything I found on Google.
When I go to http://server/test.aspx/Test through browser, It returns entire page even if the webMethod is removed.
This is the code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
namespace IgnisAccess
{
public partial class test : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
[System.Web.Services.WebMethod]
public static string Test()
{
return "Success";
}
}
}
This is the Design
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="test.aspx.cs" Inherits="IgnisAccess.test" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
</div>
</form>
</body>
</html>
I have tried adding this Web.Config entry too, but of no use.
<system.web>
<httpModules>
<add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</httpModules>
</system.web>
A:
try this one
[System.Web.Services.WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public static string Test()
{
return "Success";
}
and make sure it's a POST, not a GET
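For context on why the browser shows the whole page: navigating to http://server/test.aspx/Test issues a GET with no JSON body, so ASP.NET serves the page instead of invoking the method. Page methods respond to a POST whose content type is JSON. A sketch of such a request (URL from the question, empty payload assumed) using Python's standard library:

```python
import json
import urllib.request

# Hypothetical page-method call; a live server would reply {"d": "Success"}.
req = urllib.request.Request(
    "http://server/test.aspx/Test",
    data=json.dumps({}).encode("utf-8"),             # empty JSON body
    headers={"Content-Type": "application/json; charset=utf-8"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; not executed here (no live server).
```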
Q:
Measuring actual distance walked on a map (Allowing for changes in height)
Following on from this question: How many calories does hiking burn?
So on the same journey my girlfriend's tracking app said we'd travelled 20 km. Now this was based on us travelling on the flat, but we hadn't; we'd also travelled upwards about 1 km too.
So how far had we actually walked? I had a thought that this would be something to do with Pythagoras's theorem, but that seemed too far.
So if we'd walked, say, 10 km as the crow flies and climbed 1 km, how far had we actually walked (roughly)?
A:
Pythagoras is actually exactly what you would use, approximated as finely as you need for accuracy.
What I mean by approximated, is:
If you are following a continuous incline, you really only need one right angled triangle to calculate your hypotenuse, but if your incline varies, a more accurate figure will be gained by taking each change of incline as a new triangle. This also copes with you walking up and down slopes.
This gets complicated and annoying very fast, so for most purposes, you can approximate to 'a bit over' a single right angled triangle.
My preferred solution:
A GPS which either includes a height measurement (from GPS or barometric pressure) or orographic data in its built-in maps so it can calculate total distance traveled for you.
And in answer to your specific question, 10 km across and 1 km up gives you a total distance of about 10,050 m (which is basically 10 km :-)
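A quick check of both approaches (with a made-up elevation profile for the per-segment version) shows the single-triangle figure and why the finer per-segment sum comes out "a bit over":

```python
import math

# One right triangle: 10 km flat, 1 km up.
single = math.hypot(10_000, 1_000)                  # ~10_050 m -- basically 10 km

# Finer approximation: one triangle per change of incline (hypothetical legs).
legs = [(4_000, 300), (3_000, 500), (3_000, 200)]   # (horizontal m, vertical m)
per_leg = sum(math.hypot(dx, dy) for dx, dy in legs)

print(round(single), round(per_leg))                # per-leg sum is slightly larger
```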
A:
What exactly do you want to measure?
If you want to estimate shoe usage, it would be better to measure steps, not the distance.
If you want to estimate fatigue, then there's a heuristic: assume that 100 m up is the equivalent of 1 km on flat terrain.
So you have walked 20 km equivalents. It has taken you twice as much time as it would on flat terrain, and you've burned about twice as many calories.
It's only the estimation, but from my personal experience, it's very accurate.
A:
You can get a good estimate of the distance walked by timing or pacing. Naismith's Rule (a way of estimating the time to walk a distance when ascents are involved) can help with the timing aspect but is only an estimation of the time taken to walk a certain distance taking ups and down into account. From the knowledge of expected average speed and time taken, you can estimate the distance travelled.
Pacing can be very accurate once practised but obviously is a bit onerous when you are trying to measure relatively long distances.
You can also get an indication of distance travelled when in hilly country by making use of the contour lines on maps.
See http://www.mcofs.org.uk/estimating-distance-travelled.asp for more details.
Q:
Using ASSERT and EXPECT in GoogleTest
While ASSERT_* macros cause termination of the test case, EXPECT_* macros let it continue.
I would like to know what the criteria are for deciding whether to use one or the other.
A:
Use ASSERT when the condition must hold - if it doesn't the test stops right there. Use this when the remainder of the test doesn't have semantic meaning without this condition holding.
Use EXPECT when the condition should hold, but in cases where it doesn't we can still get value out of continuing the test. (The test will still ultimately fail at the end, though.)
The rule of thumb is: use EXPECT by default, unless you require something to hold for the remainder of the tests, in which case you should use ASSERT for that particular condition.
This is echoed within the primer:
Usually EXPECT_* are preferred, as they allow more than one failures to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
A:
Use EXPECT_ when you
want to report more than one failure in your test
Use ASSERT_ when
it doesn't make sense to continue when the assertion fails
Since ASSERT_ aborts your function immediately if it fails, possible cleanup code is skipped.
Prefer EXPECT_ as your default.
A:
In addition to previous answers...
ASSERT_ does not terminate execution of the test case. It returns from whatever function it was used in. Besides failing the test case, it evaluates to return;, and this means that it cannot be used in a function returning something other than void. Unless you're fine with the compiler warning, that is.
EXPECT_ fails the test case but does not return;, so it can be used inside functions of any return type.
Q:
Prove that the limit of the following complex function doesn't exist
Prove the following limit doesn't exist $\lim_{z \rightarrow 0}(z/\overline{z})^2$
Approach: I am trying to approach different complex numbers and see if I get a different limits. I am also trying to approach this in polar coordinates, but I think it's useless because as a complex number approaches 0, the angle from the positive axis shouldn't change. All the complex number I have tried yield to the same result.
A:
Hint:
1) When $z = a$ , $(z/\overline{z})^2 = (z/z)^2 = 1^2 = 1$.
2) When $z = a+ai$, $(z/\overline{z})^2 = (\frac{a(1+i)}{a(1-i)})^2 = (\frac{(1+i)^2}{2})^2 = (\frac{2i}{2})^2 = i^2 = -1$.
Here $a \in \mathbb R$, $a \neq 0$.
Generalisation:
Let $z = Re^{\theta i}$ with $R > 0$; then $\overline{z} = Re^{-\theta i}$, and $\lim_{z \rightarrow 0}(z/\overline{z})^2= \lim_{R \rightarrow 0}(\frac{Re^{\theta i}}{Re^{-\theta i}})^2 = \lim_{R \rightarrow 0}(e^{2\theta i})^2 = \lim_{R \rightarrow 0}(e^{4\theta i}) = \cos 4 \theta + i\sin 4 \theta$.
Substituting proper values for $\theta$, one can get infinitely many counterexamples.
Q:
Remaking TikTok's comments UI: Sticky EditText at the bottom
Problem: I am trying to remake TikTok's comments UI using a BottomSheetDialogFragment and a RecyclerView.
This is what it looks like (original):
This is what I have tried for now: basically, I have a FrameLayout whose first child contains everything other than the EditText, and the second child is, of course, the EditText.
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/relativeLayout"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<TextView
android:id="@+id/no_of_comments"
android:layout_width="match_parent"
android:layout_height="36dp"
android:text="30.8k comments"
android:gravity="center_vertical|center_horizontal"
android:textColor="@color/darkGreyText" />
<ScrollView
android:layout_width="match_parent"
android:layout_height="300dp">
<androidx.recyclerview.widget.RecyclerView
tools:listitem="@layout/item_comment"
android:id="@+id/comments_list"
android:layout_width="match_parent"
android:layout_height="wrap_content" />
</ScrollView>
</LinearLayout>
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="48dp"
android:orientation="horizontal"
android:layout_gravity="bottom"
>
<EditText
android:id="@+id/editText"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/border_edit_text"
android:hint="Leave a comment"
android:paddingLeft="16dp"
android:paddingRight="16dp"
android:textSize="12sp" />
<ImageButton
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:src="@drawable/bleh"
android:background="@android:color/white"
android:alpha="0.3"
android:layout_alignParentRight="true"
android:hapticFeedbackEnabled="true"
/>
</RelativeLayout>
</FrameLayout>
Note that I have a fixed-size ScrollView so that the EditText is always visible on the screen. If I remove that, the EditText only becomes visible when the bottom sheet is full screen.
Problem: The problem now is that the EditText sits on top of the RecyclerView at all times. That is what I want, but it introduced a new problem: after scrolling to the bottom of the list (RecyclerView), the last item is not completely visible, as it is hidden by the EditText.
A:
You can add padding to the bottom of your RecyclerView so that its content always stays above the EditText (and set android:clipToPadding="false" so items can still scroll through the padded area). This way, the EditText appears "sticky" and the last item of the RecyclerView won't be covered by the EditText.
Q:
Saving a Model and get it after save to use it as ForeignKey to create another Model
I have the following 2 models:
class Note(models.Model):
name= models.CharField(max_length=35)
class ActionItem(models.Model):
note = models.OneToOneField(Note, on_delete=models.CASCADE)
target = models.CharField(max_length=35)
category = models.ForeignKey(Category, blank=True, null=True, on_delete=models.CASCADE)
In other models(based on some conditions) I trigger an utility function that create a Note:
def create_note(target=None, action=None):
note = Note(target=target, name=name).save()
transaction.on_commit(
ActionItem(note=note, target=target).save())
I get the following error:
null value in column "note_id" violates not-null constraint
DETAIL: Failing row contains (6, null).
If I use:
So, I presume the error appears because save doesn't return anything.
I need the Note to pass it as a FK to ActionItem, and be sure it was saved.
A:
You can use the create function instead of the save function:
def create_note(target=None, action=None):
note = Note.objects.create(name=name)
    actionItem = ActionItem.objects.create(note=note, target=target)
A:
The .save() method of a model does not return anything, hence your note variable is None, and as a result the creation of an ActionItem object gets a None for note reference, and thus raises na error.
We can solve it by using Note.objects.create(..), which saves the object and returns it:
def create_note(target=None, action=None):
    note = Note.objects.create(target=target, name=name)
    transaction.on_commit(lambda: ActionItem.objects.create(note=note, target=target))
Alternatively, we can first construct the object, and then .save() it:
def create_note(target=None, action=None):
note = Note(target=target, name=name)
note.save()
    transaction.on_commit(lambda: ActionItem.objects.create(note=note, target=target))
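The root cause is plain Python and easy to verify without Django: a method with no return statement returns None, so assigning the result of .save() throws away the instance reference:

```python
class Note:
    """Minimal stand-in for a Django model; save() has no return value."""
    def save(self):
        self.saved = True        # pretend to persist; nothing is returned

note = Note().save()             # the pattern from the question
print(note)                      # None -- the instance reference is lost

note = Note()                    # the fix: keep the reference first...
note.save()                      # ...then save it
print(note.saved)                # True
```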
Q:
How to check cross-object formula usage count?
Salesforce allows a maximum of 10 unique relationships per object in cross-object formulas per https://help.salesforce.com/apex/HTViewHelpDoc?id=fields_creating_cross_object_notes.htm&language=en_US
But how do I find the current usage (I mean the count)? I tried checking in the "Object Limits" section but it is not present in there. Thanks for your suggestions.
A:
I don't believe it's actually listed in the limits area, probably because it is a soft limit and can be increased by SalesForce, but one way to see where they are used and how many are used is to intentionally break the limit and then click on the "Show References" link.
Q:
setState object: {array: []} in ReactJS: how could I add the key and value inside the array which is in the state object?
this.state = {
frequency: {
days: [],
startdate: "",
customdate: "" },
};
How could I add the key and value inside the days array?
A:
function addKeyValue(key, value) {
this.setState(state => ({
...state,
frequency: {
...state.frequency,
days: [...state.frequency.days, {[key]: value}]
}
  }))
}
--Edit
Removing the key is a bit trickier.
function removeKeyValue(key, value) {
this.setState(state => {
const days = state.frequency.days;
const dayIndex = days.findIndex(pr => pr[key] === value);
const day = {...days[dayIndex]};
delete day[key];
return {
...state,
frequency: {
...state.frequency,
days: [...days.slice(0, dayIndex),
day,
...days.slice(dayIndex + 1)]
}
}
  })
}
Q:
Sublime text auto-complete overrides snippet
I have a console.log() snippet for Sublime Text that fires when you type 'c' then the tab trigger; however, if there is code that starts with 'c' somewhere on the page, auto-complete overrides the console.log snippet. Is there a way around this, or should I just add another modifier for my snippet?
<snippet>
<content><![CDATA[console.log($1);$0]]></content>
<!-- Optional: Set a tabTrigger to define how to trigger the snippet -->
<tabTrigger>c</tabTrigger>
<!-- Optional: Set a scope to limit where the snippet will trigger -->
<scope>source.js</scope>
<description>Log</description>
</snippet>
A:
On https://sublime-text-unofficial-documentation.readthedocs.org/en/latest/extensibility/completions.html it mentions that snippets always lose against a fuzzy match. Since the buffer contents are included in the auto completion, I'd suggest modifying your snippet to include a few more characters.
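For example, a two-character trigger is usually enough to stop buffer words from outranking the snippet. The trigger cl below is just a suggestion; pick anything that doesn't collide with identifiers you commonly type:

```xml
<snippet>
    <content><![CDATA[console.log($1);$0]]></content>
    <!-- Two characters instead of one: fuzzy matches from the buffer
         are far less likely to outrank the snippet -->
    <tabTrigger>cl</tabTrigger>
    <scope>source.js</scope>
    <description>Log</description>
</snippet>
```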
| {
"pile_set_name": "StackExchange"
} |
Q:
XGBoost multi-class prediction: the prediction matrix is a set of class probabilities. How to perform a confusion matrix?
I have used XGBoost for multi-class label prediction.
This is a multi-class prediction, i.e. my target value contains 8 classes, and I have about 6 features that I am using since they are very highly correlated with the target value.
I have created my prediction data set and converted it from a matrix into a data frame using as.data.frame.
I wanted to check the accuracy of my prediction, but I am not sure how, since the column names change and there are no levels in my data set. All the data types I am using are integers and numerics.
Response <- train$Response
label <- as.integer(train$Response)-1
train$Response <- NULL
train.index = sample(n,floor(0.75*n))
train.data = as.matrix(train[train.index,])
train.label = label[train.index]
test.data = as.matrix(train[-train.index,])
test.label = label[-train.index]
View(train.label)
# Transform the two data sets into xgb.Matrix
xgb.train = xgb.DMatrix(data=train.data,label=train.label)
xgb.test = xgb.DMatrix(data=test.data,label=test.label)
params = list(
booster="gbtree",
eta=0.001,
max_depth=5,
gamma=3,
subsample=0.75,
colsample_bytree=1,
objective="multi:softprob",
eval_metric="mlogloss",
num_class=8)
xgb.fit <-xgb.train(
params=params,
data=xgb.train,
nrounds=10000,
nthreads=1,
early_stopping_rounds=10,
watchlist=list(val1=xgb.train,val2=xgb.test),
verbose=0
)
xgb.fit
xgb.pred = predict(xgb.fit,test.data,reshape = T)
class(xgb.pred)
xgb.pred = as.data.frame(xgb.pred)
Now I got my prediction probabilities in the form below. Since there are 8 classes I have 8 probabilities, and I don't know which probability belongs to which class.
1 0.12233257 0.07373134 0.044682350 0.0810693502 0.06272415 0.134308174 0.066143863 0.415008187
I want to convert them to meaningful labels, which I am not able to do, so that I can perform a confusion matrix.
A:
Let's say your data is something like this:
train = data.frame(
Medical_History_23 = sample(1:5,2000,replace=TRUE),
Medical_Keyword_3 = sample(1:5,2000,replace=TRUE),
Medical_Keyword_15 = sample(1:5,2000,replace=TRUE),
BMI = rnorm(2000),
Wt = rnorm(2000),
Medical_History_4 = sample(1:5,2000,replace=TRUE),
Ins_Age = rnorm(2000),
Response = sample(1:8,2000,replace=TRUE))
And we do the train and test:
library(xgboost)
label <- as.integer(train$Response)-1
train$Response <- NULL
n = nrow(train)
train.index = sample(n,floor(0.75*n))
train.data = as.matrix(train[train.index,])
train.label = label[train.index]
test.data = as.matrix(train[-train.index,])
test.label = label[-train.index]
xgb.train = xgb.DMatrix(data=train.data,label=train.label)
xgb.test = xgb.DMatrix(data=test.data,label=test.label)
params = list(booster="gbtree",eta=0.001,
max_depth=5,gamma=3,subsample=0.75,
colsample_bytree=1,objective="multi:softprob",
eval_metric="mlogloss",num_class=8)
xgb.fit <-xgb.train(params=params,data=xgb.train,
nrounds=10000,nthreads=1,early_stopping_rounds=10,
watchlist=list(val1=xgb.train,val2=xgb.test),
verbose=0
)
xgb.pred = predict(xgb.fit,test.data,reshape = T)
Your prediction looks like the below; each column is the probability of being class 1, 2, ..., 8:
> head(xgb.pred)
V1 V2 V3 V4 V5 V6 V7 V8
1 0.1254475 0.1252269 0.1249843 0.1247929 0.1246919 0.1248430 0.1248226 0.1251909
2 0.1255558 0.1249674 0.1250741 0.1250397 0.1249939 0.1247931 0.1248649 0.1247111
3 0.1249737 0.1250508 0.1249501 0.1250445 0.1250142 0.1249630 0.1249194 0.1250844
To get the prediction label, we do
predicted_labels = factor(max.col(xgb.pred), levels = 1:8)
obs_labels = factor(test.label + 1, levels = 1:8)
To get confusion matrix:
caret::confusionMatrix(obs_labels,predicted_labels)
Of course this example will have low accuracy because there's no useful information in the variables, but the code should work for you.
| {
"pile_set_name": "StackExchange"
} |
Q:
(Serious): Male genital protection for mountain biking/BMX and for common falls
I want to start using protection, like a helmet is for the head, but for my penis and testicles.
Why is it important? It would give me much more confidence when trying new tricks and techniques, not having to fear that my genitals will get hurt; simply falling the wrong way could hurt me very badly.
Shouldn't I worry? Well, I do, and maybe others don't. However, I'm not the best biker who never falls, and I don't think anybody should hold others to that standard. If others don't want it, OK; I need it.
A:
Genital protection is rarely (if ever) used in cycling because it generally means putting some kind of hard surface in play (or excessive padding). Either of these can easily lead to some very uncomfortable chafing problems. For less "pedalcentric" disciplines this might be acceptable (flatland, downhill, etc.). But for the rest of the market, the benefit-to-loss ratio just isn't there to justify it.
Many other sports have genital protection equipment that is used regularly. Like winter cyclists who have to use mountaineering equipment to stay warm, you may have to dip into some other sports equipment cache to accomplish / try what you are suggesting.
A:
I struggled with that same question many years ago - and tried my share of products. The difficulty is that anything large or hard enough to provide any real protection is always incompatible with a bike seat and / or the natural position of the rider.
Plastic products hurt like hell to sit on and get pushed to one side or the other. Too much padding causes numbness and discomfort.
About the best I've found are the padded lycra that roadies use - placed under my riding shorts. It doesn't provide total protection, but certainly helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to create and free a TCanvas when you have the handle?
I want to create a TCanvas so I can draw more easily. First I create the canvas (MyCanvas := TCanvas.Create;), then I get the handle (DC := GetWindowDC(Handle);), and now what should I do? Should I assign the new handle directly to the canvas (MyCanvas.Handle := DC;), or should I destroy the existing MyCanvas.Handle first? And after I do the drawing, must I release the handle (ReleaseDC(Handle, DC);), or if I free the canvas (MyCanvas.Free), will the handle be released automatically?
A:
When you create a TCanvas it does not have a handle. Assign the handle using the DC returned by GetWindowDC. When you destroy the canvas, the handle is not destroyed. You need to call ReleaseDC explicitly.
From the docs:
TCanvas does not own the HDC. Applications must create an HDC and set the Handle property. Applications must release the HDC when the canvas no longer needs it. Setting the Handle property of a canvas that already has a valid HDC will not automatically release the initial HDC.
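Putting those rules together, a usage sketch might look like this. It is illustrative only; detaching the handle before ReleaseDC is a defensive habit, and the essential parts are assigning Handle yourself and calling ReleaseDC yourself:

```delphi
var
  MyCanvas: TCanvas;
  DC: HDC;
begin
  MyCanvas := TCanvas.Create;
  try
    DC := GetWindowDC(Handle);
    try
      MyCanvas.Handle := DC;       // the canvas does not take ownership
      MyCanvas.TextOut(10, 10, 'Hello');
      MyCanvas.Handle := 0;        // detach before releasing the DC
    finally
      ReleaseDC(Handle, DC);       // must be released explicitly
    end;
  finally
    MyCanvas.Free;                 // Free does not release the DC
  end;
end;
```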
| {
"pile_set_name": "StackExchange"
} |
Q:
Return plenum too big for air handler?
I am currently in the process of changing the air handler and heat pump in my house. I am working with a friend's father who does HVAC for a living. He came over, looked at my HVAC system, and gave me a list of items to buy. I opted to go with a Goodman 3-ton split system. For the air handler I chose a Goodman ASPT 3-ton multi-position air handler with an ECM motor, model #ASPT37C14; the cabinet dimensions are 21 by 21. I was also told to get an upflow furnace filter and support box, 21.5" x 28.5", for the return plenum, which will sit underneath the air handler. I wanted to know whether the return plenum is too big and he mistakenly had me buy the wrong one. Unfortunately my friend's father is on vacation and not answering, and since he is putting my system in as soon as he gets back, I was wondering if I need a smaller return plenum.
Photo of plenum:
A:
You do want your return larger, so this will not be a problem; it leaves room to install everything, and sheet metal or a flexible seal can be added afterwards. If things are too close, it makes it tough to sweat the fittings.
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET how to change background color of table row on button click + row entry?
I have a table with one row which is filled in with values after the user clicks a button. I want to change the background color of the row to different colors based on the values that are filled in to the row.
<asp:Button ID="Button1" runat="server" OnClick="Button1_Click" Text="Search" />
<tr>
<td runat="server" id ="td1" class="auto-style1"></td>
<td runat="server" id ="td2" class="auto-style1"></td>
<td runat="server" id ="td3" class="auto-style1"></td>
<td runat="server" id ="td4" class="auto-style1"></td>
</tr>
and my Button1_Click function looks like
protected void Button1_Click(object sender, EventArgs e)
{
string[]toDisp = someFunction();
td1.InnerText = toDisp[0];
td2.InnerText = toDisp[1];
td3.InnerText = toDisp[2];
td4.InnerText = toDisp[3];
}
Basically, I want to set the background color of the table row based on the value of toDisp[1]. How should I go about doing this? Thanks.
A:
If it's going to be just one row, just set an ID on it with the runat attribute:
<tr id="test" runat="server">
<td runat="server" id ="td1" class="auto-style1"></td>
<td runat="server" id ="td2" class="auto-style1"></td>
<td runat="server" id ="td3" class="auto-style1"></td>
<td runat="server" id ="td4" class="auto-style1"></td>
</tr>
then, based on the value of toDisp[1], you can write a switch statement (or use Random, depending on your color requirements) to just set
test.BgColor = "SomeColor";
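For instance, a switch at the end of Button1_Click might look like the sketch below. The case values and colors are made up; substitute whatever toDisp[1] can actually contain:

```csharp
// Hypothetical mapping from the toDisp[1] value to a row color.
// The tr has runat="server" and id="test", so its BgColor property
// sets the background of the whole row.
switch (toDisp[1])
{
    case "High":
        test.BgColor = "Red";
        break;
    case "Medium":
        test.BgColor = "Yellow";
        break;
    default:
        test.BgColor = "White";
        break;
}
```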
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it OK to use different kinds of strobes together?
I currently have 2 Vivitar 285 strobes.
These are working very nicely for me.
I would like to buy 2 more strobes, and I would be quite happy with the Vivitars, but I would also like to consider either the LumoPro LP160 or something similar.
I typically shoot with the flashes off-camera in manual mode.
I use a cheap radio trigger.
My question is: will I have problems if I start mixing different makes of strobe?
Thank you for any advice.
A:
In day to day use, you should not have any problems mixing manually controlled strobes, as long as the color of light from both is the same. I think you'd be fine.
| {
"pile_set_name": "StackExchange"
} |
Q:
how to clone origin master as a new local branch?
I have a local master where I modified something. I want to clone origin's master as a new local branch. I tried the approaches below, but I found some differences between master and the new branch, and I don't know why this happened. How can I clone origin's master as a new local branch that is exactly the same?
git fetch origin master:newMaster
git checkout -b newMaster origin:master
A:
git fetch origin
git checkout -b newMaster origin/master
origin is the name of the remote repository you clone from. origin/master is what is known as a "remote tracking branch". It is how your local repository tracks the master branch on the origin repository.
git fetch origin updates your view of the remote repository by pulling down new commits and updating your remote tracking branches (ie. origin/master). Then you can simply make a branch off origin/master like any other branch.
See Working with Remotes in the Git Book for more.
| {
"pile_set_name": "StackExchange"
} |
Q:
Display an element with specific attribute in XSLT
My XML code:
<?xml version="1.0" encoding="ISO-8859-1"?>
<?xml-stylesheet type="text/xsl" href="book.xslt"?>
<bookstore>
<book>
<title lang="eng">Harry Potter</title>
<price>29.99</price>
</book>
<book>
<title lang="eng">Learning XML</title>
<price>20.30</price>
</book>
<book>
<title lang="fr">Exploitation Linux</title>
<price>40.00</price>
</book>
</bookstore>
My XSLT:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<body>
<table border="1">
<tr>
<th>Book Title</th>
<th>Price</th>
</tr>
<xsl:for-each select="bookstore/book">
<tr>
<td><xsl:value-of select="title[@lang='eng']/text()"/></td>
<td><xsl:value-of select="price/text()"/></td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
I would like to display the details only for the titles which have the attribute lang="eng", but I'm getting an unnecessary row where there's no book title but there is still a price. Here's the output.
Thanks for your help.
A:
You need to limit the elements you're processing with the for-each to those that have a title in the appropriate language:
<xsl:for-each select="bookstore/book[title/@lang = 'eng']">
As an aside, you almost never need to use text() in an XPath expression, unless you really do want to handle individual text nodes separately. In situations like yours, where what you care about is the text content of the whole element, just take the value-of the element itself:
<td><xsl:value-of select="price"/></td>
| {
"pile_set_name": "StackExchange"
} |
Q:
How to set "Align left" to have predominance over "Align Top" in Delphi 7?
I want to align a certain component with Align = alLeft on the panel, occupying the whole left side of the panel. Then I also have another component set to Align = alTop, but it should not take precedence over the component aligned to the left, so that it only occupies the part of the top of the panel that the left-aligned component does not occupy (if that makes any sense). The thing is, I am doing a lot of custom drawing on the panel, so I am not able to add extra panels on top of my original panel to subdivide it and accomplish the alignment as I would normally do. So I want to change how Align works for this specific panel. Is that at all possible? I am using Delphi 7.
Something like this:
alt text http://www.freeimagehosting.net/uploads/2ede3a0023.jpg
A:
Well, if you can't add an extra panel with alClient underneath the panel with alTop, then my alternative would be to use anchors: just place the panels where you want them, then add akBottom to the left panel and akRight to the top panel.
The final option is to resize the panels yourself in the OnResize event of the form/parent container.
A:
Have a look at alCustom. I don't see it used much nowadays but Demo2 from here might be what you need.
| {
"pile_set_name": "StackExchange"
} |
Q:
ListBoxFor not letting me select multiple items MVC
When I run the code, I can only select one item at a time. That's weird, because ListBoxFor() is meant to select multiple items. What I want is:
Select multiple items
View (Index.cshtml):
<div>
@Html.ListBoxFor(m => m.DropDownItems, new MultiSelectList(Repository.DDFetchItems(), "Value", "Text", Model.DropDownItems))
</div>
Model (ModelVariables.cs):
public class ModelVariables
{
public List<SelectListItem> DropDownItems { get; set; }
}
public static class Repository
{
public static List<SelectListItem> DDFetchItems()
{
return new List<SelectListItem>()
{
new SelectListItem(){ Text = "Dogs", Value = "1", Selected = true},
new SelectListItem(){ Text = "Cats", Value = "2"},
new SelectListItem(){ Text = "Death", Value = "3"}
};
}
}
Controller (HomeController.cs):
[HttpGet]
public ActionResult Index()
{
ModelVariables model = new ModelVariables()
{
DropDownItems = Repository.DDFetchItems()
};
return View(model);
}
A:
You cannot bind a <select multiple> to a collection of complex objects (which is what List<SelectListItem> is). A <select multiple> posts back an array of simple values (in your case, if you select the 1st and 3rd options, it will submit [1, 3] (the values of the selected options).
Your model needs a IEnumerable<int> property to bind to.
public class ModelVariables
{
public IEnumerable<int> SelectedItems { get; set; }
public IEnumerable<SelectListItem> DropDownItems { get; set; }
}
and then in the GET method
public ActionResult Index()
{
var ModelVariables= new ModelVariables()
{
DropDownItems = Repository.DDFetchItems(),
SelectedItems = new List<int>(){ 1, 3 } // to preselect the 1st and 3rd options
};
return View(model);
}
and in the view
@Html.ListBoxFor(m => m.SelectedItems, Model.DropDownItems)
Side notes
Remove Selected = true in the DDFetchItems() method - it's
ignored by the ListBoxFor() method, because it's the value of the
property you're binding to which determines what is selected
There is no need to build a new identical SelectList from the
first one inside the ListBoxFor() method (the DropDownItems property
is already IEnumerable<SelectListItem>)
| {
"pile_set_name": "StackExchange"
} |
Q:
Stack Implementation Using Linked Lists
I have tried implementing a stack using linked lists, and would like some tips and tricks regarding optimizations or ways to simplify/compact the code.
It has 3 functions: Push, Pop, and Peek (for checking the top item of the stack).
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class Stack:
    def __init__(self):
        self.bottom = None
        self.top = None
        self.length = 0

    def peek(self):
        return self.top.data if self.top != None else None

    def push(self, data):
        NewNode = Node(data)
        if self.length == 0:
            self.bottom = self.top = NewNode
        else:
            top = self.top
            self.top = NewNode
            self.top.next = top
        self.length += 1
        return self

    def pop(self):
        length = self.length
        top = self.top
        if length == 0:
            raise IndexError('No items in stack')
        elif self.bottom == top:
            self.bottom = self.top = None
        else:
            NextTop = self.top.next
            self.top = NextTop
        self.length -= 1
        return top.data
Output:
stack = Stack()
stack.push(0)
stack.push(1)
stack.push(2)
stack.push(3)
print(stack.peek())
print(stack.pop())
print(stack.pop())
print(stack.pop())
print(stack.pop())
stack.push(5)
stack.push(2)
print(stack.pop())
print(stack.pop())
-----------------------
3
3
2
1
0
2
5
-----------------------
A:
First, I believe that the Node class is an implementation detail. You could move it inside the Stack or you could rename it _Node to indicate that it is private.
Next, I will refer you to this answer to a different CR question, also written by me: https://codereview.stackexchange.com/a/185052/106818
Specifically, points 2-7:
... consider how the Python list class (and set, and dict, and tuple, and ...) is initialized. And how the Mutable Sequence Types are expected to work.
Because your code is implementing a "mutable sequence type." So there's no reason that your code shouldn't work the same way. In fact, if you want other people to use your code, you should try to produce as few surprises as possible. Conforming to an existing interface is a good way to do that!
Create an initializer that takes a sequence.
class Stack:
def __init__(self, seq=None):
...
if seq is not None:
self.extend(sequence)
Implement as many of the mutable sequence operations as possible.
Use the standard method names where possible: clear, extend, append, remove, etc.
Implement special dundermethods (method names with "double-underscores" in them: double-under-methods, or "dundermethods") as needed to make standard Python idioms work:
def __contains__(self, item):
for i in self:
...
def __iter__(self):
node = self.head
while node:
yield node.value
node = node.next
Implement your test code using standard Python idioms, to prove it's working and to show developers how your code should be used!
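As an illustration of that last point, the checks below are what such test code might look like. The class here is a condensed sketch with the suggestions above applied (bottom dropped, is not None used); it is illustrative, not the reviewed code itself:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next


class Stack:
    def __init__(self):
        self.top = None
        self.length = 0

    def peek(self):
        return self.top.data if self.top is not None else None

    def push(self, data):
        # New node points at the old top and becomes the new top.
        self.top = Node(data, self.top)
        self.length += 1
        return self

    def pop(self):
        if self.top is None:
            raise IndexError('No items in stack')
        node = self.top
        self.top = node.next
        self.length -= 1
        return node.data


# Idiomatic, assertion-based checks instead of eyeballing prints.
stack = Stack()
for item in range(4):
    stack.push(item)
assert stack.peek() == 3
assert [stack.pop() for _ in range(4)] == [3, 2, 1, 0]
assert stack.peek() is None
try:
    stack.pop()
except IndexError:
    pass
else:
    raise AssertionError("pop on an empty stack should raise")
print("all checks passed")
```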
Finally, some direct code criticisms:
Don't use equality comparisons with None. Use is None or is not None instead. This is a PEP-8-ism, and also actually faster.
You don't really use self.bottom for anything. Go ahead and delete it.
Don't use CamelCase variable names. That's another PEP-8 violation. Use snake_case for local variables.
| {
"pile_set_name": "StackExchange"
} |
Q:
Hibernate Criteria for @ManyToMany relationships
I have a "many to many" relation between users and projects:
User class:
@ManyToMany ( fetch = FetchType.EAGER )//Tipo de busqueda
@JoinTable(name="USERPROJECTS" //Tabla de intercambio
, joinColumns={@JoinColumn(name="IDUSER") //Llave foránea
}
, inverseJoinColumns={@JoinColumn(name="IDPROJECT") //Llave foránea
})
@Where( clause = "DELETIONDATE is null" )
private List<Project> projects;
Project class:
@ManyToMany(cascade=CascadeType.ALL) //Tipo de busqueda
@JoinTable(name="USERPROJECTS" //Tabla de intercambio
, joinColumns={@JoinColumn(name="IDPROJECT") //Llave foránea
}
, inverseJoinColumns={@JoinColumn(name="IDUSER") //Llave foránea
})
@Where( clause = "DELETIONDATE is null" )
private List<PacoUser> users;
and I need to create a criteria to get some users, and one of the conditions is that the user participates in one or more projects (it's for a search filter, so in the future these restrictions will be added dynamically). That's how I was trying it:
Criteria criteria = session.createCriteria(User.class);
criteria.add(Restrictions.isNull(COLUMN_DELETIONDATE));
criteria.add(Restrictions. ......);
....
criteria.add(Restrictions.in("projects", (List<Project>)projects));
and projects contains the projects that should be in the IN clause.
But I get the following SQLException:
org.hibernate.exception.GenericJDBCException
could not execute query
SQL
select this_.id as id1_7_, this_.deletionDate as deletion2_1_7_, this_.email as email1_7_, this_.lastNames as lastNames1_7_, this_.name as name1_7_, this_.newPasswordRequested as newPassw6_1_7_, this_.IDORGANIZATION as IDORGAN10_1_7_, this_.password as password1_7_, this_.role as role1_7_, this_.userName as userName1_7_, organizati2_.id as id0_0_, organizati2_.certifications as certific3_0_0_, organizati2_.comContact as comContact0_0_, organizati2_.comEmail as comEmail0_0_, organizati2_.comPhone as comPhone0_0_, organizati2_.deletionDate as deletion7_0_0_, organizati2_.name as name0_0_, organizati2_.techContact as techCont9_0_0_, organizati2_.techEmail as techEmail0_0_, organizati2_.techPhone as techPhone0_0_, organizati2_.address as address0_0_, organizati2_.cif as cif0_0_, organizati2_.DTYPE as DTYPE0_0_, projects3_.IDUSER as IDUSER1_9_, project4_.id as IDPROJECT9_, project4_.id as id3_1_, project4_.complianceRequestingReason as complian2_3_1_, project4_.complianceResolutionReason as complian3_3_1_, project4_.deletionDate as deletion4_3_1_, project4_.description as descript5_3_1_, project4_.expedientNo as expedien6_3_1_, project4_.finishDate as finishDate3_1_, project4_.lastVersionForCompliance_id as lastVer12_3_1_, project4_.name as name3_1_, project4_.IDORGANIZATION as IDORGAN13_3_1_, project4_.IDPOLICY as IDPOLICY3_1_, project4_.startDate as startDate3_1_, project4_.state as state3_1_, project4_.tentativeFinishDate as tentati11_3_1_, version5_.id as id8_2_, version5_.classes as classes8_2_, version5_.CREATORID as CREATORID8_2_, version5_.description as descript3_8_2_, version5_.errorReport as errorRep4_8_2_, version5_.functions as functions8_2_, version5_.highSeverityErrorCount as highSeve6_8_2_, version5_.internalFileName as internal7_8_2_, version5_.javadocs as javadocs8_2_, version5_.javadocsLines as javadocs9_8_2_, version5_.lineCount as lineCount8_2_, version5_.lowSeverityErrorCount as lowSeve11_8_2_, version5_.mediumSeverityErrorCount as mediumS12_8_2_, 
version5_.multipleComment as multipl13_8_2_, version5_.name as name8_2_, version5_.observations as observa15_8_2_, version5_.packages as packages8_2_, version5_.PROJECTID as PROJECTID8_2_, version5_.reviewDate as reviewDate8_2_, version5_.singleComment as singleC18_8_2_, version5_.state as state8_2_, pacouser6_.id as id1_3_, pacouser6_.deletionDate as deletion2_1_3_, pacouser6_.email as email1_3_, pacouser6_.lastNames as lastNames1_3_, pacouser6_.name as name1_3_, pacouser6_.newPasswordRequested as newPassw6_1_3_, pacouser6_.IDORGANIZATION as IDORGAN10_1_3_, pacouser6_.password as password1_3_, pacouser6_.role as role1_3_, pacouser6_.userName as userName1_3_, project7_.id as id3_4_, project7_.complianceRequestingReason as complian2_3_4_, project7_.complianceResolutionReason as complian3_3_4_, project7_.deletionDate as deletion4_3_4_, project7_.description as descript5_3_4_, project7_.expedientNo as expedien6_3_4_, project7_.finishDate as finishDate3_4_, project7_.lastVersionForCompliance_id as lastVer12_3_4_, project7_.name as name3_4_, project7_.IDORGANIZATION as IDORGAN13_3_4_, project7_.IDPOLICY as IDPOLICY3_4_, project7_.startDate as startDate3_4_, project7_.state as state3_4_, project7_.tentativeFinishDate as tentati11_3_4_, organizati8_.id as id0_5_, organizati8_.certifications as certific3_0_5_, organizati8_.comContact as comContact0_5_, organizati8_.comEmail as comEmail0_5_, organizati8_.comPhone as comPhone0_5_, organizati8_.deletionDate as deletion7_0_5_, organizati8_.name as name0_5_, organizati8_.techContact as techCont9_0_5_, organizati8_.techEmail as techEmail0_5_, organizati8_.techPhone as techPhone0_5_, organizati8_.address as address0_5_, organizati8_.cif as cif0_5_, organizati8_.DTYPE as DTYPE0_5_, policy9_.id as id2_6_, policy9_.criticalViolations as critical2_2_6_, policy9_.deletionDate as deletion3_2_6_, policy9_.description as descript4_2_6_, policy9_.majorViolations as majorVio5_2_6_, policy9_.minorViolations as minorVio6_2_6_, policy9_.name 
as name2_6_ from PacoUser this_ left outer join Organization organizati2_ on this_.IDORGANIZATION=organizati2_.id left outer join USERPROJECTS projects3_ on this_.id=projects3_.IDUSER left outer join Project project4_ on projects3_.IDPROJECT=project4_.id and ( project4_.DELETIONDATE is null) left outer join Version version5_ on project4_.lastVersionForCompliance_id=version5_.id left outer join PacoUser pacouser6_ on version5_.CREATORID=pacouser6_.id left outer join Project project7_ on version5_.PROJECTID=project7_.id left outer join Organization organizati8_ on project7_.IDORGANIZATION=organizati8_.id left outer join Policy policy9_ on project7_.IDPOLICY=policy9_.id where this_.deletionDate is null and this_.id in (?)
errorCode
17041
java.sql.SQLException
Falta el parámetro IN o OUT en el índice:: 1 -- which means: the IN or OUT parameter is missing at index 1.
Any Insight? Can someone help me? What am I doing wrong?
Thank you very much in advance!!!
A:
Ok @frictionlesspulley! I hadn't realized that :-P. And thanks again for helping me; your answer put me in the right direction. Also, the right way to do it would be
Criteria criteria = session.createCriteria(User.class);
criteria.add(Restrictions.isNull(COLUMN_DELETIONDATE));
criteria.add(Restrictions. ......);
....
criteria.createCriteria("projects", Criteria.INNER_JOIN).add(Restrictions.in("id",ids));
where ids is a Long[] containing the ids of the projects.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do we use np.polyfit to fit a polynomial without having the constant term
I have two arrays, say for ex:
x = ([0.004,0.005,0.006,0.007])
y = ([0.001,0.095,0.026,0.307])
I want to fit a polynomial of degree 3, but I don't really intend to have the constant term (intercept) in my fitted polynomial. What code would work for that case?
I was just simply trying
np.polyfit(x,y,3)
but it definitely returns 4 values.
Any leads are much appreciated.
A:
Actually, in that case I'd go for scipy's curve_fit method and define my polynomial in a function.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
#def a function
def f(x, a, b, c):
return a*x**3 + b*x**2 + c*x
#convert your data to np arrays
xs = np.array([0.004,0.005,0.006,0.007])
ys = np.array([0.001,0.095,0.026,0.307])
#do the fitting
popt, pcov = curve_fit(f, xs, ys)
#plot the results
plt.figure()
plt.plot(xs,ys)
plt.plot(xs, f(xs, *popt))
plt.grid()
plt.show()
#the parameters
print(popt)
#outputs [ 7.68289022e+06 -7.34702147e+04 1.79106740e+02]
| {
"pile_set_name": "StackExchange"
} |